Extract a column in bash even if the number of columns before it can change


I'm trying to make a script to switch areas on Wacom tablets; I'm using xsetwacom to configure the tablets.

This is what the script looks like so far (it works, but only with the first tablet):

#!/bin/bash
variable=`xsetwacom --list | awk '/stylus/ {print $7}'`
xsetwacom --set $variable area 123 123 123 123

This is what the output of xsetwacom --list looks like:

Wacom Intuos S Pad pad                id: 21   type: PAD
Wacom Intuos S Pen stylus             id: 22   type: STYLUS
Wacom Intuos S Pen eraser             id: 23   type: ERASER

And with a different tablet connected:

Wacom Bamboo 2FG 4x5 Pad pad          id: 21   type: PAD
Wacom Bamboo 2FG 4x5 Ped stylus       id: 22   type: STYLUS
Wacom Bamboo 2FG 4x5 Pen eraser       id: 23   type: ERASER
Wacom Bamboo 2FG 4x5 Finger touch     id: 24   type: TOUCH

So when I plug in a different tablet, the value I get in $variable changes, because there are more words in the device name. How can I fix this? The value I'm looking for is the id number of the stylus. Thanks!


Bash has built-in regex support, which can be used as follows:

id_re='id:[[:space:]]*([[:digit:]]+)'  # assign regex to variable

while IFS= read -r line; do
  [[ $line = *stylus* ]] || continue   # skip lines without "stylus"
  [[ $line =~ $id_re ]] || continue    # match against regex, or skip the line otherwise
  stylus_id=${BASH_REMATCH[1]}         # take the match group from the regex
  xsetwacom --set "$stylus_id" area 123 123 123 123 </dev/null
done < <(xsetwacom --list)

At https://ideone.com/amv9O1 you can see this running (with input coming from stdin rather than xsetwacom --list, of course), and setting stylus_id for both of your lines.
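
If you want to try the loop without a tablet attached, you can feed it the sample lines from the question through a here-document instead of xsetwacom --list. A small sketch: it only echoes the command it would run rather than calling xsetwacom.

id_re='id:[[:space:]]*([[:digit:]]+)'

while IFS= read -r line; do
  [[ $line = *stylus* ]] || continue
  [[ $line =~ $id_re ]] || continue
  echo "would run: xsetwacom --set ${BASH_REMATCH[1]} area 123 123 123 123"
done <<'EOF'
Wacom Intuos S Pen stylus             id: 22   type: STYLUS
Wacom Bamboo 2FG 4x5 Ped stylus       id: 22   type: STYLUS
EOF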


Assuming you want the ids, you can get them as the third field from the end ($(NF - 2)):

xsetwacom --list | awk '/stylus/ {print $(NF - 2)}'

Or you could change the field separator to two or more spaces; the second field is then "id: 22", so strip the label to keep just the number:

xsetwacom --list | awk --field-separator="[ ]{2,}" '/stylus/ {sub(/^id:[[:space:]]*/, "", $2); print $2}'

It depends on how xsetwacom would change the output for longer names.
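
A quick check with one of the sample lines from the question, using the portable -F spelling instead of GNU awk's --field-separator long option (the {2,} interval still assumes a POSIX-style awk such as gawk):

$ printf 'Wacom Intuos S Pen stylus             id: 22   type: STYLUS\n' | awk -F '[ ]{2,}' '/stylus/ {sub(/^id:[[:space:]]*/, "", $2); print $2}'
22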

Out of curiosity, here's a "pure awk" version:

yes | awk '
# each input line from yes triggers one read of "xsetwacom --list" output into $0; quit when it runs out
{ if (!( "xsetwacom --list" | getline )) { exit; } }
# on STYLUS lines, the id is the third field from the end
$NF == "STYLUS" { system("xsetwacom --set " $(NF-2) " area 123 123 123 123") }
'


Just count fields from the end instead of from the front:

awk '/stylus/{print $(NF-2)}'

e.g.:

$ cat file
Wacom Intuos S Pad pad                id: 21   type: PAD
Wacom Intuos S Pen stylus             id: 22   type: STYLUS
Wacom Intuos S Pen eraser             id: 23   type: ERASER
Wacom Bamboo 2FG 4x5 Pad pad          id: 21   type: PAD
Wacom Bamboo 2FG 4x5 Ped stylus       id: 22   type: STYLUS
Wacom Bamboo 2FG 4x5 Pen eraser       id: 23   type: ERASER
Wacom Bamboo 2FG 4x5 Finger touch     id: 24   type: TOUCH

$ awk '/stylus/{print $(NF-2)}' file
22
22
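
Plugged back into the script from the question, this becomes (a minimal sketch; the 123 values are the question's placeholder area):

#!/bin/bash
stylus_id=$(xsetwacom --list | awk '/stylus/ {print $(NF-2)}')
xsetwacom --set "$stylus_id" area 123 123 123 123

If more than one stylus is listed, stylus_id will contain several newline-separated ids, so you'd want to loop over them instead of passing the variable straight to xsetwacom.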


Something like this?

$ ... | awk '/stylus/{for(i=1;i<NF;i++) if($i=="id:") {print $(i+1); exit}}' 

This finds the token right after "id:", so it doesn't matter how many words come before it.
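
For example, with the file shown in the previous answer (the exit stops awk after the first match; use break instead if you want one id per stylus line):

$ awk '/stylus/{for(i=1;i<NF;i++) if($i=="id:") {print $(i+1); exit}}' file
22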
