Group by with further grouping

I have the following sample data:

    ID | SectionID | LocID
    ---+-----------+------
     1 |        32 |    12
     1 |        32 |     2
     1 |        32 |     2
     1 |        34 |     3
     1 |        34 |     4
     2 |        36 |     8
     2 |        36 |     9
     2 |        37 |     8
     2 |        37 |     9
     2 |        37 |     4

The output should be grouped by ID. The Count field should show the number of DISTINCT LocIDs per SectionID, totaled together.

For ID 1, we have 2 distinct LocIDs for SectionID 32 and 2 for SectionID 34; the total is 4.

For ID 2, we have 2 distinct LocIDs for SectionID 36 and 3 for SectionID 37; the total is 5.


    ID  Count 
    1   4
    2   5

I did a GROUP BY on ID, but I'm not sure how to do the further grouping I need. I am using SQL Server 2016.

You could use a nested GROUP BY: count the distinct LocIDs per (ID, SectionID) in a subquery, then sum those counts per ID.

    SELECT ID, SUM(DistinctLocs) AS [Count]
    FROM (
        SELECT ID, SectionID, COUNT(DISTINCT LocID) AS DistinctLocs
        FROM Table
        GROUP BY ID, SectionID
    ) Q
    GROUP BY ID

The easiest way, I think, is to group by your ID and do a count distinct on a combination of SectionID and LocID. If these are character data, you can get away with just concatenating them with some kind of delimiter. If they're numeric, you can do something like the example below, or convert them to strings and concatenate with a delimiter.

-- set up sample data

declare @datatable as table (ID int, SectionID int, LocID int)
insert into @datatable (ID, SectionID, LocID) values
    (1, 32, 12),
    (1, 32, 2),
    (1, 32, 2),
    (1, 34, 3),
    (1, 34, 4),
    (2, 36, 8),
    (2, 36, 9),
    (2, 37, 8),
    (2, 37, 9),
    (2, 37, 4)

-- The query

select ID
      ,COUNT(DISTINCT SectionID * 10000 + LocID) as [Count]
from @datatable
group by ID

Gives the result:

(10 row(s) affected)
ID          Count
----------- -----------
1           4
2           5

(2 row(s) affected)
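If SectionID and LocID were character data instead, the delimiter-based concatenation mentioned above might look like this sketch (assuming the same @datatable, with varchar casts wide enough for the values):

```sql
-- Cast to varchar and join with a delimiter so that, e.g.,
-- (SectionID 1, LocID 23) and (SectionID 12, LocID 3) can't collide.
select ID
      ,COUNT(DISTINCT CAST(SectionID as varchar(10)) + '|' + CAST(LocID as varchar(10))) as [Count]
from @datatable
group by ID
```

The delimiter matters: plain concatenation of '1' + '23' and '12' + '3' would both yield '123' and undercount.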

A down and dirty way is to just nest your groupings.

 -- set up sample data (table variable name assumed)
 DECLARE @Data TABLE (
   ID INT,
   SectionID INT,
   LocID INT
 )

 INSERT INTO @Data (ID, SectionID, LocID) VALUES
   ( 1,32,12),
   ( 1,32,2),
   ( 1,32,2),
   ( 1,34,3),
   ( 1,34,4),
   ( 2,36,8),
   ( 2,36,9),
   ( 2,37,8),
   ( 2,37,9),
   ( 2,37,4)

 -- inner grouping counts distinct LocIDs per (ID, SectionID);
 -- outer grouping sums those counts per ID
 SELECT d.ID
       ,SUM(d.LocIDs) AS LocIDCnt
 FROM (
     SELECT ID, SectionID, COUNT(DISTINCT LocID) AS LocIDs
     FROM @Data
     GROUP BY ID, SectionID
   ) AS d
 GROUP BY d.ID

Result set:

| ID | Count |
|----|-------|
|  1 |     4 |
|  2 |     5 |

One more way:

select ID, COUNT(*) as SecLocCount
from (
    select distinct ID, SectionID, LocID from [MyTable]
) AS distinctRows
group by ID
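To sanity-check this against the sample data posted above, you can point it at the @datatable table variable instead of [MyTable] (a sketch, assuming that declaration is still in scope):

```sql
-- De-duplicate (ID, SectionID, LocID) first, then count rows per ID
select ID, COUNT(*) as SecLocCount
from (
    select distinct ID, SectionID, LocID from @datatable
) AS distinctRows
group by ID
-- for the sample rows this returns ID 1 -> 4, ID 2 -> 5
```

Counting rows of a DISTINCT subquery works here because each surviving row is exactly one unique (SectionID, LocID) pair for that ID.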

  • Is the thing you wrote in the Result box the output you want?
  • @CaiusJard Yes, that is the result. I am looking for distinct LocID counts per SectionID, totaled together and grouped by ID.
  • That is indeed an interesting solution; I never would have thought of doing that. On my very limited dataset of 10,000 rows, the two approaches are literally neck and neck on performance.