Pandas groupby multiple columns, count, and resample

I have the following dataframe:

                     UserID TweetLanguage
2014-08-25 21:00:00  001        english
2014-08-27 21:04:00  001        arabic
2014-08-29 22:07:00  001        espanish
2014-08-25 22:09:00  002        english
2014-08-26 22:09:00  002        espanish
2014-08-25 22:09:00  003        english 

I need to plot the weekly number of users who have posted in more than one language.

For example, in the above dataframe, users 001 and 002 have tweeted in more than one language, so the corresponding value for this week should be 2. Same story for the other weeks.
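
The answers below assume the timestamps form the index (a DatetimeIndex). For reference, here is a sketch that rebuilds the sample frame so the snippets can be run as-is (the exact dtypes are an assumption):

import pandas as pd

df = pd.DataFrame(
    {'UserID': ['001', '001', '001', '002', '002', '003'],
     'TweetLanguage': ['english', 'arabic', 'espanish',
                       'english', 'espanish', 'english']},
    index=pd.to_datetime(['2014-08-25 21:00:00', '2014-08-27 21:04:00',
                          '2014-08-29 22:07:00', '2014-08-25 22:09:00',
                          '2014-08-26 22:09:00', '2014-08-25 22:09:00']))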

df.groupby([pd.Grouper(freq='W'), 'UserID'])['TweetLanguage'].nunique().unstack().plot()
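
As written, that one-liner plots each user's weekly language count as a separate line (the unstack makes one column per user). If you want the single weekly count the question asks for, here is a minimal sketch building on the same idea (it assumes the timestamps are the index; weekly_multi is just an illustrative name):

weekly_multi = (
    df.groupby([pd.Grouper(freq='W'), 'UserID'])['TweetLanguage']
      .nunique()         # languages per user per week
      .gt(1)             # True where a user used more than one language
      .groupby(level=0)  # collapse back to weeks
      .sum()             # number of multi-language users each week
)
weekly_multi.plot()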

# nunique() is equivalent to len(value_counts()) here: the number of distinct languages
df.groupby(pd.Grouper(freq='W')).apply(
    lambda week: week.groupby('UserID').apply(lambda user: user.TweetLanguage.nunique()))

This one-liner separates the data by week and counts the number of languages each user tweeted in within each week. Without the weekly split, you can count the languages per user directly:

df.groupby('UserID').TweetLanguage.nunique()

This returns a Series indexed by UserID whose values are the number of languages each user has used; the weekly version above gives the same per-user counts within each week.
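
To go from these per-user language counts to the question's target (the number of users with more than one language per week), a hedged sketch building on this answer:

df.groupby(pd.Grouper(freq='W')).apply(
    lambda week: (week.groupby('UserID')['TweetLanguage'].nunique() > 1).sum())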

Use two groupbys: the first finds the users who posted in more than one language each week; the second counts how many such users there are per week.

((df.groupby([df.index.year.rename('year'),
              df.index.week.rename('week'),
              'UserID'])
    .TweetLanguage.nunique() > 1)
 .groupby(level=[0, 1])
 .sum())

#year  week
#2014  35      2.0
#Name: TweetLanguage, dtype: float64
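
As the comments below note, df.index.week restarts every year (and is deprecated in newer pandas). A sketch of a multi-year-safe variant, assuming pandas >= 1.1, uses DatetimeIndex.isocalendar():

iso = df.index.isocalendar()  # DataFrame with ISO year/week/day, indexed like df
((df.groupby([iso.year, iso.week, 'UserID'])
    .TweetLanguage.nunique() > 1)
 .groupby(level=['year', 'week'])
 .sum())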

Comments
  • Nice, I was wondering which column was his datetime; looks like it's the index, so no key needed.
  • Many thanks. But this gives the frequency of each language, not the number of languages for each user.
  • However, this does not give weekly stats as indicated in the question. I would give each day a corresponding week number and use an extended groupby statement.
  • Yes you're correct, I think the better way to do it is to separate the weeks first. Potentially doable in one line, but probably not very pythonic.
  • Thanks. But the first code doesn't work. It gives this error: 'TimeGrouper' object has no attribute 'apply'
  • I missed a ), lol. Also, if the datetime field is the index, you don't need the key argument in the first groupby.
  • Many thanks. But this only works for one year (due to df.index.week). In subsequent years, it adds the count to the previous year.
  • @msmazh see the update. You should be able to use pd.to_datetime to create a datetime out of the year and week fairly easily if you need to plot these as dates over multiple years.