title | summary | context | path
---|---|---|---|
pandas.Period.year | `pandas.Period.year`
Return the year this Period falls on. | Period.year#
Return the year this Period falls on.
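A minimal illustrative example (not from the original page):
>>> pd.Period('2023-06', freq='M').year
2023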
| reference/api/pandas.Period.year.html |
pandas.core.resample.Resampler.median | `pandas.core.resample.Resampler.median`
Compute median of groups, excluding missing values. | Resampler.median(numeric_only=_NoDefault.no_default, *args, **kwargs)[source]#
Compute median of groups, excluding missing values.
For multiple groupings, the result index will be a MultiIndex.
Parameters
numeric_only : bool, default True. Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data.
Returns
Series or DataFrame : Median of values within each group.
See also
Series.groupby : Apply a function groupby to a Series.
DataFrame.groupby : Apply a function groupby to each row or column of a DataFrame.
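A short hedged sketch of typical usage (the series below is illustrative, not from the original page):
>>> ser = pd.Series([1, 2, 3, 4],
...                 index=pd.date_range('2023-01-01', periods=4, freq='D'))
>>> ser.resample('2D').median()
2023-01-01    1.5
2023-01-03    3.5
Freq: 2D, dtype: float64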
| reference/api/pandas.core.resample.Resampler.median.html |
pandas.RangeIndex.stop | `pandas.RangeIndex.stop`
The value of the stop parameter. | property RangeIndex.stop[source]#
The value of the stop parameter.
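An illustrative example (not from the original page):
>>> idx = pd.RangeIndex(0, 10, 2)
>>> idx.stop
10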
| reference/api/pandas.RangeIndex.stop.html |
pandas.tseries.offsets.BQuarterBegin.copy | `pandas.tseries.offsets.BQuarterBegin.copy`
Return a copy of the frequency.
Examples
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
``` | BQuarterBegin.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
| reference/api/pandas.tseries.offsets.BQuarterBegin.copy.html |
pandas.tseries.offsets.Hour.name | `pandas.tseries.offsets.Hour.name`
Return a string representing the base frequency.
Examples
```
>>> pd.offsets.Hour().name
'H'
``` | Hour.name#
Return a string representing the base frequency.
Examples
>>> pd.offsets.Hour().name
'H'
>>> pd.offsets.Hour(5).name
'H'
| reference/api/pandas.tseries.offsets.Hour.name.html |
pandas.io.formats.style.Styler.bar | `pandas.io.formats.style.Styler.bar`
Draw bar chart in the cell backgrounds. | Styler.bar(subset=None, axis=0, *, color=None, cmap=None, width=100, height=100, align='mid', vmin=None, vmax=None, props='width: 10em;')[source]#
Draw bar chart in the cell backgrounds.
Changed in version 1.4.0.
Parameters
subset : label, array-like, IndexSlice, optional. A valid 2d input to DataFrame.loc[<subset>], or, in the case of a 1d input or single key, to DataFrame.loc[:, <subset>] where the columns are prioritised, to limit data to before applying the function.
axis : {0 or 'index', 1 or 'columns', None}, default 0. Apply to each column (axis=0 or 'index'), to each row (axis=1 or 'columns'), or to the entire DataFrame at once with axis=None.
color : str or 2-tuple/list. If a str is passed, the color is the same for both negative and positive numbers. If a 2-tuple/list is used, the first element is the color_negative and the second is the color_positive (e.g. ['#d65f5f', '#5fba7d']).
cmap : str, matplotlib.cm.ColorMap. A string name of a matplotlib Colormap, or a Colormap object. Cannot be used together with color.
New in version 1.4.0.
width : float, default 100. The percentage of the cell, measured from the left, in which to draw the bars, in [0, 100].
height : float, default 100. The percentage height of the bar in the cell, centrally aligned, in [0, 100].
New in version 1.4.0.
align : str, int, float, callable, default 'mid'. How to align the bars within the cells relative to a width-adjusted center. If a string, it must be one of:
'left' : bars are drawn rightwards from the minimum data value.
'right' : bars are drawn leftwards from the maximum data value.
'zero' : a value of zero is located at the center of the cell.
'mid' : a value of (max-min)/2 is located at the center of the cell, or if all values are negative (positive) the zero is aligned at the right (left) of the cell.
'mean' : the mean value of the data is located at the center of the cell.
If a float or integer is given, it indicates the center of the cell.
If a callable, it should take a 1d or 2d array and return a scalar.
Changed in version 1.4.0.
vmin : float, optional. Minimum bar value, defining the left hand limit of the bar drawing range; lower values are clipped to vmin. When None (default): the minimum value of the data will be used.
vmax : float, optional. Maximum bar value, defining the right hand limit of the bar drawing range; higher values are clipped to vmax. When None (default): the maximum value of the data will be used.
props : str, optional. The base CSS of the cell that is extended to add the bar chart. Defaults to "width: 10em;".
New in version 1.4.0.
Returns
self : Styler
Notes
This section of the user guide:
Table Visualization gives
a number of examples for different settings and color coordination.
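A minimal usage sketch, assuming a small illustrative DataFrame (not from the original page); the returned Styler renders as HTML in a notebook:
>>> df = pd.DataFrame({'A': [1, 5, 10], 'B': [-3, 0, 3]})
>>> styler = df.style.bar(subset=['A'], color='#5fba7d')  # bars drawn in column A's cell backgrounds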
| reference/api/pandas.io.formats.style.Styler.bar.html |
pandas.DatetimeIndex.to_frame | `pandas.DatetimeIndex.to_frame`
Create a DataFrame with a column containing the Index.
```
>>> idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal')
>>> idx.to_frame()
animal
animal
Ant Ant
Bear Bear
Cow Cow
``` | DatetimeIndex.to_frame(index=True, name=_NoDefault.no_default)[source]#
Create a DataFrame with a column containing the Index.
Parameters
index : bool, default True. Set the index of the returned DataFrame as the original Index.
name : object, default None. The passed name should substitute for the index name (if it has one).
Returns
DataFrame : DataFrame containing the original Index data.
See also
Index.to_series : Convert an Index to a Series.
Series.to_frame : Convert Series to DataFrame.
Examples
>>> idx = pd.Index(['Ant', 'Bear', 'Cow'], name='animal')
>>> idx.to_frame()
animal
animal
Ant Ant
Bear Bear
Cow Cow
By default, the original Index is reused. To enforce a new Index:
>>> idx.to_frame(index=False)
animal
0 Ant
1 Bear
2 Cow
To override the name of the resulting column, specify name:
>>> idx.to_frame(index=False, name='zoo')
zoo
0 Ant
1 Bear
2 Cow
| reference/api/pandas.DatetimeIndex.to_frame.html |
pandas.tseries.offsets.CustomBusinessMonthEnd.offset | `pandas.tseries.offsets.CustomBusinessMonthEnd.offset`
Alias for self._offset. | CustomBusinessMonthEnd.offset#
Alias for self._offset.
| reference/api/pandas.tseries.offsets.CustomBusinessMonthEnd.offset.html |
pandas.tseries.offsets.Micro.isAnchored | pandas.tseries.offsets.Micro.isAnchored | Micro.isAnchored()#
| reference/api/pandas.tseries.offsets.Micro.isAnchored.html |
pandas.tseries.offsets.Nano.is_on_offset | `pandas.tseries.offsets.Nano.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
``` | Nano.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dt : datetime.datetime. Timestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
| reference/api/pandas.tseries.offsets.Nano.is_on_offset.html |
pandas.Index.duplicated | `pandas.Index.duplicated`
Indicate duplicate index values.
```
>>> idx = pd.Index(['lama', 'cow', 'lama', 'beetle', 'lama'])
>>> idx.duplicated()
array([False, False, True, False, True])
``` | Index.duplicated(keep='first')[source]#
Indicate duplicate index values.
Duplicated values are indicated as True values in the resulting
array. Either all duplicates, all except the first, or all except the
last occurrence of duplicates can be indicated.
Parameters
keep : {'first', 'last', False}, default 'first'. The value or values in a set of duplicates to mark as missing.
‘first’ : Mark duplicates as True except for the first
occurrence.
‘last’ : Mark duplicates as True except for the last
occurrence.
False : Mark all duplicates as True.
Returns
np.ndarray[bool]
See also
Series.duplicated : Equivalent method on pandas.Series.
DataFrame.duplicated : Equivalent method on pandas.DataFrame.
Index.drop_duplicates : Remove duplicate values from Index.
Examples
By default, for each set of duplicated values, the first occurrence is
set to False and all others to True:
>>> idx = pd.Index(['lama', 'cow', 'lama', 'beetle', 'lama'])
>>> idx.duplicated()
array([False, False, True, False, True])
which is equivalent to
>>> idx.duplicated(keep='first')
array([False, False, True, False, True])
By using 'last', the last occurrence of each set of duplicated values
is set to False and all others to True:
>>> idx.duplicated(keep='last')
array([ True, False, True, False, False])
By setting keep to False, all duplicates are True:
>>> idx.duplicated(keep=False)
array([ True, False, True, False, True])
| reference/api/pandas.Index.duplicated.html |
pandas.tseries.offsets.QuarterBegin.normalize | pandas.tseries.offsets.QuarterBegin.normalize | QuarterBegin.normalize#
| reference/api/pandas.tseries.offsets.QuarterBegin.normalize.html |
pandas.tseries.offsets.Micro.apply | pandas.tseries.offsets.Micro.apply | Micro.apply()#
| reference/api/pandas.tseries.offsets.Micro.apply.html |
pandas.arrays.PandasArray | `pandas.arrays.PandasArray`
A pandas ExtensionArray for NumPy data. | class pandas.arrays.PandasArray(values, copy=False)[source]#
A pandas ExtensionArray for NumPy data.
This is mostly for internal compatibility, and is not especially
useful on its own.
Parameters
values : ndarray. The NumPy ndarray to wrap. Must be 1-dimensional.
copy : bool, default False. Whether to copy values.
Attributes
None
Methods
None
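An illustrative construction (not from the original page; the exact repr may vary by pandas version):
>>> import numpy as np
>>> pd.arrays.PandasArray(np.array([1, 2, 3]))
<PandasArray>
[1, 2, 3]
Length: 3, dtype: int64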
| reference/api/pandas.arrays.PandasArray.html |
pandas.tseries.offsets.Hour.onOffset | pandas.tseries.offsets.Hour.onOffset | Hour.onOffset()#
| reference/api/pandas.tseries.offsets.Hour.onOffset.html |
pandas.tseries.offsets.Day.isAnchored | pandas.tseries.offsets.Day.isAnchored | Day.isAnchored()#
| reference/api/pandas.tseries.offsets.Day.isAnchored.html |
pandas.core.groupby.SeriesGroupBy.is_monotonic_decreasing | `pandas.core.groupby.SeriesGroupBy.is_monotonic_decreasing`
Return boolean if values in the object are monotonically decreasing. | property SeriesGroupBy.is_monotonic_decreasing[source]#
Return boolean if values in the object are monotonically decreasing.
Returns
bool
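A hedged illustrative example (not from the original page):
>>> s = pd.Series([3, 2, 1, 4, 5], index=['a', 'a', 'a', 'b', 'b'])
>>> s.groupby(level=0).is_monotonic_decreasing
a     True
b    False
dtype: bool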
| reference/api/pandas.core.groupby.SeriesGroupBy.is_monotonic_decreasing.html |
pandas.Timestamp.fromordinal | `pandas.Timestamp.fromordinal`
Construct a timestamp from a proleptic Gregorian ordinal.
```
>>> pd.Timestamp.fromordinal(737425)
Timestamp('2020-01-01 00:00:00')
``` | classmethod Timestamp.fromordinal(ordinal, freq=None, tz=None)#
Construct a timestamp from a proleptic Gregorian ordinal.
Parameters
ordinal : int. Date corresponding to a proleptic Gregorian ordinal.
freq : str, DateOffset. Offset to apply to the Timestamp.
tz : str, pytz.timezone, dateutil.tz.tzfile or None. Time zone for the Timestamp.
Notes
By definition there cannot be any tz info on the ordinal itself.
Examples
>>> pd.Timestamp.fromordinal(737425)
Timestamp('2020-01-01 00:00:00')
| reference/api/pandas.Timestamp.fromordinal.html |
Testing | Testing | Assertion functions#
testing.assert_frame_equal(left, right[, ...])
Check that left and right DataFrame are equal.
testing.assert_series_equal(left, right[, ...])
Check that left and right Series are equal.
testing.assert_index_equal(left, right[, ...])
Check that left and right Index are equal.
testing.assert_extension_array_equal(left, right)
Check that left and right ExtensionArrays are equal.
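As a quick hedged sketch of how these assertion helpers are typically used (the frames below are illustrative; check_dtype is an existing keyword of assert_frame_equal):
>>> left = pd.DataFrame({'a': [1, 2, 3]})
>>> right = pd.DataFrame({'a': [1.0, 2.0, 3.0]})
>>> pd.testing.assert_frame_equal(left, right, check_dtype=False)  # passes: values match, dtype ignored
>>> pd.testing.assert_frame_equal(left, right)  # raises AssertionError: dtypes differ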
Exceptions and warnings#
errors.AbstractMethodError(class_instance[, ...])
Raise this error instead of NotImplementedError for abstract methods.
errors.AccessorRegistrationWarning
Warning for attribute conflicts in accessor registration.
errors.AttributeConflictWarning
Warning raised when index attributes conflict when using HDFStore.
errors.CategoricalConversionWarning
Warning raised when reading a partially labeled Stata file using an iterator.
errors.ClosedFileError
Exception is raised when trying to perform an operation on a closed HDFStore file.
errors.CSSWarning
Warning is raised when converting css styling fails.
errors.DatabaseError
Error is raised when executing sql with bad syntax or sql that throws an error.
errors.DataError
Exception raised when performing an operation on non-numerical data.
errors.DtypeWarning
Warning raised when reading different dtypes in a column from a file.
errors.DuplicateLabelError
Error raised when an operation would introduce duplicate labels.
errors.EmptyDataError
Exception raised in pd.read_csv when empty data or header is encountered.
errors.IncompatibilityWarning
Warning raised when trying to use where criteria on an incompatible HDF5 file.
errors.IndexingError
Exception is raised when trying to index and there is a mismatch in dimensions.
errors.InvalidColumnName
Warning raised by to_stata when the column contains a non-valid Stata name.
errors.InvalidIndexError
Exception raised when attempting to use an invalid index key.
errors.IntCastingNaNError
Exception raised when converting (astype) an array with NaN to an integer type.
errors.MergeError
Exception raised when merging data.
errors.NullFrequencyError
Exception raised when a freq cannot be null.
errors.NumbaUtilError
Error raised for unsupported Numba engine routines.
errors.NumExprClobberingError
Exception raised when trying to use a built-in numexpr name as a variable name.
errors.OptionError
Exception raised for pandas.options.
errors.OutOfBoundsDatetime
Raised when the datetime is outside the range that can be represented.
errors.OutOfBoundsTimedelta
Raised when encountering a timedelta value that cannot be represented.
errors.ParserError
Exception that is raised by an error encountered in parsing file contents.
errors.ParserWarning
Warning raised when reading a file that doesn't use the default 'c' parser.
errors.PerformanceWarning
Warning raised when there is a possible performance impact.
errors.PossibleDataLossError
Exception raised when trying to open a HDFStore file when already opened.
errors.PossiblePrecisionLoss
Warning raised by to_stata on a column with a value outside or equal to int64.
errors.PyperclipException
Exception raised when clipboard functionality is unsupported.
errors.PyperclipWindowsException(message)
Exception raised when clipboard functionality is unsupported by Windows.
errors.SettingWithCopyError
Exception raised when trying to set on a copied slice from a DataFrame.
errors.SettingWithCopyWarning
Warning raised when trying to set on a copied slice from a DataFrame.
errors.SpecificationError
Exception raised by agg when the functions are ill-specified.
errors.UndefinedVariableError(name[, is_local])
Exception raised by query or eval when using an undefined variable name.
errors.UnsortedIndexError
Error raised when slicing a MultiIndex which has not been lexsorted.
errors.UnsupportedFunctionCall
Exception raised when attempting to call an unsupported numpy function.
errors.ValueLabelTypeMismatch
Warning raised by to_stata on a category column that contains non-string values.
Bug report function#
show_versions([as_json])
Provide useful information, important for bug reports.
Test suite runner#
test([extra_args])
Run the pandas test suite using pytest.
| reference/testing.html |
pandas.MultiIndex.get_locs | `pandas.MultiIndex.get_locs`
Get location for a sequence of labels.
```
>>> mi = pd.MultiIndex.from_arrays([list('abb'), list('def')])
``` | MultiIndex.get_locs(seq)[source]#
Get location for a sequence of labels.
Parameters
seq : label, slice, list, mask or a sequence of such. You should use one of the above for each level. If a level should not be used, set it to slice(None).
Returns
numpy.ndarray : NumPy array of integers suitable for passing to iloc.
See also
MultiIndex.get_loc : Get location for a label or a tuple of labels.
MultiIndex.slice_locs : Get slice location given start label(s) and end label(s).
Examples
>>> mi = pd.MultiIndex.from_arrays([list('abb'), list('def')])
>>> mi.get_locs('b')
array([1, 2], dtype=int64)
>>> mi.get_locs([slice(None), ['e', 'f']])
array([1, 2], dtype=int64)
>>> mi.get_locs([[True, False, True], slice('e', 'f')])
array([2], dtype=int64)
| reference/api/pandas.MultiIndex.get_locs.html |
pandas.tseries.offsets.Day.nanos | `pandas.tseries.offsets.Day.nanos`
Return an integer of the total number of nanoseconds.
Raises ValueError if the frequency is non-fixed.
```
>>> pd.offsets.Hour(5).nanos
18000000000000
``` | Day.nanos#
Return an integer of the total number of nanoseconds.
Raises
ValueError : If the frequency is non-fixed.
Examples
>>> pd.offsets.Hour(5).nanos
18000000000000
| reference/api/pandas.tseries.offsets.Day.nanos.html |
Scaling to large datasets | Scaling to large datasets | pandas provides data structures for in-memory analytics, which makes using pandas
to analyze datasets that are larger than memory somewhat tricky. Even datasets
that are a sizable fraction of memory become unwieldy, as some pandas operations need
to make intermediate copies.
This document provides a few recommendations for scaling your analysis to larger datasets.
It’s a complement to Enhancing performance, which focuses on speeding up analysis
for datasets that fit in memory.
But first, it’s worth considering not using pandas. pandas isn’t the right
tool for all situations. If you’re working with very large datasets and a tool
like PostgreSQL fits your needs, then you should probably be using that.
Assuming you want or need the expressiveness and power of pandas, let’s carry on.
Load less data#
Suppose our raw dataset on disk has many columns:
id_0 name_0 x_0 y_0 id_1 name_1 x_1 ... name_8 x_8 y_8 id_9 name_9 x_9 y_9
timestamp ...
2000-01-01 00:00:00 1015 Michael -0.399453 0.095427 994 Frank -0.176842 ... Dan -0.315310 0.713892 1025 Victor -0.135779 0.346801
2000-01-01 00:01:00 969 Patricia 0.650773 -0.874275 1003 Laura 0.459153 ... Ursula 0.913244 -0.630308 1047 Wendy -0.886285 0.035852
2000-01-01 00:02:00 1016 Victor -0.721465 -0.584710 1046 Michael 0.524994 ... Ray -0.656593 0.692568 1064 Yvonne 0.070426 0.432047
2000-01-01 00:03:00 939 Alice -0.746004 -0.908008 996 Ingrid -0.414523 ... Jerry -0.958994 0.608210 978 Wendy 0.855949 -0.648988
2000-01-01 00:04:00 1017 Dan 0.919451 -0.803504 1048 Jerry -0.569235 ... Frank -0.577022 -0.409088 994 Bob -0.270132 0.335176
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2000-12-30 23:56:00 999 Tim 0.162578 0.512817 973 Kevin -0.403352 ... Tim -0.380415 0.008097 1041 Charlie 0.191477 -0.599519
2000-12-30 23:57:00 970 Laura -0.433586 -0.600289 958 Oliver -0.966577 ... Zelda 0.971274 0.402032 1038 Ursula 0.574016 -0.930992
2000-12-30 23:58:00 1065 Edith 0.232211 -0.454540 971 Tim 0.158484 ... Alice -0.222079 -0.919274 1022 Dan 0.031345 -0.657755
2000-12-30 23:59:00 1019 Ingrid 0.322208 -0.615974 981 Hannah 0.607517 ... Sarah -0.424440 -0.117274 990 George -0.375530 0.563312
2000-12-31 00:00:00 937 Ursula -0.906523 0.943178 1018 Alice -0.564513 ... Jerry 0.236837 0.807650 985 Oliver 0.777642 0.783392
[525601 rows x 40 columns]
That can be generated by the following code snippet:
In [1]: import pandas as pd
In [2]: import numpy as np
In [3]: def make_timeseries(start="2000-01-01", end="2000-12-31", freq="1D", seed=None):
...: index = pd.date_range(start=start, end=end, freq=freq, name="timestamp")
...: n = len(index)
...: state = np.random.RandomState(seed)
...: columns = {
...: "name": state.choice(["Alice", "Bob", "Charlie"], size=n),
...: "id": state.poisson(1000, size=n),
...: "x": state.rand(n) * 2 - 1,
...: "y": state.rand(n) * 2 - 1,
...: }
...: df = pd.DataFrame(columns, index=index, columns=sorted(columns))
...: if df.index[-1] == end:
...: df = df.iloc[:-1]
...: return df
...:
In [4]: timeseries = [
...: make_timeseries(freq="1T", seed=i).rename(columns=lambda x: f"{x}_{i}")
...: for i in range(10)
...: ]
...:
In [5]: ts_wide = pd.concat(timeseries, axis=1)
In [6]: ts_wide.to_parquet("timeseries_wide.parquet")
To load the columns we want, we have two options.
Option 1 loads in all the data and then filters to what we need.
In [7]: columns = ["id_0", "name_0", "x_0", "y_0"]
In [8]: pd.read_parquet("timeseries_wide.parquet")[columns]
Out[8]:
id_0 name_0 x_0 y_0
timestamp
2000-01-01 00:00:00 977 Alice -0.821225 0.906222
2000-01-01 00:01:00 1018 Bob -0.219182 0.350855
2000-01-01 00:02:00 927 Alice 0.660908 -0.798511
2000-01-01 00:03:00 997 Bob -0.852458 0.735260
2000-01-01 00:04:00 965 Bob 0.717283 0.393391
... ... ... ... ...
2000-12-30 23:56:00 1037 Bob -0.814321 0.612836
2000-12-30 23:57:00 980 Bob 0.232195 -0.618828
2000-12-30 23:58:00 965 Alice -0.231131 0.026310
2000-12-30 23:59:00 984 Alice 0.942819 0.853128
2000-12-31 00:00:00 1003 Alice 0.201125 -0.136655
[525601 rows x 4 columns]
Option 2 only loads the columns we request.
In [9]: pd.read_parquet("timeseries_wide.parquet", columns=columns)
Out[9]:
id_0 name_0 x_0 y_0
timestamp
2000-01-01 00:00:00 977 Alice -0.821225 0.906222
2000-01-01 00:01:00 1018 Bob -0.219182 0.350855
2000-01-01 00:02:00 927 Alice 0.660908 -0.798511
2000-01-01 00:03:00 997 Bob -0.852458 0.735260
2000-01-01 00:04:00 965 Bob 0.717283 0.393391
... ... ... ... ...
2000-12-30 23:56:00 1037 Bob -0.814321 0.612836
2000-12-30 23:57:00 980 Bob 0.232195 -0.618828
2000-12-30 23:58:00 965 Alice -0.231131 0.026310
2000-12-30 23:59:00 984 Alice 0.942819 0.853128
2000-12-31 00:00:00 1003 Alice 0.201125 -0.136655
[525601 rows x 4 columns]
If we were to measure the memory usage of the two calls, we’d see that specifying
columns uses about 1/10th the memory in this case.
With pandas.read_csv(), you can specify usecols to limit the columns
read into memory. Not all file formats that can be read by pandas provide an option
to read a subset of columns.
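For illustration, a hedged sketch of the usecols approach (the CSV filename here is hypothetical):
>>> cols = ["id_0", "name_0", "x_0", "y_0"]
>>> df = pd.read_csv("timeseries_wide.csv", usecols=cols)  # only these columns are read into memory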
Use efficient datatypes#
The default pandas data types are not the most memory efficient. This is
especially true for text data columns with relatively few unique values (commonly
referred to as “low-cardinality” data). By using more efficient data types, you
can store larger datasets in memory.
In [10]: ts = make_timeseries(freq="30S", seed=0)
In [11]: ts.to_parquet("timeseries.parquet")
In [12]: ts = pd.read_parquet("timeseries.parquet")
In [13]: ts
Out[13]:
id name x y
timestamp
2000-01-01 00:00:00 1041 Alice 0.889987 0.281011
2000-01-01 00:00:30 988 Bob -0.455299 0.488153
2000-01-01 00:01:00 1018 Alice 0.096061 0.580473
2000-01-01 00:01:30 992 Bob 0.142482 0.041665
2000-01-01 00:02:00 960 Bob -0.036235 0.802159
... ... ... ... ...
2000-12-30 23:58:00 1022 Alice 0.266191 0.875579
2000-12-30 23:58:30 974 Alice -0.009826 0.413686
2000-12-30 23:59:00 1028 Charlie 0.307108 -0.656789
2000-12-30 23:59:30 1002 Alice 0.202602 0.541335
2000-12-31 00:00:00 987 Alice 0.200832 0.615972
[1051201 rows x 4 columns]
Now, let’s inspect the data types and memory usage to see where we should focus our
attention.
In [14]: ts.dtypes
Out[14]:
id int64
name object
x float64
y float64
dtype: object
In [15]: ts.memory_usage(deep=True) # memory usage in bytes
Out[15]:
Index 8409608
id 8409608
name 65176434
x 8409608
y 8409608
dtype: int64
The name column is taking up much more memory than any other. It has just a
few unique values, so it’s a good candidate for converting to a
pandas.Categorical. With a pandas.Categorical, we store each unique name once and use
space-efficient integers to know which specific name is used in each row.
In [16]: ts2 = ts.copy()
In [17]: ts2["name"] = ts2["name"].astype("category")
In [18]: ts2.memory_usage(deep=True)
Out[18]:
Index 8409608
id 8409608
name 1051495
x 8409608
y 8409608
dtype: int64
We can go a bit further and downcast the numeric columns to their smallest types
using pandas.to_numeric().
In [19]: ts2["id"] = pd.to_numeric(ts2["id"], downcast="unsigned")
In [20]: ts2[["x", "y"]] = ts2[["x", "y"]].apply(pd.to_numeric, downcast="float")
In [21]: ts2.dtypes
Out[21]:
id uint16
name category
x float32
y float32
dtype: object
In [22]: ts2.memory_usage(deep=True)
Out[22]:
Index 8409608
id 2102402
name 1051495
x 4204804
y 4204804
dtype: int64
In [23]: reduction = ts2.memory_usage(deep=True).sum() / ts.memory_usage(deep=True).sum()
In [24]: print(f"{reduction:0.2f}")
0.20
In all, we’ve reduced the in-memory footprint of this dataset to 1/5 of its
original size.
See Categorical data for more on pandas.Categorical and dtypes
for an overview of all of pandas’ dtypes.
Use chunking#
Some workloads can be achieved with chunking: splitting a large problem like “convert this
directory of CSVs to parquet” into a bunch of small problems (“convert this individual CSV
file into a Parquet file. Now repeat that for each file in this directory.”). As long as each chunk
fits in memory, you can work with datasets that are much larger than memory.
Note
Chunking works well when the operation you’re performing requires zero or minimal
coordination between chunks. For more complicated workflows, you’re better off
using another library.
Suppose we have an even larger “logical dataset” on disk that’s a directory of parquet
files. Each file in the directory represents a different year of the entire dataset.
In [25]: import pathlib
In [26]: N = 12
In [27]: starts = [f"20{i:>02d}-01-01" for i in range(N)]
In [28]: ends = [f"20{i:>02d}-12-13" for i in range(N)]
In [29]: pathlib.Path("data/timeseries").mkdir(exist_ok=True)
In [30]: for i, (start, end) in enumerate(zip(starts, ends)):
....: ts = make_timeseries(start=start, end=end, freq="1T", seed=i)
....: ts.to_parquet(f"data/timeseries/ts-{i:0>2d}.parquet")
....:
data
└── timeseries
├── ts-00.parquet
├── ts-01.parquet
├── ts-02.parquet
├── ts-03.parquet
├── ts-04.parquet
├── ts-05.parquet
├── ts-06.parquet
├── ts-07.parquet
├── ts-08.parquet
├── ts-09.parquet
├── ts-10.parquet
└── ts-11.parquet
Now we’ll implement an out-of-core pandas.Series.value_counts(). The peak memory usage of this
workflow is the single largest chunk, plus a small series storing the unique value
counts up to this point. As long as each individual file fits in memory, this will
work for arbitrary-sized datasets.
In [31]: %%time
....: files = pathlib.Path("data/timeseries/").glob("ts*.parquet")
....: counts = pd.Series(dtype=int)
....: for path in files:
....: df = pd.read_parquet(path)
....: counts = counts.add(df["name"].value_counts(), fill_value=0)
....: counts.astype(int)
....:
CPU times: user 698 ms, sys: 79.6 ms, total: 778 ms
Wall time: 768 ms
Out[31]:
Alice 1994645
Bob 1993692
Charlie 1994875
dtype: int64
Some readers, like pandas.read_csv(), offer parameters to control the
chunksize when reading a single file.
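For example, a hedged sketch of chunked reading with read_csv (the CSV filename is hypothetical; chunksize yields an iterator of DataFrames):
>>> counts = pd.Series(dtype=int)
>>> with pd.read_csv("timeseries.csv", chunksize=100_000) as reader:
...     for chunk in reader:  # each chunk is a regular DataFrame
...         counts = counts.add(chunk["name"].value_counts(), fill_value=0)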
Manually chunking is an OK option for workflows that don't
require overly sophisticated operations. Some operations, like pandas.DataFrame.groupby(), are
much harder to do chunkwise. In these cases, you may be better off switching to a
different library that implements these out-of-core algorithms for you.
Use other libraries#
pandas is just one library offering a DataFrame API. Because of its popularity,
pandas’ API has become something of a standard that other libraries implement.
The pandas documentation maintains a list of libraries implementing a DataFrame API
in our ecosystem page.
For example, Dask, a parallel computing library, has dask.dataframe, a
pandas-like API for working with larger than memory datasets in parallel. Dask
can use multiple threads or processes on a single machine, or a cluster of
machines to process data in parallel.
We’ll import dask.dataframe and notice that the API feels similar to pandas.
We can use Dask’s read_parquet function, but provide a globstring of files to read in.
In [32]: import dask.dataframe as dd
In [33]: ddf = dd.read_parquet("data/timeseries/ts*.parquet", engine="pyarrow")
In [34]: ddf
Out[34]:
Dask DataFrame Structure:
id name x y
npartitions=12
int64 object float64 float64
... ... ... ...
... ... ... ... ...
... ... ... ...
... ... ... ...
Dask Name: read-parquet, 1 graph layer
Inspecting the ddf object, we see a few things:
There are familiar attributes like .columns and .dtypes
There are familiar methods like .groupby, .sum, etc.
There are new attributes like .npartitions and .divisions
The partitions and divisions are how Dask parallelizes computation. A Dask
DataFrame is made up of many pandas DataFrames. A single method call on a
Dask DataFrame ends up making many pandas method calls, and Dask knows how to
coordinate everything to get the result.
In [35]: ddf.columns
Out[35]: Index(['id', 'name', 'x', 'y'], dtype='object')
In [36]: ddf.dtypes
Out[36]:
id int64
name object
x float64
y float64
dtype: object
In [37]: ddf.npartitions
Out[37]: 12
One major difference: the dask.dataframe API is lazy. If you look at the
repr above, you’ll notice that the values aren’t actually printed out; just the
column names and dtypes. That’s because Dask hasn’t actually read the data yet.
Rather than executing immediately, operations build up a task graph.
In [38]: ddf
Out[38]:
Dask DataFrame Structure:
id name x y
npartitions=12
int64 object float64 float64
... ... ... ...
... ... ... ... ...
... ... ... ...
... ... ... ...
Dask Name: read-parquet, 1 graph layer
In [39]: ddf["name"]
Out[39]:
Dask Series Structure:
npartitions=12
object
...
...
...
...
Name: name, dtype: object
Dask Name: getitem, 2 graph layers
In [40]: ddf["name"].value_counts()
Out[40]:
Dask Series Structure:
npartitions=1
int64
...
Name: name, dtype: int64
Dask Name: value-counts-agg, 4 graph layers
Each of these calls is instant because the result isn't being computed yet.
We're just building up a list of computations to do when someone needs the
result. Dask knows that the return type of pandas.Series.value_counts
is a pandas.Series with a certain dtype and a certain name. So the Dask version
returns a Dask Series with the same dtype and the same name.
To get the actual result you can call .compute().
In [41]: %time ddf["name"].value_counts().compute()
CPU times: user 767 ms, sys: 44.4 ms, total: 811 ms
Wall time: 788 ms
Out[41]:
Charlie 1994875
Alice 1994645
Bob 1993692
Name: name, dtype: int64
At that point, you get back the same thing you'd get with pandas, in this case
a concrete pandas.Series with the count of each name.
Calling .compute causes the full task graph to be executed. This includes
reading the data, selecting the columns, and doing the value_counts. The
execution is done in parallel where possible, and Dask tries to keep the
overall memory footprint small. You can work with datasets that are much larger
than memory, as long as each partition (a regular pandas.DataFrame) fits in memory.
By default, dask.dataframe operations use a threadpool to do operations in
parallel. We can also connect to a cluster to distribute the work on many
machines. In this case we’ll connect to a local “cluster” made up of several
processes on this single machine.
>>> from dask.distributed import Client, LocalCluster
>>> cluster = LocalCluster()
>>> client = Client(cluster)
>>> client
<Client: 'tcp://127.0.0.1:53349' processes=4 threads=8, memory=17.18 GB>
Once this client is created, all of Dask’s computation will take place on
the cluster (which is just processes in this case).
Dask implements the most used parts of the pandas API. For example, we can do
a familiar groupby aggregation.
In [42]: %time ddf.groupby("name")[["x", "y"]].mean().compute().head()
CPU times: user 1.24 s, sys: 91.7 ms, total: 1.33 s
Wall time: 1.2 s
Out[42]:
x y
name
Alice -0.000224 -0.000194
Bob -0.000746 0.000349
Charlie 0.000604 0.000250
The grouping and aggregation are done out-of-core and in parallel.
When Dask knows the divisions of a dataset, certain optimizations are
possible. When reading parquet datasets written by dask, the divisions will be
known automatically. In this case, since we created the parquet files manually,
we need to supply the divisions manually.
In [43]: N = 12
In [44]: starts = [f"20{i:>02d}-01-01" for i in range(N)]
In [45]: ends = [f"20{i:>02d}-12-13" for i in range(N)]
In [46]: divisions = tuple(pd.to_datetime(starts)) + (pd.Timestamp(ends[-1]),)
In [47]: ddf.divisions = divisions
In [48]: ddf
Out[48]:
Dask DataFrame Structure:
id name x y
npartitions=12
2000-01-01 int64 object float64 float64
2001-01-01 ... ... ... ...
... ... ... ... ...
2011-01-01 ... ... ... ...
2011-12-13 ... ... ... ...
Dask Name: read-parquet, 1 graph layer
Now we can do things like fast random access with .loc.
In [49]: ddf.loc["2002-01-01 12:01":"2002-01-01 12:05"].compute()
Out[49]:
id name x y
timestamp
2002-01-01 12:01:00 971 Bob -0.659481 0.556184
2002-01-01 12:02:00 1015 Charlie 0.120131 -0.609522
2002-01-01 12:03:00 991 Bob -0.357816 0.811362
2002-01-01 12:04:00 984 Alice -0.608760 0.034187
2002-01-01 12:05:00 998 Charlie 0.551662 -0.461972
Dask knows to just look in the 3rd partition for selecting values in 2002. It
doesn’t need to look at any other data.
Many workflows involve a large amount of data and processing it in a way that
reduces the size to something that fits in memory. In this case, we’ll resample
to daily frequency and take the mean. Once we’ve taken the mean, we know the
results will fit in memory, so we can safely call compute without running
out of memory. At that point it’s just a regular pandas object.
In [50]: ddf[["x", "y"]].resample("1D").mean().cumsum().compute().plot()
Out[50]: <AxesSubplot: xlabel='timestamp'>
These Dask examples have all been done using multiple processes on a single
machine. Dask can be deployed on a cluster to scale up to even larger
datasets.
You can see more Dask examples at https://examples.dask.org.
| user_guide/scale.html |
pandas.IntervalDtype.subtype | `pandas.IntervalDtype.subtype`
The dtype of the Interval bounds. | property IntervalDtype.subtype[source]#
The dtype of the Interval bounds.
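An illustrative example (not from the original page):
>>> dtype = pd.IntervalDtype(subtype='int64')
>>> dtype.subtype
dtype('int64')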
| reference/api/pandas.IntervalDtype.subtype.html |
pandas.tseries.offsets.BYearBegin.is_on_offset | `pandas.tseries.offsets.BYearBegin.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
Timestamp to check intersections with frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
``` | BYearBegin.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dt : datetime.datetime. Timestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
| reference/api/pandas.tseries.offsets.BYearBegin.is_on_offset.html |
pandas.DataFrame.rolling | `pandas.DataFrame.rolling`
Provide rolling window calculations.
```
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
``` | DataFrame.rolling(window, min_periods=None, center=False, win_type=None, on=None, axis=0, closed=None, step=None, method='single')[source]#
Provide rolling window calculations.
Parameters
window : int, offset, or BaseIndexer subclass. Size of the moving window.
If an integer, the fixed number of observations used for each window.
If an offset, the time period of each window. Each window will be a variable size based on the observations included in the time period. This is only valid for datetimelike indexes.
To learn more about the offsets & frequency strings, please see this link.
If a BaseIndexer subclass, the window boundaries are based on the defined get_window_bounds method. Additional rolling keyword arguments, namely min_periods, center, closed and step, will be passed to get_window_bounds.
min_periods : int, default None. Minimum number of observations in window required to have a value; otherwise, result is np.nan.
For a window that is specified by an offset, min_periods will default to 1.
For a window that is specified by an integer, min_periods will default to the size of the window.
center : bool, default False. If False, set the window labels as the right edge of the window index. If True, set the window labels as the center of the window index.
win_type : str, default None. If None, all points are evenly weighted. If a string, it must be a valid scipy.signal window function. Certain Scipy window types require additional parameters to be passed in the aggregation function. The additional parameters must match the keywords specified in the Scipy window type method signature.
on : str, optional. For a DataFrame, a column label or Index level on which to calculate the rolling window, rather than the DataFrame's index. Provided integer column is ignored and excluded from result since an integer index is not used to calculate the rolling window.
axis : int or str, default 0. If 0 or 'index', roll across the rows. If 1 or 'columns', roll across the columns. For Series this parameter is unused and defaults to 0.
closed : str, default None. If 'right', the first point in the window is excluded from calculations. If 'left', the last point in the window is excluded from calculations. If 'both', no points in the window are excluded from calculations. If 'neither', the first and last points in the window are excluded from calculations.
Default None ('right').
Changed in version 1.2.0: The closed parameter with fixed windows is now supported.
step : int, default None
New in version 1.5.0.
Evaluate the window at every step result, equivalent to slicing as [::step]. window must be an integer. Using a step argument other than None or 1 will produce a result with a different shape than the input.
method : str {'single', 'table'}, default 'single'
New in version 1.3.0.
Execute the rolling operation per single column or row ('single') or over the entire object ('table').
This argument is only implemented when specifying engine='numba' in the method call.
Returns
Window subclass if a win_type is passed
Rolling subclass if win_type is not passed
See also
expanding : Provides expanding transformations.
ewm : Provides exponential weighted functions.
Notes
See Windowing Operations for further usage details
and examples.
Examples
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> df
B
0 0.0
1 1.0
2 2.0
3 NaN
4 4.0
window
Rolling sum with a window length of 2 observations.
>>> df.rolling(2).sum()
B
0 NaN
1 1.0
2 3.0
3 NaN
4 NaN
Rolling sum with a window span of 2 seconds.
>>> df_time = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
... index = [pd.Timestamp('20130101 09:00:00'),
... pd.Timestamp('20130101 09:00:02'),
... pd.Timestamp('20130101 09:00:03'),
... pd.Timestamp('20130101 09:00:05'),
... pd.Timestamp('20130101 09:00:06')])
>>> df_time
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
>>> df_time.rolling('2s').sum()
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
Rolling sum with forward looking windows with 2 observations.
>>> indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=2)
>>> df.rolling(window=indexer, min_periods=1).sum()
B
0 1.0
1 3.0
2 2.0
3 4.0
4 4.0
min_periods
Rolling sum with a window length of 2 observations, but only needs a minimum of 1
observation to calculate a value.
>>> df.rolling(2, min_periods=1).sum()
B
0 0.0
1 1.0
2 3.0
3 2.0
4 4.0
center
Rolling sum with the result assigned to the center of the window index.
>>> df.rolling(3, min_periods=1, center=True).sum()
B
0 1.0
1 3.0
2 3.0
3 6.0
4 4.0
>>> df.rolling(3, min_periods=1, center=False).sum()
B
0 0.0
1 1.0
2 3.0
3 3.0
4 6.0
step
Rolling sum with a window length of 2 observations, minimum of 1 observation to
calculate a value, and a step of 2.
>>> df.rolling(2, min_periods=1, step=2).sum()
B
0 0.0
2 3.0
4 4.0
win_type
Rolling sum with a window length of 2, using the Scipy 'gaussian'
window type. std is required in the aggregation function.
>>> df.rolling(2, win_type='gaussian').sum(std=3)
B
0 NaN
1 0.986207
2 2.958621
3 NaN
4 NaN
| reference/api/pandas.DataFrame.rolling.html |
pandas.tseries.offsets.CustomBusinessHour.normalize | pandas.tseries.offsets.CustomBusinessHour.normalize | CustomBusinessHour.normalize#
| reference/api/pandas.tseries.offsets.CustomBusinessHour.normalize.html |
pandas.DataFrame.plot.hexbin | `pandas.DataFrame.plot.hexbin`
Generate a hexagonal binning plot.
```
>>> n = 10000
>>> df = pd.DataFrame({'x': np.random.randn(n),
... 'y': np.random.randn(n)})
>>> ax = df.plot.hexbin(x='x', y='y', gridsize=20)
``` | DataFrame.plot.hexbin(x, y, C=None, reduce_C_function=None, gridsize=None, **kwargs)[source]#
Generate a hexagonal binning plot.
Generate a hexagonal binning plot of x versus y. If C is None
(the default), this is a histogram of the number of occurrences
of the observations at (x[i], y[i]).
If C is specified, specifies values at given coordinates
(x[i], y[i]). These values are accumulated for each hexagonal
bin and then reduced according to reduce_C_function,
having as default the NumPy’s mean function (numpy.mean()).
(If C is specified, it must also be a 1-D sequence
of the same length as x and y, or a column label.)
Parameters
x : int or str. The column label or position for x points.
y : int or str. The column label or position for y points.
C : int or str, optional. The column label or position for the value of (x, y) point.
reduce_C_function : callable, default np.mean. Function of one argument that reduces all the values in a bin to a single number (e.g. np.mean, np.max, np.sum, np.std).
gridsize : int or tuple of (int, int), default 100. The number of hexagons in the x-direction. The corresponding number of hexagons in the y-direction is chosen in a way that the hexagons are approximately regular. Alternatively, gridsize can be a tuple with two elements specifying the number of hexagons in the x-direction and the y-direction.
**kwargs : Additional keyword arguments are documented in DataFrame.plot().
Returns
matplotlib.AxesSubplot : The matplotlib Axes on which the hexbin is plotted.
See also
DataFrame.plot : Make plots of a DataFrame.
matplotlib.pyplot.hexbin : Hexagonal binning plot using matplotlib, the matplotlib function that is used under the hood.
Examples
The following examples are generated with random data from
a normal distribution.
>>> n = 10000
>>> df = pd.DataFrame({'x': np.random.randn(n),
... 'y': np.random.randn(n)})
>>> ax = df.plot.hexbin(x='x', y='y', gridsize=20)
The next example uses C and np.sum as reduce_C_function.
Note that ‘observations’ values ranges from 1 to 5 but the result
plot shows values up to more than 25. This is because of the
reduce_C_function.
>>> n = 500
>>> df = pd.DataFrame({
... 'coord_x': np.random.uniform(-3, 3, size=n),
... 'coord_y': np.random.uniform(30, 50, size=n),
... 'observations': np.random.randint(1,5, size=n)
... })
>>> ax = df.plot.hexbin(x='coord_x',
... y='coord_y',
... C='observations',
... reduce_C_function=np.sum,
... gridsize=10,
... cmap="viridis")
| reference/api/pandas.DataFrame.plot.hexbin.html |
pandas.period_range | `pandas.period_range`
Return a fixed frequency PeriodIndex.
```
>>> pd.period_range(start='2017-01-01', end='2018-01-01', freq='M')
PeriodIndex(['2017-01', '2017-02', '2017-03', '2017-04', '2017-05', '2017-06',
'2017-07', '2017-08', '2017-09', '2017-10', '2017-11', '2017-12',
'2018-01'],
dtype='period[M]')
``` | pandas.period_range(start=None, end=None, periods=None, freq=None, name=None)[source]#
Return a fixed frequency PeriodIndex.
The day (calendar) is the default frequency.
Parameters
start : str or period-like, default None. Left bound for generating periods.
end : str or period-like, default None. Right bound for generating periods.
periods : int, default None. Number of periods to generate.
freq : str or DateOffset, optional. Frequency alias. By default the freq is taken from start or end if those are Period objects. Otherwise, the default is "D" for daily frequency.
name : str, default None. Name of the resulting PeriodIndex.
Returns
PeriodIndex
Notes
Of the three parameters: start, end, and periods, exactly two
must be specified.
To learn more about the frequency strings, please see this link.
Examples
>>> pd.period_range(start='2017-01-01', end='2018-01-01', freq='M')
PeriodIndex(['2017-01', '2017-02', '2017-03', '2017-04', '2017-05', '2017-06',
'2017-07', '2017-08', '2017-09', '2017-10', '2017-11', '2017-12',
'2018-01'],
dtype='period[M]')
If start or end are Period objects, they will be used as anchor
endpoints for a PeriodIndex with frequency matching that of the
period_range constructor.
>>> pd.period_range(start=pd.Period('2017Q1', freq='Q'),
... end=pd.Period('2017Q2', freq='Q'), freq='M')
PeriodIndex(['2017-03', '2017-04', '2017-05', '2017-06'],
dtype='period[M]')
| reference/api/pandas.period_range.html |
pandas.tseries.offsets.BusinessMonthBegin.is_month_start | `pandas.tseries.offsets.BusinessMonthBegin.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
``` | BusinessMonthBegin.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
| reference/api/pandas.tseries.offsets.BusinessMonthBegin.is_month_start.html |
pandas.errors.ParserError | `pandas.errors.ParserError`
Exception that is raised by an error encountered in parsing file contents. | exception pandas.errors.ParserError[source]#
Exception that is raised by an error encountered in parsing file contents.
This is a generic error raised for errors encountered when functions like
read_csv or read_html are parsing contents of a file.
See also
read_csv : Read CSV (comma-separated) file into a DataFrame.
read_html : Read HTML table into a DataFrame.
| reference/api/pandas.errors.ParserError.html |
pandas.tseries.offsets.Easter.is_on_offset | `pandas.tseries.offsets.Easter.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
``` | Easter.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dt : datetime.datetime. Timestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
| reference/api/pandas.tseries.offsets.Easter.is_on_offset.html |
pandas.Series.groupby | `pandas.Series.groupby`
Group Series using a mapper or by a Series of columns.
A groupby operation involves some combination of splitting the
object, applying a function, and combining the results. This can be
used to group large amounts of data and compute operations on these
groups.
```
>>> ser = pd.Series([390., 350., 30., 20.],
... index=['Falcon', 'Falcon', 'Parrot', 'Parrot'], name="Max Speed")
>>> ser
Falcon 390.0
Falcon 350.0
Parrot 30.0
Parrot 20.0
Name: Max Speed, dtype: float64
>>> ser.groupby(["a", "b", "a", "b"]).mean()
a 210.0
b 185.0
Name: Max Speed, dtype: float64
>>> ser.groupby(level=0).mean()
Falcon 370.0
Parrot 25.0
Name: Max Speed, dtype: float64
>>> ser.groupby(ser > 100).mean()
Max Speed
False 25.0
True 370.0
Name: Max Speed, dtype: float64
``` | Series.groupby(by=None, axis=0, level=None, as_index=True, sort=True, group_keys=_NoDefault.no_default, squeeze=_NoDefault.no_default, observed=False, dropna=True)[source]#
Group Series using a mapper or by a Series of columns.
A groupby operation involves some combination of splitting the
object, applying a function, and combining the results. This can be
used to group large amounts of data and compute operations on these
groups.
Parameters
by : mapping, function, label, or list of labels. Used to determine the groups for the groupby.
If by is a function, it's called on each value of the object's index. If a dict or Series is passed, the Series or dict VALUES will be used to determine the groups (the Series' values are first aligned; see .align() method). If a list or ndarray of length equal to the selected axis is passed (see the groupby user guide), the values are used as-is to determine the groups. A label or list of labels may be passed to group by the columns in self. Notice that a tuple is interpreted as a (single) key.
axis : {0 or 'index', 1 or 'columns'}, default 0. Split along rows (0) or columns (1). For Series this parameter is unused and defaults to 0.
level : int, level name, or sequence of such, default None. If the axis is a MultiIndex (hierarchical), group by a particular level or levels. Do not specify both by and level.
as_index : bool, default True. For aggregated output, return object with group labels as the index. Only relevant for DataFrame input. as_index=False is effectively "SQL-style" grouped output.
sort : bool, default True. Sort group keys. Get better performance by turning this off. Note this does not influence the order of observations within each group. Groupby preserves the order of rows within each group.
group_keys : bool, optional. When calling apply and the by argument produces a like-indexed (i.e. a transform) result, add group keys to index to identify pieces. By default group keys are not included when the result's index (and column) labels match the inputs, and are included otherwise. This argument has no effect if the result produced is not like-indexed with respect to the input.
Changed in version 1.5.0: Warns that group_keys will no longer be ignored when the result from apply is a like-indexed Series or DataFrame. Specify group_keys explicitly to include the group keys or not.
squeeze : bool, default False. Reduce the dimensionality of the return type if possible, otherwise return a consistent type.
Deprecated since version 1.1.0.
observed : bool, default False. This only applies if any of the groupers are Categoricals. If True: only show observed values for categorical groupers. If False: show all values for categorical groupers.
dropna : bool, default True. If True, and if group keys contain NA values, NA values together with row/column will be dropped. If False, NA values will also be treated as the key in groups.
New in version 1.1.0.
Returns
SeriesGroupBy : Returns a groupby object that contains information about the groups.
See also
resample : Convenience method for frequency conversion and resampling of time series.
Notes
See the user guide for more
detailed usage and examples, including splitting an object into groups,
iterating through groups, selecting a group, aggregation, and more.
Examples
>>> ser = pd.Series([390., 350., 30., 20.],
... index=['Falcon', 'Falcon', 'Parrot', 'Parrot'], name="Max Speed")
>>> ser
Falcon 390.0
Falcon 350.0
Parrot 30.0
Parrot 20.0
Name: Max Speed, dtype: float64
>>> ser.groupby(["a", "b", "a", "b"]).mean()
a 210.0
b 185.0
Name: Max Speed, dtype: float64
>>> ser.groupby(level=0).mean()
Falcon 370.0
Parrot 25.0
Name: Max Speed, dtype: float64
>>> ser.groupby(ser > 100).mean()
Max Speed
False 25.0
True 370.0
Name: Max Speed, dtype: float64
Grouping by Indexes
We can groupby different levels of a hierarchical index
using the level parameter:
>>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'],
... ['Captive', 'Wild', 'Captive', 'Wild']]
>>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type'))
>>> ser = pd.Series([390., 350., 30., 20.], index=index, name="Max Speed")
>>> ser
Animal Type
Falcon Captive 390.0
Wild 350.0
Parrot Captive 30.0
Wild 20.0
Name: Max Speed, dtype: float64
>>> ser.groupby(level=0).mean()
Animal
Falcon 370.0
Parrot 25.0
Name: Max Speed, dtype: float64
>>> ser.groupby(level="Type").mean()
Type
Captive 210.0
Wild 185.0
Name: Max Speed, dtype: float64
We can also choose to include NA in group keys or not by defining
dropna parameter, the default setting is True.
>>> ser = pd.Series([1, 2, 3, 3], index=["a", 'a', 'b', np.nan])
>>> ser.groupby(level=0).sum()
a 3
b 3
dtype: int64
>>> ser.groupby(level=0, dropna=False).sum()
a 3
b 3
NaN 3
dtype: int64
>>> arrays = ['Falcon', 'Falcon', 'Parrot', 'Parrot']
>>> ser = pd.Series([390., 350., 30., 20.], index=arrays, name="Max Speed")
>>> ser.groupby(["a", "b", "a", np.nan]).mean()
a 210.0
b 350.0
Name: Max Speed, dtype: float64
>>> ser.groupby(["a", "b", "a", np.nan], dropna=False).mean()
a 210.0
b 350.0
NaN 20.0
Name: Max Speed, dtype: float64
| reference/api/pandas.Series.groupby.html |
pandas.Series.diff | `pandas.Series.diff`
First discrete difference of element.
```
>>> s = pd.Series([1, 1, 2, 3, 5, 8])
>>> s.diff()
0 NaN
1 0.0
2 1.0
3 1.0
4 2.0
5 3.0
dtype: float64
``` | Series.diff(periods=1)[source]#
First discrete difference of element.
Calculates the difference of a Series element compared with another
element in the Series (default is element in previous row).
Parameters
periods : int, default 1. Periods to shift for calculating difference, accepts negative values.
Returns
Series : First differences of the Series.
See also
Series.pct_change : Percent change over given number of periods.
Series.shift : Shift index by desired number of periods with an optional time freq.
DataFrame.diff : First discrete difference of object.
Notes
For boolean dtypes, this uses operator.xor() rather than
operator.sub().
The result is calculated according to current dtype in Series,
however dtype of the result is always float64.
Examples
Difference with previous row
>>> s = pd.Series([1, 1, 2, 3, 5, 8])
>>> s.diff()
0 NaN
1 0.0
2 1.0
3 1.0
4 2.0
5 3.0
dtype: float64
Difference with 3rd previous row
>>> s.diff(periods=3)
0 NaN
1 NaN
2 NaN
3 2.0
4 4.0
5 6.0
dtype: float64
Difference with following row
>>> s.diff(periods=-1)
0 0.0
1 -1.0
2 -1.0
3 -2.0
4 -3.0
5 NaN
dtype: float64
Overflow in input dtype
>>> s = pd.Series([1, 0], dtype=np.uint8)
>>> s.diff()
0 NaN
1 255.0
dtype: float64
| reference/api/pandas.Series.diff.html |
pandas ecosystem | pandas ecosystem | Increasingly, packages are being built on top of pandas to address specific needs
in data preparation, analysis and visualization.
This is encouraging because it means pandas is not only helping users to handle
their data tasks but also that it provides a better starting point for developers to
build powerful and more focused data tools.
The creation of libraries that complement pandas' functionality also allows pandas
development to remain focused around its original requirements.
This is an inexhaustive list of projects that build on pandas in order to provide
tools in the PyData space. For a list of projects that depend on pandas,
see the
Github network dependents for pandas
or search pypi for pandas.
We'd like to make it easier for users to find these projects. If you know of other
substantial projects that you feel should be on this list, please let us know.
Data cleaning and validation#
Pyjanitor#
Pyjanitor provides a clean API for cleaning data, using method chaining.
Pandera#
Pandera provides a flexible and expressive API for performing data validation on dataframes
to make data processing pipelines more readable and robust.
Dataframes contain information that pandera explicitly validates at runtime. This is useful in
production-critical data pipelines or reproducible research settings.
pandas-path#
Since Python 3.4, pathlib has been
included in the Python standard library. Path objects provide a simple
and delightful way to interact with the file system. The pandas-path package enables the
Path API for pandas through a custom accessor .path. Getting just the filenames from
a series of full file paths is as simple as my_files.path.name. Other convenient operations like
joining paths, replacing file extensions, and checking if files exist are also available.
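A hedged sketch based on the accessor described above (the import style and exact output depend on the third-party pandas-path package):
>>> import pandas as pd
>>> from pandas_path import path  # noqa: F401 -- registers the .path accessor
>>> files = pd.Series(['data/a.csv', 'data/b.csv'])
>>> files.path.name
0    a.csv
1    b.csv
dtype: object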
Statistics and machine learning#
pandas-tfrecords#
Easy saving of pandas DataFrames to the TensorFlow tfrecords format, and reading tfrecords back into pandas.
Statsmodels#
Statsmodels is the prominent Python “statistics and econometrics library” and it has
a long-standing special relationship with pandas. Statsmodels provides powerful statistics,
econometrics, analysis and modeling functionality that is out of pandas’ scope.
Statsmodels leverages pandas objects as the underlying data container for computation.
sklearn-pandas#
Use pandas DataFrames in your scikit-learn
ML pipeline.
Featuretools#
Featuretools is a Python library for automated feature engineering built on top of pandas. It excels at transforming temporal and relational datasets into feature matrices for machine learning using reusable feature engineering “primitives”. Users can contribute their own primitives in Python and share them with the rest of the community.
Compose#
Compose is a machine learning tool for labeling data and prediction engineering. It allows you to structure the labeling process by parameterizing prediction problems and transforming time-driven relational data into target values with cutoff times that can be used for supervised learning.
STUMPY#
STUMPY is a powerful and scalable Python library for modern time series analysis.
At its core, STUMPY efficiently computes something called a
matrix profile,
which can be used for a wide variety of time series data mining tasks.
Visualization#
Pandas has its own Styler class for table visualization, and while
pandas also has built-in support for data visualization through charts with matplotlib,
there are a number of other pandas-compatible libraries.
Altair#
Altair is a declarative statistical visualization library for Python.
With Altair, you can spend more time understanding your data and its
meaning. Altair’s API is simple, friendly and consistent and built on
top of the powerful Vega-Lite JSON specification. This elegant
simplicity produces beautiful and effective visualizations with a
minimal amount of code. Altair works with pandas DataFrames.
Bokeh#
Bokeh is a Python interactive visualization library for large datasets that natively uses
the latest web technologies. Its goal is to provide elegant, concise construction of novel
graphics in the style of Protovis/D3, while delivering high-performance interactivity over
large data to thin clients.
Pandas-Bokeh provides a high level API
for Bokeh that can be loaded as a native pandas plotting backend via
pd.set_option("plotting.backend", "pandas_bokeh")
It is very similar to the matplotlib plotting backend, but provides interactive
web-based charts and maps.
Seaborn#
Seaborn is a Python visualization library based on
matplotlib. It provides a high-level, dataset-oriented
interface for creating attractive statistical graphics. The plotting functions
in seaborn understand pandas objects and leverage pandas grouping operations
internally to support concise specification of complex visualizations. Seaborn
also goes beyond matplotlib and pandas with the option to perform statistical
estimation while plotting, aggregating across observations and visualizing the
fit of statistical models to emphasize patterns in a dataset.
plotnine#
Hadley Wickham’s ggplot2 is a foundational exploratory visualization package for the R language.
Based on “The Grammar of Graphics” it
provides a powerful, declarative and extremely general way to generate bespoke plots of any kind of data.
Various implementations to other languages are available.
A good implementation for Python users is has2k1/plotnine.
IPython vega#
IPython Vega leverages Vega to create plots within Jupyter Notebook.
Plotly#
Plotly’s Python API enables interactive figures and web shareability. Maps, 2D, 3D, and live-streaming graphs are rendered with WebGL and D3.js. The library supports plotting directly from a pandas DataFrame and cloud-based collaboration. Users of matplotlib, ggplot for Python, and Seaborn can convert figures into interactive web-based plots. Plots can be drawn in IPython Notebooks, edited with R or MATLAB, modified in a GUI, or embedded in apps and dashboards. Plotly is free for unlimited sharing, and has offline or on-premise accounts for private use.
Lux#
Lux is a Python library that facilitates fast and easy experimentation with data by automating the visual data exploration process. To use Lux, simply add an extra import alongside pandas:
import lux
import pandas as pd
df = pd.read_csv("data.csv")
df # discover interesting insights!
By printing out a dataframe, Lux automatically recommends a set of visualizations that highlights interesting trends and patterns in the dataframe. Users can leverage any existing pandas commands without modifying their code, while being able to visualize their pandas data structures (e.g., DataFrame, Series, Index) at the same time. Lux also offers a powerful, intuitive language that allows users to create Altair, matplotlib, or Vega-Lite visualizations without having to think at the level of code.
Qtpandas#
Spun off from the main pandas library, the qtpandas
library enables DataFrame visualization and manipulation in PyQt4 and PySide applications.
D-Tale#
D-Tale is a lightweight web client for visualizing pandas data structures. It
provides a rich spreadsheet-style grid which acts as a wrapper for a lot of
pandas functionality (query, sort, describe, corr…) so users can quickly
manipulate their data. There is also an interactive chart-builder using Plotly
Dash allowing users to build nice portable visualizations. D-Tale can be
invoked with the following command
import dtale
dtale.show(df)
D-Tale integrates seamlessly with Jupyter notebooks, Python terminals, Kaggle
& Google Colab. Here are some demos of the grid.
hvplot#
hvPlot is a high-level plotting API for the PyData ecosystem built on HoloViews.
It can be loaded as a native pandas plotting backend via
pd.set_option("plotting.backend", "hvplot")
IDE#
IPython#
IPython is an interactive command shell and distributed computing
environment. IPython tab completion works with pandas methods and also
attributes like DataFrame columns.
Jupyter Notebook / Jupyter Lab#
Jupyter Notebook is a web application for creating Jupyter notebooks.
A Jupyter notebook is a JSON document containing an ordered list
of input/output cells which can contain code, text, mathematics, plots
and rich media.
Jupyter notebooks can be converted to a number of open standard output formats
(HTML, HTML presentation slides, LaTeX, PDF, ReStructuredText, Markdown,
Python) through ‘Download As’ in the web interface and jupyter nbconvert
in a shell.
pandas DataFrames implement _repr_html_ and _repr_latex_ methods
which are utilized by Jupyter Notebook for displaying
(abbreviated) HTML or LaTeX tables. LaTeX output is properly escaped.
(Note: HTML tables may or may not be
compatible with non-HTML Jupyter output formats.)
See Options and Settings and
Available Options
for pandas display settings.
Quantopian/qgrid#
qgrid is “an interactive grid for sorting and filtering
DataFrames in IPython Notebook” built with SlickGrid.
Spyder#
Spyder is a cross-platform PyQt-based IDE combining the editing, analysis,
debugging and profiling functionality of a software development tool with the
data exploration, interactive execution, deep inspection and rich visualization
capabilities of a scientific environment like MATLAB or Rstudio.
Its Variable Explorer
allows users to view, manipulate and edit pandas Index, Series,
and DataFrame objects like a “spreadsheet”, including copying and modifying
values, sorting, displaying a “heatmap”, converting data types and more.
pandas objects can also be renamed, duplicated, new columns added,
copied/pasted to/from the clipboard (as TSV), and saved/loaded to/from a file.
Spyder can also import data from a variety of plain text and binary files
or the clipboard into a new pandas DataFrame via a sophisticated import wizard.
Most pandas classes, methods and data attributes can be autocompleted in
Spyder’s Editor and
IPython Console,
and Spyder’s Help pane can retrieve
and render Numpydoc documentation on pandas objects in rich text with Sphinx
both automatically and on-demand.
API#
pandas-datareader#
pandas-datareader is a remote data access library for pandas (PyPI:pandas-datareader).
It is based on functionality that was located in pandas.io.data and pandas.io.wb but was
split off in v0.19.
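A minimal usage sketch, assuming pandas-datareader is installed (the series code and dates are illustrative):
import datetime
import pandas_datareader.data as web
start = datetime.datetime(2020, 1, 1)
end = datetime.datetime(2020, 12, 31)
# Fetch the 10-Year Treasury constant maturity rate ("GS10") from FRED
gs10 = web.DataReader("GS10", "fred", start, end)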
See more in the pandas-datareader docs:
The following data feeds are available:
Google Finance
Tiingo
Morningstar
IEX
Robinhood
Enigma
Quandl
FRED
Fama/French
World Bank
OECD
Eurostat
TSP Fund Data
Nasdaq Trader Symbol Definitions
Stooq Index Data
MOEX Data
Quandl/Python#
Quandl API for Python wraps the Quandl REST API to return
pandas DataFrames with timeseries indexes.
Pydatastream#
PyDatastream is a Python interface to the
Refinitiv Datastream (DWS)
REST API to return indexed pandas DataFrames with financial data.
This package requires valid credentials for this API (non free).
pandaSDMX#
pandaSDMX is a library to retrieve and acquire statistical data
and metadata disseminated in
SDMX 2.1, an ISO-standard
widely used by institutions such as statistics offices, central banks,
and international organisations. pandaSDMX can expose datasets and related
structural metadata including data flows, code-lists,
and data structure definitions as pandas Series
or MultiIndexed DataFrames.
fredapi#
fredapi is a Python interface to the Federal Reserve Economic Data (FRED)
provided by the Federal Reserve Bank of St. Louis. It works with both the FRED database and ALFRED database that
contains point-in-time data (i.e. historic data revisions). fredapi provides a wrapper in Python to the FRED
HTTP API, and also provides several convenient methods for parsing and analyzing point-in-time data from ALFRED.
fredapi makes use of pandas and returns data in a Series or DataFrame. This module requires a FRED API key that
you can obtain for free on the FRED website.
dataframe_sql#
dataframe_sql is a Python package that translates SQL syntax directly into
operations on pandas DataFrames. This is useful when migrating from a database to
using pandas or for users more comfortable with SQL looking for a way to interface
with pandas.
Domain specific#
Geopandas#
Geopandas extends pandas data objects to include geographic information which support
geometric operations. If your work entails maps and geographical coordinates, and
you love pandas, you should take a close look at Geopandas.
staircase#
staircase is a data analysis package, built upon pandas and numpy, for modelling and
manipulation of mathematical step functions. It provides a rich variety of arithmetic
operations, relational operations, logical operations, statistical operations and
aggregations for step functions defined over real numbers, datetime and timedelta domains.
xarray#
xarray brings the labeled data power of pandas to the physical sciences by
providing N-dimensional variants of the core pandas data structures. It aims to
provide a pandas-like and pandas-compatible toolkit for analytics on
multi-dimensional arrays, rather than the tabular data for which pandas excels.
IO#
BCPandas#
BCPandas provides high performance writes from pandas to Microsoft SQL Server,
far exceeding the performance of the native df.to_sql method. Internally, it uses
Microsoft’s BCP utility, but the complexity is fully abstracted away from the end user.
Rigorously tested, it is a complete replacement for df.to_sql.
Deltalake#
Deltalake python package lets you access tables stored in
Delta Lake natively in Python without the need to use Spark or
JVM. It provides the delta_table.to_pyarrow_table().to_pandas() method to convert
any Delta table into a pandas DataFrame.
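A minimal sketch, assuming the deltalake package is installed (the table path is hypothetical):
from deltalake import DeltaTable
dt = DeltaTable("/path/to/delta-table")  # hypothetical local Delta table
df = dt.to_pyarrow_table().to_pandas()   # convert the Delta table to a pandas DataFrame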
Out-of-core#
Blaze#
Blaze provides a standard API for doing computations with various
in-memory and on-disk backends: NumPy, pandas, SQLAlchemy, MongoDB, PyTables,
PySpark.
Cylon#
Cylon is a fast, scalable, distributed memory parallel runtime with a pandas-like
Python DataFrame API. “Core Cylon” is implemented with C++ using the Apache
Arrow format to represent the data in-memory. The Cylon DataFrame API implements
most of the core operators of pandas such as merge, filter, join, concat,
group-by, drop_duplicates, etc. These operators are designed to work across
thousands of cores to scale applications. It can interoperate with pandas
DataFrame by reading data from pandas or converting data to pandas so users
can selectively scale parts of their pandas DataFrame applications.
from pycylon import read_csv, DataFrame, CylonEnv
from pycylon.net import MPIConfig
# Initialize Cylon distributed environment
config: MPIConfig = MPIConfig()
env: CylonEnv = CylonEnv(config=config, distributed=True)
df1: DataFrame = read_csv('/tmp/csv1.csv')
df2: DataFrame = read_csv('/tmp/csv2.csv')
# Using 1000s of cores across the cluster to compute the join
df3: DataFrame = df1.join(other=df2, on=[0], algorithm="hash", env=env)
print(df3)
Dask#
Dask is a flexible parallel computing library for analytics. Dask
provides a familiar DataFrame interface for out-of-core, parallel and distributed computing.
Dask-ML#
Dask-ML enables parallel and distributed machine learning using Dask alongside existing machine learning libraries like Scikit-Learn, XGBoost, and TensorFlow.
Ibis#
Ibis offers a standard way to write analytics code, that can be run in multiple engines. It helps in bridging the gap between local Python environments (like pandas) and remote storage and execution systems like Hadoop components (like HDFS, Impala, Hive, Spark) and SQL databases (Postgres, etc.).
Koalas#
Koalas provides a familiar pandas DataFrame interface on top of Apache Spark. It enables users to leverage multi-cores on one machine or a cluster of machines to speed up or scale their DataFrame code.
Modin#
The modin.pandas DataFrame is a parallel and distributed drop-in replacement
for pandas. This means that you can use Modin with existing pandas code or write
new code with the existing pandas API. Modin can leverage your entire machine or
cluster to speed up and scale your pandas workloads, including traditionally
time-consuming tasks like ingesting data (read_csv, read_excel,
read_parquet, etc.).
# import pandas as pd
import modin.pandas as pd
df = pd.read_csv("big.csv") # use all your cores!
Odo#
Odo provides a uniform API for moving data between different formats. It uses
pandas’ own read_csv for CSV IO and leverages many existing packages such as
PyTables, h5py, and pymongo to move data between non-pandas formats. Its
graph-based approach is also extensible by end users for custom formats that
may be too specific for the core of odo.
Pandarallel#
Pandarallel provides a simple way to parallelize your pandas operations on all your CPUs by changing only one line of code.
It also displays progress bars.
from pandarallel import pandarallel
pandarallel.initialize(progress_bar=True)
# df.apply(func)
df.parallel_apply(func)
Vaex#
Vaex is a Python library for Out-of-Core DataFrames (similar to pandas), to visualize and explore big tabular datasets. It can calculate statistics such as mean, sum, count, standard deviation etc., on an N-dimensional grid up to a billion (10⁹) objects/rows per second. Visualization is done using histograms, density plots and 3d volume rendering, allowing interactive exploration of big data. Vaex uses memory mapping, a zero memory copy policy and lazy computations for best performance (no memory wasted).
vaex.from_pandas
vaex.to_pandas_df
Extension data types#
pandas provides an interface for defining
extension types to extend NumPy’s type
system. The following libraries implement that interface to provide types not
found in NumPy or pandas, which work well with pandas’ data containers.
Cyberpandas#
Cyberpandas provides an extension type for storing arrays of IP Addresses. These
arrays can be stored inside pandas’ Series and DataFrame.
Pandas-Genomics#
Pandas-Genomics provides extension types, extension arrays, and extension accessors for working with genomics data
Pint-Pandas#
Pint-Pandas provides an extension type for
storing numeric arrays with units. These arrays can be stored inside pandas’
Series and DataFrame. Operations between Series and DataFrame columns which
use pint’s extension array are then units aware.
Text Extensions for Pandas#
Text Extensions for Pandas
provides extension types to cover common data structures for representing natural language
data, plus library integrations that convert the outputs of popular natural language
processing libraries into Pandas DataFrames.
Accessors#
A directory of projects providing
extension accessors. This is for users to
discover new accessors and for library authors to coordinate on the namespace.
Library
Accessor
Classes
Description
cyberpandas
ip
Series
Provides common operations for working with IP addresses.
pdvega
vgplot
Series, DataFrame
Provides plotting functions from the Altair library.
pandas-genomics
genomics
Series, DataFrame
Provides common operations for quality control and analysis of genomics data.
pandas_path
path
Index, Series
Provides pathlib.Path functions for Series.
pint-pandas
pint
Series, DataFrame
Provides units support for numeric Series and DataFrames.
composeml
slice
DataFrame
Provides a generator for enhanced data slicing.
datatest
validate
Series, DataFrame, Index
Provides validation, differences, and acceptance managers.
woodwork
ww
Series, DataFrame
Provides physical, logical, and semantic data typing information for Series and DataFrames.
staircase
sc
Series
Provides methods for querying, aggregating and plotting step functions
Development tools#
pandas-stubs#
While the pandas repository is partially typed, the package itself doesn’t expose this information for external use.
Install pandas-stubs to enable basic type coverage of pandas API.
Learn more by reading through GH14468, GH26766, GH28142.
See installation and usage instructions on the github page.
| ecosystem.html |
pandas.DataFrame.slice_shift | `pandas.DataFrame.slice_shift`
Equivalent to shift without copying data.
Deprecated since version 1.2.0: slice_shift is deprecated,
use DataFrame/Series.shift instead. | DataFrame.slice_shift(periods=1, axis=0)[source]#
Equivalent to shift without copying data.
Deprecated since version 1.2.0: slice_shift is deprecated,
use DataFrame/Series.shift instead.
The shifted data will not include the dropped periods and the
shifted axis will be smaller than the original.
Parameters
periodsintNumber of periods to move, can be positive or negative.
axis{0 or ‘index’, 1 or ‘columns’, None}, default 0For Series this parameter is unused and defaults to 0.
Returns
shiftedsame type as caller
Notes
While the slice_shift is faster than shift, you may pay for it
later during alignment.
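Since slice_shift is deprecated, a minimal sketch of the recommended replacement using DataFrame.shift (the toy frame below is illustrative):
>>> df = pd.DataFrame({"a": [1, 2, 3, 4]})
>>> df.shift(periods=1)
     a
0  NaN
1  1.0
2  2.0
3  3.0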
| reference/api/pandas.DataFrame.slice_shift.html |
pandas.core.groupby.DataFrameGroupBy.idxmin | `pandas.core.groupby.DataFrameGroupBy.idxmin`
Return index of first occurrence of minimum over requested axis.
NA/null values are excluded.
```
>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
... 'co2_emissions': [37.2, 19.66, 1712]},
... index=['Pork', 'Wheat Products', 'Beef'])
``` | DataFrameGroupBy.idxmin(axis=0, skipna=True, numeric_only=_NoDefault.no_default)[source]#
Return index of first occurrence of minimum over requested axis.
NA/null values are excluded.
Parameters
axis{0 or ‘index’, 1 or ‘columns’}, default 0The axis to use. 0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise.
skipnabool, default TrueExclude NA/null values. If an entire row/column is NA, the result
will be NA.
numeric_onlybool, default True for axis=0, False for axis=1Include only float, int or boolean data.
New in version 1.5.0.
Returns
SeriesIndexes of minima along the specified axis.
Raises
ValueError
If the row/column is empty
See also
Series.idxminReturn index of the minimum element.
Notes
This method is the DataFrame version of ndarray.argmin.
Examples
Consider a dataset containing food consumption in Argentina.
>>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
... 'co2_emissions': [37.2, 19.66, 1712]},
... index=['Pork', 'Wheat Products', 'Beef'])
>>> df
consumption co2_emissions
Pork 10.51 37.20
Wheat Products 103.11 19.66
Beef 55.48 1712.00
By default, it returns the index for the minimum value in each column.
>>> df.idxmin()
consumption Pork
co2_emissions Wheat Products
dtype: object
To return the index for the minimum value in each row, use axis="columns".
>>> df.idxmin(axis="columns")
Pork consumption
Wheat Products co2_emissions
Beef consumption
dtype: object
| reference/api/pandas.core.groupby.DataFrameGroupBy.idxmin.html |
pandas.tseries.offsets.CustomBusinessHour.is_month_start | `pandas.tseries.offsets.CustomBusinessHour.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
``` | CustomBusinessHour.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
| reference/api/pandas.tseries.offsets.CustomBusinessHour.is_month_start.html |
pandas.tseries.offsets.Micro.is_month_end | `pandas.tseries.offsets.Micro.is_month_end`
Return boolean whether a timestamp occurs on the month end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | Micro.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.Micro.is_month_end.html |
pandas.Series.to_xarray | `pandas.Series.to_xarray`
Return an xarray object from the pandas object.
Data in the pandas structure converted to Dataset if the object is
a DataFrame, or a DataArray if the object is a Series.
```
>>> df = pd.DataFrame([('falcon', 'bird', 389.0, 2),
... ('parrot', 'bird', 24.0, 2),
... ('lion', 'mammal', 80.5, 4),
... ('monkey', 'mammal', np.nan, 4)],
... columns=['name', 'class', 'max_speed',
... 'num_legs'])
>>> df
name class max_speed num_legs
0 falcon bird 389.0 2
1 parrot bird 24.0 2
2 lion mammal 80.5 4
3 monkey mammal NaN 4
``` | Series.to_xarray()[source]#
Return an xarray object from the pandas object.
Returns
xarray.DataArray or xarray.DatasetData in the pandas structure converted to Dataset if the object is
a DataFrame, or a DataArray if the object is a Series.
See also
DataFrame.to_hdfWrite DataFrame to an HDF5 file.
DataFrame.to_parquetWrite a DataFrame to the binary parquet format.
Notes
See the xarray docs
Examples
>>> df = pd.DataFrame([('falcon', 'bird', 389.0, 2),
... ('parrot', 'bird', 24.0, 2),
... ('lion', 'mammal', 80.5, 4),
... ('monkey', 'mammal', np.nan, 4)],
... columns=['name', 'class', 'max_speed',
... 'num_legs'])
>>> df
name class max_speed num_legs
0 falcon bird 389.0 2
1 parrot bird 24.0 2
2 lion mammal 80.5 4
3 monkey mammal NaN 4
>>> df.to_xarray()
<xarray.Dataset>
Dimensions: (index: 4)
Coordinates:
* index (index) int64 0 1 2 3
Data variables:
name (index) object 'falcon' 'parrot' 'lion' 'monkey'
class (index) object 'bird' 'bird' 'mammal' 'mammal'
max_speed (index) float64 389.0 24.0 80.5 nan
num_legs (index) int64 2 2 4 4
>>> df['max_speed'].to_xarray()
<xarray.DataArray 'max_speed' (index: 4)>
array([389. , 24. , 80.5, nan])
Coordinates:
* index (index) int64 0 1 2 3
>>> dates = pd.to_datetime(['2018-01-01', '2018-01-01',
... '2018-01-02', '2018-01-02'])
>>> df_multiindex = pd.DataFrame({'date': dates,
... 'animal': ['falcon', 'parrot',
... 'falcon', 'parrot'],
... 'speed': [350, 18, 361, 15]})
>>> df_multiindex = df_multiindex.set_index(['date', 'animal'])
>>> df_multiindex
speed
date animal
2018-01-01 falcon 350
parrot 18
2018-01-02 falcon 361
parrot 15
>>> df_multiindex.to_xarray()
<xarray.Dataset>
Dimensions: (date: 2, animal: 2)
Coordinates:
* date (date) datetime64[ns] 2018-01-01 2018-01-02
* animal (animal) object 'falcon' 'parrot'
Data variables:
speed (date, animal) int64 350 18 361 15
| reference/api/pandas.Series.to_xarray.html |
pandas.api.extensions.register_extension_dtype | `pandas.api.extensions.register_extension_dtype`
Register an ExtensionType with pandas as class decorator.
```
>>> from pandas.api.extensions import register_extension_dtype, ExtensionDtype
>>> @register_extension_dtype
... class MyExtensionDtype(ExtensionDtype):
... name = "myextension"
``` | pandas.api.extensions.register_extension_dtype(cls)[source]#
Register an ExtensionType with pandas as class decorator.
This enables operations like .astype(name) for the name
of the ExtensionDtype.
Returns
callableA class decorator.
Examples
>>> from pandas.api.extensions import register_extension_dtype, ExtensionDtype
>>> @register_extension_dtype
... class MyExtensionDtype(ExtensionDtype):
... name = "myextension"
| reference/api/pandas.api.extensions.register_extension_dtype.html |
pandas.tseries.offsets.Nano.is_month_end | `pandas.tseries.offsets.Nano.is_month_end`
Return boolean whether a timestamp occurs on the month end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | Nano.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.Nano.is_month_end.html |
pandas.tseries.offsets.Minute.is_month_end | `pandas.tseries.offsets.Minute.is_month_end`
Return boolean whether a timestamp occurs on the month end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
``` | Minute.is_month_end()#
Return boolean whether a timestamp occurs on the month end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_end(ts)
False
| reference/api/pandas.tseries.offsets.Minute.is_month_end.html |
pandas.tseries.offsets.MonthEnd.normalize | pandas.tseries.offsets.MonthEnd.normalize | MonthEnd.normalize#
| reference/api/pandas.tseries.offsets.MonthEnd.normalize.html |
pandas.to_timedelta | `pandas.to_timedelta`
Convert argument to timedelta.
```
>>> pd.to_timedelta('1 days 06:05:01.00003')
Timedelta('1 days 06:05:01.000030')
>>> pd.to_timedelta('15.5us')
Timedelta('0 days 00:00:00.000015500')
``` | pandas.to_timedelta(arg, unit=None, errors='raise')[source]#
Convert argument to timedelta.
Timedeltas are absolute differences in times, expressed in difference
units (e.g. days, hours, minutes, seconds). This method converts
an argument from a recognized timedelta format / value into
a Timedelta type.
Parameters
argstr, timedelta, list-like or SeriesThe data to be converted to timedelta.
Deprecated since version 1.2: Strings with units ‘M’, ‘Y’ and ‘y’ do not represent
unambiguous timedelta values and will be removed in a future version
unitstr, optionalDenotes the unit of the arg for numeric arg. Defaults to "ns".
Possible values:
‘W’
‘D’ / ‘days’ / ‘day’
‘hours’ / ‘hour’ / ‘hr’ / ‘h’
‘m’ / ‘minute’ / ‘min’ / ‘minutes’ / ‘T’
‘S’ / ‘seconds’ / ‘sec’ / ‘second’
‘ms’ / ‘milliseconds’ / ‘millisecond’ / ‘milli’ / ‘millis’ / ‘L’
‘us’ / ‘microseconds’ / ‘microsecond’ / ‘micro’ / ‘micros’ / ‘U’
‘ns’ / ‘nanoseconds’ / ‘nano’ / ‘nanos’ / ‘nanosecond’ / ‘N’
Changed in version 1.1.0: Must not be specified when arg contains strings and
errors="raise".
errors{‘ignore’, ‘raise’, ‘coerce’}, default ‘raise’
If ‘raise’, then invalid parsing will raise an exception.
If ‘coerce’, then invalid parsing will be set as NaT.
If ‘ignore’, then invalid parsing will return the input.
Returns
timedeltaIf parsing succeeded.
Return type depends on input:
list-like: TimedeltaIndex of timedelta64 dtype
Series: Series of timedelta64 dtype
scalar: Timedelta
See also
DataFrame.astypeCast argument to a specified dtype.
to_datetimeConvert argument to datetime.
convert_dtypesConvert dtypes.
Notes
If the precision is higher than nanoseconds, the precision of the duration is
truncated to nanoseconds for string inputs.
Examples
Parsing a single string to a Timedelta:
>>> pd.to_timedelta('1 days 06:05:01.00003')
Timedelta('1 days 06:05:01.000030')
>>> pd.to_timedelta('15.5us')
Timedelta('0 days 00:00:00.000015500')
Parsing a list or array of strings:
>>> pd.to_timedelta(['1 days 06:05:01.00003', '15.5us', 'nan'])
TimedeltaIndex(['1 days 06:05:01.000030', '0 days 00:00:00.000015500', NaT],
dtype='timedelta64[ns]', freq=None)
Converting numbers by specifying the unit keyword argument:
>>> pd.to_timedelta(np.arange(5), unit='s')
TimedeltaIndex(['0 days 00:00:00', '0 days 00:00:01', '0 days 00:00:02',
'0 days 00:00:03', '0 days 00:00:04'],
dtype='timedelta64[ns]', freq=None)
>>> pd.to_timedelta(np.arange(5), unit='d')
TimedeltaIndex(['0 days', '1 days', '2 days', '3 days', '4 days'],
dtype='timedelta64[ns]', freq=None)
| reference/api/pandas.to_timedelta.html |
pandas.Timestamp.min | pandas.Timestamp.min | Timestamp.min = Timestamp('1677-09-21 00:12:43.145224193')#
| reference/api/pandas.Timestamp.min.html |
pandas.tseries.offsets.QuarterBegin.nanos | pandas.tseries.offsets.QuarterBegin.nanos | QuarterBegin.nanos#
| reference/api/pandas.tseries.offsets.QuarterBegin.nanos.html |
pandas.tseries.offsets.QuarterEnd.apply | pandas.tseries.offsets.QuarterEnd.apply | QuarterEnd.apply()#
| reference/api/pandas.tseries.offsets.QuarterEnd.apply.html |
pandas.DataFrame.__iter__ | `pandas.DataFrame.__iter__`
Iterate over info axis. | DataFrame.__iter__()[source]#
Iterate over info axis.
Returns
iteratorInfo axis as iterator.
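A brief example: iterating over a DataFrame yields its column labels, because the info axis of a DataFrame is its columns.
>>> df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
>>> list(df)
['a', 'b']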
| reference/api/pandas.DataFrame.__iter__.html |
pandas.Series.cat.rename_categories | `pandas.Series.cat.rename_categories`
Rename categories.
```
>>> c = pd.Categorical(['a', 'a', 'b'])
>>> c.rename_categories([0, 1])
[0, 0, 1]
Categories (2, int64): [0, 1]
``` | Series.cat.rename_categories(*args, **kwargs)[source]#
Rename categories.
Parameters
new_categorieslist-like, dict-like or callableNew categories which will replace old categories.
list-like: all items must be unique and the number of items in
the new categories must match the existing number of categories.
dict-like: specifies a mapping from
old categories to new. Categories not contained in the mapping
are passed through and extra categories in the mapping are
ignored.
callable : a callable that is called on all items in the old
categories and whose return values comprise the new categories.
inplacebool, default FalseWhether or not to rename the categories inplace or return a copy of
this categorical with renamed categories.
Deprecated since version 1.3.0.
Returns
catCategorical or NoneCategorical with renamed categories or None if inplace=True.
Raises
ValueErrorIf new categories are list-like and do not have the same number of
items as the current categories or do not validate as categories
See also
reorder_categoriesReorder categories.
add_categoriesAdd new categories.
remove_categoriesRemove the specified categories.
remove_unused_categoriesRemove categories which are not used.
set_categoriesSet the categories to the specified ones.
Examples
>>> c = pd.Categorical(['a', 'a', 'b'])
>>> c.rename_categories([0, 1])
[0, 0, 1]
Categories (2, int64): [0, 1]
For dict-like new_categories, extra keys are ignored and
categories not in the dictionary are passed through
>>> c.rename_categories({'a': 'A', 'c': 'C'})
['A', 'A', 'b']
Categories (2, object): ['A', 'b']
You may also provide a callable to create the new categories
>>> c.rename_categories(lambda x: x.upper())
['A', 'A', 'B']
Categories (2, object): ['A', 'B']
| reference/api/pandas.Series.cat.rename_categories.html |
pandas.MultiIndex.codes | pandas.MultiIndex.codes | property MultiIndex.codes[source]#
| reference/api/pandas.MultiIndex.codes.html |
pandas.Timestamp.to_julian_date | `pandas.Timestamp.to_julian_date`
Convert TimeStamp to a Julian Date.
Julian date 0 is noon January 1, 4713 BC.
```
>>> ts = pd.Timestamp('2020-03-14T15:32:52')
>>> ts.to_julian_date()
2458923.147824074
``` | Timestamp.to_julian_date()#
Convert TimeStamp to a Julian Date.
Julian date 0 is noon January 1, 4713 BC.
Examples
>>> ts = pd.Timestamp('2020-03-14T15:32:52')
>>> ts.to_julian_date()
2458923.147824074
| reference/api/pandas.Timestamp.to_julian_date.html |
pandas.tseries.offsets.Nano.is_year_end | `pandas.tseries.offsets.Nano.is_year_end`
Return boolean whether a timestamp occurs on the year end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
``` | Nano.is_year_end()#
Return boolean whether a timestamp occurs on the year end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_end(ts)
False
| reference/api/pandas.tseries.offsets.Nano.is_year_end.html |
Extensions | These are primarily intended for library authors looking to extend pandas
objects.
api.extensions.register_extension_dtype(cls)
Register an ExtensionType with pandas as class decorator.
api.extensions.register_dataframe_accessor(name)
Register a custom accessor on DataFrame objects.
api.extensions.register_series_accessor(name)
Register a custom accessor on Series objects.
api.extensions.register_index_accessor(name)
Register a custom accessor on Index objects.
api.extensions.ExtensionDtype()
A custom data type, to be paired with an ExtensionArray.
api.extensions.ExtensionArray()
Abstract base class for custom 1-D array types.
arrays.PandasArray(values[, copy])
A pandas ExtensionArray for NumPy data.
Additionally, we have some utility methods for ensuring your object
behaves correctly.
api.indexers.check_array_indexer(array, indexer)
Check if indexer is a valid array indexer for array.
The sentinel pandas.api.extensions.no_default is used as the default
value in some methods. Use an is comparison to check if the user
provides a non-default value.
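As a hedged sketch of that pattern (the function and parameter names below are hypothetical, not part of pandas):
from pandas.api.extensions import no_default

def describe(value, precision=no_default):
    # 'describe' and 'precision' are illustrative names. The identity check
    # distinguishes "argument omitted" from an explicit value, even None.
    if precision is no_default:
        precision = 6
    return round(value, precision)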
| reference/extensions.html | null |
pandas.DataFrame.iat | `pandas.DataFrame.iat`
Access a single value for a row/column pair by integer position.
```
>>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
... columns=['A', 'B', 'C'])
>>> df
A B C
0 0 2 3
1 0 4 1
2 10 20 30
``` | property DataFrame.iat[source]#
Access a single value for a row/column pair by integer position.
Similar to iloc, in that both provide integer-based lookups. Use
iat if you only need to get or set a single value in a DataFrame
or Series.
Raises
IndexErrorWhen integer position is out of bounds.
See also
DataFrame.atAccess a single value for a row/column label pair.
DataFrame.locAccess a group of rows and columns by label(s).
DataFrame.ilocAccess a group of rows and columns by integer position(s).
Examples
>>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
... columns=['A', 'B', 'C'])
>>> df
A B C
0 0 2 3
1 0 4 1
2 10 20 30
Get value at specified row/column pair
>>> df.iat[1, 2]
1
Set value at specified row/column pair
>>> df.iat[1, 2] = 10
>>> df.iat[1, 2]
10
Get value within a series
>>> df.loc[0].iat[1]
2
| reference/api/pandas.DataFrame.iat.html |
pandas.tseries.offsets.FY5253Quarter.kwds | `pandas.tseries.offsets.FY5253Quarter.kwds`
Return a dict of extra parameters for the offset.
Examples
```
>>> pd.DateOffset(5).kwds
{}
``` | FY5253Quarter.kwds#
Return a dict of extra parameters for the offset.
Examples
>>> pd.DateOffset(5).kwds
{}
>>> pd.offsets.FY5253Quarter().kwds
{'weekday': 0,
'startingMonth': 1,
'qtr_with_extra_week': 1,
'variation': 'nearest'}
| reference/api/pandas.tseries.offsets.FY5253Quarter.kwds.html |
pandas.Series.quantile | `pandas.Series.quantile`
Return value at the given quantile.
The quantile(s) to compute, which can lie in range: 0 <= q <= 1.
```
>>> s = pd.Series([1, 2, 3, 4])
>>> s.quantile(.5)
2.5
>>> s.quantile([.25, .5, .75])
0.25 1.75
0.50 2.50
0.75 3.25
dtype: float64
``` | Series.quantile(q=0.5, interpolation='linear')[source]#
Return value at the given quantile.
Parameters
qfloat or array-like, default 0.5 (50% quantile)The quantile(s) to compute, which can lie in range: 0 <= q <= 1.
interpolation{‘linear’, ‘lower’, ‘higher’, ‘midpoint’, ‘nearest’}This optional parameter specifies the interpolation method to use,
when the desired quantile lies between two data points i and j:
linear: i + (j - i) * fraction, where fraction is the
fractional part of the index surrounded by i and j.
lower: i.
higher: j.
nearest: i or j whichever is nearest.
midpoint: (i + j) / 2.
Returns
float or SeriesIf q is an array, a Series will be returned where the
index is q and the values are the quantiles, otherwise
a float will be returned.
See also
core.window.Rolling.quantileCalculate the rolling quantile.
numpy.percentileReturns the q-th percentile(s) of the array elements.
Examples
>>> s = pd.Series([1, 2, 3, 4])
>>> s.quantile(.5)
2.5
>>> s.quantile([.25, .5, .75])
0.25 1.75
0.50 2.50
0.75 3.25
dtype: float64
| reference/api/pandas.Series.quantile.html |
pandas.tseries.offsets.Day.is_quarter_end | `pandas.tseries.offsets.Day.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
``` | Day.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
| reference/api/pandas.tseries.offsets.Day.is_quarter_end.html |
pandas.Series.dt.month_name | `pandas.Series.dt.month_name`
Return the month names with specified locale.
```
>>> s = pd.Series(pd.date_range(start='2018-01', freq='M', periods=3))
>>> s
0 2018-01-31
1 2018-02-28
2 2018-03-31
dtype: datetime64[ns]
>>> s.dt.month_name()
0 January
1 February
2 March
dtype: object
``` | Series.dt.month_name(*args, **kwargs)[source]#
Return the month names with specified locale.
Parameters
localestr, optionalLocale determining the language in which to return the month name.
Default is English locale.
Returns
Series or IndexSeries or Index of month names.
Examples
>>> s = pd.Series(pd.date_range(start='2018-01', freq='M', periods=3))
>>> s
0 2018-01-31
1 2018-02-28
2 2018-03-31
dtype: datetime64[ns]
>>> s.dt.month_name()
0 January
1 February
2 March
dtype: object
>>> idx = pd.date_range(start='2018-01', freq='M', periods=3)
>>> idx
DatetimeIndex(['2018-01-31', '2018-02-28', '2018-03-31'],
dtype='datetime64[ns]', freq='M')
>>> idx.month_name()
Index(['January', 'February', 'March'], dtype='object')
| reference/api/pandas.Series.dt.month_name.html |
pandas.tseries.offsets.BYearBegin.rollback | `pandas.tseries.offsets.BYearBegin.rollback`
Roll provided date backward to next offset only if not on offset.
Rolled timestamp if not on offset, otherwise unchanged timestamp. | BYearBegin.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
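A short example in the style of the other offset docstrings (this assumes the first business day of 2022 is Monday, 2022-01-03, since January 1st fell on a Saturday):
>>> ts = pd.Timestamp('2022-06-15')
>>> freq = pd.offsets.BYearBegin()
>>> freq.rollback(ts)
Timestamp('2022-01-03 00:00:00')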
| reference/api/pandas.tseries.offsets.BYearBegin.rollback.html |
pandas.tseries.offsets.Micro.is_quarter_start | `pandas.tseries.offsets.Micro.is_quarter_start`
Return boolean whether a timestamp occurs on the quarter start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
``` | Micro.is_quarter_start()#
Return boolean whether a timestamp occurs on the quarter start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_start(ts)
True
| reference/api/pandas.tseries.offsets.Micro.is_quarter_start.html |
pandas.DataFrame.copy | `pandas.DataFrame.copy`
Make a copy of this object’s indices and data.
```
>>> s = pd.Series([1, 2], index=["a", "b"])
>>> s
a 1
b 2
dtype: int64
``` | DataFrame.copy(deep=True)[source]#
Make a copy of this object’s indices and data.
When deep=True (default), a new object will be created with a
copy of the calling object’s data and indices. Modifications to
the data or indices of the copy will not be reflected in the
original object (see notes below).
When deep=False, a new object will be created without copying
the calling object’s data or index (only references to the data
and index are copied). Any changes to the data of the original
will be reflected in the shallow copy (and vice versa).
Parameters
deepbool, default TrueMake a deep copy, including a copy of the data and the indices.
With deep=False neither the indices nor the data are copied.
Returns
copySeries or DataFrameObject type matches caller.
Notes
When deep=True, data is copied but actual Python objects
will not be copied recursively, only the reference to the object.
This is in contrast to copy.deepcopy in the Standard Library,
which recursively copies object data (see examples below).
While Index objects are copied when deep=True, the underlying
numpy array is not copied for performance reasons. Since Index is
immutable, the underlying data can be safely shared and a copy
is not needed.
Since pandas is not thread safe, see the
gotchas when copying in a threading
environment.
Examples
>>> s = pd.Series([1, 2], index=["a", "b"])
>>> s
a 1
b 2
dtype: int64
>>> s_copy = s.copy()
>>> s_copy
a 1
b 2
dtype: int64
Shallow copy versus default (deep) copy:
>>> s = pd.Series([1, 2], index=["a", "b"])
>>> deep = s.copy()
>>> shallow = s.copy(deep=False)
Shallow copy shares data and index with original.
>>> s is shallow
False
>>> s.values is shallow.values and s.index is shallow.index
True
Deep copy has own copy of data and index.
>>> s is deep
False
>>> s.values is deep.values or s.index is deep.index
False
Updates to the data shared by shallow copy and original is reflected
in both; deep copy remains unchanged.
>>> s[0] = 3
>>> shallow[1] = 4
>>> s
a 3
b 4
dtype: int64
>>> shallow
a 3
b 4
dtype: int64
>>> deep
a 1
b 2
dtype: int64
Note that when copying an object containing Python objects, a deep copy
will copy the data, but will not do so recursively. Updating a nested
data object will be reflected in the deep copy.
>>> s = pd.Series([[1, 2], [3, 4]])
>>> deep = s.copy()
>>> s[0][0] = 10
>>> s
0 [10, 2]
1 [3, 4]
dtype: object
>>> deep
0 [10, 2]
1 [3, 4]
dtype: object
| reference/api/pandas.DataFrame.copy.html |
pandas.Series.index | `pandas.Series.index`
The index (axis labels) of the Series. | Series.index#
The index (axis labels) of the Series.
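A brief example:
>>> s = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
>>> s.index
Index(['a', 'b', 'c'], dtype='object')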
| reference/api/pandas.Series.index.html |
pandas.core.groupby.DataFrameGroupBy.skew | `pandas.core.groupby.DataFrameGroupBy.skew`
Return unbiased skew over requested axis.
Normalized by N-1. | property DataFrameGroupBy.skew[source]#
Return unbiased skew over requested axis.
Normalized by N-1.
Parameters
axis{index (0), columns (1)}Axis for the function to be applied on.
For Series this parameter is unused and defaults to 0.
skipnabool, default TrueExclude NA/null values when computing the result.
levelint or level name, default NoneIf the axis is a MultiIndex (hierarchical), count along a
particular level, collapsing into a Series.
Deprecated since version 1.3.0: The level keyword is deprecated. Use groupby instead.
numeric_onlybool, default NoneInclude only float, int, boolean columns. If None, will attempt to use
everything, then use only numeric data. Not implemented for Series.
Deprecated since version 1.5.0: Specifying numeric_only=None is deprecated. The default value will be
False in a future version of pandas.
**kwargsAdditional keyword arguments to be passed to the function.
Returns
Series or DataFrame (if level specified)
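A small illustrative example (group 'a' holds symmetric values, so its skew is exactly zero; group 'b' is right-skewed, so its skew is positive):
>>> df = pd.DataFrame({"key": ["a", "a", "a", "b", "b", "b"],
...                    "val": [1, 2, 3, 1, 1, 10]})
>>> df.groupby("key")["val"].skew()
key
a    0.000000
b    1.732051
Name: val, dtype: float64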
| reference/api/pandas.core.groupby.DataFrameGroupBy.skew.html |
pandas.Period.dayofweek | `pandas.Period.dayofweek`
Day of the week the period lies in, with Monday=0 and Sunday=6.
```
>>> per = pd.Period('2017-12-31 22:00', 'H')
>>> per.day_of_week
6
``` | Period.dayofweek#
Day of the week the period lies in, with Monday=0 and Sunday=6.
If the period frequency is lower than daily (e.g. hourly), and the
period spans over multiple days, the day at the start of the period is
used.
If the frequency is higher than daily (e.g. monthly), the last day
of the period is used.
Returns
intDay of the week.
See also
Period.day_of_weekDay of the week the period lies in.
Period.weekdayAlias of Period.day_of_week.
Period.dayDay of the month.
Period.dayofyearDay of the year.
Examples
>>> per = pd.Period('2017-12-31 22:00', 'H')
>>> per.day_of_week
6
For periods that span over multiple days, the day at the beginning of
the period is returned.
>>> per = pd.Period('2017-12-31 22:00', '4H')
>>> per.day_of_week
6
>>> per.start_time.day_of_week
6
For periods with a frequency higher than days, the last day of the
period is returned.
>>> per = pd.Period('2018-01', 'M')
>>> per.day_of_week
2
>>> per.end_time.day_of_week
2
| reference/api/pandas.Period.dayofweek.html |
pandas.tseries.offsets.Hour.rollback | `pandas.tseries.offsets.Hour.rollback`
Roll provided date backward to next offset only if not on offset. | Hour.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
TimeStampRolled timestamp if not on offset, otherwise unchanged timestamp.
| reference/api/pandas.tseries.offsets.Hour.rollback.html |
pandas.tseries.offsets.Milli.is_quarter_end | `pandas.tseries.offsets.Milli.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
Examples
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
``` | Milli.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
| reference/api/pandas.tseries.offsets.Milli.is_quarter_end.html |
pandas.tseries.offsets.SemiMonthEnd.is_year_start | `pandas.tseries.offsets.SemiMonthEnd.is_year_start`
Return boolean whether a timestamp occurs on the year start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
``` | SemiMonthEnd.is_year_start()#
Return boolean whether a timestamp occurs on the year start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_year_start(ts)
True
| reference/api/pandas.tseries.offsets.SemiMonthEnd.is_year_start.html |
pandas.Series.nsmallest | `pandas.Series.nsmallest`
Return the smallest n elements.
```
>>> countries_population = {"Italy": 59000000, "France": 65000000,
... "Brunei": 434000, "Malta": 434000,
... "Maldives": 434000, "Iceland": 337000,
... "Nauru": 11300, "Tuvalu": 11300,
... "Anguilla": 11300, "Montserrat": 5200}
>>> s = pd.Series(countries_population)
>>> s
Italy 59000000
France 65000000
Brunei 434000
Malta 434000
Maldives 434000
Iceland 337000
Nauru 11300
Tuvalu 11300
Anguilla 11300
Montserrat 5200
dtype: int64
``` | Series.nsmallest(n=5, keep='first')[source]#
Return the smallest n elements.
Parameters
nint, default 5Return this many ascending sorted values.
keep{‘first’, ‘last’, ‘all’}, default ‘first’When there are duplicate values that cannot all fit in a
Series of n elements:
first : return the first n occurrences in order
of appearance.
last : return the last n occurrences in reverse
order of appearance.
all : keep all occurrences. This can result in a Series of
size larger than n.
Returns
SeriesThe n smallest values in the Series, sorted in increasing order.
See also
Series.nlargestGet the n largest elements.
Series.sort_valuesSort Series by values.
Series.headReturn the first n rows.
Notes
Faster than .sort_values().head(n) for small n relative to
the size of the Series object.
Examples
>>> countries_population = {"Italy": 59000000, "France": 65000000,
... "Brunei": 434000, "Malta": 434000,
... "Maldives": 434000, "Iceland": 337000,
... "Nauru": 11300, "Tuvalu": 11300,
... "Anguilla": 11300, "Montserrat": 5200}
>>> s = pd.Series(countries_population)
>>> s
Italy 59000000
France 65000000
Brunei 434000
Malta 434000
Maldives 434000
Iceland 337000
Nauru 11300
Tuvalu 11300
Anguilla 11300
Montserrat 5200
dtype: int64
The n smallest elements where n=5 by default.
>>> s.nsmallest()
Montserrat 5200
Nauru 11300
Tuvalu 11300
Anguilla 11300
Iceland 337000
dtype: int64
The n smallest elements where n=3. Default keep value is
‘first’ so Nauru and Tuvalu will be kept.
>>> s.nsmallest(3)
Montserrat 5200
Nauru 11300
Tuvalu 11300
dtype: int64
The n smallest elements where n=3 and keeping the last
duplicates. Anguilla and Tuvalu will be kept since they are the last
with value 11300 based on the index order.
>>> s.nsmallest(3, keep='last')
Montserrat 5200
Anguilla 11300
Tuvalu 11300
dtype: int64
The n smallest elements where n=3 with all duplicates kept. Note
that the returned Series has four elements due to the three duplicates.
>>> s.nsmallest(3, keep='all')
Montserrat 5200
Nauru 11300
Tuvalu 11300
Anguilla 11300
dtype: int64
| reference/api/pandas.Series.nsmallest.html |
pandas.Index.all | `pandas.Index.all`
Return whether all elements are Truthy.
```
>>> pd.Index([1, 2, 3]).all()
True
``` | Index.all(*args, **kwargs)[source]#
Return whether all elements are Truthy.
Parameters
*argsRequired for compatibility with numpy.
**kwargsRequired for compatibility with numpy.
Returns
allbool or array-like (if axis is specified)A single element array-like may be converted to bool.
See also
Index.anyReturn whether any element in an Index is True.
Series.anyReturn whether any element in a Series is True.
Series.allReturn whether all elements in a Series are True.
Notes
Not a Number (NaN), positive infinity and negative infinity
evaluate to True because these are not equal to zero.
Examples
True, because nonzero integers are considered True.
>>> pd.Index([1, 2, 3]).all()
True
False, because 0 is considered False.
>>> pd.Index([0, 1, 2]).all()
False
| reference/api/pandas.Index.all.html |
Index objects | Index objects | Index#
Many of these methods or variants thereof are available on the objects
that contain an index (Series/DataFrame) and those should most likely be
used before calling these methods directly.
Index([data, dtype, copy, name, tupleize_cols])
Immutable sequence used for indexing and alignment.
Properties#
Index.values
Return an array representing the data in the Index.
Index.is_monotonic
(DEPRECATED) Alias for is_monotonic_increasing.
Index.is_monotonic_increasing
Return a boolean if the values are equal or increasing.
Index.is_monotonic_decreasing
Return a boolean if the values are equal or decreasing.
Index.is_unique
Return if the index has unique values.
Index.has_duplicates
Check if the Index has duplicate values.
Index.hasnans
Return True if there are any NaNs.
Index.dtype
Return the dtype object of the underlying data.
Index.inferred_type
Return a string of the type inferred from the values.
Index.is_all_dates
Whether or not the index values only consist of dates.
Index.shape
Return a tuple of the shape of the underlying data.
Index.name
Return Index or MultiIndex name.
Index.names
Index.nbytes
Return the number of bytes in the underlying data.
Index.ndim
Number of dimensions of the underlying data, by definition 1.
Index.size
Return the number of elements in the underlying data.
Index.empty
Index.T
Return the transpose, which is by definition self.
Index.memory_usage([deep])
Memory usage of the values.
Modifying and computations#
Index.all(*args, **kwargs)
Return whether all elements are Truthy.
Index.any(*args, **kwargs)
Return whether any element is Truthy.
Index.argmin([axis, skipna])
Return int position of the smallest value in the Series.
Index.argmax([axis, skipna])
Return int position of the largest value in the Series.
Index.copy([name, deep, dtype, names])
Make a copy of this object.
Index.delete(loc)
Make new Index with passed location(-s) deleted.
Index.drop(labels[, errors])
Make new Index with passed list of labels deleted.
Index.drop_duplicates(*[, keep])
Return Index with duplicate values removed.
Index.duplicated([keep])
Indicate duplicate index values.
Index.equals(other)
Determine if two Index object are equal.
Index.factorize([sort, na_sentinel, ...])
Encode the object as an enumerated type or categorical variable.
Index.identical(other)
Similar to equals, but checks that object attributes and types are also equal.
Index.insert(loc, item)
Make new Index inserting new item at location.
Index.is_(other)
More flexible, faster check like is but that works through views.
Index.is_boolean()
Check if the Index only consists of booleans.
Index.is_categorical()
Check if the Index holds categorical data.
Index.is_floating()
Check if the Index is a floating type.
Index.is_integer()
Check if the Index only consists of integers.
Index.is_interval()
Check if the Index holds Interval objects.
Index.is_mixed()
Check if the Index holds data with mixed data types.
Index.is_numeric()
Check if the Index only consists of numeric data.
Index.is_object()
Check if the Index is of the object dtype.
Index.min([axis, skipna])
Return the minimum value of the Index.
Index.max([axis, skipna])
Return the maximum value of the Index.
Index.reindex(target[, method, level, ...])
Create index with target's values.
Index.rename(name[, inplace])
Alter Index or MultiIndex name.
Index.repeat(repeats[, axis])
Repeat elements of a Index.
Index.where(cond[, other])
Replace values where the condition is False.
Index.take(indices[, axis, allow_fill, ...])
Return a new Index of the values selected by the indices.
Index.putmask(mask, value)
Return a new Index of the values set with the mask.
Index.unique([level])
Return unique values in the index.
Index.nunique([dropna])
Return number of unique elements in the object.
Index.value_counts([normalize, sort, ...])
Return a Series containing counts of unique values.
Compatibility with MultiIndex#
Index.set_names(names, *[, level, inplace])
Set Index or MultiIndex name.
Index.droplevel([level])
Return index with requested level(s) removed.
Missing values#
Index.fillna([value, downcast])
Fill NA/NaN values with the specified value.
Index.dropna([how])
Return Index without NA/NaN values.
Index.isna()
Detect missing values.
Index.notna()
Detect existing (non-missing) values.
Conversion#
Index.astype(dtype[, copy])
Create an Index with values cast to dtypes.
Index.item()
Return the first element of the underlying data as a Python scalar.
Index.map(mapper[, na_action])
Map values using an input mapping or function.
Index.ravel([order])
Return an ndarray of the flattened values of the underlying data.
Index.to_list()
Return a list of the values.
Index.to_native_types([slicer])
(DEPRECATED) Format specified values of self and return them.
Index.to_series([index, name])
Create a Series with both index and values equal to the index keys.
Index.to_frame([index, name])
Create a DataFrame with a column containing the Index.
Index.view([cls])
Sorting#
Index.argsort(*args, **kwargs)
Return the integer indices that would sort the index.
Index.searchsorted(value[, side, sorter])
Find indices where elements should be inserted to maintain order.
Index.sort_values([return_indexer, ...])
Return a sorted copy of the index.
Time-specific operations#
Index.shift([periods, freq])
Shift index by desired number of time frequency increments.
Combining / joining / set operations#
Index.append(other)
Append a collection of Index options together.
Index.join(other, *[, how, level, ...])
Compute join_index and indexers to conform data structures to the new index.
Index.intersection(other[, sort])
Form the intersection of two Index objects.
Index.union(other[, sort])
Form the union of two Index objects.
Index.difference(other[, sort])
Return a new Index with elements of index not in other.
Index.symmetric_difference(other[, ...])
Compute the symmetric difference of two Index objects.
Selecting#
Index.asof(label)
Return the label from the index, or, if not present, the previous one.
Index.asof_locs(where, mask)
Return the locations (indices) of labels in the index.
Index.get_indexer(target[, method, limit, ...])
Compute indexer and mask for new index given the current index.
Index.get_indexer_for(target)
Guaranteed return of an indexer even when non-unique.
Index.get_indexer_non_unique(target)
Compute indexer and mask for new index given the current index.
Index.get_level_values(level)
Return an Index of values for requested level.
Index.get_loc(key[, method, tolerance])
Get integer location, slice or boolean mask for requested label.
Index.get_slice_bound(label, side[, kind])
Calculate slice bound that corresponds to given label.
Index.get_value(series, key)
Fast lookup of value from 1-dimensional ndarray.
Index.isin(values[, level])
Return a boolean array where the index values are in values.
Index.slice_indexer([start, end, step, kind])
Compute the slice indexer for input labels and step.
Index.slice_locs([start, end, step, kind])
Compute slice locations for input labels.
Numeric Index#
RangeIndex([start, stop, step, dtype, copy, ...])
Immutable Index implementing a monotonic integer range.
Int64Index([data, dtype, copy, name])
(DEPRECATED) Immutable sequence used for indexing and alignment.
UInt64Index([data, dtype, copy, name])
(DEPRECATED) Immutable sequence used for indexing and alignment.
Float64Index([data, dtype, copy, name])
(DEPRECATED) Immutable sequence used for indexing and alignment.
RangeIndex.start
The value of the start parameter (0 if this was not supplied).
RangeIndex.stop
The value of the stop parameter.
RangeIndex.step
The value of the step parameter (1 if this was not supplied).
RangeIndex.from_range(data[, name, dtype])
Create RangeIndex from a range object.
CategoricalIndex#
CategoricalIndex([data, categories, ...])
Index based on an underlying Categorical.
Categorical components#
CategoricalIndex.codes
The category codes of this categorical.
CategoricalIndex.categories
The categories of this categorical.
CategoricalIndex.ordered
Whether the categories have an ordered relationship.
CategoricalIndex.rename_categories(*args, ...)
Rename categories.
CategoricalIndex.reorder_categories(*args, ...)
Reorder categories as specified in new_categories.
CategoricalIndex.add_categories(*args, **kwargs)
Add new categories.
CategoricalIndex.remove_categories(*args, ...)
Remove the specified categories.
CategoricalIndex.remove_unused_categories(...)
Remove categories which are not used.
CategoricalIndex.set_categories(*args, **kwargs)
Set the categories to the specified new_categories.
CategoricalIndex.as_ordered(*args, **kwargs)
Set the Categorical to be ordered.
CategoricalIndex.as_unordered(*args, **kwargs)
Set the Categorical to be unordered.
Modifying and computations#
CategoricalIndex.map(mapper)
Map values using an input mapping or function.
CategoricalIndex.equals(other)
Determine if two CategoricalIndex objects contain the same elements.
IntervalIndex#
IntervalIndex(data[, closed, dtype, copy, ...])
Immutable index of intervals that are closed on the same side.
IntervalIndex components#
IntervalIndex.from_arrays(left, right[, ...])
Construct from two arrays defining the left and right bounds.
IntervalIndex.from_tuples(data[, closed, ...])
Construct an IntervalIndex from an array-like of tuples.
IntervalIndex.from_breaks(breaks[, closed, ...])
Construct an IntervalIndex from an array of splits.
IntervalIndex.left
IntervalIndex.right
IntervalIndex.mid
IntervalIndex.closed
String describing the inclusive side the intervals.
IntervalIndex.length
IntervalIndex.values
Return an array representing the data in the Index.
IntervalIndex.is_empty
Indicates if an interval is empty, meaning it contains no points.
IntervalIndex.is_non_overlapping_monotonic
Return a boolean whether the IntervalArray is non-overlapping and monotonic.
IntervalIndex.is_overlapping
Return True if the IntervalIndex has overlapping intervals, else False.
IntervalIndex.get_loc(key[, method, tolerance])
Get integer location, slice or boolean mask for requested label.
IntervalIndex.get_indexer(target[, method, ...])
Compute indexer and mask for new index given the current index.
IntervalIndex.set_closed(*args, **kwargs)
Return an identical IntervalArray closed on the specified side.
IntervalIndex.contains(*args, **kwargs)
Check elementwise if the Intervals contain the value.
IntervalIndex.overlaps(*args, **kwargs)
Check elementwise if an Interval overlaps the values in the IntervalArray.
IntervalIndex.to_tuples(*args, **kwargs)
Return an ndarray of tuples of the form (left, right).
MultiIndex#
MultiIndex([levels, codes, sortorder, ...])
A multi-level, or hierarchical, index object for pandas objects.
IndexSlice
Create an object to more easily perform multi-index slicing.
MultiIndex constructors#
MultiIndex.from_arrays(arrays[, sortorder, ...])
Convert arrays to MultiIndex.
MultiIndex.from_tuples(tuples[, sortorder, ...])
Convert list of tuples to MultiIndex.
MultiIndex.from_product(iterables[, ...])
Make a MultiIndex from the cartesian product of multiple iterables.
MultiIndex.from_frame(df[, sortorder, names])
Make a MultiIndex from a DataFrame.
MultiIndex properties#
MultiIndex.names
Names of levels in MultiIndex.
MultiIndex.levels
MultiIndex.codes
MultiIndex.nlevels
Integer number of levels in this MultiIndex.
MultiIndex.levshape
A tuple with the length of each level.
MultiIndex.dtypes
Return the dtypes as a Series for the underlying MultiIndex.
MultiIndex components#
MultiIndex.set_levels(levels, *[, level, ...])
Set new levels on MultiIndex.
MultiIndex.set_codes(codes, *[, level, ...])
Set new codes on MultiIndex.
MultiIndex.to_flat_index()
Convert a MultiIndex to an Index of Tuples containing the level values.
MultiIndex.to_frame([index, name, ...])
Create a DataFrame with the levels of the MultiIndex as columns.
MultiIndex.sortlevel([level, ascending, ...])
Sort MultiIndex at the requested level.
MultiIndex.droplevel([level])
Return index with requested level(s) removed.
MultiIndex.swaplevel([i, j])
Swap level i with level j.
MultiIndex.reorder_levels(order)
Rearrange levels using input order.
MultiIndex.remove_unused_levels()
Create new MultiIndex from current that removes unused levels.
MultiIndex selecting#
MultiIndex.get_loc(key[, method])
Get location for a label or a tuple of labels.
MultiIndex.get_locs(seq)
Get location for a sequence of labels.
MultiIndex.get_loc_level(key[, level, ...])
Get location and sliced index for requested label(s)/level(s).
MultiIndex.get_indexer(target[, method, ...])
Compute indexer and mask for new index given the current index.
MultiIndex.get_level_values(level)
Return vector of label values for requested level.
DatetimeIndex#
DatetimeIndex([data, freq, tz, normalize, ...])
Immutable ndarray-like of datetime64 data.
Time/date components#
DatetimeIndex.year
The year of the datetime.
DatetimeIndex.month
The month as January=1, December=12.
DatetimeIndex.day
The day of the datetime.
DatetimeIndex.hour
The hours of the datetime.
DatetimeIndex.minute
The minutes of the datetime.
DatetimeIndex.second
The seconds of the datetime.
DatetimeIndex.microsecond
The microseconds of the datetime.
DatetimeIndex.nanosecond
The nanoseconds of the datetime.
DatetimeIndex.date
Returns numpy array of python datetime.date objects.
DatetimeIndex.time
Returns numpy array of datetime.time objects.
DatetimeIndex.timetz
Returns numpy array of datetime.time objects with timezones.
DatetimeIndex.dayofyear
The ordinal day of the year.
DatetimeIndex.day_of_year
The ordinal day of the year.
DatetimeIndex.weekofyear
(DEPRECATED) The week ordinal of the year.
DatetimeIndex.week
(DEPRECATED) The week ordinal of the year.
DatetimeIndex.dayofweek
The day of the week with Monday=0, Sunday=6.
DatetimeIndex.day_of_week
The day of the week with Monday=0, Sunday=6.
DatetimeIndex.weekday
The day of the week with Monday=0, Sunday=6.
DatetimeIndex.quarter
The quarter of the date.
DatetimeIndex.tz
Return the timezone.
DatetimeIndex.freq
Return the frequency object if it is set, otherwise None.
DatetimeIndex.freqstr
Return the frequency object as a string if it is set, otherwise None.
DatetimeIndex.is_month_start
Indicates whether the date is the first day of the month.
DatetimeIndex.is_month_end
Indicates whether the date is the last day of the month.
DatetimeIndex.is_quarter_start
Indicator for whether the date is the first day of a quarter.
DatetimeIndex.is_quarter_end
Indicator for whether the date is the last day of a quarter.
DatetimeIndex.is_year_start
Indicate whether the date is the first day of a year.
DatetimeIndex.is_year_end
Indicate whether the date is the last day of the year.
DatetimeIndex.is_leap_year
Boolean indicator if the date belongs to a leap year.
DatetimeIndex.inferred_freq
Tries to return a string representing a frequency generated by infer_freq.
Selecting#
DatetimeIndex.indexer_at_time(time[, asof])
Return index locations of values at particular time of day.
DatetimeIndex.indexer_between_time(...[, ...])
Return index locations of values between particular times of day.
Time-specific operations#
DatetimeIndex.normalize(*args, **kwargs)
Convert times to midnight.
DatetimeIndex.strftime(date_format)
Convert to Index using specified date_format.
DatetimeIndex.snap([freq])
Snap time stamps to nearest occurring frequency.
DatetimeIndex.tz_convert(tz)
Convert tz-aware Datetime Array/Index from one time zone to another.
DatetimeIndex.tz_localize(tz[, ambiguous, ...])
Localize tz-naive Datetime Array/Index to tz-aware Datetime Array/Index.
DatetimeIndex.round(*args, **kwargs)
Perform round operation on the data to the specified freq.
DatetimeIndex.floor(*args, **kwargs)
Perform floor operation on the data to the specified freq.
DatetimeIndex.ceil(*args, **kwargs)
Perform ceil operation on the data to the specified freq.
DatetimeIndex.month_name(*args, **kwargs)
Return the month names with specified locale.
DatetimeIndex.day_name(*args, **kwargs)
Return the day names with specified locale.
Conversion#
DatetimeIndex.to_period(*args, **kwargs)
Cast to PeriodArray/Index at a particular frequency.
DatetimeIndex.to_perioddelta(freq)
Calculate deltas between self values and self converted to Periods at a freq.
DatetimeIndex.to_pydatetime(*args, **kwargs)
Return an ndarray of datetime.datetime objects.
DatetimeIndex.to_series([keep_tz, index, name])
Create a Series with both index and values equal to the index keys.
DatetimeIndex.to_frame([index, name])
Create a DataFrame with a column containing the Index.
Methods#
DatetimeIndex.mean(*args, **kwargs)
Return the mean value of the Array.
DatetimeIndex.std(*args, **kwargs)
Return sample standard deviation over requested axis.
TimedeltaIndex#
TimedeltaIndex([data, unit, freq, closed, ...])
Immutable Index of timedelta64 data.
Components#
TimedeltaIndex.days
Number of days for each element.
TimedeltaIndex.seconds
Number of seconds (>= 0 and less than 1 day) for each element.
TimedeltaIndex.microseconds
Number of microseconds (>= 0 and less than 1 second) for each element.
TimedeltaIndex.nanoseconds
Number of nanoseconds (>= 0 and less than 1 microsecond) for each element.
TimedeltaIndex.components
Return a DataFrame of the individual resolution components of the Timedeltas.
TimedeltaIndex.inferred_freq
Tries to return a string representing a frequency generated by infer_freq.
Conversion#
TimedeltaIndex.to_pytimedelta(*args, **kwargs)
Return an ndarray of datetime.timedelta objects.
TimedeltaIndex.to_series([index, name])
Create a Series with both index and values equal to the index keys.
TimedeltaIndex.round(*args, **kwargs)
Perform round operation on the data to the specified freq.
TimedeltaIndex.floor(*args, **kwargs)
Perform floor operation on the data to the specified freq.
TimedeltaIndex.ceil(*args, **kwargs)
Perform ceil operation on the data to the specified freq.
TimedeltaIndex.to_frame([index, name])
Create a DataFrame with a column containing the Index.
Methods#
TimedeltaIndex.mean(*args, **kwargs)
Return the mean value of the Array.
PeriodIndex#
PeriodIndex([data, ordinal, freq, dtype, ...])
Immutable ndarray holding ordinal values indicating regular periods in time.
Properties#
PeriodIndex.day
The days of the period.
PeriodIndex.dayofweek
The day of the week with Monday=0, Sunday=6.
PeriodIndex.day_of_week
The day of the week with Monday=0, Sunday=6.
PeriodIndex.dayofyear
The ordinal day of the year.
PeriodIndex.day_of_year
The ordinal day of the year.
PeriodIndex.days_in_month
The number of days in the month.
PeriodIndex.daysinmonth
The number of days in the month.
PeriodIndex.end_time
Get the Timestamp for the end of the period.
PeriodIndex.freq
Return the frequency object if it is set, otherwise None.
PeriodIndex.freqstr
Return the frequency object as a string if it is set, otherwise None.
PeriodIndex.hour
The hour of the period.
PeriodIndex.is_leap_year
Logical indicating if the date belongs to a leap year.
PeriodIndex.minute
The minute of the period.
PeriodIndex.month
The month as January=1, December=12.
PeriodIndex.quarter
The quarter of the date.
PeriodIndex.qyear
PeriodIndex.second
The second of the period.
PeriodIndex.start_time
Get the Timestamp for the start of the period.
PeriodIndex.week
The week ordinal of the year.
PeriodIndex.weekday
The day of the week with Monday=0, Sunday=6.
PeriodIndex.weekofyear
The week ordinal of the year.
PeriodIndex.year
The year of the period.
Methods#
PeriodIndex.asfreq([freq, how])
Convert the PeriodArray to the specified frequency freq.
PeriodIndex.strftime(*args, **kwargs)
Convert to Index using specified date_format.
PeriodIndex.to_timestamp([freq, how])
Cast to DatetimeArray/Index.
| reference/indexing.html |
pandas.DataFrame.select_dtypes | `pandas.DataFrame.select_dtypes`
Return a subset of the DataFrame’s columns based on the column dtypes.
A selection of dtypes or strings to be included/excluded. At least
one of these parameters must be supplied.
```
>>> df = pd.DataFrame({'a': [1, 2] * 3,
... 'b': [True, False] * 3,
... 'c': [1.0, 2.0] * 3})
>>> df
a b c
0 1 True 1.0
1 2 False 2.0
2 1 True 1.0
3 2 False 2.0
4 1 True 1.0
5 2 False 2.0
``` | DataFrame.select_dtypes(include=None, exclude=None)[source]#
Return a subset of the DataFrame’s columns based on the column dtypes.
Parameters
include, excludescalar or list-likeA selection of dtypes or strings to be included/excluded. At least
one of these parameters must be supplied.
Returns
DataFrameThe subset of the frame including the dtypes in include and
excluding the dtypes in exclude.
Raises
ValueError
If both of include and exclude are empty
If include and exclude have overlapping elements
If any kind of string dtype is passed in.
See also
DataFrame.dtypesReturn Series with the data type of each column.
Notes
To select all numeric types, use np.number or 'number'
To select strings you must use the object dtype, but note that
this will return all object dtype columns
See the numpy dtype hierarchy
To select datetimes, use np.datetime64, 'datetime' or
'datetime64'
To select timedeltas, use np.timedelta64, 'timedelta' or
'timedelta64'
To select Pandas categorical dtypes, use 'category'
To select Pandas datetimetz dtypes, use 'datetimetz' (new in
0.20.0) or 'datetime64[ns, tz]'
Examples
>>> df = pd.DataFrame({'a': [1, 2] * 3,
... 'b': [True, False] * 3,
... 'c': [1.0, 2.0] * 3})
>>> df
a b c
0 1 True 1.0
1 2 False 2.0
2 1 True 1.0
3 2 False 2.0
4 1 True 1.0
5 2 False 2.0
>>> df.select_dtypes(include='bool')
b
0 True
1 False
2 True
3 False
4 True
5 False
>>> df.select_dtypes(include=['float64'])
c
0 1.0
1 2.0
2 1.0
3 2.0
4 1.0
5 2.0
>>> df.select_dtypes(exclude=['int64'])
b c
0 True 1.0
1 False 2.0
2 True 1.0
3 False 2.0
4 True 1.0
5 False 2.0
| reference/api/pandas.DataFrame.select_dtypes.html |
pandas.util.hash_pandas_object | `pandas.util.hash_pandas_object`
Return a data hash of the Index/Series/DataFrame. | pandas.util.hash_pandas_object(obj, index=True, encoding='utf8', hash_key='0123456789123456', categorize=True)[source]#
Return a data hash of the Index/Series/DataFrame.
Parameters
objIndex, Series, or DataFrame
indexbool, default TrueInclude the index in the hash (if Series/DataFrame).
encodingstr, default ‘utf8’Encoding for data & key when strings.
hash_keystr, default _default_hash_keyHash_key for string key to encode.
categorizebool, default TrueWhether to first categorize object arrays before hashing. This is more
efficient when the array contains duplicate values.
Returns
Series of uint64, same length as the object
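The entry above ships without an example; the following is a minimal, illustrative sketch (the variable names are ours, and the concrete hash values depend on the hash key and pandas version, so only the dtype and length are shown):
>>> obj = pd.Series([1, 2, 3])
>>> hashed = pd.util.hash_pandas_object(obj)
>>> hashed.dtype
dtype('uint64')
>>> len(hashed) == len(obj)
True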
| reference/api/pandas.util.hash_pandas_object.html |
pandas.arrays.IntervalArray.left | `pandas.arrays.IntervalArray.left`
Return the left endpoints of each Interval in the IntervalArray as an Index. | property IntervalArray.left[source]#
Return the left endpoints of each Interval in the IntervalArray as an Index.
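A short illustrative example (not part of the original entry; list() is used to avoid depending on the exact Index repr):
>>> arr = pd.arrays.IntervalArray.from_tuples([(0, 1), (1, 5)])
>>> list(arr.left)
[0, 1]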
| reference/api/pandas.arrays.IntervalArray.left.html |
pandas.tseries.offsets.MonthBegin.is_quarter_end | `pandas.tseries.offsets.MonthBegin.is_quarter_end`
Return boolean whether a timestamp occurs on the quarter end.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
``` | MonthBegin.is_quarter_end()#
Return boolean whether a timestamp occurs on the quarter end.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_quarter_end(ts)
False
| reference/api/pandas.tseries.offsets.MonthBegin.is_quarter_end.html |
pandas.Index.append | `pandas.Index.append`
Append a collection of Index options together. | Index.append(other)[source]#
Append a collection of Index options together.
Parameters
otherIndex or list/tuple of indices
Returns
Index
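A minimal sketch of typical usage (added for illustration; list() keeps the output independent of the exact Index subclass repr):
>>> idx = pd.Index([1, 2, 3])
>>> list(idx.append(pd.Index([4, 5])))
[1, 2, 3, 4, 5]
>>> list(idx.append([pd.Index([4]), pd.Index([5, 6])]))
[1, 2, 3, 4, 5, 6]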
| reference/api/pandas.Index.append.html |
pandas.Series.autocorr | `pandas.Series.autocorr`
Compute the lag-N autocorrelation.
```
>>> s = pd.Series([0.25, 0.5, 0.2, -0.05])
>>> s.autocorr()
0.10355...
>>> s.autocorr(lag=2)
-0.99999...
``` | Series.autocorr(lag=1)[source]#
Compute the lag-N autocorrelation.
This method computes the Pearson correlation between
the Series and its shifted self.
Parameters
lagint, default 1Number of lags to apply before performing autocorrelation.
Returns
floatThe Pearson correlation between self and self.shift(lag).
See also
Series.corrCompute the correlation between two Series.
Series.shiftShift index by desired number of periods.
DataFrame.corrCompute pairwise correlation of columns.
DataFrame.corrwithCompute pairwise correlation between rows or columns of two DataFrame objects.
Notes
If the Pearson correlation is not well defined, ‘NaN’ is returned.
Examples
>>> s = pd.Series([0.25, 0.5, 0.2, -0.05])
>>> s.autocorr()
0.10355...
>>> s.autocorr(lag=2)
-0.99999...
If the Pearson correlation is not well defined, then ‘NaN’ is returned.
>>> s = pd.Series([1, 0, 0, 0])
>>> s.autocorr()
nan
| reference/api/pandas.Series.autocorr.html |
pandas.tseries.offsets.Hour.is_on_offset | `pandas.tseries.offsets.Hour.is_on_offset`
Return boolean whether a timestamp intersects with this frequency.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
``` | Hour.is_on_offset()#
Return boolean whether a timestamp intersects with this frequency.
Parameters
dtdatetime.datetimeTimestamp to check intersections with frequency.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Day(1)
>>> freq.is_on_offset(ts)
True
>>> ts = pd.Timestamp(2022, 8, 6)
>>> ts.day_name()
'Saturday'
>>> freq = pd.offsets.BusinessDay(1)
>>> freq.is_on_offset(ts)
False
| reference/api/pandas.tseries.offsets.Hour.is_on_offset.html |
pandas.tseries.offsets.CustomBusinessHour.base | `pandas.tseries.offsets.CustomBusinessHour.base`
Returns a copy of the calling offset object with n=1 and all other
attributes equal. | CustomBusinessHour.base#
Returns a copy of the calling offset object with n=1 and all other
attributes equal.
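A small illustrative sketch (not in the original entry) showing that base resets n to 1 while keeping the remaining attributes:
>>> freq = pd.offsets.CustomBusinessHour(n=3, start="10:00")
>>> freq.n
3
>>> freq.base.n
1
>>> freq.base.start == freq.start
True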
| reference/api/pandas.tseries.offsets.CustomBusinessHour.base.html |
pandas.tseries.offsets.BMonthBegin | `pandas.tseries.offsets.BMonthBegin`
alias of pandas._libs.tslibs.offsets.BusinessMonthBegin | pandas.tseries.offsets.BMonthBegin#
alias of pandas._libs.tslibs.offsets.BusinessMonthBegin
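Since this is only an alias, both names refer to the same class; a tiny illustrative check:
>>> pd.offsets.BMonthBegin is pd.offsets.BusinessMonthBegin
True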
| reference/api/pandas.tseries.offsets.BMonthBegin.html |
Reshaping and pivot tables | Reshaping and pivot tables | Reshaping by pivoting DataFrame objects#
Data is often stored in so-called “stacked” or “record” format:
In [1]: import pandas._testing as tm
In [2]: def unpivot(frame):
...: N, K = frame.shape
...: data = {
...: "value": frame.to_numpy().ravel("F"),
...: "variable": np.asarray(frame.columns).repeat(N),
...: "date": np.tile(np.asarray(frame.index), K),
...: }
...: return pd.DataFrame(data, columns=["date", "variable", "value"])
...:
In [3]: df = unpivot(tm.makeTimeDataFrame(3))
In [4]: df
Out[4]:
date variable value
0 2000-01-03 A 0.469112
1 2000-01-04 A -0.282863
2 2000-01-05 A -1.509059
3 2000-01-03 B -1.135632
4 2000-01-04 B 1.212112
5 2000-01-05 B -0.173215
6 2000-01-03 C 0.119209
7 2000-01-04 C -1.044236
8 2000-01-05 C -0.861849
9 2000-01-03 D -2.104569
10 2000-01-04 D -0.494929
11 2000-01-05 D 1.071804
To select out everything for variable A we could do:
In [5]: filtered = df[df["variable"] == "A"]
In [6]: filtered
Out[6]:
date variable value
0 2000-01-03 A 0.469112
1 2000-01-04 A -0.282863
2 2000-01-05 A -1.509059
But suppose we wish to do time series operations with the variables. A better
representation would be where the columns are the unique variables and an
index of dates identifies individual observations. To reshape the data into
this form, we use the DataFrame.pivot() method (also implemented as a
top level function pivot()):
In [7]: pivoted = df.pivot(index="date", columns="variable", values="value")
In [8]: pivoted
Out[8]:
variable A B C D
date
2000-01-03 0.469112 -1.135632 0.119209 -2.104569
2000-01-04 -0.282863 1.212112 -1.044236 -0.494929
2000-01-05 -1.509059 -0.173215 -0.861849 1.071804
If the values argument is omitted, and the input DataFrame has more than
one column of values which are not used as column or index inputs to pivot(),
then the resulting “pivoted” DataFrame will have hierarchical columns whose topmost level indicates the respective value
column:
In [9]: df["value2"] = df["value"] * 2
In [10]: pivoted = df.pivot(index="date", columns="variable")
In [11]: pivoted
Out[11]:
value ... value2
variable A B C ... B C D
date ...
2000-01-03 0.469112 -1.135632 0.119209 ... -2.271265 0.238417 -4.209138
2000-01-04 -0.282863 1.212112 -1.044236 ... 2.424224 -2.088472 -0.989859
2000-01-05 -1.509059 -0.173215 -0.861849 ... -0.346429 -1.723698 2.143608
[3 rows x 8 columns]
You can then select subsets from the pivoted DataFrame:
In [12]: pivoted["value2"]
Out[12]:
variable A B C D
date
2000-01-03 0.938225 -2.271265 0.238417 -4.209138
2000-01-04 -0.565727 2.424224 -2.088472 -0.989859
2000-01-05 -3.018117 -0.346429 -1.723698 2.143608
Note that this returns a view on the underlying data in the case where the data
are homogeneously-typed.
Note
pivot() will error with a ValueError: Index contains duplicate
entries, cannot reshape if the index/column pair is not unique. In this
case, consider using pivot_table() which is a generalization
of pivot that can handle duplicate values for one index/column pair.
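As an illustrative sketch of that note (not part of the original guide; the names are ours), pivot() raises on a duplicated index/column pair, while pivot_table() aggregates the duplicates:
df_dup = pd.DataFrame({"key": ["a", "a"], "col": ["x", "x"], "val": [1, 3]})
# df_dup.pivot(index="key", columns="col", values="val")  # raises ValueError
df_dup.pivot_table(index="key", columns="col", values="val", aggfunc="mean")  # "x" aggregates to 2.0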
Reshaping by stacking and unstacking#
Closely related to the pivot() method are the related
stack() and unstack() methods available on
Series and DataFrame. These methods are designed to work together with
MultiIndex objects (see the section on hierarchical indexing). Here are essentially what these methods do:
stack(): “pivot” a level of the (possibly hierarchical) column labels,
returning a DataFrame with an index with a new inner-most level of row
labels.
unstack(): (inverse operation of stack()) “pivot” a level of the
(possibly hierarchical) row index to the column axis, producing a reshaped
DataFrame with a new inner-most level of column labels.
The clearest way to explain is by example. Let’s take a prior example data set
from the hierarchical indexing section:
In [13]: tuples = list(
....: zip(
....: *[
....: ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"],
....: ["one", "two", "one", "two", "one", "two", "one", "two"],
....: ]
....: )
....: )
....:
In [14]: index = pd.MultiIndex.from_tuples(tuples, names=["first", "second"])
In [15]: df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=["A", "B"])
In [16]: df2 = df[:4]
In [17]: df2
Out[17]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401
The stack() function “compresses” a level in the DataFrame columns to
produce either:
A Series, in the case of a simple column Index.
A DataFrame, in the case of a MultiIndex in the columns.
If the columns have a MultiIndex, you can choose which level to stack. The
stacked level becomes the new lowest level in a MultiIndex on the columns:
In [18]: stacked = df2.stack()
In [19]: stacked
Out[19]:
first second
bar one A 0.721555
B -0.706771
two A -1.039575
B 0.271860
baz one A -0.424972
B 0.567020
two A 0.276232
B -1.087401
dtype: float64
With a “stacked” DataFrame or Series (having a MultiIndex as the
index), the inverse operation of stack() is unstack(), which by default
unstacks the last level:
In [20]: stacked.unstack()
Out[20]:
A B
first second
bar one 0.721555 -0.706771
two -1.039575 0.271860
baz one -0.424972 0.567020
two 0.276232 -1.087401
In [21]: stacked.unstack(1)
Out[21]:
second one two
first
bar A 0.721555 -1.039575
B -0.706771 0.271860
baz A -0.424972 0.276232
B 0.567020 -1.087401
In [22]: stacked.unstack(0)
Out[22]:
first bar baz
second
one A 0.721555 -0.424972
B -0.706771 0.567020
two A -1.039575 0.276232
B 0.271860 -1.087401
If the indexes have names, you can use the level names instead of specifying
the level numbers:
In [23]: stacked.unstack("second")
Out[23]:
second one two
first
bar A 0.721555 -1.039575
B -0.706771 0.271860
baz A -0.424972 0.276232
B 0.567020 -1.087401
Notice that the stack() and unstack() methods implicitly sort the index
levels involved. Hence a call to stack() and then unstack(), or vice versa,
will result in a sorted copy of the original DataFrame or Series:
In [24]: index = pd.MultiIndex.from_product([[2, 1], ["a", "b"]])
In [25]: df = pd.DataFrame(np.random.randn(4), index=index, columns=["A"])
In [26]: df
Out[26]:
A
2 a -0.370647
b -1.157892
1 a -1.344312
b 0.844885
In [27]: all(df.unstack().stack() == df.sort_index())
Out[27]: True
The above code will raise a TypeError if the call to sort_index() is
removed.
Multiple levels#
You may also stack or unstack more than one level at a time by passing a list
of levels, in which case the end result is as if each level in the list were
processed individually.
In [28]: columns = pd.MultiIndex.from_tuples(
....: [
....: ("A", "cat", "long"),
....: ("B", "cat", "long"),
....: ("A", "dog", "short"),
....: ("B", "dog", "short"),
....: ],
....: names=["exp", "animal", "hair_length"],
....: )
....:
In [29]: df = pd.DataFrame(np.random.randn(4, 4), columns=columns)
In [30]: df
Out[30]:
exp A B A B
animal cat cat dog dog
hair_length long long short short
0 1.075770 -0.109050 1.643563 -1.469388
1 0.357021 -0.674600 -1.776904 -0.968914
2 -1.294524 0.413738 0.276662 -0.472035
3 -0.013960 -0.362543 -0.006154 -0.923061
In [31]: df.stack(level=["animal", "hair_length"])
Out[31]:
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061
The list of levels can contain either level names or level numbers (but
not a mixture of the two).
# df.stack(level=['animal', 'hair_length'])
# from above is equivalent to:
In [32]: df.stack(level=[1, 2])
Out[32]:
exp A B
animal hair_length
0 cat long 1.075770 -0.109050
dog short 1.643563 -1.469388
1 cat long 0.357021 -0.674600
dog short -1.776904 -0.968914
2 cat long -1.294524 0.413738
dog short 0.276662 -0.472035
3 cat long -0.013960 -0.362543
dog short -0.006154 -0.923061
Missing data#
These functions are intelligent about handling missing data and do not expect
each subgroup within the hierarchical index to have the same set of labels.
They also can handle the index being unsorted (but you can make it sorted by
calling sort_index(), of course). Here is a more complex example:
In [33]: columns = pd.MultiIndex.from_tuples(
....: [
....: ("A", "cat"),
....: ("B", "dog"),
....: ("B", "cat"),
....: ("A", "dog"),
....: ],
....: names=["exp", "animal"],
....: )
....:
In [34]: index = pd.MultiIndex.from_product(
....: [("bar", "baz", "foo", "qux"), ("one", "two")], names=["first", "second"]
....: )
....:
In [35]: df = pd.DataFrame(np.random.randn(8, 4), index=index, columns=columns)
In [36]: df2 = df.iloc[[0, 1, 2, 4, 5, 7]]
In [37]: df2
Out[37]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
foo one -1.413681 1.607920 1.024180 0.569605
two 0.875906 -2.211372 0.974466 -2.006747
qux two -1.226825 0.769804 -1.281247 -0.727707
As mentioned above, stack() can be called with a level argument to select
which level in the columns to stack:
In [38]: df2.stack("exp")
Out[38]:
animal cat dog
first second exp
bar one A 0.895717 2.565646
B -1.206412 0.805244
two A 1.431256 -0.226169
B -1.170299 1.340309
baz one A 0.410835 -0.827317
B 0.132003 0.813850
foo one A -1.413681 0.569605
B 1.024180 1.607920
two A 0.875906 -2.006747
B 0.974466 -2.211372
qux two A -1.226825 -0.727707
B -1.281247 0.769804
In [39]: df2.stack("animal")
Out[39]:
exp A B
first second animal
bar one cat 0.895717 -1.206412
dog 2.565646 0.805244
two cat 1.431256 -1.170299
dog -0.226169 1.340309
baz one cat 0.410835 0.132003
dog -0.827317 0.813850
foo one cat -1.413681 1.024180
dog 0.569605 1.607920
two cat 0.875906 0.974466
dog -2.006747 -2.211372
qux two cat -1.226825 -1.281247
dog -0.727707 0.769804
Unstacking can result in missing values if subgroups do not have the same
set of labels. By default, missing values will be replaced with the default
fill value for that data type, NaN for float, NaT for datetimelike,
etc. For integer types, by default data will be converted to float and missing
values will be set to NaN.
In [40]: df3 = df.iloc[[0, 1, 4, 7], [1, 2]]
In [41]: df3
Out[41]:
exp B
animal dog cat
first second
bar one 0.805244 -1.206412
two 1.340309 -1.170299
foo one 1.607920 1.024180
qux two 0.769804 -1.281247
In [42]: df3.unstack()
Out[42]:
exp B
animal dog cat
second one two one two
first
bar 0.805244 1.340309 -1.206412 -1.170299
foo 1.607920 NaN 1.024180 NaN
qux NaN 0.769804 NaN -1.281247
Alternatively, unstack takes an optional fill_value argument, for specifying
the value of missing data.
In [43]: df3.unstack(fill_value=-1e9)
Out[43]:
exp B
animal dog cat
second one two one two
first
bar 8.052440e-01 1.340309e+00 -1.206412e+00 -1.170299e+00
foo 1.607920e+00 -1.000000e+09 1.024180e+00 -1.000000e+09
qux -1.000000e+09 7.698036e-01 -1.000000e+09 -1.281247e+00
With a MultiIndex#
Unstacking when the columns are a MultiIndex is also careful about doing
the right thing:
In [44]: df[:3].unstack(0)
Out[44]:
exp A B ... A
animal cat dog ... cat dog
first bar baz bar ... baz bar baz
second ...
one 0.895717 0.410835 0.805244 ... 0.132003 2.565646 -0.827317
two 1.431256 NaN 1.340309 ... NaN -0.226169 NaN
[2 rows x 8 columns]
In [45]: df2.unstack(1)
Out[45]:
exp A B ... A
animal cat dog ... cat dog
second one two one ... two one two
first ...
bar 0.895717 1.431256 0.805244 ... -1.170299 2.565646 -0.226169
baz 0.410835 NaN 0.813850 ... NaN -0.827317 NaN
foo -1.413681 0.875906 1.607920 ... 0.974466 0.569605 -2.006747
qux NaN -1.226825 NaN ... -1.281247 NaN -0.727707
[4 rows x 8 columns]
Reshaping by melt#
The top-level melt() function and the corresponding DataFrame.melt()
are useful to massage a DataFrame into a format where one or more columns
are identifier variables, while all other columns, considered measured
variables, are “unpivoted” to the row axis, leaving just two non-identifier
columns, “variable” and “value”. The names of those columns can be customized
by supplying the var_name and value_name parameters.
For instance,
In [46]: cheese = pd.DataFrame(
....: {
....: "first": ["John", "Mary"],
....: "last": ["Doe", "Bo"],
....: "height": [5.5, 6.0],
....: "weight": [130, 150],
....: }
....: )
....:
In [47]: cheese
Out[47]:
first last height weight
0 John Doe 5.5 130
1 Mary Bo 6.0 150
In [48]: cheese.melt(id_vars=["first", "last"])
Out[48]:
first last variable value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
In [49]: cheese.melt(id_vars=["first", "last"], var_name="quantity")
Out[49]:
first last quantity value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
When transforming a DataFrame using melt(), the index will be ignored. The original index values can be kept around by setting the ignore_index parameter to False (default is True). This will however duplicate them.
New in version 1.1.0.
In [50]: index = pd.MultiIndex.from_tuples([("person", "A"), ("person", "B")])
In [51]: cheese = pd.DataFrame(
....: {
....: "first": ["John", "Mary"],
....: "last": ["Doe", "Bo"],
....: "height": [5.5, 6.0],
....: "weight": [130, 150],
....: },
....: index=index,
....: )
....:
In [52]: cheese
Out[52]:
first last height weight
person A John Doe 5.5 130
B Mary Bo 6.0 150
In [53]: cheese.melt(id_vars=["first", "last"])
Out[53]:
first last variable value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
In [54]: cheese.melt(id_vars=["first", "last"], ignore_index=False)
Out[54]:
first last variable value
person A John Doe height 5.5
B Mary Bo height 6.0
A John Doe weight 130.0
B Mary Bo weight 150.0
Another way to transform is to use the wide_to_long() panel data
convenience function. It is less flexible than melt(), but more
user-friendly.
In [55]: dft = pd.DataFrame(
....: {
....: "A1970": {0: "a", 1: "b", 2: "c"},
....: "A1980": {0: "d", 1: "e", 2: "f"},
....: "B1970": {0: 2.5, 1: 1.2, 2: 0.7},
....: "B1980": {0: 3.2, 1: 1.3, 2: 0.1},
....: "X": dict(zip(range(3), np.random.randn(3))),
....: }
....: )
....:
In [56]: dft["id"] = dft.index
In [57]: dft
Out[57]:
A1970 A1980 B1970 B1980 X id
0 a d 2.5 3.2 -0.121306 0
1 b e 1.2 1.3 -0.097883 1
2 c f 0.7 0.1 0.695775 2
In [58]: pd.wide_to_long(dft, ["A", "B"], i="id", j="year")
Out[58]:
X A B
id year
0 1970 -0.121306 a 2.5
1 1970 -0.097883 b 1.2
2 1970 0.695775 c 0.7
0 1980 -0.121306 d 3.2
1 1980 -0.097883 e 1.3
2 1980 0.695775 f 0.1
Combining with stats and GroupBy#
It should be no shock that combining pivot() / stack() / unstack() with
GroupBy and the basic Series and DataFrame statistical functions can produce
some very expressive and fast data manipulations.
In [59]: df
Out[59]:
exp A B A
animal cat dog cat dog
first second
bar one 0.895717 0.805244 -1.206412 2.565646
two 1.431256 1.340309 -1.170299 -0.226169
baz one 0.410835 0.813850 0.132003 -0.827317
two -0.076467 -1.187678 1.130127 -1.436737
foo one -1.413681 1.607920 1.024180 0.569605
two 0.875906 -2.211372 0.974466 -2.006747
qux one -0.410001 -0.078638 0.545952 -1.219217
two -1.226825 0.769804 -1.281247 -0.727707
In [60]: df.stack().mean(1).unstack()
Out[60]:
animal cat dog
first second
bar one -0.155347 1.685445
two 0.130479 0.557070
baz one 0.271419 -0.006733
two 0.526830 -1.312207
foo one -0.194750 1.088763
two 0.925186 -2.109060
qux one 0.067976 -0.648927
two -1.254036 0.021048
# same result, another way
In [61]: df.groupby(level=1, axis=1).mean()
Out[61]:
animal cat dog
first second
bar one -0.155347 1.685445
two 0.130479 0.557070
baz one 0.271419 -0.006733
two 0.526830 -1.312207
foo one -0.194750 1.088763
two 0.925186 -2.109060
qux one 0.067976 -0.648927
two -1.254036 0.021048
In [62]: df.stack().groupby(level=1).mean()
Out[62]:
exp A B
second
one 0.071448 0.455513
two -0.424186 -0.204486
In [63]: df.mean().unstack(0)
Out[63]:
exp A B
animal
cat 0.060843 0.018596
dog -0.413580 0.232430
Pivot tables#
While pivot() provides general purpose pivoting with various
data types (strings, numerics, etc.), pandas also provides pivot_table()
for pivoting with aggregation of numeric data.
The function pivot_table() can be used to create spreadsheet-style
pivot tables. See the cookbook for some advanced
strategies.
It takes a number of arguments:
data: a DataFrame object.
values: a column or a list of columns to aggregate.
index: a column, Grouper, array which has the same length as data, or list of them.
Keys to group by on the pivot table index. If an array is passed, it is used in the same manner as column values.
columns: a column, Grouper, array which has the same length as data, or list of them.
Keys to group by on the pivot table column. If an array is passed, it is used in the same manner as column values.
aggfunc: function to use for aggregation, defaulting to numpy.mean.
Consider a data set like this:
In [64]: import datetime
In [65]: df = pd.DataFrame(
....: {
....: "A": ["one", "one", "two", "three"] * 6,
....: "B": ["A", "B", "C"] * 8,
....: "C": ["foo", "foo", "foo", "bar", "bar", "bar"] * 4,
....: "D": np.random.randn(24),
....: "E": np.random.randn(24),
....: "F": [datetime.datetime(2013, i, 1) for i in range(1, 13)]
....: + [datetime.datetime(2013, i, 15) for i in range(1, 13)],
....: }
....: )
....:
In [66]: df
Out[66]:
A B C D E F
0 one A foo 0.341734 -0.317441 2013-01-01
1 one B foo 0.959726 -1.236269 2013-02-01
2 two C foo -1.110336 0.896171 2013-03-01
3 three A bar -0.619976 -0.487602 2013-04-01
4 one B bar 0.149748 -0.082240 2013-05-01
.. ... .. ... ... ... ...
19 three B foo 0.690579 -2.213588 2013-08-15
20 one C foo 0.995761 1.063327 2013-09-15
21 one A bar 2.396780 1.266143 2013-10-15
22 two B bar 0.014871 0.299368 2013-11-15
23 three C bar 3.357427 -0.863838 2013-12-15
[24 rows x 6 columns]
We can produce pivot tables from this data very easily:
In [67]: pd.pivot_table(df, values="D", index=["A", "B"], columns=["C"])
Out[67]:
C bar foo
A B
one A 1.120915 -0.514058
B -0.338421 0.002759
C -0.538846 0.699535
three A -1.181568 NaN
B NaN 0.433512
C 0.588783 NaN
two A NaN 1.000985
B 0.158248 NaN
C NaN 0.176180
In [68]: pd.pivot_table(df, values="D", index=["B"], columns=["A", "C"], aggfunc=np.sum)
Out[68]:
A one three two
C bar foo bar foo bar foo
B
A 2.241830 -1.028115 -2.363137 NaN NaN 2.001971
B -0.676843 0.005518 NaN 0.867024 0.316495 NaN
C -1.077692 1.399070 1.177566 NaN NaN 0.352360
In [69]: pd.pivot_table(
....: df, values=["D", "E"],
....: index=["B"],
....: columns=["A", "C"],
....: aggfunc=np.sum,
....: )
....:
Out[69]:
D ... E
A one three ... three two
C bar foo bar ... foo bar foo
B ...
A 2.241830 -1.028115 -2.363137 ... NaN NaN 0.128491
B -0.676843 0.005518 NaN ... -2.128743 -0.194294 NaN
C -1.077692 1.399070 1.177566 ... NaN NaN 0.872482
[3 rows x 12 columns]
The result object is a DataFrame having potentially hierarchical indexes on the
rows and columns. If the values column name is not given, the pivot table
will include all of the data in an additional level of hierarchy in the columns:
In [70]: pd.pivot_table(df[["A", "B", "C", "D", "E"]], index=["A", "B"], columns=["C"])
Out[70]:
D E
C bar foo bar foo
A B
one A 1.120915 -0.514058 1.393057 -0.021605
B -0.338421 0.002759 0.684140 -0.551692
C -0.538846 0.699535 -0.988442 0.747859
three A -1.181568 NaN 0.961289 NaN
B NaN 0.433512 NaN -1.064372
C 0.588783 NaN -0.131830 NaN
two A NaN 1.000985 NaN 0.064245
B 0.158248 NaN -0.097147 NaN
C NaN 0.176180 NaN 0.436241
Also, you can use Grouper for index and columns keywords. For detail of Grouper, see Grouping with a Grouper specification.
In [71]: pd.pivot_table(df, values="D", index=pd.Grouper(freq="M", key="F"), columns="C")
Out[71]:
C bar foo
F
2013-01-31 NaN -0.514058
2013-02-28 NaN 0.002759
2013-03-31 NaN 0.176180
2013-04-30 -1.181568 NaN
2013-05-31 -0.338421 NaN
2013-06-30 -0.538846 NaN
2013-07-31 NaN 1.000985
2013-08-31 NaN 0.433512
2013-09-30 NaN 0.699535
2013-10-31 1.120915 NaN
2013-11-30 0.158248 NaN
2013-12-31 0.588783 NaN
You can render a nice output of the table omitting the missing values by
calling to_string() if you wish:
In [72]: table = pd.pivot_table(df, index=["A", "B"], columns=["C"], values=["D", "E"])
In [73]: print(table.to_string(na_rep=""))
D E
C bar foo bar foo
A B
one A 1.120915 -0.514058 1.393057 -0.021605
B -0.338421 0.002759 0.684140 -0.551692
C -0.538846 0.699535 -0.988442 0.747859
three A -1.181568 0.961289
B 0.433512 -1.064372
C 0.588783 -0.131830
two A 1.000985 0.064245
B 0.158248 -0.097147
C 0.176180 0.436241
Note that pivot_table() is also available as an instance method on DataFrame, i.e. DataFrame.pivot_table().
Adding margins#
If you pass margins=True to pivot_table(), special All columns and
rows will be added with partial group aggregates across the categories on the
rows and columns:
In [74]: table = df.pivot_table(
....: index=["A", "B"],
....: columns="C",
....: values=["D", "E"],
....: margins=True,
....: aggfunc=np.std
....: )
....:
In [75]: table
Out[75]:
D E
C bar foo All bar foo All
A B
one A 1.804346 1.210272 1.569879 0.179483 0.418374 0.858005
B 0.690376 1.353355 0.898998 1.083825 0.968138 1.101401
C 0.273641 0.418926 0.771139 1.689271 0.446140 1.422136
three A 0.794212 NaN 0.794212 2.049040 NaN 2.049040
B NaN 0.363548 0.363548 NaN 1.625237 1.625237
C 3.915454 NaN 3.915454 1.035215 NaN 1.035215
two A NaN 0.442998 0.442998 NaN 0.447104 0.447104
B 0.202765 NaN 0.202765 0.560757 NaN 0.560757
C NaN 1.819408 1.819408 NaN 0.650439 0.650439
All 1.556686 0.952552 1.246608 1.250924 0.899904 1.059389
Additionally, you can call DataFrame.stack() to display a pivoted DataFrame
as having a multi-level index:
In [76]: table.stack()
Out[76]:
D E
A B C
one A All 1.569879 0.858005
bar 1.804346 0.179483
foo 1.210272 0.418374
B All 0.898998 1.101401
bar 0.690376 1.083825
... ... ...
two C All 1.819408 0.650439
foo 1.819408 0.650439
All All 1.246608 1.059389
bar 1.556686 1.250924
foo 0.952552 0.899904
[24 rows x 2 columns]
Cross tabulations#
Use crosstab() to compute a cross-tabulation of two (or more)
factors. By default crosstab() computes a frequency table of the factors
unless an array of values and an aggregation function are passed.
It takes a number of arguments:
index: array-like, values to group by in the rows.
columns: array-like, values to group by in the columns.
values: array-like, optional, array of values to aggregate according to
the factors.
aggfunc: function, optional, If no values array is passed, computes a
frequency table.
rownames: sequence, default None, must match number of row arrays passed.
colnames: sequence, default None, if passed, must match number of column
arrays passed.
margins: boolean, default False, Add row/column margins (subtotals)
normalize: boolean, {‘all’, ‘index’, ‘columns’}, or {0,1}, default False.
Normalize by dividing all values by the sum of values.
Any Series passed will have their name attributes used unless row or column
names for the cross-tabulation are specified.
For example:
In [77]: foo, bar, dull, shiny, one, two = "foo", "bar", "dull", "shiny", "one", "two"
In [78]: a = np.array([foo, foo, bar, bar, foo, foo], dtype=object)
In [79]: b = np.array([one, one, two, one, two, one], dtype=object)
In [80]: c = np.array([dull, dull, shiny, dull, dull, shiny], dtype=object)
In [81]: pd.crosstab(a, [b, c], rownames=["a"], colnames=["b", "c"])
Out[81]:
b one two
c dull shiny dull shiny
a
bar 1 0 0 1
foo 2 1 1 0
If crosstab() receives only two Series, it will provide a frequency table.
In [82]: df = pd.DataFrame(
....: {"A": [1, 2, 2, 2, 2], "B": [3, 3, 4, 4, 4], "C": [1, 1, np.nan, 1, 1]}
....: )
....:
In [83]: df
Out[83]:
A B C
0 1 3 1.0
1 2 3 1.0
2 2 4 NaN
3 2 4 1.0
4 2 4 1.0
In [84]: pd.crosstab(df["A"], df["B"])
Out[84]:
B 3 4
A
1 1 0
2 1 3
crosstab() can also be applied
to Categorical data.
In [85]: foo = pd.Categorical(["a", "b"], categories=["a", "b", "c"])
In [86]: bar = pd.Categorical(["d", "e"], categories=["d", "e", "f"])
In [87]: pd.crosstab(foo, bar)
Out[87]:
col_0 d e
row_0
a 1 0
b 0 1
If you want to include all data categories even if the actual data does
not contain any instances of a particular category, you should set dropna=False.
For example:
In [88]: pd.crosstab(foo, bar, dropna=False)
Out[88]:
col_0 d e f
row_0
a 1 0 0
b 0 1 0
c 0 0 0
Normalization#
Frequency tables can also be normalized to show percentages rather than counts
using the normalize argument:
In [89]: pd.crosstab(df["A"], df["B"], normalize=True)
Out[89]:
B 3 4
A
1 0.2 0.0
2 0.2 0.6
normalize can also normalize values within each row or within each column:
In [90]: pd.crosstab(df["A"], df["B"], normalize="columns")
Out[90]:
B 3 4
A
1 0.5 0.0
2 0.5 1.0
crosstab() can also be passed a third Series and an aggregation function
(aggfunc) that will be applied to the values of the third Series within
each group defined by the first two Series:
In [91]: pd.crosstab(df["A"], df["B"], values=df["C"], aggfunc=np.sum)
Out[91]:
B 3 4
A
1 1.0 NaN
2 1.0 2.0
Adding margins#
Finally, one can also add margins or normalize this output.
In [92]: pd.crosstab(
....: df["A"], df["B"], values=df["C"], aggfunc=np.sum, normalize=True, margins=True
....: )
....:
Out[92]:
B 3 4 All
A
1 0.25 0.0 0.25
2 0.25 0.5 0.75
All 0.50 0.5 1.00
Tiling#
The cut() function computes groupings for the values of the input
array and is often used to transform continuous variables to discrete or
categorical variables:
In [93]: ages = np.array([10, 15, 13, 12, 23, 25, 28, 59, 60])
In [94]: pd.cut(ages, bins=3)
Out[94]:
[(9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (9.95, 26.667], (26.667, 43.333], (43.333, 60.0], (43.333, 60.0]]
Categories (3, interval[float64, right]): [(9.95, 26.667] < (26.667, 43.333] < (43.333, 60.0]]
If the bins keyword is an integer, then equal-width bins are formed.
Alternatively we can specify custom bin-edges:
In [95]: c = pd.cut(ages, bins=[0, 18, 35, 70])
In [96]: c
Out[96]:
[(0, 18], (0, 18], (0, 18], (0, 18], (18, 35], (18, 35], (18, 35], (35, 70], (35, 70]]
Categories (3, interval[int64, right]): [(0, 18] < (18, 35] < (35, 70]]
If the bins keyword is an IntervalIndex, then these will be
used to bin the passed data:
pd.cut([25, 20, 50], bins=c.categories)
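For illustration (this output is not shown in the original text), the three values above fall into the existing intervals like so, and the result keeps all three intervals as its categories:
# [(18, 35], (18, 35], (35, 70]]
# Categories (3, interval[int64, right]): [(0, 18] < (18, 35] < (35, 70]]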
Computing indicator / dummy variables#
To convert a categorical variable into a “dummy” or “indicator” DataFrame,
for example a column in a DataFrame (a Series) which has k distinct
values, you can derive a DataFrame containing k columns of 1s and 0s using
get_dummies():
In [97]: df = pd.DataFrame({"key": list("bbacab"), "data1": range(6)})
In [98]: pd.get_dummies(df["key"])
Out[98]:
a b c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
Sometimes it’s useful to prefix the column names, for example when merging the result
with the original DataFrame:
In [99]: dummies = pd.get_dummies(df["key"], prefix="key")
In [100]: dummies
Out[100]:
key_a key_b key_c
0 0 1 0
1 0 1 0
2 1 0 0
3 0 0 1
4 1 0 0
5 0 1 0
In [101]: df[["data1"]].join(dummies)
Out[101]:
data1 key_a key_b key_c
0 0 0 1 0
1 1 0 1 0
2 2 1 0 0
3 3 0 0 1
4 4 1 0 0
5 5 0 1 0
This function is often used along with discretization functions like cut():
In [102]: values = np.random.randn(10)
In [103]: values
Out[103]:
array([ 0.4082, -1.0481, -0.0257, -0.9884, 0.0941, 1.2627, 1.29 ,
0.0824, -0.0558, 0.5366])
In [104]: bins = [0, 0.2, 0.4, 0.6, 0.8, 1]
In [105]: pd.get_dummies(pd.cut(values, bins))
Out[105]:
(0.0, 0.2] (0.2, 0.4] (0.4, 0.6] (0.6, 0.8] (0.8, 1.0]
0 0 0 1 0 0
1 0 0 0 0 0
2 0 0 0 0 0
3 0 0 0 0 0
4 1 0 0 0 0
5 0 0 0 0 0
6 0 0 0 0 0
7 1 0 0 0 0
8 0 0 0 0 0
9 0 0 1 0 0
See also Series.str.get_dummies.
get_dummies() also accepts a DataFrame. By default all categorical
variables (categorical in the statistical sense, those with object or
categorical dtype) are encoded as dummy variables.
In [106]: df = pd.DataFrame({"A": ["a", "b", "a"], "B": ["c", "c", "b"], "C": [1, 2, 3]})
In [107]: pd.get_dummies(df)
Out[107]:
C A_a A_b B_b B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
All non-object columns are included untouched in the output. You can control
the columns that are encoded with the columns keyword.
In [108]: pd.get_dummies(df, columns=["A"])
Out[108]:
B C A_a A_b
0 c 1 1 0
1 c 2 0 1
2 b 3 1 0
Notice that the B column is still included in the output, it just hasn’t
been encoded. You can drop B before calling get_dummies if you don’t
want to include it in the output.
As with the Series version, you can pass values for the prefix and
prefix_sep. By default the column name is used as the prefix, and _ as
the prefix separator. You can specify prefix and prefix_sep in 3 ways:
string: Use the same value for prefix or prefix_sep for each column
to be encoded.
list: Must be the same length as the number of columns being encoded.
dict: Mapping column name to prefix.
In [109]: simple = pd.get_dummies(df, prefix="new_prefix")
In [110]: simple
Out[110]:
C new_prefix_a new_prefix_b new_prefix_b new_prefix_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [111]: from_list = pd.get_dummies(df, prefix=["from_A", "from_B"])
In [112]: from_list
Out[112]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
In [113]: from_dict = pd.get_dummies(df, prefix={"B": "from_B", "A": "from_A"})
In [114]: from_dict
Out[114]:
C from_A_a from_A_b from_B_b from_B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
Sometimes it will be useful to only keep k-1 levels of a categorical
variable to avoid collinearity when feeding the result to statistical models.
You can switch to this mode by turning on drop_first.
In [115]: s = pd.Series(list("abcaa"))
In [116]: pd.get_dummies(s)
Out[116]:
a b c
0 1 0 0
1 0 1 0
2 0 0 1
3 1 0 0
4 1 0 0
In [117]: pd.get_dummies(s, drop_first=True)
Out[117]:
b c
0 0 0
1 1 0
2 0 1
3 0 0
4 0 0
When a column contains only one level, it will be omitted in the result.
In [118]: df = pd.DataFrame({"A": list("aaaaa"), "B": list("ababc")})
In [119]: pd.get_dummies(df)
Out[119]:
A_a B_a B_b B_c
0 1 1 0 0
1 1 0 1 0
2 1 1 0 0
3 1 0 1 0
4 1 0 0 1
In [120]: pd.get_dummies(df, drop_first=True)
Out[120]:
B_b B_c
0 0 0
1 1 0
2 0 0
3 1 0
4 0 1
By default new columns will have np.uint8 dtype.
To choose another dtype, use the dtype argument:
In [121]: df = pd.DataFrame({"A": list("abc"), "B": [1.1, 2.2, 3.3]})
In [122]: pd.get_dummies(df, dtype=bool).dtypes
Out[122]:
B float64
A_a bool
A_b bool
A_c bool
dtype: object
New in version 1.5.0.
To convert a “dummy” or “indicator” DataFrame into a categorical DataFrame,
for example k columns of a DataFrame containing 1s and 0s, you can derive a
DataFrame which has k distinct values using
from_dummies():
In [123]: df = pd.DataFrame({"prefix_a": [0, 1, 0], "prefix_b": [1, 0, 1]})
In [124]: df
Out[124]:
prefix_a prefix_b
0 0 1
1 1 0
2 0 1
In [125]: pd.from_dummies(df, sep="_")
Out[125]:
prefix
0 b
1 a
2 b
Dummy coded data only requires k - 1 categories to be included; in this case
the k-th category is the default category, implied by not being assigned any of
the other k - 1 categories, and it can be passed via default_category.
In [126]: df = pd.DataFrame({"prefix_a": [0, 1, 0]})
In [127]: df
Out[127]:
prefix_a
0 0
1 1
2 0
In [128]: pd.from_dummies(df, sep="_", default_category="b")
Out[128]:
prefix
0 b
1 a
2 b
Factorizing values#
To encode 1-d values as an enumerated type use factorize():
In [129]: x = pd.Series(["A", "A", np.nan, "B", 3.14, np.inf])
In [130]: x
Out[130]:
0 A
1 A
2 NaN
3 B
4 3.14
5 inf
dtype: object
In [131]: labels, uniques = pd.factorize(x)
In [132]: labels
Out[132]: array([ 0, 0, -1, 1, 2, 3])
In [133]: uniques
Out[133]: Index(['A', 'B', 3.14, inf], dtype='object')
Note that factorize() is similar to numpy.unique, but differs in its
handling of NaN:
Note
The following numpy.unique will fail under Python 3 with a TypeError
because of an ordering bug.
In [134]: ser = pd.Series(['A', 'A', np.nan, 'B', 3.14, np.inf])
In [135]: pd.factorize(ser, sort=True)
Out[135]: (array([ 2, 2, -1, 3, 0, 1]), Index([3.14, inf, 'A', 'B'], dtype='object'))
In [136]: np.unique(ser, return_inverse=True)[::-1]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[136], line 1
----> 1 np.unique(ser, return_inverse=True)[::-1]
File <__array_function__ internals>:180, in unique(*args, **kwargs)
File ~/micromamba/envs/test/lib/python3.8/site-packages/numpy/lib/arraysetops.py:274, in unique(ar, return_index, return_inverse, return_counts, axis, equal_nan)
272 ar = np.asanyarray(ar)
273 if axis is None:
--> 274 ret = _unique1d(ar, return_index, return_inverse, return_counts,
275 equal_nan=equal_nan)
276 return _unpack_tuple(ret)
278 # axis was specified and not None
File ~/micromamba/envs/test/lib/python3.8/site-packages/numpy/lib/arraysetops.py:333, in _unique1d(ar, return_index, return_inverse, return_counts, equal_nan)
330 optional_indices = return_index or return_inverse
332 if optional_indices:
--> 333 perm = ar.argsort(kind='mergesort' if return_index else 'quicksort')
334 aux = ar[perm]
335 else:
TypeError: '<' not supported between instances of 'float' and 'str'
Note
If you just want to handle one column as a categorical variable (like R’s factor),
you can use df["cat_col"] = pd.Categorical(df["col"]) or
df["cat_col"] = df["col"].astype("category"). For full docs on Categorical,
see the Categorical introduction and the
API documentation.
Examples#
In this section, we will review frequently asked questions and examples. The
column names and relevant column values are named to correspond with how this
DataFrame will be pivoted in the answers below.
In [137]: np.random.seed([3, 1415])
In [138]: n = 20
In [139]: cols = np.array(["key", "row", "item", "col"])
In [140]: df = cols + pd.DataFrame(
.....: (np.random.randint(5, size=(n, 4)) // [2, 1, 2, 1]).astype(str)
.....: )
.....:
In [141]: df.columns = cols
In [142]: df = df.join(pd.DataFrame(np.random.rand(n, 2).round(2)).add_prefix("val"))
In [143]: df
Out[143]:
key row item col val0 val1
0 key0 row3 item1 col3 0.81 0.04
1 key1 row2 item1 col2 0.44 0.07
2 key1 row0 item1 col0 0.77 0.01
3 key0 row4 item0 col2 0.15 0.59
4 key1 row0 item2 col1 0.81 0.64
.. ... ... ... ... ... ...
15 key0 row3 item1 col1 0.31 0.23
16 key0 row0 item2 col3 0.86 0.01
17 key0 row4 item0 col3 0.64 0.21
18 key2 row2 item2 col0 0.13 0.45
19 key0 row2 item0 col4 0.37 0.70
[20 rows x 6 columns]
Pivoting with single aggregations#
Suppose we wanted to pivot df such that the col values are columns,
row values are the index, and the mean of val0 are the values. In
particular, the resulting DataFrame should look like:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65
row2 0.13 NaN 0.395 0.500 0.25
row3 NaN 0.310 NaN 0.545 NaN
row4 NaN 0.100 0.395 0.760 0.24
This solution uses pivot_table(). Also note that
aggfunc='mean' is the default. It is included here to be explicit.
In [144]: df.pivot_table(values="val0", index="row", columns="col", aggfunc="mean")
Out[144]:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65
row2 0.13 NaN 0.395 0.500 0.25
row3 NaN 0.310 NaN 0.545 NaN
row4 NaN 0.100 0.395 0.760 0.24
Note that we can also replace the missing values by using the fill_value
parameter.
In [145]: df.pivot_table(
.....: values="val0",
.....: index="row",
.....: columns="col",
.....: aggfunc="mean",
.....: fill_value=0,
.....: )
.....:
Out[145]:
col col0 col1 col2 col3 col4
row
row0 0.77 0.605 0.000 0.860 0.65
row2 0.13 0.000 0.395 0.500 0.25
row3 0.00 0.310 0.000 0.545 0.00
row4 0.00 0.100 0.395 0.760 0.24
Also note that we can pass in other aggregation functions as well. For example,
we can also pass in sum.
In [146]: df.pivot_table(
.....: values="val0",
.....: index="row",
.....: columns="col",
.....: aggfunc="sum",
.....: fill_value=0,
.....: )
.....:
Out[146]:
col col0 col1 col2 col3 col4
row
row0 0.77 1.21 0.00 0.86 0.65
row2 0.13 0.00 0.79 0.50 0.50
row3 0.00 0.31 0.00 1.09 0.00
row4 0.00 0.10 0.79 1.52 0.24
Another aggregation we can do is calculate the frequency with which the columns
and rows occur together, a.k.a. “cross tabulation”. To do this, we can pass
size to the aggfunc parameter.
In [147]: df.pivot_table(index="row", columns="col", fill_value=0, aggfunc="size")
Out[147]:
col col0 col1 col2 col3 col4
row
row0 1 2 0 1 1
row2 1 0 2 1 2
row3 0 1 0 2 0
row4 0 1 2 2 1
Pivoting with multiple aggregations#
We can also perform multiple aggregations. For example, to perform both a
sum and mean, we can pass in a list to the aggfunc argument.
In [148]: df.pivot_table(
.....: values="val0",
.....: index="row",
.....: columns="col",
.....: aggfunc=["mean", "sum"],
.....: )
.....:
Out[148]:
mean sum
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65 0.77 1.21 NaN 0.86 0.65
row2 0.13 NaN 0.395 0.500 0.25 0.13 NaN 0.79 0.50 0.50
row3 NaN 0.310 NaN 0.545 NaN NaN 0.31 NaN 1.09 NaN
row4 NaN 0.100 0.395 0.760 0.24 NaN 0.10 0.79 1.52 0.24
Note that to aggregate over multiple value columns, we can pass in a list to the
values parameter.
In [149]: df.pivot_table(
.....: values=["val0", "val1"],
.....: index="row",
.....: columns="col",
.....: aggfunc=["mean"],
.....: )
.....:
Out[149]:
mean
val0 val1
col col0 col1 col2 col3 col4 col0 col1 col2 col3 col4
row
row0 0.77 0.605 NaN 0.860 0.65 0.01 0.745 NaN 0.010 0.02
row2 0.13 NaN 0.395 0.500 0.25 0.45 NaN 0.34 0.440 0.79
row3 NaN 0.310 NaN 0.545 NaN NaN 0.230 NaN 0.075 NaN
row4 NaN 0.100 0.395 0.760 0.24 NaN 0.070 0.42 0.300 0.46
Note that to subdivide over multiple columns, we can pass in a list to the
columns parameter.
In [150]: df.pivot_table(
.....: values=["val0"],
.....: index="row",
.....: columns=["item", "col"],
.....: aggfunc=["mean"],
.....: )
.....:
Out[150]:
mean
val0
item item0 item1 item2
col col2 col3 col4 col0 col1 col2 col3 col4 col0 col1 col3 col4
row
row0 NaN NaN NaN 0.77 NaN NaN NaN NaN NaN 0.605 0.86 0.65
row2 0.35 NaN 0.37 NaN NaN 0.44 NaN NaN 0.13 NaN 0.50 0.13
row3 NaN NaN NaN NaN 0.31 NaN 0.81 NaN NaN NaN 0.28 NaN
row4 0.15 0.64 NaN NaN 0.10 0.64 0.88 0.24 NaN NaN NaN NaN
Exploding a list-like column#
New in version 0.25.0.
Sometimes the values in a column are list-like.
In [151]: keys = ["panda1", "panda2", "panda3"]
In [152]: values = [["eats", "shoots"], ["shoots", "leaves"], ["eats", "leaves"]]
In [153]: df = pd.DataFrame({"keys": keys, "values": values})
In [154]: df
Out[154]:
keys values
0 panda1 [eats, shoots]
1 panda2 [shoots, leaves]
2 panda3 [eats, leaves]
We can ‘explode’ the values column, transforming each list-like to a separate row, by using explode(). This will replicate the index values from the original row:
In [155]: df["values"].explode()
Out[155]:
0 eats
0 shoots
1 shoots
1 leaves
2 eats
2 leaves
Name: values, dtype: object
You can also explode the column in the DataFrame.
In [156]: df.explode("values")
Out[156]:
keys values
0 panda1 eats
0 panda1 shoots
1 panda2 shoots
1 panda2 leaves
2 panda3 eats
2 panda3 leaves
Series.explode() will replace empty lists with np.nan and preserve scalar entries. The dtype of the resulting Series is always object.
In [157]: s = pd.Series([[1, 2, 3], "foo", [], ["a", "b"]])
In [158]: s
Out[158]:
0 [1, 2, 3]
1 foo
2 []
3 [a, b]
dtype: object
In [159]: s.explode()
Out[159]:
0 1
0 2
0 3
1 foo
2 NaN
3 a
3 b
dtype: object
Here is a typical use case. You have comma-separated strings in a column and want to expand them.
In [160]: df = pd.DataFrame([{"var1": "a,b,c", "var2": 1}, {"var1": "d,e,f", "var2": 2}])
In [161]: df
Out[161]:
var1 var2
0 a,b,c 1
1 d,e,f 2
Creating a long-form DataFrame is now straightforward using explode and chained operations:
In [162]: df.assign(var1=df.var1.str.split(",")).explode("var1")
Out[162]:
var1 var2
0 a 1
0 b 1
0 c 1
1 d 2
1 e 2
1 f 2
| user_guide/reshaping.html |
pandas.Index.take | `pandas.Index.take`
Return a new Index of the values selected by the indices. | Index.take(indices, axis=0, allow_fill=True, fill_value=None, **kwargs)[source]#
Return a new Index of the values selected by the indices.
For internal compatibility with numpy arrays.
Parameters
indices : array-like
    Indices to be taken.
axis : int, optional
    The axis over which to select values, always 0.
allow_fill : bool, default True
fill_value : scalar, default None
    If allow_fill=True and fill_value is not None, indices specified by
    -1 are regarded as NA. If Index doesn’t hold NA, raise ValueError.
Returns
Index
    An index formed of elements at the given indices. Will be the same
    type as self, except for RangeIndex.
See also
numpy.ndarray.takeReturn an array formed from the elements of a at the given indices.
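Examples
An illustrative sketch (not part of the original entry):
>>> idx = pd.Index(['a', 'b', 'c'])
>>> idx.take([2, 0, 1])
Index(['c', 'a', 'b'], dtype='object')
With the default fill_value=None, -1 simply refers to the last element rather than NA:
>>> idx.take([0, -1])
Index(['a', 'c'], dtype='object')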
| reference/api/pandas.Index.take.html |
pandas.core.resample.Resampler.backfill | `pandas.core.resample.Resampler.backfill`
Backward fill the values.
Deprecated since version 1.4: Use bfill instead. | Resampler.backfill(limit=None)[source]#
Backward fill the values.
Deprecated since version 1.4: Use bfill instead.
Parameters
limit : int, optional
    Limit of how many values to fill.
Returns
Series, DataFrame
    An upsampled Series or DataFrame with backward filled NaN values.
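Since backfill is deprecated, the documented replacement is bfill. An illustrative sketch of upsampling with it (output values are expected, not taken from the original entry):
>>> s = pd.Series([1, 2, 3],
...               index=pd.date_range('2023-01-01', periods=3, freq='2H'))
>>> s.resample('H').bfill()
2023-01-01 00:00:00    1
2023-01-01 01:00:00    2
2023-01-01 02:00:00    2
2023-01-01 03:00:00    3
2023-01-01 04:00:00    3
Freq: H, dtype: int64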
| reference/api/pandas.core.resample.Resampler.backfill.html |
pandas.api.types.is_extension_type | `pandas.api.types.is_extension_type`
Check whether an array-like is of a pandas extension class instance.
Deprecated since version 1.0.0: Use is_extension_array_dtype instead.
```
>>> is_extension_type([1, 2, 3])
False
>>> is_extension_type(np.array([1, 2, 3]))
False
>>>
>>> cat = pd.Categorical([1, 2, 3])
>>>
>>> is_extension_type(cat)
True
>>> is_extension_type(pd.Series(cat))
True
>>> is_extension_type(pd.arrays.SparseArray([1, 2, 3]))
True
>>> from scipy.sparse import bsr_matrix
>>> is_extension_type(bsr_matrix([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True
>>>
>>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
>>> s = pd.Series([], dtype=dtype)
>>> is_extension_type(s)
True
``` | pandas.api.types.is_extension_type(arr)[source]#
Check whether an array-like is of a pandas extension class instance.
Deprecated since version 1.0.0: Use is_extension_array_dtype instead.
Extension classes include categoricals, pandas sparse objects (i.e.
classes represented within the pandas library and not ones external
to it like scipy sparse matrices), and datetime-like arrays.
Parameters
arr : array-like, scalar
    The array-like to check.
Returns
boolean
    Whether or not the array-like is of a pandas extension class instance.
Examples
>>> is_extension_type([1, 2, 3])
False
>>> is_extension_type(np.array([1, 2, 3]))
False
>>>
>>> cat = pd.Categorical([1, 2, 3])
>>>
>>> is_extension_type(cat)
True
>>> is_extension_type(pd.Series(cat))
True
>>> is_extension_type(pd.arrays.SparseArray([1, 2, 3]))
True
>>> from scipy.sparse import bsr_matrix
>>> is_extension_type(bsr_matrix([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3]))
False
>>> is_extension_type(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True
>>>
>>> dtype = DatetimeTZDtype("ns", tz="US/Eastern")
>>> s = pd.Series([], dtype=dtype)
>>> is_extension_type(s)
True
| reference/api/pandas.api.types.is_extension_type.html |
pandas.DataFrame.equals | `pandas.DataFrame.equals`
Test whether two objects contain the same elements.
```
>>> df = pd.DataFrame({1: [10], 2: [20]})
>>> df
1 2
0 10 20
``` | DataFrame.equals(other)[source]#
Test whether two objects contain the same elements.
This function allows two Series or DataFrames to be compared against
each other to see if they have the same shape and elements. NaNs in
the same location are considered equal.
The row/column index do not need to have the same type, as long
as the values are considered equal. Corresponding columns must be of
the same dtype.
Parameters
other : Series or DataFrame
    The other Series or DataFrame to be compared with the first.
Returns
bool
    True if all elements are the same in both objects, False
    otherwise.
See also
Series.eqCompare two Series objects of the same length and return a Series where each element is True if the element in each Series is equal, False otherwise.
DataFrame.eqCompare two DataFrame objects of the same shape and return a DataFrame where each element is True if the respective element in each DataFrame is equal, False otherwise.
testing.assert_series_equalRaises an AssertionError if left and right are not equal. Provides an easy interface to ignore inequality in dtypes, indexes and precision among others.
testing.assert_frame_equalLike assert_series_equal, but targets DataFrames.
numpy.array_equalReturn True if two arrays have the same shape and elements, False otherwise.
Examples
>>> df = pd.DataFrame({1: [10], 2: [20]})
>>> df
1 2
0 10 20
DataFrames df and exactly_equal have the same types and values for
their elements and column labels, which will return True.
>>> exactly_equal = pd.DataFrame({1: [10], 2: [20]})
>>> exactly_equal
1 2
0 10 20
>>> df.equals(exactly_equal)
True
DataFrames df and different_column_type have the same element
types and values, but have different types for the column labels,
which will still return True.
>>> different_column_type = pd.DataFrame({1.0: [10], 2.0: [20]})
>>> different_column_type
1.0 2.0
0 10 20
>>> df.equals(different_column_type)
True
DataFrames df and different_data_type have different types for the
same values for their elements, and will return False even though
their column labels are the same values and types.
>>> different_data_type = pd.DataFrame({1: [10.0], 2: [20.0]})
>>> different_data_type
1 2
0 10.0 20.0
>>> df.equals(different_data_type)
False
| reference/api/pandas.DataFrame.equals.html |
pandas.DataFrame.agg | `pandas.DataFrame.agg`
Aggregate using one or more operations over the specified axis.
```
>>> df = pd.DataFrame([[1, 2, 3],
... [4, 5, 6],
... [7, 8, 9],
... [np.nan, np.nan, np.nan]],
... columns=['A', 'B', 'C'])
``` | DataFrame.agg(func=None, axis=0, *args, **kwargs)[source]#
Aggregate using one or more operations over the specified axis.
Parameters
func : function, str, list or dict
    Function to use for aggregating the data. If a function, must either
    work when passed a DataFrame or when passed to DataFrame.apply.
    Accepted combinations are:
    - function
    - string function name
    - list of functions and/or function names, e.g. [np.sum, 'mean']
    - dict of axis labels -> functions, function names or list of such.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
    If 0 or ‘index’: apply function to each column.
    If 1 or ‘columns’: apply function to each row.
*args
    Positional arguments to pass to func.
**kwargs
    Keyword arguments to pass to func.
Returns
scalar, Series or DataFrame
    The return can be:
    - scalar : when Series.agg is called with a single function
    - Series : when DataFrame.agg is called with a single function
    - DataFrame : when DataFrame.agg is called with several functions
    Return scalar, Series or DataFrame.
The aggregation operations are always performed over an axis, either the
index (default) or the column axis. This behavior is different from
numpy aggregation functions (mean, median, prod, sum, std,
var), where the default is to compute the aggregation of the flattened
array, e.g., numpy.mean(arr_2d) as opposed to
numpy.mean(arr_2d, axis=0).
agg is an alias for aggregate. Use the alias.
See also
DataFrame.applyPerform any type of operations.
DataFrame.transformPerform transformation type operations.
core.groupby.GroupByPerform operations over groups.
core.resample.ResamplerPerform operations over resampled bins.
core.window.RollingPerform operations over rolling window.
core.window.ExpandingPerform operations over expanding window.
core.window.ExponentialMovingWindowPerform operation over exponential weighted window.
Notes
agg is an alias for aggregate. Use the alias.
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See Mutating with User Defined Function (UDF) methods
for more details.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> df = pd.DataFrame([[1, 2, 3],
... [4, 5, 6],
... [7, 8, 9],
... [np.nan, np.nan, np.nan]],
... columns=['A', 'B', 'C'])
Aggregate these functions over the rows.
>>> df.agg(['sum', 'min'])
A B C
sum 12.0 15.0 18.0
min 1.0 2.0 3.0
Different aggregations per column.
>>> df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']})
A B
sum 12.0 NaN
min 1.0 2.0
max NaN 8.0
Aggregate different functions over the columns and rename the index of the resulting
DataFrame.
>>> df.agg(x=('A', max), y=('B', 'min'), z=('C', np.mean))
A B C
x 7.0 NaN NaN
y NaN 2.0 NaN
z NaN NaN 6.0
Aggregate over the columns.
>>> df.agg("mean", axis="columns")
0 2.0
1 5.0
2 8.0
3 NaN
dtype: float64
| reference/api/pandas.DataFrame.agg.html |
pandas.Index.to_numpy | `pandas.Index.to_numpy`
A NumPy ndarray representing the values in this Series or Index.
```
>>> ser = pd.Series(pd.Categorical(['a', 'b', 'a']))
>>> ser.to_numpy()
array(['a', 'b', 'a'], dtype=object)
``` | Index.to_numpy(dtype=None, copy=False, na_value=_NoDefault.no_default, **kwargs)[source]#
A NumPy ndarray representing the values in this Series or Index.
Parameters
dtype : str or numpy.dtype, optional
    The dtype to pass to numpy.asarray().
copy : bool, default False
    Whether to ensure that the returned value is not a view on
    another array. Note that copy=False does not ensure that
    to_numpy() is no-copy. Rather, copy=True ensures that
    a copy is made, even if not strictly necessary.
na_value : Any, optional
    The value to use for missing values. The default value depends
    on dtype and the type of the array.
    New in version 1.0.0.
**kwargs
    Additional keywords passed through to the to_numpy method
    of the underlying array (for extension arrays).
    New in version 1.0.0.
Returns
numpy.ndarray
See also
Series.arrayGet the actual data stored within.
Index.arrayGet the actual data stored within.
DataFrame.to_numpySimilar method for DataFrame.
Notes
The returned array will be the same up to equality (values equal
in self will be equal in the returned array; likewise for values
that are not equal). When self contains an ExtensionArray, the
dtype may be different. For example, for a category-dtype Series,
to_numpy() will return a NumPy array and the categorical dtype
will be lost.
For NumPy dtypes, this will be a reference to the actual data stored
in this Series or Index (assuming copy=False). Modifying the result
in place will modify the data stored in the Series or Index (not that
we recommend doing that).
For extension types, to_numpy() may require copying data and
coercing the result to a NumPy type (possibly object), which may be
expensive. When you need a no-copy reference to the underlying data,
Series.array should be used instead.
This table lays out the different dtypes and default return types of
to_numpy() for various dtypes within pandas.
dtype                 array type
category[T]           ndarray[T] (same dtype as input)
period                ndarray[object] (Periods)
interval              ndarray[object] (Intervals)
IntegerNA             ndarray[object]
datetime64[ns]        datetime64[ns]
datetime64[ns, tz]    ndarray[object] (Timestamps)
Examples
>>> ser = pd.Series(pd.Categorical(['a', 'b', 'a']))
>>> ser.to_numpy()
array(['a', 'b', 'a'], dtype=object)
Specify the dtype to control how datetime-aware data is represented.
Use dtype=object to return an ndarray of pandas Timestamp
objects, each with the correct tz.
>>> ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))
>>> ser.to_numpy(dtype=object)
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'),
Timestamp('2000-01-02 00:00:00+0100', tz='CET')],
dtype=object)
Or dtype='datetime64[ns]' to return an ndarray of native
datetime64 values. The values are converted to UTC and the timezone
info is dropped.
>>> ser.to_numpy(dtype="datetime64[ns]")
...
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00...'],
dtype='datetime64[ns]')
| reference/api/pandas.Index.to_numpy.html |
pandas.tseries.offsets.BQuarterEnd.rule_code | pandas.tseries.offsets.BQuarterEnd.rule_code | BQuarterEnd.rule_code#
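The source entry has no description. As an illustrative sketch, the rule code is assumed here to combine the offset's 'BQ' prefix with the anchor month given by startingMonth:
>>> pd.offsets.BQuarterEnd(startingMonth=12).rule_code
'BQ-DEC'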
| reference/api/pandas.tseries.offsets.BQuarterEnd.rule_code.html |
pandas.core.window.expanding.Expanding.cov | `pandas.core.window.expanding.Expanding.cov`
Calculate the expanding sample covariance. | Expanding.cov(other=None, pairwise=None, ddof=1, numeric_only=False, **kwargs)[source]#
Calculate the expanding sample covariance.
Parameters
other : Series or DataFrame, optional
    If not supplied then will default to self and produce pairwise
    output.
pairwise : bool, default None
    If False then only matching columns between self and other will be
    used and the output will be a DataFrame.
    If True then all pairwise combinations will be calculated and the
    output will be a MultiIndexed DataFrame in the case of DataFrame
    inputs. In the case of missing elements, only complete pairwise
    observations will be used.
ddof : int, default 1
    Delta Degrees of Freedom. The divisor used in calculations
    is N - ddof, where N represents the number of elements.
numeric_only : bool, default False
    Include only float, int, boolean columns.
    New in version 1.5.0.
**kwargs
    For NumPy compatibility and will not have an effect on the result.
    Deprecated since version 1.5.0.
Returns
Series or DataFrame
    Return type is the same as the original object with np.float64 dtype.
See also
pandas.Series.expandingCalling expanding with Series data.
pandas.DataFrame.expandingCalling expanding with DataFrames.
pandas.Series.covAggregating cov for Series.
pandas.DataFrame.covAggregating cov for DataFrame.
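Examples
An illustrative sketch (not part of the original entry; values computed with the default ddof=1):
>>> s1 = pd.Series([1, 2, 3, 4])
>>> s2 = pd.Series([1, 3, 2, 5])
>>> s1.expanding(min_periods=2).cov(s2)
0         NaN
1    1.000000
2    0.500000
3    1.833333
dtype: float64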
| reference/api/pandas.core.window.expanding.Expanding.cov.html |
pandas.tseries.offsets.Second.apply_index | `pandas.tseries.offsets.Second.apply_index`
Vectorized apply of DateOffset to DatetimeIndex. | Second.apply_index()#
Vectorized apply of DateOffset to DatetimeIndex.
Deprecated since version 1.1.0: Use offset + dtindex instead.
Parameters
index : DatetimeIndex
Returns
DatetimeIndex
Raises
NotImplementedError
    When the specific offset subclass does not have a vectorized
    implementation.
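Since this method is deprecated, a minimal sketch of the documented replacement, adding the offset to a DatetimeIndex directly (output illustrative):
>>> dtindex = pd.to_datetime(['2023-01-01 00:00:00', '2023-01-01 00:00:01'])
>>> pd.offsets.Second(5) + dtindex
DatetimeIndex(['2023-01-01 00:00:05', '2023-01-01 00:00:06'], dtype='datetime64[ns]', freq=None)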
| reference/api/pandas.tseries.offsets.Second.apply_index.html |
pandas.tseries.offsets.BusinessMonthEnd.rollback | `pandas.tseries.offsets.BusinessMonthEnd.rollback`
Roll provided date backward to next offset only if not on offset. | BusinessMonthEnd.rollback()#
Roll provided date backward to next offset only if not on offset.
Returns
Timestamp
    Rolled timestamp if not on offset, otherwise unchanged timestamp.
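Examples
An illustrative sketch (not part of the original entry):
>>> pd.offsets.BusinessMonthEnd().rollback(pd.Timestamp('2022-01-15'))
Timestamp('2021-12-31 00:00:00')
A timestamp already on the offset is returned unchanged:
>>> pd.offsets.BusinessMonthEnd().rollback(pd.Timestamp('2021-12-31'))
Timestamp('2021-12-31 00:00:00')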
| reference/api/pandas.tseries.offsets.BusinessMonthEnd.rollback.html |
pandas.Index.notnull | `pandas.Index.notnull`
Detect existing (non-missing) values.
```
>>> idx = pd.Index([5.2, 6.0, np.NaN])
>>> idx
Float64Index([5.2, 6.0, nan], dtype='float64')
>>> idx.notna()
array([ True, True, False])
``` | Index.notnull()[source]#
Detect existing (non-missing) values.
Return a boolean same-sized object indicating if the values are not NA.
Non-missing values get mapped to True. Characters such as empty
strings '' or numpy.inf are not considered NA values
(unless you set pandas.options.mode.use_inf_as_na = True).
NA values, such as None or numpy.NaN, get mapped to False
values.
Returns
numpy.ndarray[bool]
    Boolean array to indicate which entries are not NA.
See also
Index.notnullAlias of notna.
Index.isnaInverse of notna.
notnaTop-level notna.
Examples
Show which entries in an Index are not NA. The result is an
array.
>>> idx = pd.Index([5.2, 6.0, np.NaN])
>>> idx
Float64Index([5.2, 6.0, nan], dtype='float64')
>>> idx.notna()
array([ True, True, False])
Empty strings are not considered NA values. None is considered a NA
value.
>>> idx = pd.Index(['black', '', 'red', None])
>>> idx
Index(['black', '', 'red', None], dtype='object')
>>> idx.notna()
array([ True, True, True, False])
| reference/api/pandas.Index.notnull.html |
pandas.Series.dt.tz_convert | `pandas.Series.dt.tz_convert`
Convert tz-aware Datetime Array/Index from one time zone to another.
```
>>> dti = pd.date_range(start='2014-08-01 09:00',
... freq='H', periods=3, tz='Europe/Berlin')
``` | Series.dt.tz_convert(*args, **kwargs)[source]#
Convert tz-aware Datetime Array/Index from one time zone to another.
Parameters
tz : str, pytz.timezone, dateutil.tz.tzfile or None
    Time zone for time. Corresponding timestamps would be converted
    to this time zone of the Datetime Array/Index. A tz of None will
    convert to UTC and remove the timezone information.
Returns
Array or Index
Raises
TypeError
    If Datetime Array/Index is tz-naive.
See also
DatetimeIndex.tzA timezone that has a variable offset from UTC.
DatetimeIndex.tz_localizeLocalize tz-naive DatetimeIndex to a given time zone, or remove timezone from a tz-aware DatetimeIndex.
Examples
With the tz parameter, we can change the DatetimeIndex
to other time zones:
>>> dti = pd.date_range(start='2014-08-01 09:00',
... freq='H', periods=3, tz='Europe/Berlin')
>>> dti
DatetimeIndex(['2014-08-01 09:00:00+02:00',
'2014-08-01 10:00:00+02:00',
'2014-08-01 11:00:00+02:00'],
dtype='datetime64[ns, Europe/Berlin]', freq='H')
>>> dti.tz_convert('US/Central')
DatetimeIndex(['2014-08-01 02:00:00-05:00',
'2014-08-01 03:00:00-05:00',
'2014-08-01 04:00:00-05:00'],
dtype='datetime64[ns, US/Central]', freq='H')
With the tz=None, we can remove the timezone (after converting
to UTC if necessary):
>>> dti = pd.date_range(start='2014-08-01 09:00', freq='H',
... periods=3, tz='Europe/Berlin')
>>> dti
DatetimeIndex(['2014-08-01 09:00:00+02:00',
'2014-08-01 10:00:00+02:00',
'2014-08-01 11:00:00+02:00'],
dtype='datetime64[ns, Europe/Berlin]', freq='H')
>>> dti.tz_convert(None)
DatetimeIndex(['2014-08-01 07:00:00',
'2014-08-01 08:00:00',
'2014-08-01 09:00:00'],
dtype='datetime64[ns]', freq='H')
| reference/api/pandas.Series.dt.tz_convert.html |
pandas.Series.squeeze | `pandas.Series.squeeze`
Squeeze 1 dimensional axis objects into scalars.
Series or DataFrames with a single element are squeezed to a scalar.
DataFrames with a single column or a single row are squeezed to a
Series. Otherwise the object is unchanged.
```
>>> primes = pd.Series([2, 3, 5, 7])
``` | Series.squeeze(axis=None)[source]#
Squeeze 1 dimensional axis objects into scalars.
Series or DataFrames with a single element are squeezed to a scalar.
DataFrames with a single column or a single row are squeezed to a
Series. Otherwise the object is unchanged.
This method is most useful when you don’t know if your
object is a Series or DataFrame, but you do know it has just a single
column. In that case you can safely call squeeze to ensure you have a
Series.
Parameters
axis : {0 or ‘index’, 1 or ‘columns’, None}, default None
    A specific axis to squeeze. By default, all length-1 axes are
    squeezed. For Series this parameter is unused and defaults to None.
Returns
DataFrame, Series, or scalar
    The projection after squeezing axis or all the axes.
See also
Series.ilocInteger-location based indexing for selecting scalars.
DataFrame.ilocInteger-location based indexing for selecting Series.
Series.to_frameInverse of DataFrame.squeeze for a single-column DataFrame.
Examples
>>> primes = pd.Series([2, 3, 5, 7])
Slicing might produce a Series with a single value:
>>> even_primes = primes[primes % 2 == 0]
>>> even_primes
0 2
dtype: int64
>>> even_primes.squeeze()
2
Squeezing objects with more than one value in every axis does nothing:
>>> odd_primes = primes[primes % 2 == 1]
>>> odd_primes
1 3
2 5
3 7
dtype: int64
>>> odd_primes.squeeze()
1 3
2 5
3 7
dtype: int64
Squeezing is even more effective when used with DataFrames.
>>> df = pd.DataFrame([[1, 2], [3, 4]], columns=['a', 'b'])
>>> df
a b
0 1 2
1 3 4
Slicing a single column will produce a DataFrame with the columns
having only one value:
>>> df_a = df[['a']]
>>> df_a
a
0 1
1 3
So the columns can be squeezed down, resulting in a Series:
>>> df_a.squeeze('columns')
0 1
1 3
Name: a, dtype: int64
Slicing a single row from a single column will produce a single
scalar DataFrame:
>>> df_0a = df.loc[df.index < 1, ['a']]
>>> df_0a
a
0 1
Squeezing the rows produces a single scalar Series:
>>> df_0a.squeeze('rows')
a 1
Name: 0, dtype: int64
Squeezing all axes will project directly into a scalar:
>>> df_0a.squeeze()
1
| reference/api/pandas.Series.squeeze.html |
pandas.core.groupby.GroupBy.tail | `pandas.core.groupby.GroupBy.tail`
Return last n rows of each group.
Similar to .apply(lambda x: x.tail(n)), but it returns a subset of rows
from the original DataFrame with original index and order preserved
(as_index flag is ignored).
```
>>> df = pd.DataFrame([['a', 1], ['a', 2], ['b', 1], ['b', 2]],
... columns=['A', 'B'])
>>> df.groupby('A').tail(1)
A B
1 a 2
3 b 2
>>> df.groupby('A').tail(-1)
A B
1 a 2
3 b 2
``` | final GroupBy.tail(n=5)[source]#
Return last n rows of each group.
Similar to .apply(lambda x: x.tail(n)), but it returns a subset of rows
from the original DataFrame with original index and order preserved
(as_index flag is ignored).
Parameters
n : int
    If positive: number of entries to include from end of each group.
    If negative: number of entries to exclude from start of each group.
Returns
Series or DataFrame
    Subset of original Series or DataFrame as determined by n.
See also
Series.groupbyApply a function groupby to a Series.
DataFrame.groupbyApply a function groupby to each row or column of a DataFrame.
Examples
>>> df = pd.DataFrame([['a', 1], ['a', 2], ['b', 1], ['b', 2]],
... columns=['A', 'B'])
>>> df.groupby('A').tail(1)
A B
1 a 2
3 b 2
>>> df.groupby('A').tail(-1)
A B
1 a 2
3 b 2
| reference/api/pandas.core.groupby.GroupBy.tail.html |
pandas.api.types.is_re | `pandas.api.types.is_re`
Check if the object is a regex pattern instance.
```
>>> is_re(re.compile(".*"))
True
>>> is_re("foo")
False
``` | pandas.api.types.is_re(obj)[source]#
Check if the object is a regex pattern instance.
Parameters
obj
    The object to check.
Returns
is_regex : bool
    Whether obj is a regex pattern.
Examples
>>> is_re(re.compile(".*"))
True
>>> is_re("foo")
False
| reference/api/pandas.api.types.is_re.html |
pandas.tseries.offsets.DateOffset.copy | `pandas.tseries.offsets.DateOffset.copy`
Return a copy of the frequency.
```
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
``` | DateOffset.copy()#
Return a copy of the frequency.
Examples
>>> freq = pd.DateOffset(1)
>>> freq_copy = freq.copy()
>>> freq is freq_copy
False
| reference/api/pandas.tseries.offsets.DateOffset.copy.html |
pandas.DataFrame.clip | `pandas.DataFrame.clip`
Trim values at input threshold(s).
Assigns values outside boundary to boundary values. Thresholds
can be singular values or array like, and in the latter case
the clipping is performed element-wise in the specified axis.
```
>>> data = {'col_0': [9, -3, 0, -1, 5], 'col_1': [-2, -7, 6, 8, -5]}
>>> df = pd.DataFrame(data)
>>> df
col_0 col_1
0 9 -2
1 -3 -7
2 0 6
3 -1 8
4 5 -5
``` | DataFrame.clip(lower=None, upper=None, *args, axis=None, inplace=False, **kwargs)[source]#
Trim values at input threshold(s).
Assigns values outside boundary to boundary values. Thresholds
can be singular values or array like, and in the latter case
the clipping is performed element-wise in the specified axis.
Parameters
lower : float or array-like, default None
    Minimum threshold value. All values below this
    threshold will be set to it. A missing
    threshold (e.g. NA) will not clip the value.
upper : float or array-like, default None
    Maximum threshold value. All values above this
    threshold will be set to it. A missing
    threshold (e.g. NA) will not clip the value.
axis : {0 or ‘index’, 1 or ‘columns’, None}, default None
    Align object with lower and upper along the given axis.
    For Series this parameter is unused and defaults to None.
inplace : bool, default False
    Whether to perform the operation in place on the data.
*args, **kwargs
    Additional keywords have no effect but might be accepted
    for compatibility with numpy.
Returns
Series or DataFrame or None
    Same type as calling object with the values outside the
    clip boundaries replaced or None if inplace=True.
See also
Series.clipTrim values at input threshold in series.
DataFrame.clipTrim values at input threshold in dataframe.
numpy.clipClip (limit) the values in an array.
Examples
>>> data = {'col_0': [9, -3, 0, -1, 5], 'col_1': [-2, -7, 6, 8, -5]}
>>> df = pd.DataFrame(data)
>>> df
col_0 col_1
0 9 -2
1 -3 -7
2 0 6
3 -1 8
4 5 -5
Clips per column using lower and upper thresholds:
>>> df.clip(-4, 6)
col_0 col_1
0 6 -2
1 -3 -4
2 0 6
3 -1 6
4 5 -4
Clips using specific lower and upper thresholds per column element:
>>> t = pd.Series([2, -4, -1, 6, 3])
>>> t
0 2
1 -4
2 -1
3 6
4 3
dtype: int64
>>> df.clip(t, t + 4, axis=0)
col_0 col_1
0 6 2
1 -3 -4
2 0 3
3 6 8
4 5 3
Clips using specific lower threshold per column element, with missing values:
>>> t = pd.Series([2, -4, np.NaN, 6, 3])
>>> t
0 2.0
1 -4.0
2 NaN
3 6.0
4 3.0
dtype: float64
>>> df.clip(t, axis=0)
col_0 col_1
0 9 2
1 -3 -4
2 0 6
3 6 8
4 5 3
| reference/api/pandas.DataFrame.clip.html |
pandas.Index.factorize | `pandas.Index.factorize`
Encode the object as an enumerated type or categorical variable.
```
>>> codes, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'])
>>> codes
array([0, 0, 1, 2, 0]...)
>>> uniques
array(['b', 'a', 'c'], dtype=object)
``` | Index.factorize(sort=False, na_sentinel=_NoDefault.no_default, use_na_sentinel=_NoDefault.no_default)[source]#
Encode the object as an enumerated type or categorical variable.
This method is useful for obtaining a numeric representation of an
array when all that matters is identifying distinct values. factorize
is available as both a top-level function pandas.factorize(),
and as a method Series.factorize() and Index.factorize().
Parameters
sort : bool, default False
    Sort uniques and shuffle codes to maintain the
    relationship.
na_sentinel : int or None, default -1
    Value to mark “not found”. If None, will not drop the NaN
    from the uniques of the values.
    Deprecated since version 1.5.0: The na_sentinel argument is deprecated and
    will be removed in a future version of pandas. Specify use_na_sentinel as
    either True or False.
    Changed in version 1.1.2.
use_na_sentinel : bool, default True
    If True, the sentinel -1 will be used for NaN values. If False,
    NaN values will be encoded as non-negative integers and will not drop the
    NaN from the uniques of the values.
    New in version 1.5.0.
Returns
codes : ndarray
    An integer ndarray that’s an indexer into uniques.
    uniques.take(codes) will have the same values as values.
uniques : ndarray, Index, or Categorical
    The unique valid values. When values is Categorical, uniques
    is a Categorical. When values is some other pandas object, an
    Index is returned. Otherwise, a 1-D ndarray is returned.
Note
Even if there’s a missing value in values, uniques will
not contain an entry for it.
See also
cutDiscretize continuous-valued array.
uniqueFind the unique value in an array.
Notes
Reference the user guide for more examples.
Examples
These examples all show factorize as a top-level method like
pd.factorize(values). The results are identical for methods like
Series.factorize().
>>> codes, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'])
>>> codes
array([0, 0, 1, 2, 0]...)
>>> uniques
array(['b', 'a', 'c'], dtype=object)
With sort=True, the uniques will be sorted, and codes will be
shuffled so that the relationship is maintained.
>>> codes, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'], sort=True)
>>> codes
array([1, 1, 0, 2, 1]...)
>>> uniques
array(['a', 'b', 'c'], dtype=object)
When use_na_sentinel=True (the default), missing values are indicated in
the codes with the sentinel value -1 and missing values are not
included in uniques.
>>> codes, uniques = pd.factorize(['b', None, 'a', 'c', 'b'])
>>> codes
array([ 0, -1, 1, 2, 0]...)
>>> uniques
array(['b', 'a', 'c'], dtype=object)
Thus far, we’ve only factorized lists (which are internally coerced to
NumPy arrays). When factorizing pandas objects, the type of uniques
will differ. For Categoricals, a Categorical is returned.
>>> cat = pd.Categorical(['a', 'a', 'c'], categories=['a', 'b', 'c'])
>>> codes, uniques = pd.factorize(cat)
>>> codes
array([0, 0, 1]...)
>>> uniques
['a', 'c']
Categories (3, object): ['a', 'b', 'c']
Notice that 'b' is in uniques.categories, despite not being
present in cat.values.
For all other pandas objects, an Index of the appropriate type is
returned.
>>> cat = pd.Series(['a', 'a', 'c'])
>>> codes, uniques = pd.factorize(cat)
>>> codes
array([0, 0, 1]...)
>>> uniques
Index(['a', 'c'], dtype='object')
If NaN is in the values, and we want to include NaN in the uniques of the
values, it can be achieved by setting use_na_sentinel=False.
>>> values = np.array([1, 2, 1, np.nan])
>>> codes, uniques = pd.factorize(values) # default: use_na_sentinel=True
>>> codes
array([ 0, 1, 0, -1])
>>> uniques
array([1., 2.])
>>> codes, uniques = pd.factorize(values, use_na_sentinel=False)
>>> codes
array([0, 1, 0, 2])
>>> uniques
array([ 1., 2., nan])
| reference/api/pandas.Index.factorize.html |
pandas.tseries.offsets.QuarterEnd.is_month_start | `pandas.tseries.offsets.QuarterEnd.is_month_start`
Return boolean whether a timestamp occurs on the month start.
```
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
``` | QuarterEnd.is_month_start()#
Return boolean whether a timestamp occurs on the month start.
Examples
>>> ts = pd.Timestamp(2022, 1, 1)
>>> freq = pd.offsets.Hour(5)
>>> freq.is_month_start(ts)
True
| reference/api/pandas.tseries.offsets.QuarterEnd.is_month_start.html |
pandas.Timestamp.freq | pandas.Timestamp.freq | Timestamp.freq#
| reference/api/pandas.Timestamp.freq.html |