dask_geopandas.read_feather(path, columns=None, filters=None, index=None, storage_options=None)#

Read a Feather dataset into a Dask-GeoPandas DataFrame.

path: str or list(str)

Source directory for the data, or path(s) to individual Feather files. Paths can be a full URL with protocol specifier, and a single string path may include glob characters.

columns: None or list(str)

Columns to load. If None, loads all.

filters: list (of list) of tuples or pyarrow.dataset.Expression, default None

Row-wise filter to apply while reading the dataset. Can be specified as a pyarrow.dataset.Expression object or using a list of tuples notation, like [[('col1', '==', 0), ...], ...]. The filter is applied both at the partition level, to avoid loading some files entirely, and at the file level, to filter the actual rows.

For the list of tuples format, predicates can be expressed in disjunctive normal form (DNF). This means that the innermost tuple describes a single column predicate. These inner predicates are combined with an AND conjunction into a larger predicate. The outer-most list then combines all of the combined filters with an OR disjunction.

Predicates can also be expressed as a single List[Tuple]. These are evaluated as an AND conjunction. To express OR in predicates, one must use the List[List[Tuple]] notation.
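As a sketch of the list-of-tuples notation, the filter below selects rows where either (pop > 1000 AND continent == 'Africa') or country == 'Norway'. The column names and values are hypothetical, for illustration only:

```python
# Each inner tuple is one column predicate. Tuples within an inner list
# are combined with AND; the outer list combines the inner lists with OR
# (disjunctive normal form).
filters = [
    [("pop", ">", 1000), ("continent", "==", "Africa")],  # pop > 1000 AND continent == 'Africa'
    [("country", "==", "Norway")],                        # OR country == 'Norway'
]
```

The same predicate can equivalently be built as a pyarrow.dataset.Expression, e.g. combining `pyarrow.dataset.field(...)` comparisons with `&` and `|`.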

index: str, list or False, default None

Field name(s) to use as the output frame index. By default will be inferred from the pandas metadata (if present in the files). Use False to read all fields as columns.

storage_options: dict, default None

Key/value pairs to be passed on to the file-system backend, if any (inferred from the path, such as “s3://…”). Please see fsspec for more details.
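A minimal sketch of passing fsspec options for an S3 path. The bucket name and column names are hypothetical, and the read call is shown commented out because it requires the remote data to exist:

```python
# "anon": True requests anonymous access via s3fs (a hypothetical
# public bucket is assumed here).
storage_options = {"anon": True}
path = "s3://my-bucket/countries/*.feather"  # hypothetical path with a glob

# import dask_geopandas
# ddf = dask_geopandas.read_feather(
#     path,
#     columns=["name", "geometry"],
#     storage_options=storage_options,
# )
```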

Returns: dask_geopandas.GeoDataFrame

A GeoDataFrame is returned even if only a single column is selected.