Competition

polaris.competition.CompetitionSpecification

Bases: `DatasetV2`, `PredictiveTaskSpecificationMixin`, `SplitSpecificationV1Mixin`

An instance of this class represents a Polaris competition. It defines the fields and functionality
that, in combination with the `DatasetV2` class, allow users to participate in competitions hosted
on Polaris Hub.
Examples:

Basic API usage:

```python
import numpy as np
import polaris as po

# Load the competition from the Hub
competition = po.load_competition("dummy-user/dummy-name")

# Get the train and test data-loaders
train, test = competition.get_train_test_split()

# Use the training data to train your model.
# Get the inputs as an array with 'train.inputs' and the targets with 'train.targets',
# or simply iterate over the train object.
for x, y in train:
    ...

# Work your magic to accurately predict the test set
prediction_values = np.array([0.0 for x in test])

# Submit your predictions
competition.submit_predictions(
    prediction_name="first-prediction",
    prediction_owner="dummy-user",
    report_url="REPORT_URL",
    predictions=prediction_values,
)
```
Attributes:

| Name | Type | Description |
|---|---|---|
| `start_time` | `datetime` | The time at which the competition starts accepting prediction submissions. |
| `end_time` | `datetime` | The time at which the competition stops accepting prediction submissions. |
| `n_classes` | `dict[ColumnName, int \| None]` | The number of classes within each target column that defines a classification task. |
For additional meta-data attributes, see the base classes.
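
For instance, the timing attributes can be used to check whether a competition is currently open for
submissions. The snippet below is a minimal sketch; it assumes the competition has been loaded as in
the example above and that `start_time` and `end_time` are timezone-aware `datetime` objects.

```python
from datetime import datetime, timezone

import polaris as po

competition = po.load_competition("dummy-user/dummy-name")

# Compare the current (UTC) time against the submission window.
# Assumes start_time and end_time are timezone-aware datetimes.
now = datetime.now(timezone.utc)
if competition.start_time <= now <= competition.end_time:
    print("The competition is accepting prediction submissions.")
else:
    print("The submission window is closed.")
```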
get_train_test_split

```python
get_train_test_split(
    featurization_fn: Callable | None = None,
) -> tuple[Subset, Subset | dict[str, Subset]]
```

Construct the train and test sets, given the split in the competition specification.

Returns `Subset` objects, which offer several ways of accessing the data and can thus easily serve as
a basis to build framework-specific (e.g. PyTorch, TensorFlow) data-loaders on top of.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `featurization_fn` | `Callable \| None` | A function to apply to the input data. If a multi-input benchmark, this function expects an input in the format specified by the | `None` |
Returns:

| Type | Description |
|---|---|
| `tuple[Subset, Subset \| dict[str, Subset]]` | A tuple with the train `Subset` and the test `Subset`. If there are multiple test sets, these are returned as a dictionary keyed by test-set name. |
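
A minimal sketch of building a framework-specific data-loader on top of the returned `Subset` objects
is shown below. The `featurize` function is a hypothetical placeholder, and the sketch assumes `Subset`
supports the map-style `__len__`/`__getitem__` protocol expected by `torch.utils.data.DataLoader`;
adapt both to your setup.

```python
import numpy as np
import polaris as po
from torch.utils.data import DataLoader

competition = po.load_competition("dummy-user/dummy-name")


def featurize(inputs):
    # Hypothetical featurization: replace with e.g. a molecular fingerprint.
    # For a multi-input benchmark, `inputs` arrives in the benchmark's input format.
    return np.zeros(16, dtype=np.float32)


# Apply the featurization function when constructing the splits.
train, test = competition.get_train_test_split(featurization_fn=featurize)

# Assumes Subset supports __len__/__getitem__, so it can back a map-style DataLoader.
train_loader = DataLoader(train, batch_size=64, shuffle=True)

for x_batch, y_batch in train_loader:
    ...  # train your model on each mini-batch
```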
submit_predictions

```python
submit_predictions(
    predictions: IncomingPredictionsType,
    prediction_name: SlugCompatibleStringType,
    prediction_owner: str,
    report_url: HttpUrlString,
    contributors: list[HubUser] | None = None,
    github_url: HttpUrlString | None = None,
    description: str = "",
    tags: list[str] | None = None,
    user_attributes: dict[str, str] | None = None,
) -> None
```
A convenient wrapper around the `PolarisHubClient.submit_competition_predictions` method.
It automatically handles the creation of the standardized predictions object expected by the Hub.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prediction_name` | `SlugCompatibleStringType` | The name of the prediction. | required |
| `prediction_owner` | `str` | The slug of the user/organization which owns the prediction. | required |
| `predictions` | `IncomingPredictionsType` | The predictions for each test set defined in the competition. | required |
| `report_url` | `HttpUrlString` | A URL to a report/paper/write-up which describes the methods used to generate the predictions. | required |
| `contributors` | `list[HubUser] \| None` | The users credited with generating these predictions. | `None` |
| `github_url` | `HttpUrlString \| None` | An optional URL to a code repository containing the code used to generate these predictions. | `None` |
| `description` | `str` | An optional, short description of the predictions. | `''` |
| `tags` | `list[str] \| None` | An optional list of tags to categorize the predictions by. | `None` |
| `user_attributes` | `dict[str, str] \| None` | An optional dict with additional, textual user attributes. | `None` |
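
Putting it together, a submission that also sets the optional metadata fields might look like the
sketch below. The owner slug, report URL, GitHub URL, tags, and user attributes are placeholder
values, and the shape of `predictions` must match the competition's test set(s).

```python
import numpy as np
import polaris as po

competition = po.load_competition("dummy-user/dummy-name")
train, test = competition.get_train_test_split()

# Placeholder predictions; in practice these come from your trained model.
prediction_values = np.array([0.0 for _ in test])

competition.submit_predictions(
    predictions=prediction_values,
    prediction_name="my-first-submission",
    prediction_owner="dummy-user",
    report_url="https://example.com/my-report",
    # Optional metadata (all placeholder values)
    contributors=["dummy-user"],
    github_url="https://github.com/dummy-user/my-competition-code",
    description="Baseline model predicting a constant value.",
    tags=["baseline"],
    user_attributes={"framework": "scikit-learn"},
)
```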