Get project by ID
Retrieve information about a project by project ID.
Authentication
Authorization (string)
The token (or API key) must be passed as a request header. You can find your user token on the User Account page in Label Studio. Example:

curl https://label-studio-host/api/projects -H "Authorization: Token [your-token]"
Path parameters
id (integer, required)
Query parameters
members_limit (integer, optional, defaults to 10)
Maximum number of members to return
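Putting the auth header, path parameter, and query parameter together, here is a minimal Python sketch that only builds the request URL and headers; the host and token are placeholders, and the helper name is invented for illustration:

```python
from urllib.parse import urlencode

def build_get_project_request(base_url: str, project_id: int, token: str,
                              members_limit: int = 10):
    """Build the URL and headers for GET /api/projects/{id}.

    base_url and token are placeholders you supply; members_limit is the
    optional query parameter described above (defaults to 10).
    """
    url = f"{base_url.rstrip('/')}/api/projects/{project_id}?" + urlencode(
        {"members_limit": members_limit}
    )
    headers = {"Authorization": f"Token {token}"}
    return url, headers

# Example (placeholder host and token):
url, headers = build_get_project_request("https://label-studio-host", 42, "your-token")
# url == "https://label-studio-host/api/projects/42?members_limit=10"
```

You would pass the resulting URL and headers to any HTTP client; only the header name (`Authorization: Token ...`) and the path/query shape come from this page.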
Response
Project information. Not all fields are available for all roles.
allow_stream (boolean)
assignment_settings (object)
config_has_control_tags (boolean)
Flag indicating whether the project is ready for labeling
config_suitable_for_bulk_annotation (boolean)
Flag indicating whether the project is ready for bulk annotation
created_at (datetime)
data_types (any or null)
finished_task_number (integer)
Finished tasks
ground_truth_number (integer)
Honeypot annotation number in project
id (integer)
members (string)
members_count (integer)
num_tasks_with_annotations (integer)
parsed_label_config (any)
JSON-formatted labeling configuration
prompts (string)
queue_done (integer)
queue_left (integer)
queue_total (integer)
ready (boolean)
rejected (integer)
review_settings (object)
review_total_tasks (integer)
reviewed_number (integer)
reviewer_queue_total (integer)
skipped_annotations_number (integer)
start_training_on_annotation_update (boolean)
Start model training after any annotations are submitted or updated
agreement_threshold (string or null, format: decimal)
Minimum percent agreement threshold that the minimum number of annotators must reach
annotation_limit_count (integer or null, >=1)
annotation_limit_percent (string or null, format: decimal)
annotator_evaluation_enabled (boolean or null)
Enable annotator evaluation for the project
annotator_evaluation_minimum_score (string or null, format: decimal, defaults to 95.00)
annotator_evaluation_minimum_tasks (integer or null, >=0, defaults to 10)
annotator_evaluation_onboarding_tasks (integer or null, >=0, defaults to 0)
color (string or null, <=16 characters)
comment_classification_config (string or null)
control_weights (any or null)
Dict of weights for each control tag used in metric calculation. Each control tag (e.g. label or choice) has its own key in the control weights dict, with a weight for each label and an overall weight. For example, if a bounding box annotation with a control tag named my_bbox should be included with weight 0.33 in the agreement calculation, and the label Car should count twice as much as Airplane, specify: {'my_bbox': {'type': 'RectangleLabels', 'labels': {'Car': 1.0, 'Airplane': 0.5}, 'overall': 0.33}}
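The control_weights shape from the description above can be written out as a plain dict. The effective-weight helper below (per-label weight times the tag's overall weight) is an assumption for illustration, not Label Studio's internal agreement formula:

```python
# control_weights structure as described above; the agreement computation
# itself happens server-side, this only shows the expected shape.
control_weights = {
    "my_bbox": {
        "type": "RectangleLabels",
        "labels": {"Car": 1.0, "Airplane": 0.5},  # Car counts twice as much
        "overall": 0.33,  # weight of this control tag in overall agreement
    }
}

def label_weight(weights: dict, tag: str, label: str) -> float:
    """Illustrative only: per-label weight scaled by the tag's overall weight."""
    cfg = weights[tag]
    return cfg["labels"].get(label, 1.0) * cfg["overall"]
```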
created_by (object or null)
Project owner
custom_script (string or null)
custom_task_lock_ttl (integer or null, 1-86400)
TTL in seconds for task reservations, on new and existing tasks
description (string or null)
Project description
duplication_done (boolean or null, defaults to false)
duplication_status (string or null)
enable_empty_annotation (boolean or null)
Allow annotators to submit empty annotations
evaluate_predictions_automatically (boolean or null)
Retrieve and display predictions when loading a task
expert_instruction (string or null)
Labeling instructions in HTML format
is_draft (boolean or null)
Whether or not the project is in the middle of being created
is_published (boolean or null)
Whether or not the project is published to annotators
label_config (string or null)
Labeling configuration in XML format. See the documentation for details.
max_additional_annotators_assignable (integer or null)
Maximum number of additional annotators that can be assigned to a low-agreement task
maximum_annotations (integer or null, -2147483648 to 2147483647)
Maximum number of annotations for one task. If the number of annotations per task is equal to or greater than this value, the task is completed (is_labeled=True)
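The completion rule described for maximum_annotations amounts to a simple check. The function name below is invented for illustration; the server applies this logic itself:

```python
def is_labeled(annotations_on_task: int, maximum_annotations: int) -> bool:
    """A task is completed (is_labeled=True) once its annotation count
    is equal to or greater than maximum_annotations."""
    return annotations_on_task >= maximum_annotations
```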
min_annotations_to_start_training (integer or null, -2147483648 to 2147483647)
Minimum number of completed tasks after which model training is started
model_version (string or null)
Machine learning model version
organization (integer or null)
overlap_cohort_percentage (integer or null, -2147483648 to 2147483647)
pause_on_failed_annotator_evaluation (boolean or null, defaults to false)
pinned_at (datetime or null)
Pinned date and time
require_comment_on_skip (boolean or null, defaults to false)
reveal_preannotations_interactively (boolean or null)
Reveal pre-annotations interactively
sampling (enum or null)
Allowed values:
* `Sequential sampling` - Tasks are ordered by Data manager ordering
* `Uniform sampling` - Tasks are chosen randomly
* `Uncertainty sampling` - Tasks are chosen according to model uncertainty scores (active learning mode)
show_annotation_history (boolean or null)
Show annotation history to annotator
show_collab_predictions (boolean or null)
If set, the annotator can view model predictions
show_instruction (boolean or null)
Show instructions to the annotator before they start
show_overlap_first (boolean or null)
show_skip_button (boolean or null)
Show a skip button in the interface and allow annotators to skip the task
show_unused_data_columns_to_annotators (boolean or null)
skip_queue (enum or null)
Allowed values:
* `REQUEUE_FOR_ME` - Requeue for me
* `REQUEUE_FOR_OTHERS` - Requeue for others
* `IGNORE_SKIPPED` - Ignore skipped
task_data_login (string or null, <=256 characters)
Task data credentials: login
task_data_password (string or null, <=256 characters)
Task data credentials: password
title (string or null, 3-50 characters)
Project name. Must be between 3 and 50 characters long.
show_ground_truth_first (boolean or null, deprecated)
Onboarding mode (true): show ground truth tasks first in the labeling stream
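Putting a few of the fields above together, here is a minimal sketch of decoding a response body. The payload is a hand-written sample containing a subset of the documented fields, not real API output:

```python
import json

# Hand-written sample payload with a subset of the fields listed above.
sample_body = json.dumps({
    "id": 42,
    "title": "Sentiment labeling",
    "is_published": True,
    "maximum_annotations": 3,
    "finished_task_number": 120,
    "queue_total": 500,
    "queue_done": 120,
})

project = json.loads(sample_body)
# Fraction of the labeling queue already completed.
progress = project["queue_done"] / project["queue_total"]
```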