Get project by ID

Retrieve information about a project by project ID.

Authentication

`Authorization` (Token, required)
The token (or API key) must be passed in the `Authorization` request header. You can find your user token on the User Account page in Label Studio. Example: <br><pre><code class="language-bash">curl https://label-studio-host/api/projects -H "Authorization: Token [your-token]"</code></pre>

Path parameters

`id` (integer, required)

Query parameters

`members_limit` (integer, optional, defaults to 10)
Maximum number of members to return.
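Putting the path and query parameters together with the token header, a request to this endpoint can be sketched with Python's standard library; the host, token, and project ID below are placeholders, and `build_get_project_request` is a hypothetical helper, not part of any SDK:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_get_project_request(host: str, token: str, project_id: int,
                              members_limit: int = 10) -> Request:
    """Build the GET /api/projects/{id} request without sending it."""
    query = urlencode({"members_limit": members_limit})
    url = f"{host}/api/projects/{project_id}?{query}"
    return Request(url, headers={"Authorization": f"Token {token}"})

req = build_get_project_request("https://label-studio-host", "your-token", 42)
print(req.full_url)  # https://label-studio-host/api/projects/42?members_limit=10
```

Sending the request is then a matter of passing it to `urllib.request.urlopen` (or any HTTP client) and decoding the JSON body described under Response.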

Response

Project information. Not all fields are available for all roles.
`allow_stream` (boolean, read-only)
`assignment_settings` (object)
`config_has_control_tags` (boolean, read-only)
Flag indicating whether the project is ready for labeling.
`config_suitable_for_bulk_annotation` (boolean, read-only)
Flag indicating whether the project is ready for bulk annotation.
`created_at` (datetime, read-only)
`data_types` (map from strings to any, or null; read-only)
`finished_task_number` (integer, read-only)
Number of finished tasks.
`ground_truth_number` (integer, read-only)
Number of honeypot (ground truth) annotations in the project.
`id` (integer, read-only)
`is_dimensions_enabled` (string, read-only)
`members` (string, read-only)
`members_count` (integer, read-only)
`num_tasks_with_annotations` (integer, read-only)
`parsed_label_config` (map from strings to any, read-only)
JSON-formatted labeling configuration.

`prompts` (string, read-only)
`queue_done` (integer, read-only)
`queue_left` (integer, read-only)
`queue_total` (integer, read-only)
`ready` (boolean, read-only)
`rejected` (integer, read-only)
`review_settings` (object)
`review_total_tasks` (integer, read-only)
`reviewed_number` (integer, read-only)
`reviewer_queue_total` (integer, read-only)
`skipped_annotations_number` (integer, read-only)
`start_training_on_annotation_update` (boolean, read-only)
Start model training after any annotations are submitted or updated.
`state` (string, read-only)
`task_number` (integer, read-only)
Total number of tasks in the project.
`total_annotations_number` (integer, read-only)
`total_predictions_number` (integer or null, read-only)
`useful_annotation_number` (integer or null, read-only)
`workspace` (string, read-only)
`workspace_title` (string, read-only)
`agreement_methodology` (enum)
Agreement methodology. Allowed values: `consensus` (Consensus), `pairwise` (Pairwise Averaging).
`agreement_threshold` (string or null, format: decimal)
Agreement threshold.
`annotation_limit_count` (integer or null, >= 1)
Limit by number of tasks.
`annotation_limit_percent` (string or null, format: decimal)
Limit by percentage of tasks.
`annotator_evaluation_continuous_tasks` (integer, >= 0, defaults to 0)
Continuous evaluation: required tasks.
`annotator_evaluation_enabled` (boolean)
Evaluate all annotators against ground truth.
`annotator_evaluation_minimum_score` (string or null, format: decimal, defaults to 95.00)
Score required to pass evaluation.
`annotator_evaluation_minimum_tasks` (integer or null, >= 0, defaults to 10)
Number of tasks for evaluation.
`annotator_evaluation_onboarding_tasks` (integer, >= 0, defaults to 0)
Onboarding evaluation: required tasks.
`color` (string or null, <= 16 characters)
Color.
`comment_classification_config` (string)
`control_weights` (map from strings to objects, or null)
Dict of weights for each control tag used in metric calculation. Keys are control tag names from the labeling config. At least one tag must have a non-zero overall weight.
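For illustration, a `control_weights` value might look like the sketch below. The tag name `sentiment` and its labels are hypothetical, and the inner object shape (an `overall` weight plus per-label weights) is an assumption; the documented constraint is only that at least one tag carries a non-zero overall weight:

```python
# Hypothetical control_weights payload for a single "sentiment" control tag.
# The inner structure here is an assumption for illustration purposes.
control_weights = {
    "sentiment": {
        "overall": 1.0,
        "type": "Choices",
        "labels": {"Positive": 1.0, "Negative": 0.5},
    }
}

# Validate the documented constraint before sending the payload:
assert any(tag.get("overall", 0) > 0 for tag in control_weights.values())
```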

`created_by` (object)
Project owner.
`custom_script` (string)
Plugins.
`custom_task_lock_ttl` (integer or null, 1 to 86400)
Task reservation time as a TTL in seconds (the UI displays and edits this value in minutes).
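Because the UI edits this value in minutes while the API stores seconds, a client setting the field directly needs a conversion; `lock_ttl_seconds` below is a hypothetical helper that also enforces the documented 1 to 86400 second range:

```python
def lock_ttl_seconds(minutes: float) -> int:
    """Convert a UI value in minutes to the custom_task_lock_ttl API field (seconds)."""
    seconds = int(minutes * 60)
    if not 1 <= seconds <= 86400:  # documented API range
        raise ValueError("custom_task_lock_ttl must be between 1 and 86400 seconds")
    return seconds

print(lock_ttl_seconds(30))  # 1800
```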

`description` (string or null)
Project description.
`duplication_done` (boolean, defaults to false)
`duplication_status` (string)
`enable_empty_annotation` (boolean)
Allow empty annotations.
`evaluate_predictions_automatically` (boolean)
Retrieve and display predictions when loading a task.
`expert_instruction` (string or null)
Instructions.
`is_draft` (boolean)
Whether the project is still being created.
`is_published` (boolean)
Whether the project is published to annotators.
`label_config` (string or null)
Labeling configuration.
`max_additional_annotators_assignable` (integer or null)
Maximum number of additional annotators.
`maximum_annotations` (integer, -2147483648 to 2147483647)
Annotations per task.
`min_annotations_to_start_training` (integer, -2147483648 to 2147483647)
Minimum number of completed tasks after which model training starts.
`model_version` (string or null)
Machine learning model version.
`organization` (integer or null)
`overlap_cohort_percentage` (integer, -2147483648 to 2147483647)
Annotations-per-task coverage.
`pause_on_failed_annotator_evaluation` (boolean or null, defaults to false)
Pause the annotator on a failed evaluation.
`pinned_at` (datetime or null)
Pinned date and time.
`require_comment_on_skip` (boolean, defaults to false)
Require a comment to skip.
`reveal_preannotations_interactively` (boolean)
Reveal pre-annotations interactively.
`sampling` (enum or null)
Allowed values: `Sequential sampling` (tasks ordered by Data Manager ordering), `Uniform sampling` (tasks chosen randomly), `Uncertainty sampling` (tasks chosen by model uncertainty scores; active learning mode).
`show_annotation_history` (boolean)
Show Data Manager to annotators.
`show_collab_predictions` (boolean)
Use predictions to pre-label tasks.
`show_instruction` (boolean)
Show instructions before labeling.
`show_overlap_first` (boolean)
Show tasks with overlap first.
`show_skip_button` (boolean)
Allow skipping tasks.
`show_unused_data_columns_to_annotators` (boolean or null)
Show only columns used in the labeling configuration to annotators. Note the inverse field semantics: set false to show only used columns, set true to show all task.data columns.
`skip_queue` (enum or null)
Allowed values: `REQUEUE_FOR_ME` (requeue for me), `REQUEUE_FOR_OTHERS` (requeue for others), `IGNORE_SKIPPED` (ignore skipped).
`strict_task_overlap` (boolean, defaults to true)
Enforce the strict overlap limit.
`task_data_login` (string or null, <= 256 characters)
Login.
`task_data_password` (string or null, <= 256 characters)
Password.
`title` (string or null, 3 to 50 characters)
Project name.
`show_ground_truth_first` (boolean, deprecated)
Onboarding mode (true): show ground truth tasks first in the labeling stream.
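As a usage sketch, a client can derive labeling progress from the read-only counters above; the response body here is truncated and hypothetical:

```python
# Truncated, hypothetical response body for GET /api/projects/{id}.
project = {
    "id": 42,
    "title": "Sentiment review",
    "task_number": 200,
    "finished_task_number": 150,
    "is_published": True,
}

# finished_task_number over task_number gives overall labeling progress.
progress = (project["finished_task_number"] / project["task_number"]
            if project["task_number"] else 0.0)
print(f"{project['title']}: {progress:.0%} complete")  # Sentiment review: 75% complete
```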