Update project details

Update the details of a specific project.

Authentication

`Authorization` string
The token (or API key) must be passed as a request header. You can find your user token on the User Account page in Label Studio. Example:

```bash
curl https://label-studio-host/api/projects -H "Authorization: Token [your-token]"
```

Path parameters

`id` integer, Required

Query parameters

`members_limit` integer, Optional. Defaults to 10
Maximum number of members to return.
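As a sketch of how the path and query parameters above combine into the request URL (the host and project id here are placeholders, not real values):

```python
from urllib.parse import urlencode

BASE_URL = "https://label-studio-host"  # placeholder host
project_id = 123                        # hypothetical project id

# The `id` path parameter goes in the URL path; `members_limit` is a query parameter.
url = f"{BASE_URL}/api/projects/{project_id}?" + urlencode({"members_limit": 10})
print(url)  # https://label-studio-host/api/projects/123?members_limit=10
```

The actual update is sent as a PATCH request to this URL with the `Authorization` header shown above.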

Request

This endpoint expects an object.
`agreement_methodology` enum, Optional
Allowed values:
* `consensus` - Consensus
* `pairwise` - Pairwise Averaging

`agreement_threshold` string or null, Optional. Format: "decimal"
Minimum percent agreement threshold that the minimum number of annotators must reach.

`annotation_limit_count` integer or null, Optional. >= 1

`annotation_limit_percent` string or null, Optional. Format: "decimal"

`annotator_evaluation_continuous_tasks` integer, Optional. >= 0. Defaults to 0

`annotator_evaluation_enabled` boolean, Optional
Enable annotator evaluation for the project.

`annotator_evaluation_minimum_score` string or null, Optional. Format: "decimal". Defaults to 95.00

`annotator_evaluation_minimum_tasks` integer or null, Optional. >= 0. Defaults to 10

`annotator_evaluation_onboarding_tasks` integer, Optional. >= 0. Defaults to 0
`assignment_settings` object, Optional

`color` string or null, Optional. <= 16 characters

`comment_classification_config` string, Optional

`control_weights` map from strings to any, or null, Optional
Dict of weights for each control tag in metric calculation.

`created_by` object, Optional
Project owner.

`custom_script` string, Optional

`custom_task_lock_ttl` integer or null, Optional. Range: 1 to 86400
TTL in seconds for task reservations, on both new and existing tasks.

`description` string or null, Optional
Project description.

`enable_empty_annotation` boolean, Optional
Allow annotators to submit empty annotations.

`evaluate_predictions_automatically` boolean, Optional
Retrieve and display predictions when loading a task.

`expert_instruction` string or null, Optional
Labeling instructions in HTML format.

`is_draft` boolean, Optional
Whether the project is still being created.

`is_published` boolean, Optional
Whether the project is published to annotators.

`label_config` string or null, Optional
Label config in XML format. See the documentation for more details.

`max_additional_annotators_assignable` integer or null, Optional
Maximum number of additional annotators that can be assigned to a low-agreement task.
`maximum_annotations` integer, Optional. Range: -2147483648 to 2147483647
Maximum number of annotations for one task. If the number of annotations per task is equal to or greater than this value, the task is completed (`is_labeled=True`).

`min_annotations_to_start_training` integer, Optional. Range: -2147483648 to 2147483647
Minimum number of completed tasks after which model training is started.

`model_version` string or null, Optional
Machine learning model version.

`organization` integer or null, Optional

`overlap_cohort_percentage` integer, Optional. Range: -2147483648 to 2147483647

`pause_on_failed_annotator_evaluation` boolean or null, Optional. Defaults to false

`pinned_at` datetime or null, Optional
Pinned date and time.

`require_comment_on_skip` boolean, Optional. Defaults to false

`reveal_preannotations_interactively` boolean, Optional
Reveal pre-annotations interactively.

`review_settings` object, Optional

`sampling` enum or null, Optional
Allowed values:
* `Sequential sampling` - Tasks are ordered by Data Manager ordering
* `Uniform sampling` - Tasks are chosen randomly
* `Uncertainty sampling` - Tasks are chosen according to model uncertainty scores (active learning mode)

`show_annotation_history` boolean, Optional
Show annotation history to the annotator.

`show_collab_predictions` boolean, Optional
If set, the annotator can view model predictions.

`show_instruction` boolean, Optional
Show instructions to the annotator before they start.

`show_overlap_first` boolean, Optional

`show_skip_button` boolean, Optional
Show a skip button in the interface and allow annotators to skip the task.

`show_unused_data_columns_to_annotators` boolean or null, Optional

`skip_queue` enum or null, Optional
Allowed values:
* `REQUEUE_FOR_ME` - Requeue for me
* `REQUEUE_FOR_OTHERS` - Requeue for others
* `IGNORE_SKIPPED` - Ignore skipped

`strict_task_overlap` boolean, Optional. Defaults to true

`task_data_login` string or null, Optional. <= 256 characters
Task data credentials: login.

`task_data_password` string or null, Optional. <= 256 characters
Task data credentials: password.

`title` string or null, Optional. 3-50 characters
Project name. Must be between 3 and 50 characters long.

`workspace` integer, Optional

`show_ground_truth_first` boolean, Optional. Deprecated
Onboarding mode (true): show ground truth tasks first in the labeling stream.
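Since all request fields are optional, a PATCH body only needs the fields you want to change. A minimal sketch of building a payload and checking a few of the constraints documented above before sending (the field values and the `validate_payload` helper are hypothetical, not part of the API):

```python
def validate_payload(payload: dict) -> list[str]:
    """Check a few constraints documented for this endpoint; returns error messages."""
    errors = []
    title = payload.get("title")
    if title is not None and not (3 <= len(title) <= 50):
        errors.append("title must be 3-50 characters")
    ttl = payload.get("custom_task_lock_ttl")
    if ttl is not None and not (1 <= ttl <= 86400):
        errors.append("custom_task_lock_ttl must be in range 1 to 86400")
    limit = payload.get("annotation_limit_count")
    if limit is not None and limit < 1:
        errors.append("annotation_limit_count must be >= 1")
    return errors

payload = {
    "title": "Sentiment labeling",
    "description": "Classify review sentiment",
    "custom_task_lock_ttl": 3600,
}
assert validate_payload(payload) == []

# Sending it would look roughly like this (requires the `requests` package,
# a real host, project id, and token):
# requests.patch(f"{BASE_URL}/api/projects/{project_id}",
#                headers={"Authorization": "Token [your-token]"},
#                json=payload)
```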

Response

`assignment_settings` object

`config_has_control_tags` boolean, Read-only
Flag indicating whether the project is ready for labeling.

`config_suitable_for_bulk_annotation` boolean, Read-only
Flag indicating whether the project is ready for bulk annotation.

`created_at` datetime, Read-only

`finished_task_number` integer, Read-only
Number of finished tasks.

`ground_truth_number` integer, Read-only
Number of honeypot annotations in the project.

`id` integer, Read-only

`num_tasks_with_annotations` integer, Read-only
Count of tasks with annotations.

`parsed_label_config` map from strings to any, Read-only
JSON-formatted labeling configuration.

`prompts` string, Read-only

`queue_done` integer, Read-only

`queue_total` integer, Read-only

`review_settings` object

`skipped_annotations_number` integer, Read-only
Number of annotations in the project skipped by collaborators.

`start_training_on_annotation_update` boolean, Read-only
Start model training after any annotations are submitted or updated.

`state` string, Read-only

`task_number` integer, Read-only
Total number of tasks in the project.

`total_annotations_number` integer, Read-only
Total number of annotations in the project, including skipped_annotations_number and ground_truth_number.

`total_predictions_number` integer, Read-only
Total number of predictions in the project, including skipped_annotations_number, ground_truth_number, and useful_annotation_number.

`useful_annotation_number` integer, Read-only
Number of useful annotations in the project, not including skipped_annotations_number and ground_truth_number. Total annotations = useful_annotation_number + skipped_annotations_number + ground_truth_number.

`workspace` integer

`workspace_title` string, Read-only
`agreement_methodology` enum or null
Allowed values:
* `consensus` - Consensus
* `pairwise` - Pairwise Averaging

`agreement_threshold` string or null. Format: "decimal"
Minimum percent agreement threshold that the minimum number of annotators must reach.

`annotation_limit_count` integer or null. >= 1

`annotation_limit_percent` string or null. Format: "decimal"

`annotator_evaluation_continuous_tasks` integer or null. >= 0. Defaults to 0

`annotator_evaluation_enabled` boolean or null
Enable annotator evaluation for the project.

`annotator_evaluation_minimum_score` string or null. Format: "decimal". Defaults to 95.00

`annotator_evaluation_minimum_tasks` integer or null. >= 0. Defaults to 10

`annotator_evaluation_onboarding_tasks` integer or null. >= 0. Defaults to 0

`color` string or null. <= 16 characters

`comment_classification_config` string or null

`control_weights` map from strings to any, or null
Dict of weights for each control tag in metric calculation.

`created_by` object or null
Project owner.

`custom_script` string or null

`custom_task_lock_ttl` integer or null. Range: 1 to 86400
TTL in seconds for task reservations, on both new and existing tasks.

`description` string or null
Project description.

`enable_empty_annotation` boolean or null
Allow annotators to submit empty annotations.

`evaluate_predictions_automatically` boolean or null
Retrieve and display predictions when loading a task.

`expert_instruction` string or null
Labeling instructions in HTML format.

`is_draft` boolean or null
Whether the project is still being created.

`is_published` boolean or null
Whether the project is published to annotators.

`label_config` string or null
Label config in XML format. See the documentation for more details.

`max_additional_annotators_assignable` integer or null
Maximum number of additional annotators that can be assigned to a low-agreement task.

`maximum_annotations` integer or null. Range: -2147483648 to 2147483647
Maximum number of annotations for one task. If the number of annotations per task is equal to or greater than this value, the task is completed (`is_labeled=True`).

`min_annotations_to_start_training` integer or null. Range: -2147483648 to 2147483647
Minimum number of completed tasks after which model training is started.

`model_version` string or null
Machine learning model version.

`organization` integer or null

`overlap_cohort_percentage` integer or null. Range: -2147483648 to 2147483647

`pause_on_failed_annotator_evaluation` boolean or null. Defaults to false

`pinned_at` datetime or null
Pinned date and time.

`require_comment_on_skip` boolean or null. Defaults to false

`reveal_preannotations_interactively` boolean or null
Reveal pre-annotations interactively.

`sampling` enum or null
Allowed values:
* `Sequential sampling` - Tasks are ordered by Data Manager ordering
* `Uniform sampling` - Tasks are chosen randomly
* `Uncertainty sampling` - Tasks are chosen according to model uncertainty scores (active learning mode)

`show_annotation_history` boolean or null
Show annotation history to the annotator.

`show_collab_predictions` boolean or null
If set, the annotator can view model predictions.

`show_instruction` boolean or null
Show instructions to the annotator before they start.

`show_overlap_first` boolean or null

`show_skip_button` boolean or null
Show a skip button in the interface and allow annotators to skip the task.

`show_unused_data_columns_to_annotators` boolean or null

`skip_queue` enum or null
Allowed values:
* `REQUEUE_FOR_ME` - Requeue for me
* `REQUEUE_FOR_OTHERS` - Requeue for others
* `IGNORE_SKIPPED` - Ignore skipped

`strict_task_overlap` boolean or null. Defaults to true

`task_data_login` string or null. <= 256 characters
Task data credentials: login.

`task_data_password` string or null. <= 256 characters
Task data credentials: password.

`title` string or null. 3-50 characters
Project name. Must be between 3 and 50 characters long.

`show_ground_truth_first` boolean or null. Deprecated
Onboarding mode (true): show ground truth tasks first in the labeling stream.
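The annotation counters in the response are related as described above: the total is the sum of the useful, skipped, and ground-truth counts. A small sketch with made-up values (not real API output) to illustrate the relationship:

```python
response = {  # made-up example values, not a real API response
    "useful_annotation_number": 40,
    "skipped_annotations_number": 5,
    "ground_truth_number": 5,
    "total_annotations_number": 50,
}

# total_annotations_number = useful + skipped + ground truth
total = (response["useful_annotation_number"]
         + response["skipped_annotations_number"]
         + response["ground_truth_number"])
assert total == response["total_annotations_number"]
```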