The token (or API key) must be passed as a request header. You can find your user token on the User Account page in Label Studio. Example: <br><pre><code class="language-bash">curl https://label-studio-host/api/projects -H "Authorization: Token [your-token]"</code></pre>
Request
This endpoint expects an object.
`annotator_evaluation_enabled` (boolean, Optional)
Enable annotator evaluation for the project
`color` (string or null, Optional, <=16 characters)
`control_weights` (map from strings to objects or null, Optional)
Dict of weights for each control tag in metric calculation. Keys are control tag names from the labeling config. At least one tag must have a non-zero overall weight.
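As an illustration of the constraint above, a client can build and check a `control_weights` value before sending it. The nested `overall`/`labels` shape and the tag names used here are assumptions for the sketch, not taken from this page; only the rule that keys are control tag names and at least one tag needs a non-zero overall weight is documented.

```python
import json

# Hypothetical control_weights fragment: keys are control tag names from
# the labeling config. The "overall"/"labels" nesting is an assumed shape
# used only for illustration.
control_weights = {
    "sentiment": {
        "overall": 1.0,  # non-zero overall weight
        "labels": {"Positive": 1.0, "Negative": 1.0},
    },
    "topic": {
        "overall": 0.0,  # weight of zero: tag ignored in metric calculation
        "labels": {},
    },
}

# Documented rule: at least one tag must have a non-zero overall weight.
assert any(w["overall"] > 0 for w in control_weights.values())

# Serialize as part of a request body.
payload = json.dumps({"control_weights": control_weights})
```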
`created_by` (object, Optional)
Project owner
`description` (string or null, Optional)
Project Description
`enable_empty_annotation` (boolean, Optional)
Allow annotators to submit empty annotations
`evaluate_predictions_automatically` (boolean, Optional)
Retrieve and display predictions when loading a task
`expert_instruction` (string or null, Optional)
Labeling instructions in HTML format
`is_draft` (boolean, Optional)
Whether or not the project is in the middle of being created
`is_published` (boolean, Optional)
Whether or not the project is published to annotators
`label_config` (string or null, Optional)
Label config in XML format. See the documentation for more details.
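For illustration, a minimal label config for text classification might look like the sketch below. The tag names (`text`, `sentiment`) and label values are placeholders, not taken from this page; the point is that the string sent as `label_config` must be well-formed XML.

```python
import xml.etree.ElementTree as ET

# A minimal labeling config in XML format. The names "text" and
# "sentiment" are illustrative placeholders.
label_config = """
<View>
  <Text name="text" value="$text"/>
  <Choices name="sentiment" toName="text">
    <Choice value="Positive"/>
    <Choice value="Negative"/>
  </Choices>
</View>
"""

# Sanity-check that the config parses before sending it as label_config.
root = ET.fromstring(label_config.strip())
assert root.tag == "View"
```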
`maximum_annotations` (integer, Optional, -2147483648 to 2147483647)
Maximum number of annotations for one task. If the number of annotations per task is equal to or greater than this value, the task is completed (is_labeled=True)
`sampling` (enum or null, Optional)
Allowed values:
* `Sequential sampling` - Tasks are ordered by Data manager ordering
* `Uniform sampling` - Tasks are chosen randomly
* `Uncertainty sampling` - Tasks are chosen according to model uncertainty scores (active learning mode)
`show_annotation_history` (boolean, Optional)
Show annotation history to annotator
`show_collab_predictions` (boolean, Optional)
If set, the annotator can view model predictions
`show_instruction` (boolean, Optional)
Show instructions to the annotator before they start
`show_overlap_first` (boolean, Optional)
`show_skip_button` (boolean, Optional)
Show a skip button in the interface and allow annotators to skip the task
`skip_queue` (enum or null, Optional)
Allowed values:
* `REQUEUE_FOR_ME` - Requeue for me
* `REQUEUE_FOR_OTHERS` - Requeue for others
* `IGNORE_SKIPPED` - Ignore skipped
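Because `skip_queue` is an enum-or-null field, a client can validate values against the three documented constants before sending a request. This is a small client-side sketch, not part of the API itself:

```python
# The three skip_queue values documented above; None is also valid
# because the field is "enum or null".
ALLOWED_SKIP_QUEUE = {"REQUEUE_FOR_ME", "REQUEUE_FOR_OTHERS", "IGNORE_SKIPPED"}

def validate_skip_queue(value):
    """Return value unchanged if it is a valid skip_queue setting, else raise."""
    if value is not None and value not in ALLOWED_SKIP_QUEUE:
        raise ValueError(f"unknown skip_queue value: {value!r}")
    return value

choice = validate_skip_queue("REQUEUE_FOR_ME")
```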
`task_data_login` (string or null, Optional, <=256 characters)
Task data credentials: login
`task_data_password` (string or null, Optional, <=256 characters)
Task data credentials: password
`title` (string or null, Optional, 3-50 characters)
Project Title
`workspace` (integer, Optional)
In Workspace
`show_ground_truth_first` (boolean, Optional, Deprecated)
Onboarding mode (true): show ground truth tasks first in the labeling stream
`start_training_on_annotation_update` (boolean, Optional)
Start model training after any annotations are submitted or updated
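Putting several of the optional fields above together, a request body might be assembled and checked client-side as follows. The field names come from the list above; the concrete values are placeholders, and the length check mirrors the documented 3-50 character constraint on `title`:

```python
import json

# Illustrative request body using fields documented above.
payload = {
    "title": "Sentiment review",                # must be 3-50 characters
    "description": "Classify tickets by sentiment",
    "label_config": "<View></View>",            # placeholder XML config
    "maximum_annotations": 2,                   # task completes at 2 annotations
    "show_skip_button": True,
    "skip_queue": "REQUEUE_FOR_OTHERS",         # one of the documented enum values
    "enable_empty_annotation": False,
    "is_published": False,
}

# Client-side checks mirroring the documented constraints.
assert 3 <= len(payload["title"]) <= 50
assert payload["skip_queue"] in {"REQUEUE_FOR_ME", "REQUEUE_FOR_OTHERS", "IGNORE_SKIPPED"}

body = json.dumps(payload)
```

The serialized `body` would then be sent with the same `Authorization: Token ...` header shown at the top of this page.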
Response

`state` (string, Read-only)
`task_number` (integer, Read-only)
Total number of tasks in the project
`total_annotations_number` (integer, Read-only)
Total number of annotations in the project, including skipped_annotations_number and ground_truth_number.
`total_predictions_number` (integer, Read-only)
Total number of predictions in the project, including skipped_annotations_number, ground_truth_number, and useful_annotation_number.
`useful_annotation_number` (integer, Read-only)
Number of useful annotations in the project, excluding skipped_annotations_number and ground_truth_number. Total annotations = annotation_number + skipped_annotations_number + ground_truth_number
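The relationship between these counters can be sanity-checked client-side. Per the formula above, the useful count plays the role of `annotation_number`, so the total is the useful, skipped, and ground-truth counts added together (the numbers here are placeholders):

```python
# Counters as they might come back in a project response (placeholder values).
useful_annotation_number = 120
skipped_annotations_number = 15
ground_truth_number = 5

# Total annotations = annotation_number + skipped_annotations_number
#                   + ground_truth_number, with the useful count as
#                   annotation_number.
total_annotations_number = (
    useful_annotation_number
    + skipped_annotations_number
    + ground_truth_number
)

assert total_annotations_number == 140
```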
`annotator_evaluation_enabled` (boolean)
Enable annotator evaluation for the project
`color` (string or null, <=16 characters)
`control_weights` (map from strings to objects or null)
Dict of weights for each control tag in metric calculation. Keys are control tag names from the labeling config. At least one tag must have a non-zero overall weight.
`created_by` (object)
Project owner
`description` (string or null)
Project Description
`enable_empty_annotation` (boolean)
Allow annotators to submit empty annotations
`evaluate_predictions_automatically` (boolean)
Retrieve and display predictions when loading a task
`expert_instruction` (string or null)
Labeling instructions in HTML format
`is_draft` (boolean)
Whether or not the project is in the middle of being created
`is_published` (boolean)
Whether or not the project is published to annotators
`label_config` (string or null)
Label config in XML format. See the documentation for more details.
`maximum_annotations` (integer, -2147483648 to 2147483647)
Maximum number of annotations for one task. If the number of annotations per task is equal to or greater than this value, the task is completed (is_labeled=True)
`sampling` (enum or null)
Allowed values:
* `Sequential sampling` - Tasks are ordered by Data manager ordering
* `Uniform sampling` - Tasks are chosen randomly
* `Uncertainty sampling` - Tasks are chosen according to model uncertainty scores (active learning mode)
`show_annotation_history` (boolean)
Show annotation history to annotator
`show_collab_predictions` (boolean)
If set, the annotator can view model predictions
`show_instruction` (boolean)
Show instructions to the annotator before they start
`show_overlap_first` (boolean)
`show_skip_button` (boolean)
Show a skip button in the interface and allow annotators to skip the task
`skip_queue` (enum or null)
Allowed values:
* `REQUEUE_FOR_ME` - Requeue for me
* `REQUEUE_FOR_OTHERS` - Requeue for others
* `IGNORE_SKIPPED` - Ignore skipped
`task_data_login` (string or null, <=256 characters)
Task data credentials: login
`task_data_password` (string or null, <=256 characters)
Task data credentials: password
`title` (string or null, 3-50 characters)
Project Title
`workspace` (integer)
In Workspace
`show_ground_truth_first` (boolean, Deprecated)
Onboarding mode (true): show ground truth tasks first in the labeling stream