Projects

Get project by ID

GET

Retrieve information about a specific project by project ID. The project ID can be found in the URL when viewing the project in Label Studio, or you can retrieve all project IDs using List all projects.
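As a sketch, the same request can be made from Python with only the standard library. The base URL, API key, and project ID below are placeholders, not values from this page:

```python
# Minimal sketch of calling GET /api/projects/{id}/ with Python's
# standard library. base_url and api_key are placeholders.
import json
import urllib.request


def build_project_request(base_url: str, api_key: str, project_id: int) -> urllib.request.Request:
    """Build a GET request for /api/projects/{id}/ with token auth."""
    url = f"{base_url}/api/projects/{project_id}/"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Token {api_key}"},
        method="GET",
    )


def get_project(base_url: str, api_key: str, project_id: int) -> dict:
    """Fetch the project and decode the JSON response body."""
    req = build_project_request(base_url, api_key, project_id)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```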

Path parameters

id
integerRequired
A unique integer value identifying this project.

Response

This endpoint returns an object
id
integerOptional
title
stringOptional
Project name. Must be between 3 and 50 characters long.
description
stringOptional
Project description
label_config
stringOptional
Labeling configuration in XML format. See the labeling configuration documentation for more details.
expert_instruction
stringOptional
Labeling instructions in HTML format
show_instruction
booleanOptional
Show instructions to the annotator before they start
show_skip_button
booleanOptional
Show a skip button in the interface and allow annotators to skip the task
enable_empty_annotation
booleanOptional
Allow annotators to submit empty annotations
show_annotation_history
booleanOptional
Show annotation history to annotator
organization
integerOptional
color
stringOptional
maximum_annotations
integerOptional

Maximum number of annotations for one task. If the number of annotations per task is greater than or equal to this value, the task is considered completed (is_labeled=True)

is_published
booleanOptional
Whether or not the project is published to annotators
model_version
stringOptional
Machine learning model version
is_draft
booleanOptional
Whether or not the project is in the middle of being created
created_by
objectOptional
Project owner
created_at
datetimeOptional
min_annotations_to_start_training
integerOptional
Minimum number of completed tasks after which model training is started
start_training_on_annotation_update
stringOptional
Start model training after any annotations are submitted or updated
show_collab_predictions
booleanOptional
If set, the annotator can view model predictions
num_tasks_with_annotations
integerOptional
Number of tasks with annotations
task_number
integerOptional
Total number of tasks in the project
useful_annotation_number
integerOptional

Number of useful annotations in the project, excluding skipped_annotations_number and ground_truth_number. Total annotations = useful_annotation_number + skipped_annotations_number + ground_truth_number

ground_truth_number
integerOptional
Number of ground truth (honeypot) annotations in the project
skipped_annotations_number
integerOptional
Number of annotations skipped by collaborators in the project
total_annotations_number
integerOptional

Total number of annotations in the project, including skipped_annotations_number and ground_truth_number.

total_predictions_number
integerOptional

Total number of predictions in the project, including skipped_annotations_number, ground_truth_number, and useful_annotation_number.

sampling
enumOptional
Allowed values: Sequential sampling, Uniform sampling, Uncertainty sampling
show_ground_truth_first
booleanOptional
show_overlap_first
booleanOptional
overlap_cohort_percentage
integerOptional
task_data_login
stringOptional
Task data credentials: login
task_data_password
stringOptional
Task data credentials: password
control_weights
map from strings to anyOptional
Dict of weights for each control tag used in metric calculations. Each control tag (e.g. label or choice) has its own key in the control weights dict, with a weight for each label and an overall weight. For example, if a bounding box annotation with a control tag named my_bbox should be included with a weight of 0.33 in the agreement calculation, and the first label Car should be twice as important as Airplane, specify: {'my_bbox': {'type': 'RectangleLabels', 'labels': {'Car': 1.0, 'Airplane': 0.5}, 'overall': 0.33}}
parsed_label_config
map from strings to anyOptional
JSON-formatted labeling configuration
evaluate_predictions_automatically
booleanOptional
Retrieve and display predictions when loading a task
config_has_control_tags
stringOptional
Flag indicating whether the project is ready for labeling
skip_queue
enumOptional
Allowed values: REQUEUE_FOR_ME, REQUEUE_FOR_OTHERS, IGNORE_SKIPPED
reveal_preannotations_interactively
booleanOptional
Reveal pre-annotations interactively
pinned_at
datetimeOptional
Pinned date and time
finished_task_number
integerOptional
Number of finished tasks
queue_total
stringOptional
queue_done
stringOptional
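The annotation count fields above are related by the documented identity total_annotations_number = useful_annotation_number + skipped_annotations_number + ground_truth_number. A small sanity check under that assumption, with hypothetical counts:

```python
def total_annotations(project: dict) -> int:
    """Recompute total_annotations_number from its documented components."""
    return (
        project["useful_annotation_number"]
        + project["skipped_annotations_number"]
        + project["ground_truth_number"]
    )

# Hypothetical counts for illustration only.
project = {
    "useful_annotation_number": 10,
    "skipped_annotations_number": 2,
    "ground_truth_number": 5,
}
assert total_annotations(project) == 17
```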
GET
curl http://localhost:8080/api/projects/1/ \
  -H "Authorization: Token <api_key>"
200 Retrieved
{
  "id": 1,
  "title": "My project",
  "description": "My first project",
  "label_config": "<View>[...]</View>",
  "expert_instruction": "Label all cats",
  "show_instruction": true,
  "show_skip_button": true,
  "enable_empty_annotation": true,
  "show_annotation_history": true,
  "organization": 1,
  "color": "#FF0000",
  "maximum_annotations": 1,
  "is_published": true,
  "model_version": "1.0.0",
  "is_draft": false,
  "created_by": {
    "id": 1,
    "first_name": "Jo",
    "last_name": "Doe",
    "email": "manager@humansignal.com",
    "avatar": "avatar"
  },
  "created_at": "2023-08-24T14:15:22Z",
  "min_annotations_to_start_training": 0,
  "start_training_on_annotation_update": "start_training_on_annotation_update",
  "show_collab_predictions": true,
  "num_tasks_with_annotations": 10,
  "task_number": 100,
  "useful_annotation_number": 10,
  "ground_truth_number": 5,
  "skipped_annotations_number": 0,
  "total_annotations_number": 10,
  "total_predictions_number": 0,
  "sampling": "Sequential sampling",
  "show_ground_truth_first": true,
  "show_overlap_first": true,
  "overlap_cohort_percentage": 100,
  "task_data_login": "user",
  "task_data_password": "secret",
  "control_weights": {
    "control_weights": {
      "key": "value"
    }
  },
  "parsed_label_config": {
    "parsed_label_config": {
      "key": "value"
    }
  },
  "evaluate_predictions_automatically": false,
  "config_has_control_tags": "config_has_control_tags",
  "skip_queue": "REQUEUE_FOR_ME",
  "reveal_preannotations_interactively": true,
  "pinned_at": "2023-08-24T14:15:22Z",
  "finished_task_number": 10,
  "queue_total": "queue_total",
  "queue_done": "queue_done"
}
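The nested control_weights structure described in the field list can be assembled with a small helper. This is a sketch based on the example in the field description; the tag name my_bbox and the labels are illustrative, not required names:

```python
def make_control_weight(tag_type: str, labels: dict, overall: float) -> dict:
    """Assemble one control tag entry for the control_weights dict."""
    return {"type": tag_type, "labels": dict(labels), "overall": overall}

# Mirrors the example from the field description: my_bbox contributes
# 0.33 overall, with Car weighted twice as heavily as Airplane.
control_weights = {
    "my_bbox": make_control_weight(
        "RectangleLabels", {"Car": 1.0, "Airplane": 0.5}, 0.33
    )
}
```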