Tutorials

Improve Object Detection with YOLOv8

Introduction

In this tutorial, you will learn how to improve object detection predictions from YOLOv8 using Label Studio. You will create a project in Label Studio, import images, and annotate them with bounding boxes. You will then use the annotations to fine-tune the YOLOv8 model.

First install the required packages:

$ pip install ultralytics label-studio-sdk pillow requests tqdm

To ensure the model is working correctly, load it from the pretrained checkpoint:

from ultralytics import YOLO

# Load the pretrained YOLOv8 nano checkpoint
MODEL_NAME = "yolov8n.pt"
model = YOLO(MODEL_NAME)
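
To verify that inference runs end to end, you can try a quick test prediction on any local image and print the detected class names. This is just a sketch: the file name below is a placeholder, so point it at an image you actually have.

# Run a quick test prediction on a sample image (replace the path with your own image)
results = model("test_image.jpg")
for result in results:
    # result.boxes.cls holds the predicted class ids; map them to readable names
    print([result.names[int(class_id)] for class_id in result.boxes.cls])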

Create a Label Studio project

Create a Label Studio project with the YOLOv8 labels. You need to define the labeling configuration with the YOLOv8 labels.

yolo_labels = '\n'.join([f'<Label value="{label}"/>' for label in model.names.values()])
label_config = f'''
<View>
  <Image name="img" value="$image" zoom="true" width="100%" maxWidth="800" brightnessControl="true" contrastControl="true" gammaControl="true" />
  <RectangleLabels name="label" toName="img">
    {yolo_labels}
  </RectangleLabels>
</View>'''
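
The label values come from model.names, which maps YOLOv8 class ids to class names. If you want to preview which labels will end up in the configuration, you can print a few of them:

# model.names is a dict mapping class id -> class name, e.g. {0: 'person', 1: 'bicycle', ...}
print(list(model.names.values())[:5])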

Now use this label_config to create a project.

from label_studio_sdk.client import LabelStudio

API_KEY = 'YOUR_API_KEY'
# Point the client at your Label Studio instance
client = LabelStudio(base_url='http://localhost:8080', api_key=API_KEY)

project = client.projects.create(
    title='Object detection',
    description='Detect objects with YOLOv8',
    label_config=label_config
)
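
You can confirm the project was created and open it in the UI; the link below assumes a local Label Studio instance at http://localhost:8080.

# Print the project id and a direct link to the project page
print(project.id)
print(f'http://localhost:8080/projects/{project.id}')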

Import images

You can import images from cloud storage. If you use AWS S3, connect your project to the storage bucket:

storage = client.import_storage.s3.create(
    project=project.id,
    bucket='your-bucket',
    prefix='images/subfolder',
    regex_filter='.*jpg',
    recursive_scan=True,
    use_blob_urls=True,
    aws_access_key_id='YOUR_ACCESS_KEY_ID',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
)

Import images from the storage:

client.import_storage.s3.sync(id=storage.id)
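
Once the sync finishes, you can check that tasks were actually created, for example by counting the tasks in the project:

# Count imported tasks; tasks.list returns an iterable pager
task_count = sum(1 for _ in client.tasks.list(project=project.id))
print(f'Imported {task_count} tasks')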

Create YOLO predictions

You can collect object detections from the model and convert them to the Label Studio JSON format.

def predict_yolo(images):
    # Run YOLOv8 inference and convert the results to Label Studio prediction format
    results = model(images)
    predictions = []
    for result in results:
        # orig_shape is (height, width) in Ultralytics results
        img_height, img_width = result.orig_shape
        boxes = result.boxes.cpu().numpy()
        prediction = {'result': [], 'score': 0.0, 'model_version': MODEL_NAME}
        scores = []
        for box, class_id, score in zip(boxes.xywh, boxes.cls, boxes.conf):
            x, y, w, h = box
            # Label Studio expects top-left x/y and width/height as percentages of the image size
            prediction['result'].append({
                'from_name': 'label',
                'to_name': 'img',
                'original_width': int(img_width),
                'original_height': int(img_height),
                'image_rotation': 0,
                'value': {
                    'rotation': 0,
                    'rectanglelabels': [result.names[int(class_id)]],
                    'width': float(w / img_width * 100),
                    'height': float(h / img_height * 100),
                    'x': float((x - 0.5 * w) / img_width * 100),
                    'y': float((y - 0.5 * h) / img_height * 100)
                },
                'score': float(score),
                'type': 'rectanglelabels',
            })
            scores.append(float(score))
        # Use the lowest box confidence as the task-level score so uncertain tasks can be filtered later
        prediction['score'] = min(scores) if scores else 0.0
        predictions.append(prediction)
    return predictions

Now, create YOLO predictions for the imported images and save them to the Label Studio project. You can specify scores and model versions for the predictions.

from PIL import Image
import requests
from tqdm import tqdm

# Fetch the project again (replace project.id with your project id if you run this separately)
project = client.projects.get(id=project.id)
tasks = client.tasks.list(project=project.id)
for task in tqdm(tasks):
    # Download the image through the Label Studio API so storage credentials are not needed locally
    url = f'http://localhost:8080{task.data["image"]}'
    image = Image.open(requests.get(url, headers={'Authorization': f'Token {API_KEY}'}, stream=True).raw)
    predictions = predict_yolo([image])[0]
    client.predictions.create(
        task=task.id,
        result=predictions['result'],
        score=predictions['score'],
        model_version=predictions['model_version']
    )

Annotate Low Confidence Predictions

We can use views to filter and organize tasks in Label Studio. For example, we can create a view that shows only tasks with low confidence predictions for the person class:

tab = client.views.create(
    project=project.id,
    data={
        'title': 'Person low conf',
        'filters': {
            "conjunction": "and",
            "items": [
                {
                    "filter": "filter:tasks:total_predictions",
                    "operator": "greater",
                    "value": 0,
                    "type": "Number"
                },
                {
                    "filter": "filter:tasks:predictions_results",
                    "operator": "contains",
                    "value": "person",
                    "type": "String"
                },
                {
                    "filter": "filter:tasks:predictions_score",
                    "operator": "less",
                    "value": 0.3,
                    "type": "Number"
                },
            ]
        }
    }
)
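
Before you start annotating, you can check how many tasks match these filters. This is a quick sketch that reuses the same tasks.list call shown later for export:

# Count tasks that fall into the "Person low conf" view
low_conf_count = sum(1 for _ in client.tasks.list(view=tab.id))
print(f'{low_conf_count} tasks with low-confidence person predictions')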

When you are done annotating the tasks in this view, you can rename the tab to mark it as COMPLETED.

client.views.update(
    id=tab.id,
    data={'title': 'COMPLETED'}
)

Export Annotations

Finally, to export the annotations that correspond to this batch of tasks, we can use the following code:

tab = client.views.get(id=tab.id)
annotated_tasks = client.tasks.list(view=tab.id, fields='all')
for annotated_task in annotated_tasks:
    print(annotated_task.annotations)
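
Each annotation uses the same Label Studio JSON format as the predictions created above, so you can read labels and bounding boxes directly from the result entries. This is a minimal sketch, assuming the default rectangle annotation structure:

# Iterate over annotation results and print label names with their bounding boxes
for annotated_task in annotated_tasks:
    for annotation in annotated_task.annotations:
        for region in annotation['result']:
            value = region['value']
            # x, y, width, height are percentages of the original image size
            print(value['rectanglelabels'], value['x'], value['y'], value['width'], value['height'])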

Convert Annotations to YOLO format

Coming soon! :)