Logging ML Server Predictions

This code demonstrates a FastAPI server with a single endpoint that serves ML model predictions. Each request to this endpoint is automatically logged to a Label Studio instance with the await client.tasks.create() method.

You can also find this example as a Python script for setting up the FastAPI server in the SDK repo.

Create a project in Label Studio

Let’s create a project in Label Studio to collect the ML model’s prediction requests and responses, so that human reviewers can assess the correctness of each prediction.

```python
from label_studio_sdk import LabelStudio

LABEL_STUDIO_URL = 'YOUR_BASE_URL'
LABEL_STUDIO_API_KEY = 'YOUR_API_KEY'

client = LabelStudio(base_url=LABEL_STUDIO_URL, api_key=LABEL_STUDIO_API_KEY)

project = client.projects.create(
    title='ML Observability Project',
    label_config='''
    <View>
      <Text name="text" value="$text" />
      <Text name="context" value="$context" />
      <Text name="predictions" value="$predictions" />
      <Choices name="correctness" toName="text">
        <Choice value="Correct" />
        <Choice value="Incorrect" />
        <Choice value="Partially correct" />
      </Choices>
    </View>
    '''
)
print(f'PROJECT ID: {project.id}')
```
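As a sanity check, the data payload of each logged task must supply every variable referenced in the labeling config above ($text, $context, $predictions). A minimal sketch, with made-up example values:

```python
import json

# Keys of the task's `data` dict must match the variables in the labeling
# config above: $text, $context, and $predictions. The values here are
# illustrative placeholders, not real model output.
task_data = {
    "text": "What is the capital of France?",
    "context": "Geography FAQ",
    "predictions": "Paris",
}
print(json.dumps({"data": task_data}, indent=2))
```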

Create FastAPI server

First install dependencies:

```shell
pip install fastapi uvicorn
```

Let’s create a simple FastAPI server with a single endpoint /predict that accepts POST requests with a JSON payload. Save it as fastapi_server.py:

```python
import os

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from label_studio_sdk.client import AsyncLabelStudio

app = FastAPI()

# Initialize the async client with the API key and project ID
# from the running Label Studio app
client = AsyncLabelStudio(
    base_url="http://localhost:8080",
    api_key=os.environ["LABEL_STUDIO_API_KEY"],
)
project_id = int(os.environ["LABEL_STUDIO_PROJECT_ID"])  # <- replace with your project ID


# Schema for the incoming request payload
class UserInput(BaseModel):
    text: str
    context: str


@app.post("/predict")
async def create_item(user_input: UserInput):
    # Get model predictions
    # Replace this with your model prediction code
    # predictions = await model.predict(user_input.text, user_input.context)
    predictions = '...'
    data = {'text': user_input.text, 'context': user_input.context, 'predictions': predictions}
    try:
        # Log the request and the model's response as a Label Studio task
        task = await client.tasks.create(project=project_id, data=data)
        return task
    except Exception as e:
        raise HTTPException(status_code=400, detail=str(e))
```

Run the server

```shell
LABEL_STUDIO_API_KEY=your-api-key LABEL_STUDIO_PROJECT_ID=project-id uvicorn fastapi_server:app --reload
```

Logging predictions

Now you can send POST requests to http://localhost:8000/predict with a JSON payload:

```shell
curl -X POST "http://localhost:8000/predict" \
  -H "Content-Type: application/json" \
  -d '{"text": "example", "context": "context"}'
```
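If you prefer Python over curl, the same request can be built and sent with the standard library alone. This is a minimal sketch that assumes the server from the previous step is running on localhost:8000:

```python
import json
import urllib.request

# Build the same POST request as the curl command above.
payload = json.dumps({"text": "example", "context": "context"}).encode()
req = urllib.request.Request(
    "http://localhost:8000/predict",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to send the request once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```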

Open the Label Studio project to see the tasks created from the logged requests and predictions.