Logging ML Server Predictions
This code demonstrates a FastAPI server with a single endpoint that serves ML model predictions.
Each request to this endpoint is automatically logged to a Label Studio instance with the await client.tasks.create()
method.
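Conceptually, each logged task pairs the incoming request with the model's response. A minimal sketch of that payload is below; the build_task helper and the keys under "data" are hypothetical, and the keys must match the variables referenced by your project's labeling configuration:

```python
import json

# Hypothetical helper showing the shape of a Label Studio task payload.
# The "text" and "prediction" keys are assumptions; they must match the
# variables (e.g. $text) used in the project's labeling configuration.
def build_task(request_payload: dict, prediction: dict) -> dict:
    """Pair an incoming /predict request with the model's response."""
    return {
        "data": {
            "text": request_payload.get("text", ""),
            "prediction": json.dumps(prediction),
        }
    }

task = build_task({"text": "great product"}, {"label": "positive", "score": 0.97})
# task["data"] now holds both sides of the exchange, ready to be logged
```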
Create a project in Label Studio
Let’s create a project in Label Studio to collect the prediction requests and responses from the ML model, so that human reviewers can be tasked with assessing the correctness of the predictions.
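A minimal sketch using the label-studio-sdk Python client. The base_url, API key, project title, and labeling config below are placeholders and assumptions; adjust them to your instance and review workflow:

```python
from label_studio_sdk.client import LabelStudio

# Placeholders: point these at your Label Studio instance and API key.
ls = LabelStudio(base_url="http://localhost:8080", api_key="YOUR_API_KEY")

# Example review config (an assumption): show the input and the model's
# prediction, and ask the reviewer whether the prediction is correct.
LABEL_CONFIG = """
<View>
  <Text name="input" value="$text"/>
  <Text name="model_prediction" value="$prediction"/>
  <Choices name="review" toName="model_prediction">
    <Choice value="Correct"/>
    <Choice value="Incorrect"/>
  </Choices>
</View>
"""

project = ls.projects.create(
    title="ML Server Prediction Log",
    label_config=LABEL_CONFIG,
)
print(project.id)  # note this ID: the server needs it to log tasks
```

This requires a running Label Studio instance; the printed project ID is what the prediction server uses when creating tasks.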
Create FastAPI server
First, install the dependencies:
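Assuming the server logs tasks via the label-studio-sdk package:

```shell
pip install fastapi "uvicorn[standard]" label-studio-sdk
```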
Let’s create a simple FastAPI server with a single endpoint, /predict,
that accepts POST requests with a JSON payload.
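A minimal sketch of such a server in main.py. The base_url, API key, project ID, and the run_model stub are placeholders; replace run_model with your actual model call:

```python
# main.py -- sketch; base_url, api_key, and PROJECT_ID are placeholders.
import json

from fastapi import FastAPI
from pydantic import BaseModel
from label_studio_sdk.client import AsyncLabelStudio

app = FastAPI()
client = AsyncLabelStudio(base_url="http://localhost:8080", api_key="YOUR_API_KEY")
PROJECT_ID = 1  # ID of the Label Studio project created earlier

class PredictRequest(BaseModel):
    text: str

def run_model(text: str) -> dict:
    # Placeholder for the real model; returns a dummy prediction.
    return {"label": "positive", "score": 0.97}

@app.post("/predict")
async def predict(request: PredictRequest):
    prediction = run_model(request.text)
    # Log the request/response pair to Label Studio as a new task.
    await client.tasks.create(
        project=PROJECT_ID,
        data={"text": request.text, "prediction": json.dumps(prediction)},
    )
    return prediction
```

The endpoint returns the prediction to the caller as usual; logging happens as a side effect of handling the request.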
Run the server
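Assuming the code above lives in main.py, start the server with uvicorn:

```shell
uvicorn main:app --reload --port 8000
```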
Logging predictions
Now you can send POST requests to http://localhost:8000/predict with a JSON payload:
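For example, with curl (the payload text is arbitrary; the "text" field matches the request schema assumed above):

```shell
curl -X POST http://localhost:8000/predict \
  -H "Content-Type: application/json" \
  -d '{"text": "The delivery was fast and the product works great."}'
```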
Open the Label Studio project to see the tasks created from the server’s requests and responses.