Docs & API reference

Introduction

Vize provides a trainable image-recognition API for recognizing and classifying images, letting you add state-of-the-art artificial intelligence to your project. We provide a user interface for simple task setup and programmatic access to manage your account, upload images, and train new models. After an easy setup you get highly accurate results over the API and are ready to build this functionality into your application. It's easy, quick, and highly scalable.

Vize image recognition online app:

Workplace

Getting started

After signing in to the Vize application, you can start defining your image recognition task in a graphical interface. You are also provided with an access token for programmatic access. Upload at least 20 images per category to train your machine-learning model. After the training you will see the training accuracy, and a preview interface lets you validate and test your model. You can achieve better accuracy by adding more images and retraining the model.

Using the API token, you can connect your application to our API service and start using image recognition in your project. Each response contains details on the best matching label as well as the recognition accuracy.

Terminology

The task is where you start. Each task has a set of labels (categories), training images and a recognition model. Only you can access your tasks.

The model is the machine-learning model behind your image recognition API. It's a neural network trained on your specific images and thus highly accurate at recognizing new images. Each model has an accuracy measured at the end of the training, and each model is private to its owner. Each retraining increases the version of the model by one, and you can select which model version is deployed.

A label (category) is a feature you want to recognize in your images. You provide training images containing this feature and Vize learns to recognize it.

Before training the model, Vize internally splits your images into training images (80%) and testing images (20%). The training images are used for training; the testing images are used to evaluate the accuracy of the model.

The accuracy of the model indicates how reliably your labels are recognized: an accuracy of 95% means 95 out of 100 images will get the right label. Accuracy depends on the number of images uploaded for training and will be unreliable for a low number of training images.

Best practices

We recommend the following rules to get the best results.

  • For training, use a variety of images: different focuses on the object and pictures taken from different angles. Do not mix images of different categories (only apples in the apple category). Use the images that best describe your category.


  • Use many images to train. The minimum is 20 images per category, but you'll get much better results with a couple of hundred per label.


  • Resize your images before sending them to the API endpoint. The file upload takes most of the time in an API call, so we recommend reducing the image size down to 512px on the shorter side. Our endpoint accepts jpg and png images up to 10MB, but a significant part of your request time will be spent on the upload.
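The resize recommendation above can be sketched with Pillow (our choice of library, an assumption; any image library works). The hypothetical helper `target_size` computes dimensions so the shorter side is at most 512px:

```python
def target_size(width, height, shorter=512):
    """New (width, height) so the shorter side is at most `shorter` px."""
    scale = shorter / min(width, height)
    if scale >= 1:
        return (width, height)  # already small enough
    return (round(width * scale), round(height * scale))

def shrink(path, out_path, shorter=512):
    """Save a resized copy of `path` to `out_path`."""
    from PIL import Image  # Pillow; pip install Pillow
    img = Image.open(path)
    img.resize(target_size(*img.size, shorter), Image.LANCZOS).save(out_path)
```

Shrinking a 1024x2048 photo this way yields a 512x1024 upload, which is typically several times smaller on the wire.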

See a great article on our blog about How to setup an image recognition task properly.

API reference

This is the documentation for the Vize API. You can find the interactive API reference after logging in at https://api.ximilar.com/recognition/swagger/.

The Vize API is located at https://api.vize.ai and the same API (except the deprecated method /v1/classify) is available at https://api.ximilar.com/recognition. Each API entity (task, label, image) is identified by its ID, formatted as a universally unique identifier (UUID) string. You can take IDs from browser URLs for quick access to entities over the programmatic interface.

Sample of ID:

0a8c8186-aee8-47c8-9eaf-348103feb14d
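If you want to sanity-check an ID before calling the API, Python's standard uuid module can validate the format. A convenience sketch (`is_valid_id` is our own helper, not part of the API):

```python
import uuid

def is_valid_id(value):
    """True if `value` is a well-formed UUID string, as Vize IDs are."""
    try:
        return str(uuid.UUID(value)) == value.lower()
    except ValueError:
        return False
```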

IDs in browser links:

Workplace

Authentication

User authentication is done through a token in the header of the HTTP request. You can find your token in the user options. Here is a sample authorization token in the header:

Authorization: Token 1af538baa90-----XXX-----baf83ff24

A sample of authentication:

curl -v -XGET -H 'Authorization: Token __API_TOKEN__' https://api.vize.ai/v2/task/

In Python:

import requests

url = 'https://api.vize.ai/v2/task/'
headers = {'Authorization': "Token __API_TOKEN__"}

response = requests.get(url, headers=headers)
if response.ok:
    print(response.text)
else:
    print('Error calling API: ' + response.text)
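Rather than repeating the header in every call, you can keep it on a requests.Session. A small convenience sketch (`make_session` is our own helper name):

```python
import requests

def make_session(token):
    """A requests.Session that sends the Authorization header on every call."""
    session = requests.Session()
    session.headers["Authorization"] = "Token " + token
    return session
```

Every request made through the returned session, e.g. `make_session("...").get('https://api.vize.ai/v2/task/')`, then carries the token automatically.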

Classify endpoint - /v1/classify/

The classify endpoint performs image recognition. It accepts the POST method and you can find it in our interactive API reference. You can pass an image in these formats:

  • file upload (jpg and png supported; provide image_file and set the Content-Type header to multipart/form-data). This is the preferred and most optimised method for passing images.


  • URL (provide image_url). Can be slow due to third-party servers.


  • Base64 (provide image_base64)


A sample of image recognition:

curl -v -XPOST -H 'Authorization: Token __API_TOKEN__' -F 'image_file=@__IMAGE_FILE__;type=image/jpeg' -F 'task=__TASK_ID__' https://api.vize.ai/v1/classify/

In Python:

import requests

url = 'https://api.vize.ai/v1/classify/'
headers = {'Authorization': "Token __API_TOKEN__"}
files = {'image_file': open('__IMAGE_FILE__', 'rb')}
data = {'task': '__TASK_ID__'}

response = requests.post(url, headers=headers, files=files, data=data)
if response.ok:
    print(response.text)
else:
    print('Error calling API: ' + response.text)

In PHP:

$curl_handle = curl_init("https://api.vize.ai/v1/classify/");

curl_setopt($curl_handle, CURLOPT_POST, 1);
$args['image_file'] = new CURLFile('__IMAGE_FILE__', 'image/jpeg');
$args['task'] = '__TASK_ID__';
curl_setopt($curl_handle, CURLOPT_POSTFIELDS, $args);
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl_handle, CURLOPT_HTTPHEADER, array(
    "Authorization: Token __API_TOKEN__",
    "cache-control: no-cache",));

$returned_data = curl_exec($curl_handle);
curl_close($curl_handle);
echo $returned_data;

In Node.js:

// Use the http://unirest.io/nodejs open-source library. Install: npm install unirest
var unirest = require('unirest');

unirest.post("https://api.vize.ai/v1/classify/")
.header("Authorization", "Token __API_TOKEN__")
.header("Accept", "application/json")
.attach('image_file', '__IMAGE_FILE__')
.field('task', '__TASK_ID__')
.end(function (result) {
  console.log(result.status, result.headers, result.body);
});

In Java:

// Use the http://unirest.io/java open-source library.

HttpResponse<String> response = Unirest.post("https://api.vize.ai/v1/classify/")
.header("Authorization", "Token __API_TOKEN__")
.header("Accept", "application/json")
.field("image_file", new File("__IMAGE_FILE__"))
.field("task", "__TASK_ID__")
.asString();

In Objective-C:

// Use the http://unirest.io/objective-c open-source library.

NSDictionary *headers = @{@"Authorization": @"Token __API_TOKEN__", @"Accept": @"application/json"};
NSDictionary *parameters = @{@"task": @"__TASK_ID__", @"image_file": [NSURL fileURLWithPath:@"__IMAGE_FILE__"]};
UNIUrlConnection *asyncConnection = [[UNIRest post:^(UNISimpleRequest *request) {
  [request setUrl:@"https://api.vize.ai/v1/classify/"];
  [request setHeaders:headers];
  [request setParameters:parameters];
}] asJsonAsync:^(UNIHTTPJsonResponse *response, NSError *error) {
  NSInteger code = response.code;
  NSDictionary *responseHeaders = response.headers;
  UNIJsonNode *body = response.body;
  NSData *rawBody = response.rawBody;
}];

In C#:

// Use the http://unirest.io/net open-source library.

HttpResponse<string> response = Unirest.post("https://api.vize.ai/v1/classify/")
.header("Authorization", "Token __API_TOKEN__")
.header("Accept", "application/json")
.field("image_file", new FileInfo("__IMAGE_FILE__"))
.field("task", "__TASK_ID__")
.asString();

In Ruby:

# Use the http://unirest.io/ruby open-source library.

response = Unirest.post("https://api.vize.ai/v1/classify/",
  headers:{
    "Authorization" => "Token __API_TOKEN__",
    "Accept" => "application/json"
  },
  parameters:{
    "task" => "__TASK_ID__",
    "image_file" => File.new("__IMAGE_FILE__", "rb")
  })
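The Base64 option (image_base64) is not demonstrated above; here is a minimal Python sketch. The field names `task` and `image_base64` come from this document; the helper names are our own:

```python
import base64
import requests

def to_base64(data: bytes) -> str:
    """Base64-encode raw image bytes for the image_base64 field."""
    return base64.b64encode(data).decode("utf-8")

def classify_base64(image_path, task_id, token):
    """POST a local file to /v1/classify/ using the image_base64 field."""
    with open(image_path, "rb") as f:
        payload = {"task": task_id, "image_base64": to_base64(f.read())}
    response = requests.post(
        "https://api.vize.ai/v1/classify/",
        headers={"Authorization": "Token " + token},
        data=payload,
    )
    response.raise_for_status()
    return response.json()
```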

New classification API:

We have released a new classification API endpoint /v2/classify which accepts a JSON-formatted body. Differences from the /v1/classify endpoint:

  • can classify a batch of images (up to 10) at once

  • can extract visual features (descriptors) for similarity search

  • you can optionally specify a version of the model to be used

  • the returned result has a similar but simpler structure

  • you cannot send a local image file directly; you must first convert it to Base64

For billing purposes, we count each image in the batch as one "request" - using the batch is a way to speedup the processing, not to outsmart the billing :-)

curl -H "Content-Type: application/json" -H "Authorization: Token __API_TOKEN__" https://api.vize.ai/v2/classify -d '{"task_id": "0a8c8186-aee8-47c8-9eaf-348103feb14d", "version": 2, "descriptor": 0, "records": [ {"_url": "https://bit.ly/2IymQJv" } ] }'

In Python:

import requests
import json
import base64

url = 'https://api.vize.ai/v2/classify/'
headers = {
    'Authorization': "Token __API_TOKEN__",
    'Content-Type': 'application/json'
}
with open(__IMAGE_PATH__, "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read()).decode('utf-8')

data = {
    'task_id': __TASK_ID__,
    'records': [ {'_url': __IMAGE_URL__ }, {"_base64": encoded_string } ]
}

response = requests.post(url, headers=headers, data=json.dumps(data))
if response.ok:
    print(json.dumps(response.json(), indent=2))
else:
    print('Error calling API: ' + response.text)

In PHP:

$curl_handle = curl_init("https://api.vize.ai/v2/classify");

$data = [
  'task_id' => __TASK_ID__,
  'records' => [
    [ '_url' => 'https://bit.ly/2IymQJv' ],
    [ '_base64' => base64_encode(file_get_contents(__PATH_TO_IMAGE__)) ]
  ]
];

curl_setopt($curl_handle, CURLOPT_CUSTOMREQUEST, "POST");
curl_setopt($curl_handle, CURLOPT_POSTFIELDS, json_encode($data));
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl_handle, CURLOPT_FAILONERROR, true);
curl_setopt($curl_handle, CURLOPT_HTTPHEADER, array(
  "Content-Type: application/json",
  "Authorization: Token __API_TOKEN__",
  "cache-control: no-cache",)
);

$response = curl_exec($curl_handle);
$error_msg = curl_error($curl_handle);

if ($error_msg) { // Handle error
  print_r($error_msg);
} else { // Handle response
  print_r($response);
}
curl_close ($curl_handle);
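Since a batch holds at most 10 records, classifying a longer list of URLs means splitting it into chunks. A sketch under assumptions: `chunked` and `classify_urls` are our own helpers, and the `records` key in the response mirrors the request format but is not confirmed by this document:

```python
import requests

def chunked(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def classify_urls(urls, task_id, token, batch_size=10):
    """Classify many image URLs via /v2/classify in batches of up to 10."""
    results = []
    for batch in chunked(urls, batch_size):
        response = requests.post(
            "https://api.vize.ai/v2/classify",
            headers={"Authorization": "Token " + token},
            json={"task_id": task_id,
                  "records": [{"_url": u} for u in batch]},
        )
        response.raise_for_status()
        # NOTE: assumes results come back under a "records" key,
        # mirroring the request; adjust to the actual response shape.
        results.extend(response.json().get("records", []))
    return results
```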

Task — /v2/task/

Task endpoints let you manage the tasks in your account: list, create, modify, and delete them. Until the first training of a task successfully finishes, the production version of the task is -1 and the task cannot be used for classification. Find details in the interactive API reference.

List tasks:

curl -v -XGET -H 'Authorization: Token __API_TOKEN__' https://api.vize.ai/v2/task/

In Python:

import requests

url = 'https://api.vize.ai/v2/task/'
headers = {'Authorization': "Token __API_TOKEN__"}

response = requests.get(url, headers=headers)
if response.ok:
    print(response.text)
else:
    print('Error calling API: ' + response.text)
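Because an untrained task has production version -1, you may want to filter the task list before classifying. A sketch assuming the task objects in the list response expose a `version` field (not confirmed by this document):

```python
import requests

def is_trained(task):
    """A task can be used for classification only once its version is > -1."""
    return task.get("version", -1) > -1

def trained_tasks(token):
    """List tasks and keep only those with a deployed model."""
    response = requests.get("https://api.vize.ai/v2/task/",
                            headers={"Authorization": "Token " + token})
    response.raise_for_status()
    # NOTE: assumes the response is a JSON array of task objects
    # with a "version" field; adjust to the actual payload.
    return [t for t in response.json() if is_trained(t)]
```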

Create new task:

curl -v -XPOST -H 'Authorization: Token __API_TOKEN__' -F 'name=My new task' -F 'description=Demo task' https://api.vize.ai/v2/task/

In Python:

import requests

url = 'https://api.vize.ai/v2/task/'
headers = {'Authorization': "Token __API_TOKEN__"}
data = {"name": "My new task", "description": "Demo task"}

response = requests.post(url, headers=headers, data=data)
if response.ok:
    print(response.text)
else:
    print('Error calling API: ' + response.text)

Delete task:

curl -v -XDELETE -H 'Authorization: Token __API_TOKEN__' https://api.vize.ai/v2/task/__TASK_ID__/

In Python:

import requests

url = 'https://api.vize.ai/v2/task/__TASK_ID__/'
headers = {'Authorization': "Token __API_TOKEN__"}

response = requests.delete(url, headers=headers)
if response.ok:
    print(response.text)
else:
    print('Error calling API: ' + response.text)

Label — /v2/label/

Label endpoints let you manage the labels (categories) in your tasks. You manage labels independently (list, create, modify, and delete them) and then connect them to your tasks. Each task requires at least two labels for training, and each label must contain at least 20 images. Find details in our interactive API reference.

List all your labels:

curl -v -XGET -H 'Authorization: Token __API_TOKEN__' https://api.vize.ai/v2/label/

In Python:

import requests

url = 'https://api.vize.ai/v2/label/'
headers = {'Authorization': 'Token __API_TOKEN__'}

response = requests.get(url, headers=headers)
if response.ok:
    print(response.text)
else:
    print('Error calling API: ' + response.text)

Create new label:

curl -v -XPOST -H 'Authorization: Token __API_TOKEN__' -F 'name=New label' https://api.vize.ai/v2/label/

In Python:

import requests

url = 'https://api.vize.ai/v2/label/'
headers = {'Authorization': "Token __API_TOKEN__"}
data = {"name": "New label", "task": "__TASK_ID__"}

response = requests.post(url, headers=headers, data=data)
if response.ok:
    print(response.text)
else:
    print('Error calling API: ' + response.text)

Connect a created label to your task:

curl -v -XPOST -H 'Authorization: Token __API_TOKEN__' -F 'label_id=__LABEL_ID__' https://api.vize.ai/v2/task/__TASK_ID__/add-label/

In Python:

import requests

url = 'https://api.vize.ai/v2/task/__TASK_ID__/add-label/'
headers = {'Authorization': "Token __API_TOKEN__"}
data = {"label_id": "__LABEL_ID__"}

response = requests.post(url, headers=headers, data=data)
if response.ok:
    print(response.text)
else:
    print('Error calling API: ' + response.text)

Training image — /v2/training-image/

The training image endpoint lets you upload training images and add labels to them. You can list, create, modify, and delete training images. Because Vize will soon support multi-label classification, the API allows adding more than one label to each training image. Find details in the interactive API reference.

Upload training image:

curl -v -XPOST -H 'Authorization: Token __API_TOKEN__' -F 'img_path=@__FILE__;type=image/jpeg' https://api.vize.ai/v2/training-image/

In Python:

import requests

url = 'https://api.vize.ai/v2/training-image/'
headers = {'Authorization': "Token __API_TOKEN__"}
files = {'img_path': open('__FILE__', 'rb')}

response = requests.post(url, headers=headers, files=files)
if response.ok:
    print(response.text)
else:
    print('Error calling API: ' + response.text)

Add label to a training image:

curl -v -XPOST -H 'Authorization: Token __API_TOKEN__' -F 'label_id=__LABEL_ID__' https://api.vize.ai/v2/training-image/__IMAGE_ID__/add-label

In Python:

import requests

url = 'https://api.vize.ai/v2/training-image/__IMAGE_ID__/add-label'
headers = {'Authorization': "Token __API_TOKEN__"}
data = {"label_id": "__LABEL_ID__"}

response = requests.post(url, headers=headers, data=data)
if response.ok:
    print(response.text)
else:
    print('Error calling API: ' + response.text)
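The upload and add-label calls are commonly combined into a loop over a folder of images. A sketch under assumptions: the `id` field in the upload response is not confirmed by this document, and `is_image` and `upload_folder` are our own helper names:

```python
import os
import requests

def is_image(filename):
    """Accept the formats the endpoint supports (jpg and png)."""
    return filename.lower().endswith((".jpg", ".jpeg", ".png"))

def upload_folder(folder, label_id, token):
    """Upload every image in `folder` and attach `label_id` to each one."""
    session = requests.Session()
    session.headers["Authorization"] = "Token " + token
    uploaded = []
    for name in sorted(os.listdir(folder)):
        if not is_image(name):
            continue
        with open(os.path.join(folder, name), "rb") as f:
            # NOTE: assumes the upload response contains the image "id";
            # adjust to the actual payload.
            image = session.post("https://api.vize.ai/v2/training-image/",
                                 files={"img_path": f}).json()
        session.post("https://api.vize.ai/v2/training-image/%s/add-label"
                     % image["id"], data={"label_id": label_id})
        uploaded.append(image["id"])
    return uploaded
```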

Training — /v2/task/__TASK_ID__/train/

Use the training endpoint to start a model training. Training takes from a few minutes up to a few hours, depending on the number of images in your training collection. You are notified about the start and finish of the training by email.

Start training:

curl -v -XPOST -H 'Authorization: Token __API_TOKEN__' https://api.vize.ai/v2/task/__TASK_ID__/train/

In Python:

import requests

url = 'https://api.vize.ai/v2/task/__TASK_ID__/train/'
headers = {'Authorization': "Token __API_TOKEN__"}

response = requests.post(url, headers=headers)
if response.ok:
    print(response.text)
else:
    print('Error calling API: ' + response.text)
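Email is the official notification channel; if you prefer to poll from code, here is a sketch assuming the task detail endpoint returns a `version` field that increments by one when a training finishes (consistent with the Terminology section, but the field name is not confirmed here):

```python
import time
import requests

def task_version(task):
    """Production version of a task; -1 until the first training finishes."""
    return task.get("version", -1)

def wait_for_training(task_id, token, previous_version, poll_seconds=60):
    """Poll the task detail until its version passes `previous_version`."""
    headers = {"Authorization": "Token " + token}
    url = "https://api.vize.ai/v2/task/%s/" % task_id
    while True:
        task = requests.get(url, headers=headers).json()
        if task_version(task) > previous_version:
            return task
        time.sleep(poll_seconds)
```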