Version: 0.11.4

Creating a searchable Art Database with The MET's open-access collection

In this example, we show how you can enrich data using Cognitive Skills and write to an Azure Search Index using SynapseML. We use a subset of The MET's open-access collection and enrich it by passing it through 'Describe Image' and a custom 'Image Similarity' skill. The results are then written to a searchable index.

import os, sys, time, json, requests
from pyspark.sql.functions import lit, udf, col, split
from synapse.ml.core.platform import find_secret

cognitive_key = find_secret("cognitive-api-key")
cognitive_loc = "eastus"
azure_search_key = find_secret("azure-search-key")
search_service = "mmlspark-azure-search"
search_index = "test"
data = (
    spark.read.format("csv")
    .option("header", True)
    .load("wasbs://publicwasb@mmlspark.blob.core.windows.net/metartworks_sample.csv")
    .withColumn("searchAction", lit("upload"))
    .withColumn("Neighbors", split(col("Neighbors"), ",").cast("array<string>"))
    .withColumn("Tags", split(col("Tags"), ",").cast("array<string>"))
)
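The Neighbors and Tags columns arrive as comma-separated strings, and the split/cast calls turn each one into an array of strings. For intuition, the same reshaping in plain Python (the sample values here are made up):

```python
# Hypothetical row values; the real data holds object IDs and tag words.
row = {"Neighbors": "1234,5678", "Tags": "glass,vase"}

# Per-column equivalent of split(col(...), ",").cast("array<string>").
parsed = {name: value.split(",") for name, value in row.items()}
print(parsed)  # {'Neighbors': ['1234', '5678'], 'Tags': ['glass', 'vase']}
```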
from synapse.ml.cognitive import AnalyzeImage
from synapse.ml.stages import SelectColumns

# define pipeline
describeImage = (
    AnalyzeImage()
    .setSubscriptionKey(cognitive_key)
    .setLocation(cognitive_loc)
    .setImageUrlCol("PrimaryImageUrl")
    .setOutputCol("RawImageDescription")
    .setErrorCol("Errors")
    .setVisualFeatures(
        ["Categories", "Description", "Faces", "ImageType", "Color", "Adult"]
    )
)

df2 = (
    describeImage.transform(data)
    .select("*", "RawImageDescription.*")
    .drop("Errors", "RawImageDescription")
)

Before writing the results to a Search Index, you must define a schema that specifies the name, type, and attributes of each field in your index. For more information, refer to Create a basic index in Azure Search.
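Such a schema is a JSON document naming each field along with its type and attributes (key, searchable, filterable, and so on). Below is a minimal sketch covering a few fields from this example; the exact field names, types, and attributes are assumptions, not the index actually used:

```python
import json

index_schema = {
    "name": "test",
    "fields": [
        # Every index needs exactly one key field.
        {"name": "ObjectID", "type": "Edm.String", "key": True, "filterable": True},
        {"name": "Title", "type": "Edm.String", "searchable": True},
        # Array columns such as Tags map to collection types.
        {
            "name": "Tags",
            "type": "Collection(Edm.String)",
            "searchable": True,
            "facetable": True,
        },
    ],
}
print(json.dumps(index_schema, indent=2))
```

Marking Tags as facetable lets the index return tag counts alongside search results.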

from synapse.ml.cognitive import *

AzureSearchWriter.writeToAzureSearch(
    df2,
    subscriptionKey=azure_search_key,
    actionCol="searchAction",
    serviceName=search_service,
    indexName=search_index,
    keyCol="ObjectID",
)

The Search Index can be queried using the Azure Search REST API by sending GET or POST requests with query parameters that specify the criteria for selecting matching documents. For more information on querying, refer to Query your Azure Search index using the REST API.

url = "https://{}.search.windows.net/indexes/{}/docs/search?api-version=2019-05-06".format(
    search_service, search_index
)
requests.post(
    url, json={"search": "Glass"}, headers={"api-key": azure_search_key}
).json()
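With a GET request, the same criteria travel as URL query parameters instead of a JSON body. A sketch of building such a URL with the standard library ($top is one optional OData parameter among several; its value here is arbitrary):

```python
from urllib.parse import urlencode

search_service = "mmlspark-azure-search"
search_index = "test"

# The search text and paging options become query parameters;
# $top caps the number of documents returned.
params = urlencode({"api-version": "2019-05-06", "search": "Glass", "$top": 5})
get_url = "https://{}.search.windows.net/indexes/{}/docs?{}".format(
    search_service, search_index, params
)
print(get_url)
```

Sending this URL with the api-key header returns the same matches as the POST form above, paged to five documents.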