Overview
This approach lets you build a Custom GPT that can call Spark API endpoints as “Actions”, so users can ask questions in natural language and your GPT can fetch Spark insights/audience outputs in the background. This is useful when:
- You want a repeatable workflow (e.g., “focus group mode”, “audience profiler mode”)
- You want additional guardrails to ensure non-expert users get the correct data
- You want Spark data available inside a dedicated GPT rather than only in the main ChatGPT + connector experience
Prerequisites
You will need:
- A GWI API token with access to Spark API
- A ChatGPT plan that supports Custom GPTs with Actions
How to use
Create your Custom GPT
- In ChatGPT, go to Explore GPTs
- Click Create
- Give it a name (e.g., “GWI Spark Focus Group – UK Gen Z Travel”)

Add Spark as an Action
- Go to the GPT Configure tab
- Scroll to Actions
- Click Add action
- Choose Import from URL or paste a schema manually

Recommended OpenAPI schema (copy/paste)

(Screenshot: what it should look like once copied in)
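The recommended schema itself isn’t reproduced here. Purely as an illustration of the general shape ChatGPT expects, the sketch below builds a skeleton OpenAPI document in Python and prints it as paste-ready JSON; the server URL, path, operationId and fields are placeholders, not the real Spark API, so swap in the values from the recommended schema.

```python
import json

# Skeleton OpenAPI document for a ChatGPT Action.
# Every URL, path and field name below is a placeholder for illustration only;
# replace them with the recommended Spark API schema.
skeleton = {
    "openapi": "3.1.0",
    "info": {"title": "GWI Spark API (placeholder)", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],  # placeholder, not the real Spark host
    "paths": {
        "/example-insights": {  # placeholder path
            "post": {
                "operationId": "getInsights",  # the name the GPT uses to refer to this operation
                "summary": "Ask Spark an audience question (placeholder operation)",
                "requestBody": {
                    "required": True,
                    "content": {
                        "application/json": {
                            "schema": {
                                "type": "object",
                                "properties": {"question": {"type": "string"}},
                                "required": ["question"],
                            }
                        }
                    },
                },
                "responses": {"200": {"description": "Spark answer"}},
            }
        }
    },
}

print(json.dumps(skeleton, indent=2))  # paste the printed JSON into the Actions schema box
```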

Configure Authentication (API token)
In the Action auth settings:
- Choose API Key
- Location: Header
- Header name: Authorization
- Value: your token (stored by ChatGPT as the secret)
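Before saving the token into the Action, it can help to sanity-check it outside ChatGPT. The sketch below (Python with requests) mirrors the kind of request the Action will make; the endpoint URL and request body are placeholders, and whether the stored value needs a “Bearer ” prefix depends on what the Spark API expects, so confirm against the API docs.

```python
import os
import requests

# Placeholder endpoint; substitute a real Spark API URL from the recommended schema.
URL = "https://api.example.com/example-insights"
token = os.environ["GWI_API_TOKEN"]  # keep the token out of source code

# This mirrors the request ChatGPT sends once the Action is configured as
# API Key -> Header -> Authorization. Some APIs expect the raw token, others
# expect "Bearer <token>"; store whichever form the Spark docs specify.
resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {token}"},
    json={"question": "Who are UK Gen Z travellers?"},
    timeout=30,
)
print(resp.status_code, resp.text[:200])
```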

Add instructions so the GPT uses Spark correctly
In your GPT’s Instructions, add guidance like:
- When to call Spark (e.g., whenever the user asks for data-backed insights)
- Ask for missing parameters (dataset, country, audience definition)
- Keep responses grounded: cite what the API returned; don’t invent stats
- Spark usage: use Spark to answer specific audience questions (who they are, what they do/like, attitudes, channels). If the user asks for a big/strategic task (e.g., “make a media plan”), break it into sub-questions (audience profile, interests, media habits, motivations/barriers, key messages, channel fit, etc.), run Spark for each, then synthesize into recommendations. One analytical goal per Spark call: prefer multiple narrow calls over one broad call (see the sketch after this list).

Also add your use-case behaviour rules: once you’ve told the GPT how to use Spark, don’t forget to add instructions for how you want it to behave overall, including what extra context it should ask for and how it should structure its responses for your specific workflow.
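To make the “one analytical goal per Spark call” rule concrete, here is a rough sketch of the call pattern the instructions describe. In practice the GPT makes these calls through the Action itself; ask_spark is a hypothetical stand-in used only to illustrate breaking a broad brief into narrow questions and synthesizing afterwards.

```python
# ask_spark is a hypothetical stand-in for the operation your schema exposes;
# in practice the GPT calls the Action itself, guided by your instructions.
def ask_spark(question: str) -> str:
    return f"[Spark answer to: {question}]"  # stub so the sketch runs end to end

def media_plan_research(audience: str) -> dict:
    # Break the broad brief ("make a media plan") into narrow sub-questions,
    # one analytical goal per Spark call, rather than one broad call.
    sub_questions = {
        "profile": f"Who are {audience}? Key demographics and attitudes.",
        "interests": f"What do {audience} do and like?",
        "media_habits": f"Which channels and media do {audience} use most?",
        "motivations": f"What motivates {audience}, and what are the barriers?",
    }
    answers = {topic: ask_spark(q) for topic, q in sub_questions.items()}
    # The GPT would then synthesize these grounded answers into recommendations.
    return answers

print(media_plan_research("UK Gen Z travellers"))
```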

Testing and publishing
Once you’ve finished configuring the Custom GPT, test it using the chat panel on the right to make sure it behaves the way you intended (and is returning useful Spark-backed responses). When you’re happy with the outputs, click Create (top right) to publish it, then share it with the users who will test and use it.
Troubleshooting
Import from URL doesn’t work
- Use Paste schema instead
- Or ensure the schema URL is publicly accessible and returns valid OpenAPI YAML/JSON
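One quick way to check the second point, assuming Python with the requests library (and PyYAML if the schema is served as YAML) is available: fetch the URL without any auth headers and confirm it parses as an OpenAPI document.

```python
import json
import sys

import requests
import yaml  # PyYAML; only needed if the schema is served as YAML

SCHEMA_URL = "https://example.com/spark-openapi.yaml"  # placeholder: your schema URL

resp = requests.get(SCHEMA_URL, timeout=30)  # no auth headers, so this checks public accessibility
print("HTTP status:", resp.status_code)
if resp.status_code != 200:
    sys.exit("Schema URL is not publicly accessible")

try:
    spec = json.loads(resp.text)       # is it valid JSON?
except json.JSONDecodeError:
    spec = yaml.safe_load(resp.text)   # fall back to YAML (raises if invalid)

# Minimal sanity check that it looks like an OpenAPI document.
print("openapi:", spec.get("openapi"), "| paths:", list(spec.get("paths", {})))
```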
401 Unauthorized
- The token is missing/invalid
- The token doesn’t have Spark access
- The header format is wrong (e.g. a raw token vs a “Bearer ” prefix in the Authorization value); the sketch below tries both
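If the header format is the suspect, a small test like the one below (same placeholder endpoint idea as the auth-step sketch) can show which form the API accepts before you change the value stored in ChatGPT.

```python
import os
import requests

URL = "https://api.example.com/example-insights"  # placeholder; use a real endpoint from your schema
token = os.environ["GWI_API_TOKEN"]

# Try both header formats; whichever stops returning 401 tells you what to
# store as the API key value in the Action's auth settings.
for label, value in [("raw token", token), ("Bearer prefix", f"Bearer {token}")]:
    resp = requests.post(URL, headers={"Authorization": value},
                         json={"question": "test"}, timeout=30)
    print(f"{label}: HTTP {resp.status_code}")
```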

