Workshop

Vapi Workflow (Assistant Creation)


Vapi has updated how Workflows are used. Be sure to review the latest implementation below!

The new way to implement the workflows in the app:

  • Set up a Global Prompt:
    • For example “You are a voice assistant helping with creating new AI interviewers. Your task is to collect data from the user. Remember that this is a voice conversation - do not use any special characters.”
  • Modify the first Conversation Node:
    • Add a prompt like “Greet the user and help them create a new AI Interviewer.”
    • In the right-hand menu, click the “Extract Variables” tab and add the variables, just like in the Gather node in the video.
    • level - The job experience level. - string - you can add enum values to make sure the data stays consistent
    • amount - How many questions would you like to generate? - number
    • techstack - A list of technologies to cover during the job interview. For example, React, Next.js, Express.js, Node and so on… - string
    • role - What role would you like to train for? For example Frontend, Backend, Fullstack, Design, UX? - string
    • type - What type of interview should it be? - string - you can provide enums here too
  • Modify the condition bubble to say something like “If the user provided all the required variables.” - you can click on the yellow bubble or the green edge to get its condition in the right-side menu.
  • Modify the next Conversation node’s prompt to “Say that the Interview will be generated shortly.”
  • Add a new node - an API Request - and configure it exactly as shown in the video.
Hint: you can remove the conditional bubbles by simply removing the condition text. If you want to add a condition again, just click on the green edge line and write the condition in the right-hand menu.
  • Add a new node after the API Request - this time it’s going to be a Conversation node again, instructed to thank the user for their time and inform them that the interview has been generated successfully.
  • Add a Hang-Up node below.
  • The final Workflow should look like this:
    • (screenshot of the final Workflow)
  • Click on the “Code” button in the upper-right part of the dashboard, and copy the generated workflow code.
  • Save the code in /constants/index.ts as const generator.
constants/index.ts
export const generator = {
  name: "Generate Interview",
  nodes: [
    {
      name: "start",
      type: "conversation",
      isStart: true,
      metadata: {
        position: {
          x: 0,
          y: 0,
        },
      },
      prompt:
        "Speak first. Greet the user and help them create a new AI Interviewer",
      voice: {
        model: "aura-2",
        voiceId: "thalia",
        provider: "deepgram",
      },
      variableExtractionPlan: {
        output: [
          {
            title: "level",
            description: "The job experience level.",
            type: "string",
            enum: ["entry", "mid", "senior"],
          },
          {
            title: "amount",
            description: "How many questions would you like to generate?",
            type: "number",
            enum: [],
          },
          {
            title: "techstack",
            description:
              "A list of technologies to cover during the job interview. For example, React, Next.js, Express.js, Node and so on...",
            type: "string",
            enum: [],
          },
          {
            title: "role",
            description:
              "What role would you like to train for? For example Frontend, Backend, Fullstack, Design, UX?",
            type: "string",
            enum: [],
          },
          {
            title: "type",
            description: "What type of interview should it be?",
            type: "string",
            enum: ["behavioural", "technical", "mixed"],
          },
        ],
      },
    },
    {
      name: "apiRequest_1747470739045",
      type: "apiRequest",
      metadata: {
        position: {
          x: -16.075937072883846,
          y: 703.623428447121,
        },
      },
      method: "POST",
      url: `${process.env.NEXT_PUBLIC_BASE_URL}/api/vapi/generate`,
      headers: {
        type: "object",
        properties: {},
      },
      body: {
        type: "object",
        properties: {
          role: {
            type: "string",
            description: "",
            value: "{{ role }}",
          },
          level: {
            type: "string",
            description: "",
            value: "{{ level }}",
          },
          type: {
            type: "string",
            description: "",
            value: "{{ type }}",
          },
          amount: {
            type: "number",
            description: "",
            value: "{{ amount }}",
          },
          userid: {
            type: "string",
            description: "",
            value: "{{ userid }}",
          },
          techstack: {
            type: "string",
            description: "",
            value: "{{ techstack }}",
          },
        },
      },
      output: {
        type: "object",
        properties: {},
      },
      mode: "blocking",
      hooks: [],
    },
    {
      name: "conversation_1747721261435",
      type: "conversation",
      metadata: {
        position: {
          x: -17.547788169718615,
          y: 1003.3409337989506,
        },
      },
      prompt:
        "Thank the user for the conversation and inform them that the interview was generated successfully.",
      voice: {
        provider: "deepgram",
        voiceId: "thalia",
        model: "aura-2",
      },
    },
    {
      name: "conversation_1747744490967",
      type: "conversation",
      metadata: {
        position: {
          x: -11.165436030430953,
          y: 484.94857971060617,
        },
      },
      prompt: "Say that the Interview will be generated shortly.",
      voice: {
        provider: "deepgram",
        voiceId: "thalia",
        model: "aura-2",
      },
    },
    {
      name: "hangup_1747744730181",
      type: "hangup",
      metadata: {
        position: {
          x: 76.01267674000721,
          y: 1272.0665127156606,
        },
      },
    },
  ],
  edges: [
    {
      from: "apiRequest_1747470739045",
      to: "conversation_1747721261435",
      condition: {
        type: "ai",
        prompt: "",
      },
    },
    {
      from: "start",
      to: "conversation_1747744490967",
      condition: {
        type: "ai",
        prompt: "If user provided all the required variables",
      },
    },
    {
      from: "conversation_1747744490967",
      to: "apiRequest_1747470739045",
      condition: {
        type: "ai",
        prompt: "",
      },
    },
    {
      from: "conversation_1747721261435",
      to: "hangup_1747744730181",
      condition: {
        type: "ai",
        prompt: "",
      },
    },
  ],
};
  • Modify the url parameter of the API Request node, providing NEXT_PUBLIC_BASE_URL as the base of the endpoint’s URL - it’s the url field of the apiRequest node in the example above. This way you won’t have to update it each time your app’s URL changes.
  • Later in the course, when implementing the vapi.start() method, pass the generator you saved in constants instead of passing the workflow’s ID to the method - exactly the same way as you are going to do with the interviewer.
  • You can find all these changes in the source code provided at the top of this lesson.
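The NEXT_PUBLIC_BASE_URL idea from the steps above can be sketched as a small helper. This is only an illustration: buildGenerateUrl is a hypothetical function (the workflow node interpolates the variable directly), and the localhost fallback is an assumption for local development.

```typescript
// Hypothetical helper showing why the base URL comes from the environment:
// only NEXT_PUBLIC_BASE_URL changes between deployments; the path is fixed.
function buildGenerateUrl(base: string | undefined): string {
  // Fall back to localhost so local development works without the variable set.
  return `${base ?? "http://localhost:3000"}/api/vapi/generate`;
}

// In the Next.js app the base would come from .env.local, e.g.:
// NEXT_PUBLIC_BASE_URL=http://localhost:3000
console.log(buildGenerateUrl(process.env.NEXT_PUBLIC_BASE_URL));
```

Deploying to a new domain then only requires updating the environment variable, not the workflow.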
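For context, the /api/vapi/generate endpoint that the API Request node posts to can be sketched as a minimal Next.js App Router handler. This is a sketch only: the real question-generation and saving logic is implemented elsewhere in the course, and the success response shape here is an assumption - the workflow only needs a 2xx to proceed to the next node.

```typescript
// app/api/vapi/generate/route.ts - minimal sketch of the endpoint the
// apiRequest node calls. The destructured keys match the node's body properties.
export async function POST(request: Request): Promise<Response> {
  const { role, level, type, amount, techstack, userid } = await request.json();

  // Placeholder for the real work: generate `amount` questions for the given
  // role/level/type/techstack and save the interview for `userid`.
  void role; void level; void type; void amount; void techstack; void userid;

  // Assumed response shape; a 2xx lets the blocking apiRequest node continue.
  return Response.json({ success: true }, { status: 200 });
}
```

Because the node runs in "blocking" mode, the assistant waits for this response before moving to the thank-you Conversation node.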


Next Lesson: Vapi Agent