# Wiki & Integration Guides

Step-by-step guides for integrating GitHub Copilot API Gateway with popular AI tools and frameworks.
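
All examples assume the gateway is running locally and serving its OpenAI-compatible API at `http://127.0.0.1:3030/v1`. As a quick sanity check (assuming the gateway implements the standard `/v1/models` endpoint like upstream OpenAI), list the available models:

```python
from openai import OpenAI

# "copilot" is a placeholder key; the gateway handles real authentication
client = OpenAI(base_url="http://127.0.0.1:3030/v1", api_key="copilot")

for model in client.models.list():
    print(model.id)
```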

## 🦜 LangChain Integration

Use Copilot as your LLM provider in LangChain for building RAG pipelines, agents, and chains.

### Installation

```bash
pip install langchain langchain-openai
# needed for the RAG example below
pip install langchain-community faiss-cpu
```

### Configuration

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://127.0.0.1:3030/v1",
    api_key="copilot",
    model="gpt-4o"
)

# Use in chains
response = llm.invoke("Explain RAG in one sentence")
print(response.content)
```
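
The same `llm` object drops straight into LCEL chains. A minimal sketch:

```python
from langchain_core.prompts import ChatPromptTemplate

# Prompt template piped into the Copilot-backed LLM
prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence")
chain = prompt | llm

print(chain.invoke({"topic": "vector databases"}).content)
```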

### With RAG Pipeline

```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Configure LLM and embeddings
llm = ChatOpenAI(
    base_url="http://127.0.0.1:3030/v1",
    api_key="copilot",
    model="gpt-4o"
)

embeddings = OpenAIEmbeddings(
    base_url="http://127.0.0.1:3030/v1",
    api_key="copilot"
)

# Replace with your own document chunks
your_documents = ["First document text...", "Second document text..."]

# Create vector store and QA chain
vectorstore = FAISS.from_texts(your_documents, embeddings)
qa_chain = RetrievalQA.from_chain_type(llm, retriever=vectorstore.as_retriever())
```
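
With the chain assembled, query it. `RetrievalQA` expects its input under the `"query"` key and returns the answer under `"result"`:

```python
answer = qa_chain.invoke({"query": "What do the documents say about RAG?"})
print(answer["result"])
```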

## 🦙 LlamaIndex Integration

Build data-centric LLM applications with LlamaIndex powered by Copilot.

```python
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings

# Configure global LLM
Settings.llm = OpenAI(
    api_base="http://127.0.0.1:3030/v1",
    api_key="copilot",
    model="gpt-4o"
)

# Now use LlamaIndex normally
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

response = query_engine.query("Summarize the documents")
print(response)
```
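
One caveat: `Settings.llm` only covers completions. Building a `VectorStoreIndex` also calls an embedding model, which otherwise defaults to OpenAI's hosted endpoint. Assuming the gateway serves `/v1/embeddings` (as the LangChain example above implies), route embeddings through it too:

```python
from llama_index.embeddings.openai import OpenAIEmbedding

# Assumes the gateway exposes an OpenAI-compatible embeddings endpoint
Settings.embed_model = OpenAIEmbedding(
    api_base="http://127.0.0.1:3030/v1",
    api_key="copilot"
)
```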

## 🎯 Cursor IDE

Use Copilot API Gateway as your model provider in Cursor.

### Configuration Steps

  1. Open Cursor Settings (Cmd+, / Ctrl+,)
  2. Navigate to Models → OpenAI API Key
  3. Set the API Key to any value (e.g., copilot)
  4. Enable Override OpenAI Base URL
  5. Set Base URL to: http://127.0.0.1:3030/v1
  6. Select your preferred model (e.g., gpt-4o)

Now Cursor will use your Copilot subscription for all AI features!

## 💬 Aider

Aider is an AI pair programming tool for your terminal. Use it with Copilot:

```bash
# Set environment variables
export OPENAI_API_KEY=copilot
export OPENAI_API_BASE=http://127.0.0.1:3030/v1

# Run aider
aider --model gpt-4o
```

Or create a `.aider.conf.yml` in your project:

```yaml
openai-api-key: copilot
openai-api-base: http://127.0.0.1:3030/v1
model: gpt-4o
```

## 🔄 Continue

Configure Continue (open-source AI code assistant) to use Copilot:

Edit `~/.continue/config.json`:

```json
{
  "models": [
    {
      "title": "Copilot GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "copilot",
      "apiBase": "http://127.0.0.1:3030/v1"
    }
  ]
}
```

## 🔧 Open Interpreter

Let language models run code on your computer, powered by Copilot:

```bash
# Set environment
export OPENAI_API_KEY=copilot
export OPENAI_API_BASE=http://127.0.0.1:3030/v1

# Run interpreter
interpreter --model gpt-4o
```

## 🤖 AutoGPT

Run autonomous AI agents with your Copilot subscription:

Edit `.env` in your AutoGPT directory:

```env
OPENAI_API_KEY=copilot
OPENAI_API_BASE_URL=http://127.0.0.1:3030/v1
SMART_LLM=gpt-4o
FAST_LLM=gpt-4o-mini
```

## 👥 CrewAI

Orchestrate AI agent teams with CrewAI:

```python
import os

os.environ["OPENAI_API_KEY"] = "copilot"
os.environ["OPENAI_API_BASE"] = "http://127.0.0.1:3030/v1"

from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Research and summarize topics",
    backstory="Expert research analyst",
    llm="gpt-4o"
)

task = Task(
    description="Research the latest trends in AI",
    expected_output="A short summary of current AI trends",  # required by recent CrewAI releases
    agent=researcher
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)
```

## 🎯 Enterprise Apps Hub

The extension includes 30+ built-in AI workflows accessible from the VS Code sidebar.

### Accessing Apps Hub

  1. Open VS Code with the extension installed
  2. Click "Open Apps Hub" in the sidebar
  3. Browse or search for the app you need
  4. Fill in the form and run the workflow

## 🎭 Playwright Generator

Generate production-ready E2E tests from natural language:

  1. Open Apps Hub → Playwright Generator
  2. Describe the test scenario (e.g., "Login with valid credentials")
  3. Optionally add Jira issue ID for context
  4. Click Generate to create the test code

## 🔗 Jira Integration

Connect Jira to automatically fetch issue context:

  1. Open Apps Hub → Settings
  2. Enter your Jira URL, email, and API token
  3. Save credentials
  4. Now any app can pull context from Jira issues

## 🛠️ Function Calling

Use OpenAI-compatible tool/function calling:

```python
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:3030/v1", api_key="copilot")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools
)
```
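
The model answers with a `tool_calls` entry instead of prose; your code runs the function and sends the result back for a final response. A minimal sketch of that round trip (the `get_weather` implementation is a hypothetical stub):

```python
import json

# Hypothetical stub for illustration
def get_weather(location: str) -> str:
    return f"Sunny, 22°C in {location}"

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_weather(**args)

    # Feed the tool result back so the model can phrase the answer
    followup = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user", "content": "What's the weather in Tokyo?"},
            message,
            {"role": "tool", "tool_call_id": call.id, "content": result},
        ],
    )
    print(followup.choices[0].message.content)
```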

## 📋 JSON Mode

Force syntactically valid JSON output. Note that the OpenAI API (and compatible gateways) requires the word "JSON" to appear somewhere in your messages when this mode is enabled:

```python
import json

# `client` as configured in the earlier examples
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Return a JSON object with the name and email from: John Doe, john@example.com"
    }],
    response_format={"type": "json_object"}
)

data = json.loads(response.choices[0].message.content)
# {"name": "John Doe", "email": "john@example.com"}
```

## 🌊 Streaming Responses

Build responsive UIs with real-time streaming:

```python
# `client` as configured in the earlier examples
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```

## Have Questions?

Check out the FAQ for common questions and troubleshooting.
