Wiki & Integration Guides
Step-by-step guides for integrating GitHub Copilot API Gateway with popular AI tools and frameworks.
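All examples assume the gateway is running locally at http://127.0.0.1:3030/v1 (adjust the URL if yours differs). Several sections below (Function Calling, JSON Mode, Streaming) reuse a plain OpenAI client pointed at the gateway. A minimal sketch, assuming the gateway accepts any placeholder API key as the examples here do:

from openai import OpenAI

# Point the standard OpenAI SDK client at the local gateway.
# The API key can be any non-empty string; the gateway handles auth itself.
client = OpenAI(
    base_url="http://127.0.0.1:3030/v1",
    api_key="copilot",
)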
🦜 LangChain Integration
Use Copilot as your LLM provider in LangChain for building RAG pipelines, agents, and chains.
Installation
pip install langchain langchain-openai
Configuration
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://127.0.0.1:3030/v1",
    api_key="copilot",
    model="gpt-4o"
)

# Use in chains
response = llm.invoke("Explain RAG in one sentence")
print(response.content)
With RAG Pipeline
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Configure LLM and embeddings
llm = ChatOpenAI(
    base_url="http://127.0.0.1:3030/v1",
    api_key="copilot",
    model="gpt-4o"
)
embeddings = OpenAIEmbeddings(
    base_url="http://127.0.0.1:3030/v1",
    api_key="copilot"
)

# Create vector store and QA chain
vectorstore = FAISS.from_texts(your_documents, embeddings)
qa_chain = RetrievalQA.from_chain_type(llm, retriever=vectorstore.as_retriever())
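Once built, the chain is queried like any other LangChain retrieval chain. A minimal usage sketch (your_documents is a list of strings you supply; the question is illustrative):

# Ask a question over the indexed documents
result = qa_chain.invoke({"query": "What do the documents say about deployment?"})
print(result["result"])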
🦙 LlamaIndex Integration
Build data-centric LLM applications with LlamaIndex powered by Copilot.
from llama_index.llms.openai import OpenAI
from llama_index.core import Settings

# Configure global LLM
Settings.llm = OpenAI(
    api_base="http://127.0.0.1:3030/v1",
    api_key="copilot",
    model="gpt-4o"
)

# Now use LlamaIndex normally
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("Summarize the documents")
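Note that building a VectorStoreIndex also requires an embedding model; left unset, LlamaIndex falls back to the default OpenAI embeddings endpoint and expects a real OpenAI key. If the gateway also proxies /v1/embeddings (as the LangChain example above assumes), you can route embeddings through it too. A sketch under that assumption:

from llama_index.embeddings.openai import OpenAIEmbedding

# Route embeddings through the gateway as well (assumes it exposes an
# OpenAI-compatible /v1/embeddings endpoint)
Settings.embed_model = OpenAIEmbedding(
    api_base="http://127.0.0.1:3030/v1",
    api_key="copilot"
)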
🎯 Cursor IDE
Use Copilot API Gateway as your model provider in Cursor.
Configuration Steps
- Open Cursor Settings (Cmd+, / Ctrl+,)
- Navigate to Models → OpenAI API Key
- Set the API Key to any value (e.g., copilot)
- Enable Override OpenAI Base URL
- Set Base URL to: http://127.0.0.1:3030/v1
- Select your preferred model (e.g., gpt-4o)
Now Cursor will use your Copilot subscription for all AI features!
💬 Aider
Aider is an AI pair programming tool for your terminal. Use it with Copilot:
# Set environment variables
export OPENAI_API_KEY=copilot
export OPENAI_API_BASE=http://127.0.0.1:3030/v1

# Run aider
aider --model gpt-4o
Or create a .aider.conf.yml in your project:
openai-api-key: copilot
openai-api-base: http://127.0.0.1:3030/v1
model: gpt-4o
🔄 Continue
Configure Continue (open-source AI code assistant) to use Copilot:
Edit ~/.continue/config.json:
{
"models": [
{
"title": "Copilot GPT-4o",
"provider": "openai",
"model": "gpt-4o",
"apiKey": "copilot",
"apiBase": "http://127.0.0.1:3030/v1"
}
]
}
🔧 Open Interpreter
Let language models run code on your computer, powered by Copilot:
# Set environment
export OPENAI_API_KEY=copilot
export OPENAI_API_BASE=http://127.0.0.1:3030/v1

# Run interpreter
interpreter --model gpt-4o
🤖 AutoGPT
Run autonomous AI agents with your Copilot subscription:
Edit .env in your AutoGPT directory:
OPENAI_API_KEY=copilot
OPENAI_API_BASE_URL=http://127.0.0.1:3030/v1
SMART_LLM=gpt-4o
FAST_LLM=gpt-4o-mini
👥 CrewAI
Orchestrate AI agent teams with CrewAI:
import os

os.environ["OPENAI_API_KEY"] = "copilot"
os.environ["OPENAI_API_BASE"] = "http://127.0.0.1:3030/v1"

from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Research and summarize topics",
    backstory="Expert research analyst",
    llm="gpt-4o"
)

task = Task(
    description="Research the latest trends in AI",
    agent=researcher
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
🎯 Enterprise Apps Hub
The extension includes 30+ built-in AI workflows accessible from the VS Code sidebar.
Accessing Apps Hub
- Open VS Code with the extension installed
- Click "Open Apps Hub" in the sidebar
- Browse or search for the app you need
- Fill in the form and run the workflow
🎭 Playwright Generator
Generate production-ready E2E tests from natural language:
- Open Apps Hub → Playwright Generator
- Describe the test scenario (e.g., "Login with valid credentials")
- Optionally add Jira issue ID for context
- Click Generate to create the test code
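The exact code depends on your description, but for a scenario like "Login with valid credentials" the output is a standard Playwright test. A hypothetical sketch of what the generator might produce (the URL and selectors are illustrative placeholders, not part of the extension):

from playwright.sync_api import sync_playwright

def test_login_with_valid_credentials():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/login")  # illustrative URL
        page.fill("#username", "testuser")      # illustrative selectors
        page.fill("#password", "s3cret")
        page.click("button[type='submit']")
        # Verify the user reached the dashboard after login
        assert page.url.endswith("/dashboard")
        browser.close()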
🔗 Jira Integration
Connect Jira to automatically fetch issue context:
- Open Apps Hub → Settings
- Enter your Jira URL, email, and API token
- Save credentials
- Now any app can pull context from Jira issues
🛠️ Function Calling
Use OpenAI-compatible tool/function calling:
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string"}
},
"required": ["location"]
}
}
}
]
response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
tools=tools
)
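When the model decides to use the tool, the reply carries tool_calls instead of text content. A minimal sketch of the round trip (get_weather here is a stand-in you would implement yourself):

import json

message = response.choices[0].message
if message.tool_calls:
    tool_call = message.tool_calls[0]
    args = json.loads(tool_call.function.arguments)
    weather = get_weather(**args)  # your own implementation

    # Send the tool result back so the model can compose a final answer
    followup = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user", "content": "What's the weather in Tokyo?"},
            message,
            {"role": "tool", "tool_call_id": tool_call.id, "content": json.dumps(weather)},
        ],
    )
    print(followup.choices[0].message.content)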
📋 JSON Mode
Guarantee valid JSON output:
response = client.chat.completions.create(
model="gpt-4o",
messages=[{
"role": "user",
"content": "Extract name and email from: John Doe, john@example.com"
}],
response_format={"type": "json_object"}
)
import json
data = json.loads(response.choices[0].message.content)
# {"name": "John Doe", "email": "john@example.com"}
🌊 Streaming Responses
Build responsive UIs with real-time streaming:
stream = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Write a poem"}],
stream=True
)
for chunk in stream:
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="", flush=True)