# AI Architecture
This document describes architecture shared by the GitLab Duo AI features. For historical motivation and goals of this architecture, see the AI Gateway Architecture design document.
## Introduction
The following diagram shows a simplified view of how the different components in GitLab interact.
```plantuml
@startuml
!theme cloudscape-design
skinparam componentStyle rectangle

package Clients {
  [IDEs, Code Editors, Language Server] as IDE
  [GitLab Web Frontend] as GLWEB
}

[GitLab.com] as GLCOM
[Self-Managed/Dedicated] as SMI
[CustomersDot API] as CD
[AI Gateway] as AIGW

package Models {
  [3rd party models (Anthropic, VertexAI)] as THIRD
  [GitLab Native Models] as GLNM
}

Clients -down-> GLCOM : REST/Websockets
Clients -down-> SMI : REST/Websockets
Clients -down-> AIGW : code completion direct connection
SMI -right-> CD : License + JWT Sync
GLCOM -down-> AIGW : Prompts + Telemetry + JWT (REST)
SMI -down-> AIGW : Prompts + Telemetry + JWT (REST)
AIGW -up-> GLCOM : JWKS public key sync
AIGW -up-> CD : JWKS public key sync
AIGW -down-> Models : prompts
@enduml
```
- AI Abstraction layer - Every GitLab instance (Self-Managed, GitLab.com, and so on) contains an AI Abstraction layer, which provides a framework for implementing new AI features in the monolith. This layer adds contextual information to the request and performs request pre- and post-processing, as sketched below.
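To make the pre- and post-processing role concrete, here is a minimal hypothetical sketch; none of these class or method names are the real abstraction-layer API:

```ruby
# Hypothetical sketch only: illustrates how an abstraction-layer action could
# wrap a request. The real framework lives in the GitLab monolith.
class AiAction
  def initialize(user_id:, resource_id:)
    @user_id = user_id
    @resource_id = resource_id
  end

  def execute(prompt)
    request = preprocess(prompt)
    response = call_ai_gateway(request)
    postprocess(response)
  end

  private

  # Pre-processing: enrich the prompt with contextual information.
  def preprocess(prompt)
    { prompt: prompt, context: { user_id: @user_id, resource_id: @resource_id } }
  end

  # Placeholder for the authenticated HTTP call to the AI Gateway.
  def call_ai_gateway(request)
    "model response for #{request[:prompt]}"
  end

  # Post-processing: sanitize model output before returning it to the caller.
  def postprocess(response)
    response.to_s.strip
  end
end

AiAction.new(user_id: 1, resource_id: 42).execute("Summarize this issue")
```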
## Systems
- GitLab instances - GitLab monolith that powers all types of GitLab instances
- CustomersDot - Allows customers to buy and upgrade subscriptions by adding more seats, and to add or edit payment records. It also manages self-managed licenses.
- AI Gateway - System that provides a unified interface for invoking models. Deployed in Google Cloud Run (using Runway).
- Extensions
  - Language Server (powers code suggestions in VS Code, Visual Studio 2022 for Windows, and Neovim)
  - VS Code
  - JetBrains
  - Visual Studio 2022 for Windows
  - Neovim
## Difference between how GitLab.com and Self-Managed/Dedicated access AI Gateway
- GitLab.com
  - GitLab.com self-issues a JWT auth token signed with a private key (sketched below).
- Other types of instances
  - Self-Managed and Dedicated instances regularly synchronize their licenses and AI access tokens with CustomersDot.
  - Self-Managed and Dedicated instances route traffic to the appropriate AI Gateway.
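As a rough illustration of the GitLab.com flow using the `jwt` gem; the claim names, expiry, and key handling here are illustrative assumptions, not the production implementation:

```ruby
require 'jwt'
require 'openssl'

# Illustrative only: GitLab.com signs a token with its private key.
private_key = OpenSSL::PKey::RSA.generate(2048)
payload = { iss: 'gitlab.com', aud: 'gitlab-ai-gateway', exp: Time.now.to_i + 3600 }
token = JWT.encode(payload, private_key, 'RS256')

# The AI Gateway verifies tokens using the public keys it syncs over JWKS
# from GitLab.com and CustomersDot (see the diagram above).
claims, _header = JWT.decode(token, private_key.public_key, true, algorithm: 'RS256')
claims.fetch('iss') # => "gitlab.com"
```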
## SaaS-based AI abstraction layer
GitLab currently operates a cloud-hosted AI architecture. Licensed self-managed instances will be able to access it through the AI Gateway. See the design document for details.
There are two primary reasons for this: the best AI models are cloud-based, as they often depend on specialized hardware designed for this purpose, and operating self-managed infrastructure capable of running AI at scale with appropriate performance is a significant undertaking. We are actively tracking self-managed customers interested in AI.
## AI Gateway
The AI Gateway (formerly the model gateway) is a standalone service that will give access to AI features to all users of GitLab, no matter which instance they are using: Self-Managed, Dedicated, or GitLab.com. The SaaS-based AI abstraction layer will transition to connecting to this gateway, rather than accessing cloud-based providers directly.
Calls to the AI Gateway from GitLab Rails can be made using the Abstraction Layer. By default, these actions are performed asynchronously via a Sidekiq job to prevent long-running requests in Puma. Because of the latency Sidekiq adds, this path should be used only for actions that are not latency sensitive.
At the time of writing, the Abstraction Layer still directly calls the AI providers. Epic 11484 proposes to change this.
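A minimal sketch of this asynchronous path, assuming illustrative worker and client names rather than the actual monolith code:

```ruby
require 'sidekiq'

# Placeholder client; in reality this is an authenticated HTTP call to the AI Gateway.
class AiGatewayClient
  def self.complete(user_id:, prompt:)
    "model response for #{prompt}"
  end
end

# Hypothetical worker: performing the call in Sidekiq keeps long-running
# requests out of the Puma web process.
class AiCompletionWorker
  include Sidekiq::Worker

  def perform(user_id, prompt)
    response = AiGatewayClient.complete(user_id: user_id, prompt: prompt)

    # The HTTP request that enqueued this job has already returned, so the
    # result is delivered asynchronously (for example, over GraphQL subscriptions).
    Sidekiq.logger.info("AI response for user #{user_id}: #{response}")
  end
end

# Enqueued from a controller or service without blocking the web process:
# AiCompletionWorker.perform_async(current_user.id, prompt)
```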
When an action is latency sensitive, we can decide to call the AI Gateway directly, avoiding the latency added by Sidekiq. We already do this for Code Suggestions, which is handled by API endpoints nested in `/api/v4/code_suggestions`. Any new endpoints should be nested within the `/api/v4/ai_assisted` namespace. Doing so automatically routes requests on GitLab.com to the `ai-assisted` fleet, isolating the workload from the regular API and making it easier to scale if needed.
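What such a latency-sensitive endpoint could look like, sketched with Grape (GitLab's API framework); the route, parameters, and inline placeholder response are assumptions, not the actual Code Suggestions implementation:

```ruby
require 'grape'

class AiAssistedApi < Grape::API
  format :json

  # Nesting under ai_assisted routes GitLab.com traffic to the ai-assisted fleet.
  namespace :ai_assisted do
    params do
      requires :prompt, type: String
    end
    post :summarize do
      # Synchronous call to the AI Gateway: no Sidekiq hop, so latency stays low.
      { response: "model response for #{params[:prompt]}" } # placeholder for the gateway call
    end
  end
end
```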
## Supported technologies
As part of the AI working group, we have been investigating various technologies and vetting them. Below is a list of the tools which have been reviewed and already approved for use within the GitLab application.
It is possible to use other models or technologies; however, they will need to go through a review process prior to use. Use the AI Project Proposal template to propose your idea, and include the new tools required to support it.
### Models
The following models have been approved for use:
- Google's Vertex AI and model garden
- Anthropic models
- Suggested reviewer
### Vector stores
NOTE: There is a proposal to change the vector store to improve the quality of search results. See RAG for GitLab Duo for more information.
The following vector stores have been approved for use:
- `pgvector` is a Postgres extension adding support for storing vector embeddings and calculating ANN (approximate nearest neighbor).
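As an illustration of how `pgvector` is typically used from Rails via the `neighbor` gem (also referenced in the next section), here is a hedged sketch; the `Embedding` model, its columns, and the `embedding_for` helper are assumptions:

```ruby
# Hypothetical model backed by a table with a pgvector `embedding` column.
class Embedding < ApplicationRecord
  has_neighbors :embedding # provided by the neighbor gem
end

# `embedding_for` stands in for a call to an embedding model that returns
# a fixed-length array of floats.
Embedding.create!(content: 'How do I create a merge request?',
                  embedding: embedding_for('How do I create a merge request?'))

# Nearest-neighbor search against a query embedding:
Embedding.nearest_neighbors(:embedding, embedding_for('creating an MR'), distance: 'cosine')
         .first(5)
```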
### Indexing Update
NOTE: There is a proposal to change how the index is updated to improve the quality of search results. See RAG for GitLab Duo for more information.
We are currently using sequential scan, which provides perfect recall. We are considering adding an index if we can ensure that it still produces accurate results, as noted in the `pgvector` indexing documentation.
Given that the table contains thousands of entries, indexing with these updated settings would likely improve search speed while maintaining high accuracy. However, more testing may be needed to verify the optimal configuration for this dataset size before deploying to production.
A draft MR has been created to update the index.
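The exact definition lives in that draft MR. As a rough sketch, and assuming illustrative table, index, and `lists` values, an `ivfflat` index could be added in a Rails migration like this:

```ruby
class AddIvfflatIndexToEmbeddings < ActiveRecord::Migration[7.0]
  def up
    # ivfflat trades exact recall for speed; `lists` controls how the stored
    # vectors are clustered (see the heuristics below).
    execute <<~SQL
      CREATE INDEX index_embeddings_on_embedding
      ON embeddings
      USING ivfflat (embedding vector_cosine_ops)
      WITH (lists = 100);
    SQL
  end

  def down
    execute 'DROP INDEX index_embeddings_on_embedding;'
  end
end
```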
The index function has been updated to improve search quality. This was tested locally by setting the `ivfflat.probes` value to `10` with the following SQL command:

```ruby
::Embedding::Vertex::GitlabDocumentation.connection.execute("SET ivfflat.probes = 10")
```

Setting the `probes` value for indexing improves results, as per the neighbor documentation.

For optimal `probes` and `lists` values:

- Use `lists` equal to `rows / 1000` for tables with up to 1 million rows and `sqrt(rows)` for larger datasets.
- For `probes`, start with `lists / 10` for tables up to 1 million rows and `sqrt(lists)` for larger datasets.
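Applied to concrete table sizes, those heuristics work out as follows; this helper is purely illustrative and not part of the GitLab codebase:

```ruby
# Compute starting ivfflat settings from the row count, per the heuristics above.
def ivfflat_settings(rows)
  lists  = rows <= 1_000_000 ? rows / 1000 : Math.sqrt(rows).round
  probes = rows <= 1_000_000 ? lists / 10 : Math.sqrt(lists).round
  { lists: [lists, 1].max, probes: [probes, 1].max }
end

ivfflat_settings(10_000)    # => { lists: 10, probes: 1 }
ivfflat_settings(4_000_000) # => { lists: 2000, probes: 45 }
```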
## Code Suggestions
Code Suggestions is being integrated into the GitLab Rails repository, which will unify the architectures of Code Suggestions and the AI features that use the abstraction layer, and bring self-managed support to those other AI features.
The following table documents the functionality that Code Suggestions offers today, and where that functionality will happen as part of the unification:
| Topic | Details | Where this happens today | Where this will happen going forward |
|---|---|---|---|
| Request processing | | | |
| | Receives requests from IDEs (VS Code, GitLab WebIDE, MS Visual Studio 2022 for Windows, IntelliJ, JetBrains, VIM, Emacs, Sublime), including code before and after the cursor | GitLab Rails | GitLab Rails |
| | Authenticates the current user, verifies they are authorized to use Code Suggestions for this project | GitLab Rails + AI Gateway | GitLab Rails + AI Gateway |
| | Preprocesses the request to add context, such as including imports via TreeSitter | AI Gateway | Undecided |
| | Routes the request to the AI Provider | AI Gateway | AI Gateway |
| | Returns the response to the IDE | GitLab Rails | GitLab Rails |
| | Logs the request, including timestamp, response time, model, etc. | Both | Both |
| Telemetry | | | |
| | User acceptance or rejection in the IDE | AI Gateway | Both |
| | Number of unique users per day | GitLab Rails, AI Gateway | Undecided |
| | Error rate, model usage, response time, IDE usage | AI Gateway | Both |
| | Suggestions per language | AI Gateway | Both |
| | Monitoring | Both | Both |
| Model Routing | | | |
| | Currently we are not using this functionality, but Code Suggestions is able to support routing to multiple models based on a percentage of traffic | AI Gateway | Both |
| Internal Models | | | |
| | Currently unmaintained, the ability to run models in our own instance, running them inside Triton, and routing requests to our own models | AI Gateway | AI Gateway |
### Self-managed support
Code Suggestions for self-managed users was introduced as part of the Cloud Connector MVC.
For more information on the technical solution for this project see the Cloud Connector architecture documentation.
The intention is to evolve this solution to service other AI features under the Cloud Connector product umbrella.
### Code Suggestions Latency
Code Suggestions acceptance rates are highly sensitive to latency. While writing code with an AI assistant, a user pauses only for a short duration before continuing to type out a block of code manually. As soon as the user presses another key, the existing suggestion is invalidated and a new request must be issued to the Code Suggestions endpoint. That request is, in turn, equally sensitive to latency.
In the worst case, with sufficient latency, the IDE could issue a string of requests, each of which is then ignored as the user proceeds without waiting for the response. This adds no value for the user, while still putting load on our services.
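Clients typically mitigate this with debouncing. The actual logic lives in the Language Server, but the idea can be sketched in Ruby (all names here are illustrative):

```ruby
# Illustrative debouncer: each keypress cancels the pending request and
# schedules a new one, so only the final pause triggers a round trip.
class Debouncer
  def initialize(delay)
    @delay = delay
    @mutex = Mutex.new
    @pending = nil
  end

  def call(&block)
    @mutex.synchronize do
      @pending&.kill # invalidate the request from the previous keypress
      @pending = Thread.new do
        sleep @delay
        block.call
      end
    end
  end
end

debouncer = Debouncer.new(0.3) # wait 300 ms after the last keypress

# Invoked on every keypress; only the last call in a burst reaches the server.
3.times { debouncer.call { puts 'requesting code suggestion' } }
sleep 0.5 # => prints once
```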
See our discussions here around how we plan to iterate on latency for this feature.
## Future changes to the architecture
- We plan on deploying the AI Gateway in different regions to improve latency (see the epic Multi-region support for AI Gateway).
- We would like to centralize telemetry. However, centralizing AI (or Cloud Connector) telemetry remains a difficult, unsolved problem.