The Conceptual Framework
In the fast-paced world of software development, the demand for faster, more efficient backend development is greater than ever. Businesses are constantly pressured to bring products to market quickly, adapt to evolving user needs, and scale seamlessly. Traditional backend prototyping methods, while reliable, can be time-consuming and prone to bottlenecks—especially when interpreting and implementing complex API specifications.
This is where Large Language Models (LLMs) are proving to be game-changers. By automating aspects of backend generation, LLMs are transforming how developers approach API prototyping, reducing manual effort and accelerating the path from concept to functional prototype. OpenAPI v3 is a standard for defining and documenting RESTful APIs, providing a structured, machine-readable format. However, manually converting these specifications into functional backend code remains slow work, and leveraging LLMs to automate the conversion offers a faster, more efficient alternative.
This article provides a step-by-step guide to using LLMs for backend prototyping from OpenAPI v3 specs. It walks through the technical process of generating a functional backend prototype from an OpenAPI v3 specification, ending with a solid starting point for further development.
Why Use LLMs for Backend Prototyping?
Large Language Models are transforming backend prototyping by significantly enhancing speed, efficiency, and accuracy. Traditional backend development often involves repetitive tasks like setting up API routes, configuring databases, and writing boilerplate code—processes that can be time-consuming and prone to human error.
Automating Repetitive Coding Tasks: LLMs can automate much of this work by generating backend code directly from structured inputs like OpenAPI v3 specifications. This not only accelerates development but also allows teams to focus on refining business logic and architecture rather than getting bogged down in routine coding tasks.
Keeping Consistency: Manually interpreting API specifications can lead to mistakes, such as misconfigured data types or overlooked validation rules. LLMs, when given a well-defined OpenAPI v3 document, can produce consistent code that aligns closely with the original specification. This reduces the risk of bugs and makes the prototype more reliable from the outset.
Flexibility: When API specs change, developers can simply update the specification and prompt the model to regenerate or modify the affected parts of the code, saving time and ensuring accuracy.
Scalability: LLMs make prototyping complex systems like microservices architectures easier. They can quickly scaffold multiple services, ensuring consistency across components and simplifying integration. This allows prototypes to evolve into production-ready systems with minimal rework.
Step 1: Idea Input and Initialization
The process begins with the user inputting an application idea into the LLM. This high-level concept serves as the foundation, guiding the LLM in understanding the application's goals and requirements. The LLM processes this input, setting the stage for the subsequent technical steps.
For best results, the initial user prompt should be concise, focused, and as technical as possible so that the LLM can better translate the requirements into specification language.
To make the process less hypothetical, let’s follow the steps with an example: a link-keeping solution that allows users to add, annotate, and mark web links as read.
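For instance, the initial prompt for this idea could be as simple as the following (the wording here is illustrative, not a canonical prompt):
“
I want to build a link-keeping application: users can save web links, annotate them, and mark them as read. Plan the data structures and the REST API this requires.
”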
Step 2: OpenAPI v3 Specification Generation
With the application idea processed, the LLM generates an OpenAPI v3 specification. This specification outlines the CRUD (Create, Read, Update, Delete) actions that manipulate the application's data structures. The LLM incorporates security definitions using scopes, ensuring that each endpoint is protected and accessible only to authorized users.
The initial system prompt would be along the following lines (the exact wording may vary, but it should cover every point discussed in the rationales below):
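“
Based on the application idea above, create an OpenAPI v3 specification written in YAML. Describe CRUD endpoints for the application's data structures and include HTTP status error responses for every endpoint. Secure all endpoints with bearer authentication using scoped access, e.g. "read:links" and "write:links". Define reusable schemas in the components section.
”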
Rationales:
Even though OpenAPI v3 supports specification files written in either JSON or YAML, we specifically mention YAML as the target language because it's less restrictive. It makes adjusting parts of the specification easier without breaking the syntax, which is easy to do in JSON.
We have to explicitly ask for HTTP status error responses, as they're optional in the OpenAPI specification. Adding the security considerations also triggers the LLM to cover the 401 Unauthorized and 403 Forbidden HTTP statuses.
A passing mention of endpoint security will often result in nothing more than a basic bearer token requirement on the request, granting every authenticated user access to every endpoint. Scoping the access future-proofs the solution; hence, the prompt calls it out specifically, down to an example.
Keeping separate schemas in the components section makes the specification more readable and maintainable, though it's optional in the OpenAPI specification. We specifically list the components as a requirement to make the specification robust and future-proof.
The output for our example prompt, generated with the help of Aider using the Codestral model, is:
openapi: 3.0.0
info:
  title: Link-Keeping Solution API
  description: API for a link-keeping solution that allows users to add, annotate, and mark web links as read.
  version: 1.0.0
servers:
  - url: https://api.link-keeping.com/v1
    description: Production server
paths:
  /links:
    post:
      summary: Add a new link
      security:
        - bearerAuth: [ "write:links" ]
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/LinkRequest'
      responses:
        '201':
          description: Link created successfully
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Link'
        '400':
          description: Invalid request
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '401':
          description: Unauthorized
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '403':
          description: Forbidden
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
    get:
      summary: Get all links
      security:
        - bearerAuth: [ "read:links" ]
      responses:
        '200':
          description: A list of links
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Link'
        '401':
          description: Unauthorized
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '403':
          description: Forbidden
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
  /links/{id}:
    parameters:
      - name: id
        in: path
        required: true
        schema:
          type: string
        description: The ID of the link
    get:
      summary: Get a link by ID
      security:
        - bearerAuth: [ "read:links" ]
      responses:
        '200':
          description: A link object
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Link'
        '401':
          description: Unauthorized
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '403':
          description: Forbidden
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '404':
          description: Link not found
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
    put:
      summary: Update a link
      security:
        - bearerAuth: [ "write:links" ]
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/LinkRequest'
      responses:
        '200':
          description: Link updated successfully
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Link'
        '400':
          description: Invalid request
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '401':
          description: Unauthorized
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '403':
          description: Forbidden
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '404':
          description: Link not found
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
    delete:
      summary: Delete a link
      security:
        - bearerAuth: [ "delete:links" ]
      responses:
        '204':
          description: Link deleted successfully
        '401':
          description: Unauthorized
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '403':
          description: Forbidden
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
        '404':
          description: Link not found
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Error'
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT
  schemas:
    LinkRequest:
      type: object
      properties:
        url:
          type: string
          format: uri
          example: https://example.com
        title:
          type: string
          example: Example Website
        description:
          type: string
          example: This is an example website.
      required:
        - url
    Link:
      type: object
      properties:
        id:
          type: string
          example: 12345
        url:
          type: string
          format: uri
          example: https://example.com
        title:
          type: string
          example: Example Website
        description:
          type: string
          example: This is an example website.
        read:
          type: boolean
          example: false
        annotations:
          type: array
          items:
            type: string
          example: []
    Error:
      type: object
      properties:
        message:
          type: string
          example: An error occurred
The resulting specification can be validated with existing tools, like https://editor.swagger.io/.
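If you prefer the command line, a linter such as Redocly CLI can check the file as part of a build instead (assuming Node.js is available):
npx @redocly/cli lint openapi.yaml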
Step 3: Backend Prototype Development
Defining the specification in the previous step lets us treat backend generation as a separate process, leaving more room in the LLM's context window for adjustments and corrections to the generated code.
Given that this process aims to create a starting point for future development, the prompter has to specify the technologies to use so the team can comfortably continue development later. In our case, we've picked Node.js as the runtime and SQLite as the database. An LLM can choose the HTTP framework, input validation solution, authentication validator, SQLite driver, and other dependencies, but we can narrow these down further to match the developer's preferences.
Assuming the specification has been saved to a file named openapi.yaml, the prompt looks as follows:
“
Given the contents of openapi.yaml, create a backend implementation in Node.js, using SQLite to store and retrieve user data.
“
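In practice, this means launching Aider in the project directory and pasting the prompt above, for example (the model identifier here is illustrative and depends on your provider configuration):
aider --model mistral/codestral-latest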
Aider adds openapi.yaml to the context automatically, and its internal tooling turns the LLM output into a valid file structure on disk. Here's the project structure generated by the LLM:
├─ src/
│ ├─ controllers/
│ │ ├─ linkController.js
│ │ ├─ userController.js
│ │
│ ├─ routes/
│ │ ├─ linkRoutes.js
│ │ ├─ userRoutes.js
│ │
│ ├─ services/
│ │ ├─ authService.js
│ │
│ ├─ utils/
│ │ ├─ db.js
│ │
│ ├─ app.js
│
├─ .env
├─ package.json
The LLM provided a reasonable project structure and created register/login endpoints with password hashing logic, which it wasn't asked for directly but which the project needs in order to function.
The LLM followed best development and security practices. It made the application's JWT secret configurable via the .env file, split the route, controller, and service definitions, and set up the database structure before usage. This saved hours of developer work; however, the project did need some manual corrections before it would start.
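The generated code itself is too long to reproduce here, but a minimal sketch gives the flavor of what src/app.js looks like. Express, the initDb helper, and the PORT variable are assumptions for illustration, not the model's verbatim output:

// Sketch only: the actual LLM-generated file differs in detail.
require('dotenv').config(); // load the JWT secret and other settings from .env

const express = require('express'); // assuming the LLM picked Express as the HTTP framework
const linkRoutes = require('./routes/linkRoutes');
const userRoutes = require('./routes/userRoutes');
const { initDb } = require('./utils/db'); // hypothetical helper that creates the SQLite tables

const app = express();
app.use(express.json());

app.use('/users', userRoutes); // register/login endpoints
app.use('/links', linkRoutes); // CRUD endpoints from the specification

// Set up the database structure before accepting traffic.
initDb().then(() => {
  const port = process.env.PORT || 3000;
  app.listen(port, () => console.log(`API listening on port ${port}`));
});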
Step 4: HTTP Client Generation
Now, leveraging existing generators and the specification file defined in step 2, we can generate HTTP clients that match the specification exactly. This saves more development time than a typical AI-assisted development cycle, which targets the implementation files directly.
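For example, OpenAPI Generator (one of several such tools) can produce a TypeScript client straight from the file; the generator and output path below are illustrative choices:

npx @openapitools/openapi-generator-cli generate \
  -i openapi.yaml \
  -g typescript-fetch \
  -o client/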
The Integrated Prototype Solution
The result is a comprehensive backend application scaffold crafted by the LLM to meet the application's initial requirements. This prototype is a foundation, allowing developers to iterate and build upon it. The OpenAPI v3 specification acts as the blueprint, guiding the development process and ensuring consistency across the backend and client implementations. This approach accelerates the initial development phase, reduces errors, and provides a robust starting point for further refinement.