i18n MCP Server is a tool for managing internationalization (i18n) in your projects. It streamlines the translation of JSON-based language files by letting you use powerful language models to generate translations directly from a base language file, all through a convenient server interface.
The MCP server is designed for the Cursor IDE over stdio transport, but it should also work with any other client that supports stdio transport.
- Automatically generate translations in multiple languages from a single base language file.
- You don't need to pay for additional translation services: select the translation model you want to use and generate translations on demand.
- No need to manually duplicate or edit JSON files; just send a request and get translated files ready to use.
- Built with simplicity and speed in mind for seamless integration into development pipelines.
This MCP server helps with the incremental translation of JSON files, providing simple tools to load a base language file, serve it in manageable chunks, and collect and save the resulting per-language translations.
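As a rough illustration of what such tools could look like, here is a minimal sketch using the official MCP TypeScript SDK. The tool names, parameters, and in-memory state are assumptions inferred from the task list further below, not the actual i18n-mcp API.

```typescript
// Minimal sketch only: tool names and schemas are assumptions, not the real i18n-mcp API.
// Assumes the official MCP TypeScript SDK, zod, and a flat key -> string base file.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { readFile } from "node:fs/promises";
import { z } from "zod";

const server = new McpServer({ name: "i18n-translation-server", version: "0.1.0" });

// In-memory session state: the base entries and a cursor for chunked reads.
let baseEntries: [string, string][] = [];
let cursor = 0;

server.tool(
  "read_base_file",
  { path: z.string().describe("Absolute path to the base JSON language file") },
  async ({ path }) => {
    baseEntries = Object.entries(JSON.parse(await readFile(path, "utf8")));
    cursor = 0;
    return { content: [{ type: "text", text: `Loaded ${baseEntries.length} entries` }] };
  },
);

server.tool(
  "get_next_chunk",
  { size: z.number().int().positive().default(250) },
  async ({ size }) => {
    const chunk = baseEntries.slice(cursor, cursor + size);
    cursor += chunk.length;
    // Returns an empty object once all entries have been served.
    return { content: [{ type: "text", text: JSON.stringify(Object.fromEntries(chunk)) }] };
  },
);

// ...plus similar tools for clearing state, updating per-language translations,
// and saving the generated files next to the base file.

await server.connect(new StdioServerTransport());
```

The chunked design keeps each request small enough to fit within model token limits, which is why the example prompt further below loops over chunks with a size that may need tuning per model.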
To install and build the server:
git clone
pnpm install # or npm/yarn
pnpm run build
Navigate to Cursor Settings / MCP and click to add a new MCP server. In the opened JSON, include the server definition:
{
  "mcpServers": {
    ...
    "i18n-translation-server": {
      "command": "node",
      "args": ["<base-path>/i18n-mcp/dist/mcp_server.js"]
    },
    ...
  }
}
You will need to replace <base-path> with the correct absolute path to the compiled server.
Once the MCP server is running, you can use the Cursor Agent to interact with the server tools. For example:
We are preparing the i18n files for a project and have a base language file. Using the following data, execute the proposed tasks to prepare the additional required languages.
Base JSON language file: <absolute path to the base language file>
Base language: <base language>
Supported languages: <comma-separated list of supported languages>
These are the tasks that we need to perform:
1. Clear the previous data
2. Read the base JSON language file
3. Get the next chunk
4. For each language in the supported languages, do the following subtasks:
4.1. Translate all items in the chunk from the base language to the target language. Ensure that the order is preserved and that any i18n string parameters are respected.
4.2. Update the translations for the language
5. Repeat from task 3 until the next chunk does not return any data.
6. After processing all chunks, save translations for each supported language.
This request will generate a JSON file for each language in the same folder as the base JSON language file. The current version was tested using gpt-4.1 and a chunk size of 250 entries; for other models, this value may need to be adjusted due to token limits. The flow can also occasionally be interrupted between steps 5 and 6 because of Cursor's 25-tool-call limit, in which case it has to be resumed manually.
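As a purely illustrative example (file names, keys, and the {{name}}-style placeholder syntax are assumptions, not taken from a real project), a base en.json and its generated es.json counterpart might look like this, with the i18n parameters left untouched:

en.json (base):

{
  "greeting": "Hello, {{name}}!",
  "cart.items": "You have {{count}} items in your cart"
}

es.json (generated):

{
  "greeting": "¡Hola, {{name}}!",
  "cart.items": "Tienes {{count}} artículos en tu carrito"
}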
This is just a proposed task request, but new ones can be written by chaining the defined tools.
During testing, we noted that language files are generated faster when generating a single language per request, instead of all at once. Additionally, the response time will depend on the model used.
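Because the server speaks plain stdio, the same chunked flow can in principle be driven by any MCP client, not only the Cursor Agent. The sketch below shows what that might look like with the MCP TypeScript SDK; the tool names, argument shapes, and return format are assumptions carried over from the sketch above, not the server's documented API.

```typescript
// Illustrative only: tool names, arguments, and return shapes are assumptions.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["<base-path>/i18n-mcp/dist/mcp_server.js"],
});
const client = new Client({ name: "i18n-batch-script", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);

// Tasks 1 and 2 of the prompt: clear previous data and load the base file.
await client.callTool({ name: "clear_data", arguments: {} });
await client.callTool({ name: "read_base_file", arguments: { path: "/abs/path/to/en.json" } });

// Tasks 3 to 5: pull chunks until the server returns no more data.
while (true) {
  const res = await client.callTool({ name: "get_next_chunk", arguments: { size: 250 } });
  const first = (res as { content?: { type: string; text?: string }[] }).content?.[0];
  const chunk = JSON.parse(first?.text ?? "{}") as Record<string, string>;
  if (Object.keys(chunk).length === 0) break;

  // In the agent workflow the model translates `chunk` here; a script would
  // run its own translation step before pushing the result back, e.g.:
  // await client.callTool({ name: "update_translations", arguments: { language: "es", entries: translated } });
}

// Task 6: write the per-language file next to the base file.
await client.callTool({ name: "save_translations", arguments: { language: "es" } });
await client.close();
```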
Pull requests and issues are welcome! Let's build a better translation workflow together.
This project is licensed under the GNU Affero General Public License v3.0. See the LICENSE file for details.