Derex.dev

Day 3: Building a Translation Chain

1. The Frustration

I wanted to build a basic translation tool. Something minimal, but magical. Feed in a sentence in English and get a translation out the other side in another language (Amharic, in this case). I imagined wiring up a LangChain pipeline that feels like calling a function: "Translate('Hello world', to='Amharic')" and watching it just work.

I reached for the tools I’ve been working with so far—LangChain and Ollama running llama3.2. LLaMA’s newer models have made amazing progress, so I figured, “Why not give them a shot at translation?”

But as soon as I tested it on real sentences, the limitations became clear. The translations weren’t accurate. Worse—they were confidently wrong. It reminded me of a very humbling truth: not every model is a generalist. Sometimes, you need specialists.


2. What I Tried

# In a notebook; drop the leading "!" on a regular command line
!pip install -U langchain-ollama langchain-core

from langchain_ollama.llms import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate

# Assumes Ollama is running locally and llama3.2 has been pulled
model = OllamaLLM(model="llama3.2:latest")

# Two input variables: the target language and the text to translate
template = """Translate this to {lang}:\n{text}"""
prompt = ChatPromptTemplate.from_template(template)

# LCEL composition: the formatted prompt feeds straight into the model
chain = prompt | model

result = chain.invoke({"text": "The weather is nice today", "lang": "Amharic"})
print(result)

It worked, technically. It just didn’t work well. It gave back something that wasn’t quite Amharic. Or maybe it was just really broken Amharic. Either way, not something I’d send to anyone in a real application.


3. The Mental Model

Here’s what this exercise taught me:

LangChain's strength lies in composing tools, not replacing them. It builds the pipes between them. But if the model you plug in at one end is weak, your output will be weak too, no matter how elegant your prompt is.
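
To make that concrete: the chain definition never changes, only the model plugged into it does. A minimal sketch, where "mistral:latest" is just a stand-in for any other model you happen to have pulled into Ollama:

from langchain_ollama.llms import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Translate this to {lang}:\n{text}")

# Same pipes, different water source. "mistral:latest" is a
# placeholder for any locally pulled Ollama model.
chain_llama = prompt | OllamaLLM(model="llama3.2:latest")
chain_other = prompt | OllamaLLM(model="mistral:latest")

# Identical plumbing; the output quality depends entirely on the model.
print(chain_other.invoke({"text": "The weather is nice today", "lang": "Amharic"}))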

And translation is hard. It's not just vocabulary; it's grammar, cultural context, tone, and fluency. General-purpose LLMs often hallucinate or oversimplify. That's why dedicated translation models exist, like Helsinki-NLP's OPUS-MT models on Hugging Face or Meta's NLLB.
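
For reference, here's roughly what the specialist route looks like with the transformers library. A sketch, assuming transformers and sentencepiece are installed and you can fit the distilled 600M NLLB checkpoint; NLLB uses FLORES-200 language codes, so English is "eng_Latn" and Amharic is "amh_Ethi".

from transformers import pipeline

# NLLB-200 is a dedicated multilingual translation model;
# the distilled 600M checkpoint is the smallest of the family.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",  # English, Latin script
    tgt_lang="amh_Ethi",  # Amharic, Ethiopic script
)

print(translator("The weather is nice today")[0]["translation_text"])

That's the kind of specialist I mean: a model built for translation rather than asked to improvise it.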


4. What’s Next

I'll continue looking for a good local, lightweight model for translation. Maybe something from Hugging Face. Maybe something distilled or trained specifically for multilingual use. If you know any, send them my way.

In the meantime, I’m not going to let this hiccup derail the journey. LangChain still gave me the structure. The rest is just model hunting.
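
And if the model hunt pans out, that structure carries over directly. A sketch of how a specialist could slot into the same pipe syntax, reusing the hypothetical NLLB translator from above: wrap any plain function in a RunnableLambda and LangChain treats it like any other link in the chain.

from langchain_core.runnables import RunnableLambda

# Wrap the Hugging Face pipeline (the `translator` from the sketch
# above) so it composes with `|` like any other runnable.
translate = RunnableLambda(lambda text: translator(text)[0]["translation_text"])

print(translate.invoke("The weather is nice today"))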


5. Takeaway

You can have the best plumbing, but if your water source is dirty, the taps won’t help. Choose the right model for the right task. And when that model doesn’t exist? Make a note, move forward, and keep building.

Done with Day 3.

Did I make a mistake? Please send me an email.