Introducing LangChain.dart 🦜️🔗

David Miguel
LangChain.dart 🦜️🔗
6 min read · Jul 2, 2023


Harness the power of LLMs (GPT-4, PaLM, etc.) in your Dart/Flutter apps.

Photo by Shaojie on Unsplash

Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP), serving as essential components in a wide range of applications, such as question-answering, summarization, translation, and text generation.

The adoption of LLMs is creating a new tech stack in its wake. However, emerging libraries and tools are predominantly being developed for the Python and JavaScript ecosystems. As a result, the number of applications leveraging LLMs in these ecosystems has grown exponentially.

Source: State of AI Development by Replit

In contrast, the Dart / Flutter ecosystem has not experienced similar growth, which can likely be attributed to the scarcity of Dart and Flutter libraries that streamline the complexities associated with working with LLMs.

Introducing LangChain.dart 🦜️🔗

LangChain.dart aims to fill this gap by abstracting the intricacies of working with LLMs in Dart and Flutter, enabling developers to harness their combined potential effectively.

LangChain.dart is a Dart port of the popular LangChain Python framework created by Harrison Chase. It enables LLM-powered Dart / Flutter applications such as:

(All the examples above are open-source projects built using LangChain)

Why do we need LangChain?

To create effective Generative AI applications, it's crucial to give LLMs the capability to interact with external systems (applications, databases, APIs, the public internet, etc.). This enables language models to be data-aware and agentic: they can understand, reason about, and use external data to return meaningful responses and actions.

To interact with external systems, you need to implement several components and orchestrate them all. For example, if you want to develop a chatbot for your company's internal knowledge base, you may have to:

  1. Get the question from the user.
  2. Load the documents from your knowledge base.
  3. Decide which documents are relevant to the user question.
  4. Build a prompt for the LLM with the relevant information.
  5. Send that prompt to the LLM.
  6. Parse the LLM response.
  7. Display the answer to the user.
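To make the orchestration concrete, the steps above can be sketched as plain Dart glue code. This is a deliberately naive illustration, not the LangChain.dart API: `callLlm` is a hypothetical stand-in for any completion API client, and relevance is approximated with simple keyword overlap.

```dart
// Sketch of the pipeline above as hand-written glue code.
// `callLlm` is a hypothetical placeholder for an LLM API client;
// keyword overlap stands in for a real relevance/retrieval step.
class Doc {
  final String content;
  Doc(this.content);
}

Future<String> answerQuestion(
  String question, // 1. The question from the user.
  List<Doc> knowledgeBase, // 2. The loaded documents.
  Future<String> Function(String prompt) callLlm,
) async {
  // 3. Decide which documents are relevant (here: naive word overlap).
  final words = question.toLowerCase().split(' ').toSet();
  final relevant = knowledgeBase.where((d) =>
      d.content.toLowerCase().split(' ').toSet().intersection(words).isNotEmpty);

  // 4. Build a prompt containing the relevant information.
  final context = relevant.map((d) => d.content).join('\n');
  final prompt =
      'Answer using only this context:\n$context\n\nQuestion: $question';

  // 5-6. Send the prompt to the LLM and parse (here: trim) the response.
  final raw = await callLlm(prompt);
  return raw.trim(); // 7. Ready to display to the user.
}
```

Every new data source or model provider forces you to rewrite a variant of this function; that repetition is exactly what LangChain factors out.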

Although it may seem trivial to connect these calls and orchestrate them, it can become tedious to write glue code repeatedly for each new data connector or language model. This is where LangChain steps up to the plate!

LangChain value proposition

LangChain provides a set of ready-to-use components for working with language models, plus the concept of chains, which enables "chaining" components together to formulate more advanced use cases around LLMs.

The components can be grouped into a few core modules:

Core LangChain components
  • Model I/O: streamlines the interaction between the model inputs (prompt templates), the Language Model (abstracting different providers), and the model output (output parsers).
  • Retrieval: assists in loading user data (document loaders), modifying it (document transformers), storing (via text embedding models and vector stores), and retrieving when needed (retrievers).
  • Chains: a way to compose multiple components or other chains into a single pipeline.
  • Memory: equips chains or agents with both short-term and long-term memory capabilities, facilitating recall of prior interactions with the user.
  • Agents: "bots" that harness LLMs to perform tasks. They serve as the link between the LLM and external tools (web search, calculators, database lookups, etc.), deciding what has to be accomplished and which tools are most suitable for the task.
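To see how these modules compose, here is a rough sketch of a chain that combines a prompt template (Model I/O), an LLM, and conversation memory. The class and parameter names (`PromptTemplate`, `LLMChain`, `ConversationBufferMemory`) mirror the Python framework; treat this as an illustration under that assumption and check the LangChain.dart docs for the exact API.

```dart
// Sketch: composing Model I/O, Chains and Memory components.
// Names mirror the Python framework and may differ in LangChain.dart.
final llm = OpenAI(apiKey: openaiApiKey);

// Model I/O: a prompt template with input variables.
final prompt = PromptTemplate.fromTemplate(
  'You are a helpful assistant.\n{history}\nHuman: {input}\nAI:',
);

// Memory: records prior turns and injects them as {history}.
final memory = ConversationBufferMemory();

// Chain: wires prompt -> model -> output into a single pipeline.
final chain = LLMChain(llm: llm, prompt: prompt, memory: memory);

final answer = await chain.run('What is LangChain?');
```

The point is that each component is swappable: replacing the model provider or the memory strategy changes one line, not the whole pipeline.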

Let's make it more concrete with an example of the most basic use case: invoking an LLM with a prompt and obtaining its response.

Imagine you are developing an application that generates jokes. You would have to ask the model to generate a joke and then display it on screen. Achieving this with LangChain.dart requires only a few lines of code:

final openaiApiKey = 'your-openai-key';
final llm = OpenAI(apiKey: openaiApiKey);
final result = await llm('Tell me a joke');
print(result); // What did the fish say when it ran into a wall? Dam!

In this example, we use the OpenAI model wrapper, which abstracts the interaction with the OpenAI Completions API. By default, it uses the text-davinci-003 model (the most capable of the GPT-3 series), but you can always select a different model or customize its parameters to meet your requirements.
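For instance, selecting a different model and tuning its sampling parameters might look like the following. The parameter names (`model`, `temperature`, `maxTokens`) are assumed to follow the OpenAI Completions API convention; verify the exact names in the LangChain.dart documentation.

```dart
// Sketch: customizing the OpenAI wrapper (parameter names assumed
// to mirror the OpenAI Completions API; check the docs to confirm).
final llm = OpenAI(
  apiKey: openaiApiKey,
  model: 'text-curie-001', // a cheaper, faster GPT-3 variant
  temperature: 0.9,        // higher values => more varied jokes
  maxTokens: 256,          // cap the length of the completion
);
final joke = await llm('Tell me a joke about Dart');
```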

LangChain.dart initial release

After several weeks of hard work porting the base components of the original library while keeping up with the speed of changes in the field (e.g. OpenAI Functions was released in the meantime), we are happy to announce the initial version of LangChain.dart.

It is still an early version with limited functionality, but enough to start covering some of the main use cases. We also want to start collecting feedback from the community so that we can take it into account in future versions.

The following features are already supported:

Model I/O:

Data Connection:

Memory:

Chains:

Agents:

Tools:

Whatโ€™s next?

We are busy implementing new functionality. Some features you can expect in the next releases are:

  • More supported LLMs (e.g. HuggingFaceHub and GCP Vertex AI integration).
  • More supported Vector DBs (e.g. Pinecone, Supabase Vector).
  • More types of chains (e.g. SequentialChain, RouterChain).
  • More types of memory (e.g. ConversationTokenBufferMemory, ConversationSummaryMemory).
  • More tools (e.g. Wikipedia, Web search).

In the near future, we will also start seeing greater support for running LLMs on-device on mobile. For example, MediaPipe is working on adding on-device text generation support. We are excited to integrate these solutions as soon as they are released, since they will be very valuable for building fully local LLM-powered mobile apps.

You can check out what we are currently working on in our public board.

How to get started?

If you are new to the fields of Generative AI and Large Language Models, we recommend that you take Google Cloud's Introduction to Generative AI and Introduction to Large Language Models courses.

Once you start mastering the basic concepts, you can get familiar with LangChain by taking DeepLearning.AI's LangChain for LLM Application Development course, taught by Harrison Chase (creator of LangChain).

After that, you will be ready to start playing with LangChain.dart. Take a look at the documentation and the sample apps, and keep an eye on the tutorials we will be publishing soon.

Contributing

We invite the community to join us in accelerating feature development, closing the gap in functionality compared to the original LangChain Python framework, and enhancing the educational resources to ease the learning process.

So if you are interested in implementing a new feature, fixing a bug, improving the documentation, or writing an article on our blog, check out the Contributing guide and join our Discord server.

With all that said, we look forward to seeing what applications the community builds on top of LangChain.dart! 🦜️🔗
