Introducing LangChain.dart 🦜️🔗
Harness the power of LLMs (GPT-4, PaLM, etc.) in your Dart/Flutter apps.
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP), serving as essential components in a wide range of applications, such as question-answering, summarization, translation, and text generation.
The adoption of LLMs is creating a new tech stack in its wake. However, emerging libraries and tools are predominantly being developed for the Python and JavaScript ecosystems. As a result, the number of applications leveraging LLMs in these ecosystems has grown exponentially.
In contrast, the Dart / Flutter ecosystem has not experienced similar growth, which can likely be attributed to the scarcity of Dart and Flutter libraries that streamline the complexities associated with working with LLMs.
Introducing LangChain.dart 🦜️🔗
LangChain.dart aims to fill this gap by abstracting the intricacies of working with LLMs in Dart and Flutter, enabling developers to harness their combined potential effectively.
LangChain.dart is a Dart port of the popular LangChain Python framework created by Harrison Chase. It enables LLM-powered Dart / Flutter applications such as:
- A blog outline generator app (e.g. blog outline generator).
- A generic chatbot app (e.g. LibreChat).
- An app that allows you to ask questions in natural language about your private documents (e.g. privateGPT, Quivr, localGPT, KnowledgeGPT, Notion QA), the last book you read (e.g. Book GPT), your SQL databases (DB-GPT) or even about all the videos of an entire YouTube channel (e.g. YouTube-to-Chatbot).
- An app that summarizes academic papers (e.g. SummarizePaper, Paper QA).
- An app that draws diagrams or mindmaps from descriptions (e.g. FlowGPT, MindGeniusAI).
- An intelligent sales assistant (e.g. SalesCopilot).
- An app that automatically codes LLM-powered demo apps (e.g. DemoGPT).
- And many more.
(All the examples above are open-source projects built using LangChain)
Why do we need LangChain?
To build effective Generative AI applications, it's crucial to give LLMs the ability to interact with external systems (applications, databases, APIs, the public internet, etc.). This makes the language models data-aware and agentic: they can understand, reason about, and use the data to return meaningful responses and actions.
To interact with external systems, you need to implement several components and orchestrate all of them. For example, if you want to develop a chatbot for your company's internal knowledge base, you may have to:
- Get the question from the user.
- Load the documents from your knowledge base.
- Decide which documents are relevant to the user question.
- Build a prompt for the LLM with the relevant information.
- Send that prompt to the LLM.
- Parse the LLM response.
- Display the answer to the user.
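Stripped of any framework, that glue code looks roughly like the following Dart sketch. Every helper here (`loadDocuments`, `selectRelevant`, etc.) is a hypothetical placeholder for code you would otherwise have to write and maintain by hand for each new data source or model:

```dart
// Hypothetical glue code for the chatbot pipeline described above.
// Each helper is a stand-in for hand-written integration code.
Future<String> answerQuestion(final String question) async {
  final docs = await loadDocuments();              // load the knowledge base
  final relevant = selectRelevant(docs, question); // keep only relevant docs
  final prompt = buildPrompt(relevant, question);  // fill a prompt template
  final rawOutput = await callLlm(prompt);         // call the model's API
  return parseAnswer(rawOutput);                   // parse the LLM response
}
```

Swap the data source or the model provider and most of these helpers have to be rewritten, which is exactly the repetition LangChain is designed to remove.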
Although it may seem trivial to connect these calls and orchestrate them, it can become tedious to write glue code repeatedly for each new data connector or language model. This is where LangChain steps up to the plate!
LangChain value proposition
LangChain provides a set of ready-to-use components for working with language models, along with the concept of chains, which enables "chaining" components together to formulate more advanced use cases around LLMs.
The components can be grouped into a few core modules:
- Model I/O: streamlines the interaction between the model inputs (prompt templates), the Language Model (abstracting different providers), and the model output (output parsers).
- Retrieval: assists in loading user data (document loaders), modifying it (document transformers), storing (via text embedding models and vector stores), and retrieving when needed (retrievers).
- Chains: a way to compose multiple components or other chains into a single pipeline.
- Memory: equips chains or agents with both short-term and long-term memory capabilities, facilitating recall of prior interactions with the user.
- Agents: "bots" that harness LLMs to perform tasks. They serve as the link between the LLM and tools (web search, calculators, database lookups, etc.), deciding what has to be accomplished and which tools are most suitable for the specific task.
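To make the chaining idea concrete, here is a minimal sketch of an `LLMChain` that combines a prompt template (Model I/O) with a model into a single pipeline. Class names follow LangChain.dart's API, but import paths and exact signatures may differ between versions:

```dart
// A sketch of composing a prompt template and a model into one chain.
// Assumes the `langchain` package; APIs may vary by version.
import 'package:langchain/langchain.dart';

Future<void> main() async {
  final prompt = PromptTemplate.fromTemplate(
    'Suggest a good name for a company that makes {product}.',
  );
  final llm = OpenAI(apiKey: 'your-openai-key');
  // The chain formats the prompt with the input, calls the model,
  // and returns the parsed output in a single step.
  final chain = LLMChain(prompt: prompt, llm: llm);
  final result = await chain.run('colorful socks');
  print(result);
}
```

Because every component exposes the same interface, the same chain can be re-run with a different model or prompt without touching the rest of the pipeline.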
Let's make it more concrete with an example of the most basic use case: invoking an LLM with a prompt and obtaining its response.
Imagine you are developing an application that generates jokes. You would have to ask the model to generate a joke and then display it on screen. Achieving this with LangChain.dart requires only a few lines of code:
```dart
final openaiApiKey = 'your-openai-key';
final llm = OpenAI(apiKey: openaiApiKey);
final result = await llm('Tell me a joke');
print(result); // What did the fish say when it ran into a wall? Dam!
```
In this example, we are using the OpenAI model wrapper, which abstracts the interaction with the OpenAI Completions API. By default, it uses the text-davinci-003 model (the most capable of the GPT-3 series). However, you can always select a different model or customize its parameters to meet your requirements.
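For instance, a sketch of selecting a different model and tuning its sampling behavior might look like this (the parameter names mirror the OpenAI Completions API, but the exact constructor arguments may differ between LangChain.dart versions):

```dart
// Illustrative only: constructor parameter names may vary by version.
final llm = OpenAI(
  apiKey: openaiApiKey,
  model: 'text-curie-001', // a faster, cheaper GPT-3 model
  temperature: 0.9,        // higher values produce more varied output
  maxTokens: 256,          // cap the length of the completion
);
```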
LangChain.dart initial release
After several weeks of hard work porting the base components of the original library while trying to keep up with the pace of change in the field (e.g. OpenAI Functions was released in the meantime), we are happy to announce the initial version of LangChain.dart.
It is still an early version with limited functionality, but enough to start covering some of the main use cases. We also want to start collecting feedback from the community so that we can take it into account in future versions.
The following features are already supported:
Model I/O:
- Prompts: `PromptTemplate`, `ChatPromptTemplate`, `ConditionalPromptSelector`, and `PipelinePromptTemplate`.
- Language models:
  + OpenAI models (`OpenAI` and `ChatOpenAI`, including support for OpenAI Functions).
  + Local models supported by Prem App (GPT4ALL Lora Q4, Dolly v2 12B, Falcon 7B Instruct, and Vicuna 7B Q4).
- Output parsers: `OutputFunctionsParser`.
Data Connection:
- Document loaders: `TextLoader`.
- Document transformers: `CharacterTextSplitter`.
- Embedding models:
  + `OpenAIEmbeddings`.
  + Local embedding models supported by Prem App (GPT4ALL Lora Q4, All MiniLM L6 v2, and Vicuna 7B Q4).
- Vector stores: `MemoryVectorStore`.
- Retrievers: `VectorStoreRetriever`.
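Put together, these data-connection components form a small retrieval pipeline. The sketch below wires them up end to end; class names follow the list above, but exact constructors and import paths may differ between versions:

```dart
// A sketch of loading, splitting, embedding, and retrieving documents.
// Assumes the `langchain` package; APIs may vary by version.
import 'package:langchain/langchain.dart';

Future<void> main() async {
  // 1. Load the source document and split it into chunks.
  final docs = await const TextLoader('knowledge_base.txt').load();
  final chunks = const CharacterTextSplitter(chunkSize: 800)
      .splitDocuments(docs);
  // 2. Embed the chunks and index them in an in-memory vector store.
  final embeddings = OpenAIEmbeddings(apiKey: 'your-openai-key');
  final store = MemoryVectorStore(embeddings: embeddings);
  await store.addDocuments(documents: chunks);
  // 3. Retrieve the chunks most similar to a query.
  final retriever = VectorStoreRetriever(vectorStore: store);
  final relevant = await retriever.getRelevantDocuments(
    'What is LangChain.dart?',
  );
  print(relevant);
}
```

The retrieved documents can then be stuffed into a prompt, which is what the question-answering chains below automate.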
Memory:
Chains:
- `LLMChain`, `ConversationChain`, `StuffDocumentsChain`, `StuffDocumentsQAChain`, `RetrievalQAChain`, `OpenAIQAWithStructureChain`, and `OpenAIQAWithSourcesChain`.
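As an example of memory and chains working together, `ConversationChain` keeps a buffer of prior turns and injects it into each new prompt. A minimal sketch (assuming the chain defaults to an in-memory conversation buffer; exact defaults may differ between versions):

```dart
// A sketch of a multi-turn conversation with memory.
// Assumes the `langchain` package; APIs may vary by version.
import 'package:langchain/langchain.dart';

Future<void> main() async {
  final llm = OpenAI(apiKey: 'your-openai-key');
  // The chain stores each exchange and prepends it to the next prompt.
  final conversation = ConversationChain(llm: llm);
  await conversation.run('Hi! My name is Dash.');
  final reply = await conversation.run('What is my name?');
  print(reply); // the model can recall the name from the memory buffer
}
```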
Agents:
Tools:
Whatโs next?
We are busy implementing new functionality. Some features you can expect in the next releases are:
- More supported LLMs (e.g. HuggingFaceHub and GCP Vertex AI integration).
- More supported Vector DBs (e.g. Pinecone, Supabase Vector).
- More types of chains (e.g. `SequentialChain`, `RouterChain`).
- More types of memory (e.g. `ConversationTokenBufferMemory`, `ConversationSummaryMemory`).
- More tools (e.g. Wikipedia, Web search).
In the near future, we will also start seeing greater support for local on-device LLMs on mobile devices. For example, MediaPipe is working on adding on-device text generation support. We are excited to integrate these solutions as soon as they are released since they will be very valuable for building fully local LLM-powered mobile apps.
You can check out what we are currently working on in our public board.
How to get started?
If you are new to the fields of Generative AI and Large Language Models, we recommend that you take Google Cloud's Introduction to Generative AI and Introduction to Large Language Models courses.
Once you start mastering the basic concepts, you can get familiar with LangChain by taking DeepLearning.AI's LangChain for LLM Application Development course, guided by Harrison Chase (creator of LangChain).
After that, you will be ready to start playing with LangChain.dart. Take a look at the documentation and the sample apps, and keep an eye on the tutorials we will be publishing soon.
Contributing
We invite the community to join us in accelerating feature development, closing the gap in functionality compared to the original LangChain Python framework, and enhancing the educational resources to ease the learning process.
So if you are interested in implementing a new feature, fixing a bug, improving the documentation, or writing an article for our blog, check out the Contributing guide and join our Discord server.
With all that said, we look forward to seeing what applications the community builds on top of LangChain.dart! 🦜️🔗