LangChain.dart 101: what can you build with it? 🦜️🔗

David Miguel
LangChain.dart 🦜️🔗
9 min read · Jul 20, 2023

Exploring development opportunities: CLI, Flutter apps & backend services

Development opportunities of LangChain.dart

In the previous article, we introduced LangChain.dart, a Dart port of the popular LangChain Python framework that enables you to build applications powered by Large Language Models (LLMs). Today, we will explore the development opportunities that LangChain.dart opens up and how to get started with each of them. Along the way, we will also introduce several basic concepts, such as the OpenAI vs ChatOpenAI model wrappers, local models, the different types of ChatMessage, and prompt templates. Let’s get started!

CLI Apps

Command Line Interface (CLI) apps are essential tools for developers. But, wouldn’t it be great if they could be more intelligent?

With LangChain.dart, you can create Dart CLI apps that leverage the power of LLMs to perform a variety of tasks. For example:

  • A CLI app that automatically generates a commit title and description based on the uncommitted changes in the project.
  • A CLI app that automatically generates a README file based on the project’s source code.
  • A CLI app that explains in natural language the code in a given file.

To get started with CLI apps, we are going to implement a very simple chatbot:

Dart CLI chatbot using LangChain.dart

The logic we have to implement is very simple:

  1. Get the OpenAI API key from an environment variable.
  2. Create an instance of OpenAI model wrapper.
  3. Ask the user for a question.
  4. Send the question to the model.
  5. Print the answer to the console.

Let’s see the code (you can find the complete source code here):

import 'dart:io';

import 'package:langchain_openai/langchain_openai.dart';

void main(final List<String> arguments) async {
  // Read the API key from an environment variable.
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
  final llm = OpenAI(apiKey: openaiApiKey, temperature: 0.9);

  stdout.writeln('How can I help you?');

  while (true) {
    stdout.write('> ');
    final query = stdin.readLineSync() ?? '';
    // Send the question to the model and print the completion.
    final result = await llm(query);
    stdout.writeln(result);
  }
}

As simple as that!

This example uses the OpenAI model wrapper, which abstracts the interaction with the OpenAI Completions API. However, that API is now deprecated; OpenAI provides a newer API that is better suited for conversational applications: the OpenAI Chat API.

This Chat API, instead of accepting a string and returning a string, accepts a list of messages comprising the conversation and returns a message containing the answer from the model.

LangChain.dart provides the ChatOpenAI model wrapper that abstracts the interaction with the OpenAI Chat API. Let’s see how we can refactor our previous code to use it instead:

import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';

void main(final List<String> arguments) async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
  final llm = ChatOpenAI(apiKey: openaiApiKey, temperature: 0.9);

  stdout.writeln('How can I help you?');

  while (true) {
    stdout.write('> ');
    final query = stdin.readLineSync() ?? '';
    // Wrap the user input in a HumanChatMessage and send the conversation.
    final humanMessage = ChatMessage.human(query);
    final aiMessage = await llm.call([humanMessage]);
    stdout.writeln(aiMessage.content.trim());
  }
}

As you can see, the code is very similar. The main difference is that we now use the ChatOpenAI model wrapper which takes a list of messages instead of a string.

In this example, we have used two types of ChatMessage:
- HumanChatMessage: represents a message from a human (the user).
- AIChatMessage: represents a message from the AI (the model).

There are other types of messages (SystemChatMessage, FunctionChatMessage and CustomChatMessage) that we will cover in the future.
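For reference, here is how each type can be constructed using the ChatMessage factory constructors (a quick sketch; the message contents are just illustrative):

// Each factory constructor creates the corresponding message type.
final systemMessage = ChatMessage.system('You are a helpful assistant.');
final humanMessage = ChatMessage.human('What is LangChain.dart?');
final aiMessage = ChatMessage.ai('A Dart port of the LangChain framework.');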

To sum up, we have covered the two types of language model abstractions that LangChain.dart provides (LLMs and Chat models) and how to use them to build CLI apps. Now it’s your turn to make your own!

Flutter Apps

CLI apps are great for geeky developers, but normal people prefer to use apps with a nice user interface. And what better way to build one than with Flutter?

Flutter allows you to build high-quality, natively compiled applications for mobile, web, desktop, and embedded devices from a single codebase. And, with LangChain.dart, you can easily supercharge your Flutter apps with LLMs.

In the previous article, we listed some examples of apps that can be built with LangChain and LLMs, including:

  • An app that allows you to ask questions in natural language about your private documents.
  • An app that summarizes academic papers.
  • An app that draws diagrams or mindmaps from descriptions.
  • An intelligent sales assistant.

To get started working with Flutter and LangChain.dart, we are going to implement a very simple app that will answer our questions. But instead of only using the OpenAI API as in the CLI example, we are going to allow the user to choose between OpenAI or a local model:

Flutter chatbot app using LangChain.dart.

For running local models, we are going to use the Prem app. It abstracts away all the technical complexities of installing local models and creates a local server that exposes a REST API with the same interface as the OpenAI API. This way, we can use the same ChatOpenAI model wrapper, only changing the API base URL.

At the time of writing, it supports running GPT4ALL Lora Q4, Dolly v2 12B, Falcon 7B Instruct, and Vicuna 7B Q4.

Download Prem and follow their instructions to set up a local model. Once the model is running, you will get a localhost URL (e.g. http://localhost:8111) where you can reach the model.

Note: if you don’t want to use Prem, you can use any other OpenAI API-compatible local server, such as llama.cpp, One API, Oobabooga Textgen, etc.
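For example, pointing the wrapper at the local server is just a matter of swapping the client (a minimal sketch, assuming Prem is listening on http://localhost:8111):

// Same ChatOpenAI wrapper, but requests go to the local server
// instead of api.openai.com (the local model needs no API key).
final client = OpenAIClient.local('http://localhost:8111');
final llm = ChatOpenAI(apiClient: client);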

Now that we have the local model running, let’s see how to integrate it with LangChain.dart in our Flutter app. I will only show the code relevant to the integration, but you can find the full source code of the app here.

The business logic is encapsulated in the HomeScreenCubit (we are using flutter_bloc for state management). In this class, when the user enters a question and presses “Submit”, onSubmitPressed() is called, which then:

  1. Creates a client based on the selected type:
    + If “OpenAI” is selected, it uses the OpenAIClient.instanceFor(apiKey: openAIKey) factory constructor, passing the OpenAI API key to create the client.
    + If “Local” is selected, it uses the OpenAIClient.local(localUrl) factory constructor, passing the localhost URL instead.
  2. Calls the model with the query.
  3. Updates the state with the response.

Let’s see the code (I’ll omit the validation code for brevity):

Future<void> onSubmitPressed() async {
  // 1. Create a client based on the selected type (OpenAI or local).
  final client = _createClient();
  final query = state.query;

  emit(state.copyWith(status: HomeScreenStatus.generating));

  // 2. Call the model with the query.
  final llm = ChatOpenAI(apiClient: client);
  final result = await llm([ChatMessage.human(query)]);

  // 3. Update the state with the response.
  emit(
    state.copyWith(
      status: HomeScreenStatus.idle,
      response: result.content,
    ),
  );
}

OpenAIClient? _createClient() {
  final clientType = state.clientType;

  if (clientType == ClientType.openAI) {
    // Calls the OpenAI API directly, authenticated with the user's key.
    final openAIKey = state.openAIKey;
    return OpenAIClient.instanceFor(apiKey: openAIKey);
  } else {
    // Calls the local OpenAI API-compatible server (e.g. Prem).
    final localUrl = state.localUrl;
    return OpenAIClient.local(localUrl);
  }
}

As you can see, the code is very similar to the CLI app. To run a local model that exposes the same API as OpenAI, the only difference is how we instantiate the client.

We are currently working on adding direct integrations with open-source LLMs like LLaMA. Stay tuned!

In this example, if “OpenAI” is selected we ask the user to enter their OpenAI API key. This may work for some use cases but, in general, you don’t want your users to have to deal with API keys. We have different options to solve this problem:

  • We can create a simple proxy that attaches the API key to the request and forwards it to OpenAI (see the sketch after this list). Firebase Cloud Functions is a good serverless option for this that many Flutter developers will be familiar with.
  • Use a service like OpenRouter where users authenticate using OAuth and pay for what they use, while the API keys are managed by the service.
  • Move all the interaction with the model to a backend service and expose a REST API that your Flutter app can consume (check the next section for more details).

The ideal solution will depend on your specific use case.
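To illustrate the first option, here is a minimal sketch of a key-attaching proxy written in Dart with the shelf and http packages (a real deployment, e.g. on Firebase Cloud Functions, would also need authentication, rate limiting, and error handling):

import 'dart:io';

import 'package:http/http.dart' as http;
import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as io;

Future<void> main() async {
  // The API key lives only on the server; clients never see it.
  final apiKey = Platform.environment['OPENAI_API_KEY'];

  Future<Response> proxyHandler(final Request request) async {
    // Forward the client's request body to OpenAI, attaching the key.
    final upstream = await http.post(
      Uri.parse('https://api.openai.com/v1/chat/completions'),
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer $apiKey',
      },
      body: await request.readAsString(),
    );
    return Response(
      upstream.statusCode,
      body: upstream.body,
      headers: {'Content-Type': 'application/json'},
    );
  }

  await io.serve(proxyHandler, InternetAddress.anyIPv4, 8080);
}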

In this section, we have seen how to integrate an OpenAI API-compatible model into a Flutter app, as well as some strategies to keep your API keys safe. Now it’s your turn to build your own LLM-powered Flutter app!

Backend Services

As we have seen in the previous sections, many use cases can be covered by a client-side implementation. However, some scenarios require moving the interaction with the model to a backend service. For example:

  • To protect your API keys (as we mentioned previously).
  • To have more control over the usage of the model (e.g. user authentication, rate limiting, etc.).
  • To cover more complex use cases that require interaction with other backend services or internal databases.
  • To run a model in more powerful hardware.
  • To limit the implementation details that can be reverse-engineered.

To get started working with LangChain.dart in backend services, we are going to implement a simple REST API that, given a list of topics, generates a sonnet about them. For this, we are going to use the shelf package and the ChatOpenAI model wrapper (you can find the source code here).

REST API using LangChain.dart.

First, let’s implement a SonnetsService class that will be responsible for generating sonnets. It will have a generateSonnet method that receives a list of topics, constructs the right prompt, and calls the model.

import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';

class SonnetsService {
  SonnetsService() {
    final openAiApiKey = Platform.environment['OPENAI_API_KEY'];
    _llm = ChatOpenAI(apiKey: openAiApiKey, temperature: 0.9);
  }

  late final ChatOpenAI _llm;

  final _chatPromptTemplate = ChatPromptTemplate.fromPromptMessages([
    SystemChatMessagePromptTemplate.fromTemplate(
      'I would like you to assume the role of a poet from the Shakespeare school.',
    ),
    HumanChatMessagePromptTemplate.fromTemplate(
      'Create a sonnet using vivid imagery and rhyme about the following topics: {topics}',
    ),
  ]);

  Future<String> generateSonnet(final List<String> topics) async {
    // Replace the {topics} placeholder and send the messages to the model.
    final prompt = _chatPromptTemplate.formatMessages({'topics': topics});
    final response = await _llm.call(prompt);
    return response.content;
  }
}

As you can see, we are using the ChatPromptTemplate class to construct the prompt. This class allows us to define a template with placeholders that will be replaced with the actual values when formatting the prompt. This way, we can easily construct prompts that depend on the user input.

Notice that the prompt contains two chat messages. The first one is a SystemChatMessage, which is a type of message used to set the behaviour of the model. For example, you can modify the personality of the model or provide specific instructions about how it should behave throughout the conversation. In our case, we want the model to behave like a poet from the Shakespeare school.

The second message is already familiar to us: a HumanChatMessage that contains the request from the user. But in this case, it is generated from a HumanChatMessagePromptTemplate, which contains a {topics} placeholder. When formatMessages is called, this placeholder is replaced with the actual topics provided by the user.
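To make the template mechanics concrete, this is roughly what happens inside generateSonnet when the template is formatted (a hypothetical snippet; the comments describe the expected output):

// Formatting replaces the {topics} placeholder and returns the
// final list of chat messages that is sent to the model.
final messages = _chatPromptTemplate.formatMessages({
  'topics': ['bikes', 'amsterdam'],
});
// messages[0]: SystemChatMessage with the fixed poet instruction.
// messages[1]: HumanChatMessage asking for a sonnet about the topics.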

Now that we have our service, let’s implement the REST API using the shelf package.

import 'dart:convert';

import 'package:shelf/shelf.dart';
import 'package:shelf_router/shelf_router.dart';

class Api {
  final sonnetsService = SonnetsService();

  Handler get handler {
    final router = Router()
      ..post('/v1/sonnets', _sonnetHandler);
    return router.call;
  }

  Future<Response> _sonnetHandler(final Request request) async {
    // Decode the JSON payload and extract the list of topics.
    final payload = jsonDecode(await request.readAsString());
    final topics = payload['topics'];

    final sonnet = await sonnetsService.generateSonnet(topics.cast<String>());

    return Response.ok(
      jsonEncode({'sonnet': sonnet}),
      headers: {'Content-type': 'application/json'},
    );
  }
}

And finally, the main function that starts the server:

import 'dart:io';

import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as io;

Future<void> main() async {
  final api = Api();
  final handler = const Pipeline()
      .addMiddleware(logRequests())
      .addHandler(api.handler);
  final server = await io.serve(handler, InternetAddress.anyIPv4, 8080);
  print('Serving at http://${server.address.host}:${server.port}');
}

Now we can run the server with dart bin/server.dart and call the endpoint:

$ curl -X POST \
    -d '{
          "topics": ["bikes", "amsterdam"]
        }' \
    http://0.0.0.0:8080/v1/sonnets

In fair Amsterdam, where canals now flow,
A charming city with a vibrant gleam,
Where colors dance and vibrant tulips grow,
A haven where two-wheelers reign supreme.

On cobblestone streets, bikes gracefully glide,
Their metal frames reflecting the sunlight,
Like swans in motion, through the city they ride,
Guided by wheels that spin with all their might.

Through narrow streets with houses standing tall,
Pedals push forward, wind upon their face,
As wheels spin faster, time begins to fall,
And worries vanish, leaving not a trace.

Oh, Amsterdam, your bikes create a spell,
A symphony of freedom, joy, and bell.

In conclusion, LangChain.dart enables developers to harness the full potential of Large Language Models across a variety of application types, whether it’s an interactive CLI, a user-friendly Flutter application, or a robust backend service.

Don’t forget to subscribe so you don’t miss our next articles, where we will cover topics like Summarization, Retrieval Augmented Generation (RAG), OpenAI Functions, and more. And join our Discord server if you need help or want to collaborate on LangChain.dart! 🦜️🔗
