Amazon Bedrock + LangChain: Retrieving Info from Knowledge Bases with RAG

Michael Wahl
1 min read · Jan 24, 2024

This is a short sample, but it demonstrates the power and capability of AWS Bedrock: a foundation model paired with a Bedrock knowledge base (KB). The example can be extended even further using AWS Bedrock agents.

Below, a simple Python script builds a client connection to the AWS Bedrock services; the output we retrieve can then be piped or routed to other downstream processes, including AWS Bedrock agents or serverless Lambda functions.

Hopefully, this helps you get started experimenting and thinking more about this. Often these short examples become Lego blocks you can keep building into something bigger and better.
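Before running the script, you will need boto3 and LangChain installed, plus AWS credentials for an account with Bedrock access. A typical setup looks like this (the package names are the standard ones; exact versions are left to you):

```shell
# Install the two libraries the script imports
pip install boto3 langchain

# boto3 picks up credentials the usual way, e.g. via the AWS CLI:
aws configure   # or export AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
```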

import boto3
import pprint
from botocore.client import Config
from langchain.llms.bedrock import Bedrock
from langchain.retrievers.bedrock import AmazonKnowledgeBasesRetriever

pp = pprint.PrettyPrinter(indent=2)

# Under the hood, a Bedrock KB is backed by a serverless OpenSearch instance
kb_id = "123ABC456"  # your unique AWS Bedrock KB ID

bedrock_config = Config(connect_timeout=120, read_timeout=120, retries={"max_attempts": 0})
bedrock_client = boto3.client("bedrock-runtime")

# Client for direct bedrock-agent-runtime calls (retrieve / retrieve_and_generate)
bedrock_agent_client = boto3.client(
    "bedrock-agent-runtime",
    config=bedrock_config,
    region_name="us-east-1",
)

model_kwargs_claude = {
    "temperature": 0.2,
    "top_k": 10,
    "max_tokens_to_sample": 1000,
}

# The LLM isn't invoked below, but is ready to be wired into a chain
llm = Bedrock(
    model_id="anthropic.claude-v2",
    model_kwargs=model_kwargs_claude,
    client=bedrock_client,
)

retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id=kb_id,
    retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 2}},
    region_name="us-east-1",
)

docs = retriever.get_relevant_documents(
    query="Specific prompt, or whatever we are looking for in the KB",
)
pp.pprint(docs)
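The retriever returns LangChain Document objects, each with the chunk text in `page_content` and details like the source location in `metadata`. A common next step is to fold those passages into a grounded prompt for the LLM. Here is a minimal sketch of that glue step; the `Document` stand-in mirrors the retriever's output shape, and `build_prompt` is a hypothetical helper of my own, not part of LangChain or Bedrock:

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    """Stand-in for a retrieved KB chunk: text plus source metadata."""
    page_content: str
    metadata: dict = field(default_factory=dict)


def build_prompt(query: str, docs: list[Document]) -> str:
    # Number each retrieved chunk and cite its source location, so the
    # model can ground its answer in the KB passages
    context = "\n\n".join(
        f"[{i}] {d.page_content} (source: {d.metadata.get('location', 'unknown')})"
        for i, d in enumerate(docs, 1)
    )
    return (
        "Use only the context below to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )


# Sample documents shaped like retriever output (contents are illustrative)
docs = [
    Document("Bedrock KBs index documents in a vector store.",
             {"location": "s3://my-bucket/kb.pdf", "score": 0.91}),
    Document("Retrieval results include a relevance score.",
             {"location": "s3://my-bucket/faq.txt", "score": 0.84}),
]
prompt = build_prompt("How do Bedrock knowledge bases store documents?", docs)
print(prompt)
```

The resulting string can go straight into the `llm` defined earlier, or you can skip the hand-rolled prompt entirely and let a chain such as `RetrievalQA` do the stitching for you.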
