Griptape Framework v0.21.0


Hey Griptapers! We’re super excited to share the latest update of Griptape, version v0.21.0! This update packs a punch with new features, significant improvements, and some important changes. Let’s jump into what’s new and what you can expect from this release.

Breaking Changes

First up, let’s address an important breaking change. We’ve relocated the ProxycurlClient Tool to its own dedicated repository, aligning with our new Tool Contributing Guidelines. We hope this Tool can serve as a useful reference for anyone looking to build their own Tool; we’re excited to see what the community creates!

We’ve also renamed certain Hugging Face files for improved consistency across the codebase. If you’re importing from any of these files directly, please update them accordingly:

  • ->
  • ->
  • ->
  • ->

New Features

Image Generation Drivers

The star feature of v0.21.0 is the introduction of Image Generation Drivers. These Drivers let you generate images from text prompts using various image generation backends. Currently, we support Drivers for OpenAI DALL-E, Azure OpenAI DALL-E, Leonardo AI, Amazon Bedrock Stable Diffusion, and Amazon Bedrock Titan Generator.

Check out this amazing example from team member shhlife, demonstrating how to use OpenAI DALL-E with a Griptape Workflow for creating diverse image variations in parallel:

import os

from griptape.drivers import OpenAiDalleImageGenerationDriver
from griptape.engines import ImageGenerationEngine
from griptape.structures import Workflow
from griptape.tasks import ImageGenerationTask, PromptTask, TextSummaryTask

# Topic
topic = "skateboard"

# Styles
styles = [
    "watercolor painting",
    "gestural pencil sketch",
    "industrial design",
    "hyper-photorealistic beauty shot",
    "1970's polaroid",
    "instagram influencer",
]

# Image Engine
engine = ImageGenerationEngine(
    image_generation_driver=OpenAiDalleImageGenerationDriver(
        api_key=os.environ["OPENAI_API_KEY"],
        model="dall-e-2",
    ),
)

# Create the workflow
workflow = Workflow(stream=True)

# Start and End tasks
start_task = TextSummaryTask('Create image of "{{ topic }}" in different styles.', context={"topic": topic})
end_task = TextSummaryTask("{{ parent_outputs }}")  # summarize the branch outputs

# Add start and end tasks to workflow
workflow.add_tasks(start_task, end_task)

# For each style, create a prompt task and an image generation task
for style in styles:
    # Filesystem-friendly name for the output file
    style_name = style.replace(" ", "_")

    # Prompt Task
    prompt_task = PromptTask(
        'Create a prompt for an image generation model to create an image of a "{{ topic }}" in the following style: {{ style }}',
        context={"topic": topic, "style": style},
    )

    # Image Generation Task: render the parent Prompt Task's output as the image prompt
    image_task = ImageGenerationTask(
        "{{ parent_outputs }}",
        image_generation_engine=engine,
        output_file=f"images/{style_name}.png",
    )

    # Insert tasks into workflow
    workflow.insert_tasks(start_task, [prompt_task], end_task)
    workflow.insert_tasks(prompt_task, [image_task], end_task)

# Run the workflow
workflow.run()
Here are some of the results:

Hyper Realistic

Hugging Face Hub Embedding Driver

Another exciting addition in v0.21.0 is the HuggingFaceHubEmbeddingDriver. This Driver enables you to create text embeddings with the plethora of models available on Hugging Face Hub under the “Feature Extraction” Task. Here’s an example showing how you can use this Embedding Driver:

import os

from transformers import AutoTokenizer
from griptape.drivers import HuggingFaceHubEmbeddingDriver
from griptape.tokenizers import HuggingFaceTokenizer

# The model below is illustrative; any Hugging Face Hub model listed
# under the "Feature Extraction" Task will work.
embedding_driver = HuggingFaceHubEmbeddingDriver(
    api_token=os.environ["HUGGINGFACE_HUB_ACCESS_TOKEN"],
    model="sentence-transformers/all-MiniLM-L6-v2",
    tokenizer=HuggingFaceTokenizer(
        tokenizer=AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2"),
    ),
)

# Generate an embedding vector; the driver can also back vector stores
# and Task Memory anywhere an Embedding Driver is accepted.
embeddings = embedding_driver.embed_string("Hello Griptape!")
print(embeddings[:3])

Hugging Face Hub Streaming

We’ve also upgraded our HuggingFaceHubPromptDriver to utilize the new Inference Client, enabling additional Hugging Face endpoints and a straightforward implementation of streaming. Enable it as you would with any other Driver!

import os

from griptape.structures import Agent
from griptape.drivers import HuggingFaceHubPromptDriver

agent = Agent(
    prompt_driver=HuggingFaceHubPromptDriver(
        api_token=os.environ["HUGGINGFACE_HUB_ACCESS_TOKEN"],
        model="HuggingFaceH4/zephyr-7b-beta",  # any Hub text-generation model
        stream=True,
    ),
)

agent.run("Tell me a joke!")

Chat Streaming

The Chat utility has also been upgraded to include streaming functionality! Responses from the LLM are now displayed in incremental chunks rather than all at once, giving the chat a more fluid feel. If you’re using a Prompt Driver with stream=True, the Chat utility will automatically stream the results back. A big shoutout to mattma1970 for this contribution! Here’s how you can try it:

from griptape.structures import Agent
from griptape.tools import Calculator
from griptape.utils import Chat

agent = Agent(tools=[Calculator(off_prompt=False)], stream=True, logger_level=0)

Chat(agent).start()

Chat Stream

If you’ve overridden the Prompt Driver, you will need to pass stream=True to the Driver instead.

from griptape.structures import Agent
from griptape.drivers import AmazonBedrockPromptDriver, BedrockTitanPromptModelDriver
from griptape.utils import Chat

agent = Agent(
    prompt_driver=AmazonBedrockPromptDriver(
        model="amazon.titan-text-express-v1",
        prompt_model_driver=BedrockTitanPromptModelDriver(),
        stream=True,
    ),
)

Chat(agent).start()
Simple Tokenizer

This release also introduces the SimpleTokenizer, a new Tokenizer designed for compatibility with LLM providers that lack dedicated tokenization APIs. We’ve integrated SimpleTokenizer into both BedrockTitanTokenizer and BedrockJurassicTokenizer, assigning a characters_per_token ratio of 6.

from griptape.tokenizers import SimpleTokenizer

tokenizer = SimpleTokenizer(max_tokens=1024, characters_per_token=6)

print(tokenizer.count_tokens("Hello world!")) # 2
print(tokenizer.count_tokens_left("Hello world!")) # 1022
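The counts above fall out of simple ceiling division over the character ratio. Here’s a plain-Python sketch of the idea (count_tokens here is a stand-in for illustration, not Griptape’s implementation; the round-up behavior is an assumption):

```python
import math

def count_tokens(text: str, characters_per_token: int = 6) -> int:
    # Estimate tokens from a fixed characters-per-token ratio,
    # rounding up so short non-empty strings never count as zero tokens.
    return math.ceil(len(text) / characters_per_token)

print(count_tokens("Hello world!"))         # 2  (12 characters / 6)
print(1024 - count_tokens("Hello world!"))  # 1022 tokens left of a 1024 budget
```

Because the ratio is fixed, the estimate is deliberately conservative rather than model-accurate, which is exactly what you want when a provider gives you no tokenizer to call.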

Prompt Inspection

A frequently requested feature was the ability to inspect the prompt built by Griptape before it’s sent to the LLM. You can now utilize the StartPromptEvent’s fields prompt and prompt_stack to see the fully rendered prompt string and the list of pre-render messages, respectively.

from griptape.structures import Agent
from griptape.events import BaseEvent, EventListener, StartPromptEvent

def handler(event: BaseEvent) -> None:
    if isinstance(event, StartPromptEvent):
        print("Prompt Stack:", event.prompt_stack)
        print("Final Prompt String:", event.prompt)

agent = Agent(event_listeners=[EventListener(handler, event_types=[StartPromptEvent])])

agent.run("Write me a poem.")
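Conceptually, the event_types filter is just type-based dispatch over registered callbacks. Here’s a minimal plain-Python sketch of that pattern (these classes are simplified stand-ins, not Griptape’s actual implementations):

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List, Type

@dataclass
class StartPromptEvent:
    # Stand-in for Griptape's StartPromptEvent, carrying the rendered prompt.
    prompt: str

@dataclass
class EventListener:
    handler: Callable[[Any], None]
    event_types: List[Type] = field(default_factory=list)

def publish(listeners: List[EventListener], event: Any) -> None:
    # Fire each listener whose event_types filter matches the event;
    # an empty filter matches every event.
    for listener in listeners:
        if not listener.event_types or type(event) in listener.event_types:
            listener.handler(event)

seen = []
publish(
    [EventListener(seen.append, event_types=[StartPromptEvent])],
    StartPromptEvent(prompt="Write me a poem."),
)
print(seen[0].prompt)  # Write me a poem.
```

Registering the listener with event_types=[StartPromptEvent] means your handler only ever runs for the events you care about, so you can inspect prompts without wading through every lifecycle event.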

Tool Creation Course

Are you interested in developing your own Griptape Tool? If so, check out our new Griptape Trade School course! And when you’re ready, the Tool Template and Proxycurl Client are great jumping-off points.

Other Improvements

Besides the standout features, we’ve made several fixes and enhancements:

  • Added support for Python 3.12, making Griptape compatible with Python versions >= 3.9.
  • Fixed an issue with deserializing Summary Conversation Memory, making it ideal for long-running conversations.
  • Introduced a dedicated documentation page for Tokenizers.

For a full rundown of changes, take a look at our GitHub releases.

Wrapping Up

It’s been an exhilarating month at Griptape, from launching new framework modalities to preparing for the Griptape Cloud private preview and even receiving a mention at the re:Invent keynote! We can’t wait to see what the community creates with these new features, and we’ve got plenty more in store. To stay updated or just drop by for a chat, join us on Discord!