Griptape Framework v0.24.0

Since the last time we announced framework updates on the blog, a lot has changed around here. Frequent visitors will notice our freshly updated site design, and we’ve also been cooking up something new for you in Griptape Cloud: a managed retrieval service. If you’re interested in hosted solutions, take a look at our new landing page and let us know what you think, either via email or by talking with our team and other users on our Discord.

Now, on to the new stuff: big updates, smaller updates, and breaking changes that devs might want a bit more clarity on. Additional updates to the Griptape framework can be found in our full v0.24 changelog.

Two new models!

Claude-3 / Anthropic LLM support

You asked for it, and we’re excited to deliver: native Claude-3 support is finally here. With the updated AnthropicPromptDriver, you now get the following:

  • claude-3-opus-20240229, claude-3-sonnet-20240229, and claude-3-haiku-20240307 as model parameters
  • AnthropicImageQueryDriver to access multi-modal operations with all compatible Anthropic models, ingesting both image and text data (i.e., standard image-and-text web pages); see the sketch just after this list
  • Amazon Bedrock support for the aforementioned Anthropic models
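
Here’s a minimal sketch of the new image query support in action, pairing AnthropicImageQueryDriver with our ImageQueryEngine and ImageLoader. The image path is a placeholder, so point it at any local image:

[code]

from dotenv import load_dotenv
from griptape.drivers import AnthropicImageQueryDriver
from griptape.engines import ImageQueryEngine
from griptape.loaders import ImageLoader

load_dotenv()

engine = ImageQueryEngine(
    image_query_driver=AnthropicImageQueryDriver(
        model="claude-3-sonnet-20240229"
    )
)

# "mountain.jpg" is a placeholder; any local image will do.
with open("mountain.jpg", "rb") as f:
    image_artifact = ImageLoader().load(f.read())

result = engine.run("Describe what is in this image.", [image_artifact])
print(result.value)

[/code]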

When accessing the updated AnthropicPromptDriver, note that you’re given the option to manually adjust the top_p and top_k parameters. Griptape’s default values for these parameters should suffice for most users, but you may want to adjust them to strike a balance between coherence and creativity in your projects’ responses. top_p sets a cumulative probability cutoff (between 0 and 1): the model samples only from the smallest set of candidate tokens whose combined probability reaches that threshold. top_k sets a hard limit on how many of the most likely tokens are considered at each step. Running the script below multiple times with different top_p values will demonstrate how higher values yield more diverse answers:

[code]

from griptape.drivers import AnthropicPromptDriver
from griptape.structures import Agent
from griptape.config import StructureConfig, StructureGlobalDriversConfig
from dotenv import load_dotenv

load_dotenv()

agent = Agent(
   config=StructureConfig(
       global_drivers=StructureGlobalDriversConfig(
           prompt_driver=AnthropicPromptDriver(
               model="claude-3-opus-20240229",
               top_p=0.99,  # higher values yield more diverse results
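               # top_k can be tuned here as well (e.g., top_k=...) to cap the candidate pool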
           )
       )
   )
)

result = agent.run("Describe Seattle in a short sentence.")
print(result.output_task.output.value)

[/code]

Google / Gemini support

While we were at it, we added native support for Google’s Gemini. This includes GooglePromptDriver and GoogleTokenizer for use with gemini-pro.
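
Here’s a minimal sketch of the new driver in use; it mirrors the Anthropic example above and assumes your Google API key is stored in a GOOGLE_API_KEY environment variable:

[code]

import os

from dotenv import load_dotenv
from griptape.config import StructureConfig, StructureGlobalDriversConfig
from griptape.drivers import GooglePromptDriver
from griptape.structures import Agent

load_dotenv()

agent = Agent(
    config=StructureConfig(
        global_drivers=StructureGlobalDriversConfig(
            prompt_driver=GooglePromptDriver(
                model="gemini-pro",
                api_key=os.environ["GOOGLE_API_KEY"],
            )
        )
    )
)

result = agent.run("Describe Seattle in a short sentence.")
print(result.output_task.output.value)

[/code]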

With new models come new structures

Should you want to start fresh with either Anthropic’s or Google’s APIs, Griptape now supports those via AnthropicStructureConfig and GoogleStructureConfig, respectively. These starting points work similarly to the configs we’ve previously offered for OpenAI and Amazon Bedrock. (In somewhat related news, we’ve also updated our Amazon Bedrock structure config from Claude-2 to Claude-3, with all the benefits that come from jumping to the latest Anthropic model.)

Note that Anthropic does not currently offer its own embedding model, so Griptape’s Anthropic structure config points to VoyageAI’s embeddings API by default. (See more on that below.) Should you want to override the default embedding driver, follow our instructions at: Override Default Structure Embedding Driver.
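
Getting started with one of the new configs takes a single line. Here’s a minimal sketch using AnthropicStructureConfig (GoogleStructureConfig works the same way), assuming your Anthropic and VoyageAI API keys are available in the environment:

[code]

from dotenv import load_dotenv
from griptape.config import AnthropicStructureConfig
from griptape.structures import Agent

load_dotenv()

# One config wires up the Anthropic prompt driver and,
# as noted above, VoyageAI embeddings by default.
agent = Agent(config=AnthropicStructureConfig())

result = agent.run("Describe Seattle in a short sentence.")
print(result.output_task.output.value)

[/code]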

New embedding model support

Griptape now supports VoyageAI’s embedding models, alongside the latest embedding models from OpenAI and Google. As another change, we’ve updated both the OpenAI embedding driver and structure config to default to the text-embedding-3-small model, introduced in January of this year. OpenAI reports average benchmark score gains ranging from 1.3% to 12.6%, along with an estimated 80% reduction in pricing, when upgrading to this model from the previous text-embedding-ada-002 model.
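
Here’s a minimal sketch of the new VoyageAI driver embedding a string directly, assuming your VoyageAI API key is available in the environment (the same embed_string call works with our other embedding drivers):

[code]

from dotenv import load_dotenv
from griptape.drivers import VoyageAiEmbeddingDriver

load_dotenv()

driver = VoyageAiEmbeddingDriver()

# embed_string returns the embedding as a list of floats.
vector = driver.embed_string("Griptape now supports VoyageAI embeddings.")
print(len(vector))

[/code]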

Changes for improved performance

A major web-scraping driver update: Hello, Markdownify

Thanks to both our own internal testing and useful feedback from Discord community members, we’ve fine-tuned and rolled out a vastly improved web-scraping driver as the new, recommended default: MarkdownifyWebScraperDriver. It differs from our previous default, TrafilaturaWebScraperDriver, in a few ways: it scrapes more raw content, waits (with an adjustable timeout) for a source page’s dynamic content to load, and produces a Markdown representation of the page to better drive extraction of target content.

The sample script below shows what happens when our old and new web-scraper drivers are applied to otherwise identical code:

[code]

import logging
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Event

from attr import define, field
from dotenv import load_dotenv

from griptape.drivers import (
    BaseWebScraperDriver,
    MarkdownifyWebScraperDriver,
    TrafilaturaWebScraperDriver,
)
from griptape.loaders import WebLoader
from griptape.structures import Agent
from griptape.tools import TaskMemoryClient, WebScraper

load_dotenv()

@define
class Server:
   port: int = field(kw_only=True)
   html: str = field(kw_only=True)
   _executor: ThreadPoolExecutor = field(factory=ThreadPoolExecutor)
   _exiting: Event = field(factory=Event)
       
   def __enter__(self):
       html = self.html
       class Handler(BaseHTTPRequestHandler):
           def do_GET(self):
               self.send_response(200)
               self.send_header('Content-type', 'text/html')
               self.end_headers()
               self.wfile.write(str.encode(html))
           def log_message(self, format, *args):
               return
       def serve(exiting: Event):
           httpd = HTTPServer(('', self.port), Handler)
           httpd.timeout = 0.2
           while not exiting.is_set():
               httpd.handle_request()
       # Serve requests on a background thread until __exit__ sets the flag.
       self._executor.submit(serve, self._exiting)
       return self
       
   def __exit__(self, *args, **kwargs):
       self._exiting.set()
       self._executor.shutdown()

html = """
<!DOCTYPE html>
<html>
<head>
   <title>Example Site</title>
</head>
<body>
   <h1>Site with dynamically loaded content</h1>
   <h2>List of job openings:</h2>
   <div id="results">
       Loading...
   </div>
   <script>
       // Simulate a delay in loading the job openings
       // Note that both of the web scrapers ignore script
       // content by default, so any job openings scraped
       // will not come from within this script tag.
       setTimeout(() => {
           document.getElementById('results').innerHTML = `
               <ul>
                   <li>Software Engineer</li>
                   <li>Product Manager</li>
                   <li>QA Engineer</li>
               </ul>
           `;
       }, 10);
   </script>
</body>
</html>
""";


def list_job_openings(web_scraper_driver: BaseWebScraperDriver):
   agent = Agent(
       logger_level=logging.NOTSET,
       tools=[
           WebScraper(
               web_loader=WebLoader(
                   web_scraper_driver=web_scraper_driver
               ),
               off_prompt=True,
           ),
           TaskMemoryClient(off_prompt=False),
       ],
   )
   return agent.run(
       "List all job openings at 'http://localhost:8080' in a flat numbered list."
   ).output_task.output.value


if __name__ == "__main__":
   # The new driver uses a headless browser (via playwright) to render the webpage,
   # which allows it to wait for dynamically loaded content. The timeout parameter
   # specifies how long to wait for dynamically loaded content after the initial
   # page load.
   old_driver = TrafilaturaWebScraperDriver()
   new_driver = MarkdownifyWebScraperDriver(timeout=1000)

   with Server(port=8080, html=html):
       print("#############################################")
       print("# Scraping for job openings with old driver #")
       print("#############################################")
       print("\n", list_job_openings(old_driver), "\n\n")

       print("#############################################")
       print("# Scraping for job openings with new driver #")
       print("#############################################")
       print("\n", list_job_openings(new_driver), '\n\n')

       print("###################################")
       print("# Content scraped with old driver #")
       print("###################################")
       print("\n", old_driver.scrape_url("http://localhost:8080").value, '\n\n')

       print("###################################")
       print("# Content scraped with new driver #")
       print("###################################")
       print("\n", new_driver.scrape_url("http://localhost:8080").value)

[/code]

And the results below are a great example of why the new driver is worth adopting:

[code]

#############################################
# Scraping for job openings with old driver #
#############################################
I'm sorry, but I was unable to retrieve the list of job openings from the website because it loads content dynamically.
#############################################
# Scraping for job openings with new driver #
#############################################
Here are the job openings:
1. Software Engineer
2. Product Manager
3. QA Engineer
###################################
# Content scraped with old driver #
###################################
Site with dynamically loaded content List of job openings: Loading...
###################################
# Content scraped with new driver #
###################################
Site with dynamically loaded content
====================================
List of job openings:
---------------------
* Software Engineer
* Product Manager
* QA Engineer

[/code]

Parallel processing

Previously, when an LLM generated actions for an Agent or a ToolkitTask, Griptape executed them sequentially, waiting for each action to finish before beginning the next one. Starting with Griptape v0.24, these subtasks execute multiple actions in parallel by default. Among other benefits, this results in an immediate performance uplift for GPT-4 and Claude-3 implementations of chain-of-thought (CoT) reasoning and tool usage.

As a simple example of the performance and efficiency gains you can expect, consider this calculator code, which can now run multiple complicated calculations in parallel instead of sequentially:

[code]

from griptape.tools import Calculator
from griptape.structures import Agent

agent = Agent(
   tools=[Calculator(off_prompt=False)]
)

agent.run("what's 3124 * 12332, 242^7, and 4311/42?")

[/code]

Breaking changes

  • In a previous update, we removed default models from drivers, and that continues with v0.24: the OpenAiVisionImageQueryDriver field model no longer defaults to gpt-4-vision-preview. Thanks to this change, default models are now managed centrally via structure configs, removing the redundant default in the driver itself. (See the migration sketch just after this list.)
  • If your current implementation relies on event listeners, note that we have removed subtask_action_name, subtask_action_path, and subtask_action_input from BaseActionsSubtaskEvent.
  • ActionSubtask has been renamed to ActionsSubtask, as a small side effect of our effort to add parallel execution to the framework. Now, an ActionsSubtask contains a whole set of parallelizable actions.
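
Here’s a minimal migration sketch for the first item above, constructing the driver with an explicit model (the max_tokens field is covered in the next section):

[code]

from griptape.drivers import OpenAiVisionImageQueryDriver

driver = OpenAiVisionImageQueryDriver(
    model="gpt-4-vision-preview",  # no longer a default; set it explicitly
    max_tokens=256,  # new in v0.24; defaults to 256
)

[/code]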

Other changes and improvements

  • Check your implementation of OpenAiVisionImageQueryDriver to confirm the value of its max_tokens field, which can now be set manually and, in Griptape v0.24, defaults to 256. Adjust this to find your ideal balance, as a higher token value will drive longer and more detailed responses, at the cost of higher token consumption.
  • As part of the process of updating to Claude-3, we have updated AnthropicPromptDriver and BedrockClaudePromptModelDriver to use Anthropic's Messages API instead of the older Text Completions API. Anthropic points to a few reasons for this change on its official site, including image processing support and improved error handling.

For v0.24’s complete list of changes, please visit its GitHub release page. In the meantime, we want to thank you, the members of our tireless community, for your input on our recent releases. We understand the value of connecting more modules and tools to middleware and are hard at work ensuring Griptape aligns with your goals. In that spirit, we invite you to join our Discord, peruse our GitHub, or bookmark our blog to keep tabs on what’s coming next, and let us know what you’re building via Discord or email. See you soon!