ML/AI Engineering

Hop develops intelligent systems at scale – reliably, reproducibly and responsibly

Here at Hop, we build production systems that deploy machine learning at scale – whether that’s on premise, in the cloud or on device.

Sometimes, this also requires us to construct novel compute substrates to explore more interesting research questions. Our engineers focus on questions of scale, latency, concurrency and resilience. Though we have preferences, we’re generally language- and platform-agnostic, and have worked deeply in AWS, GCP and Azure, as well as various on-premise installations.

Featured Case Study

As large language models started seeing widespread commercial use, Harvard Business Publishing recognized that LLMs stood poised to transform the publishing industry. HBP didn’t yet know what was possible from a product perspective, how to get started, or what they needed to know to implement successfully – they were facing true uncertainty. With a trusted, high-value brand, HBP stood to lose a lot by associating an ineffective – or worse, toxic – chatbot with their name. Not only did they have to chart a course through rapidly changing waters, they had to do it in a way that avoided any potential for a negative brand experience. They needed trusted partners to navigate this space.

Building Trust with LLMs: Balancing Product with Potential for Harvard Business Publishing

For Harvard Business Publishing, we developed an LLM-powered chatbot application to leverage their rich archive of content, and conducted a successful brand-safe launch.

Are you looking to leverage ML/AI to boost your brand? Contact us to learn how Hop can help.