Restaurant reservation and management platform
An AI-powered platform designed for restaurant discovery, data aggregation, and content enrichment. It leverages intelligent algorithms to unify information from various sources and generate engaging, context-aware content for an enhanced user experience.
Our contribution
  • Typesense-powered vector search with support for filtering, sorting, and keyword constraints
  • Multilingual content classification, generation, and translation using fine-tuned LLMs
  • Data aggregation and normalization across multiple sources
Core technologies
JS/React
Elixir
PostgreSQL
Styled Components
GraphQL (Apollo)
Rails 3/5
Ruby 2.6
MongoDB
Kafka
Elasticsearch
Phoenix
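The Typesense-powered vector search above combines semantic similarity with keyword filters and sorting. A minimal sketch of how such a search request could be assembled (the collection name, field names, and filter values are illustrative, not the production schema):

```python
def build_vector_search(embedding, cuisine=None, k=10):
    """Build a Typesense search entry: nearest-neighbour search on an
    `embedding` field, optionally filtered by cuisine and sorted by rating.
    Collection and field names here are hypothetical examples."""
    vector = ",".join(str(round(x, 4)) for x in embedding)
    search = {
        "collection": "restaurants",
        "q": "*",
        "vector_query": f"embedding:([{vector}], k:{k})",
        "sort_by": "rating:desc",
    }
    if cuisine:
        search["filter_by"] = f"cuisine:={cuisine}"
    return search

# The resulting dict is what a Typesense multi_search request would carry,
# e.g. client.multi_search.perform({"searches": [search]}, {}).
```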
Respondology
Respondology protects brands and individuals from trolls and toxicity on social media, removing hateful, ugly, and spam comments 24/7/365 with AI keyword filtering and human moderators.
Our contribution
Using natural language processing and machine learning, we helped build a product that automatically moderates social media comments at scale, far faster and more consistently than a human team could.
Core technologies
Ruby/Rails, PostgreSQL, AWS
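The hybrid approach described above, AI keyword filtering backed by human moderators, can be sketched as a simple triage step (the word lists are illustrative stand-ins for the real filters):

```python
# Illustrative keyword lists; the production system uses AI filtering
# plus human moderators rather than fixed sets.
BLOCK = {"spamword", "slur"}    # auto-remove immediately
REVIEW = {"scam", "fake"}       # queue for a human moderator

def triage(comment: str) -> str:
    """Route a comment to remove / review / keep based on its words."""
    words = set(comment.lower().split())
    if words & BLOCK:
        return "remove"
    if words & REVIEW:
        return "review"
    return "keep"
```

Anything the filter is unsure about falls through to the human review queue rather than being silently dropped.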
NY Healthcare startup
A proprietary AI speech-recognition solution tailored to streamline digital documentation processes. Optimized for healthcare, it enables professionals to dictate notes efficiently, reducing time spent on manual data entry and improving workflow productivity.
Our contribution
  • AI-powered speech recognition that enables caregivers to dictate their notes into a smartphone
  • AI speech processing that structures the dictated content and automatically generates the relevant documentation, such as patient reports, vital signs, and medication logs, directly in the system
Core technologies
Ruby/Rails
React.js
PostgreSQL
Python
Whisper
vLLM
Instruct model hosted locally on NVIDIA hardware for HIPAA compliance
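After transcription (e.g. via Whisper), the dictated text is turned into structured documentation. A minimal sketch of that structuring step for vital signs; the field names and patterns are illustrative, not the production extraction logic:

```python
import re

# Hypothetical patterns for pulling vital signs out of a dictated note.
VITALS = {
    "blood_pressure": re.compile(r"\b(\d{2,3})\s*over\s*(\d{2,3})\b"),
    "pulse": re.compile(r"\bpulse\s*(?:is\s*)?(\d{2,3})\b"),
}

def extract_vitals(note: str) -> dict:
    """Return structured vitals found in a dictated note."""
    text = note.lower()
    found = {}
    m = VITALS["blood_pressure"].search(text)
    if m:
        found["blood_pressure"] = f"{m.group(1)}/{m.group(2)}"
    m = VITALS["pulse"].search(text)
    if m:
        found["pulse"] = int(m.group(1))
    return found
```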
AINekko
A fully open-source company committed to democratizing collaboration across the AI community. Focused on models, datasets, and applications, it aims to ensure a diverse and open ecosystem, offering alternatives beyond closed, centralized platforms.
Our contribution
  • End-to-end testing and benchmarking of local LLM infrastructure, including token generation speed and latency analysis
  • OpenAI-compatible API layer for llama.cpp with support for advanced features like function calling, vision, and multi-model routing
  • Developer-focused reference implementations and tooling for integrating local models into low-level systems and applications
  • Tools for building open datasets and enabling modular AI system experimentation within open-source environments
Core technologies
Ruby/Rails, React.js, PostgreSQL, Python
Talentako
An application designed for hiring agencies to streamline the anonymization of candidate CVs. It efficiently masks sensitive information, enabling safe sharing with clients while reducing the risk of direct hiring and protecting candidate privacy.
Our contribution
  • Developed an MVP tailored for recruitment workflows, with built-in tools for anonymizing candidate CVs
  • Integrated automated detection and highlighting of sensitive information using a custom neural network
  • Enabled faster manual review and redaction through smart pre-selection of data points
  • Designed a user-friendly interface for hiring agencies to efficiently process and share anonymized CVs with clients
Core technologies
Ruby/Rails
React.js
PostgreSQL
Python
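The masking step above can be sketched with simple rules; in the project itself detection is backed by a custom neural network, so the regex patterns here are only an illustrative stand-in:

```python
import re

# Illustrative detectors for two kinds of sensitive data in a CV.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_cv(text: str) -> str:
    """Replace detected sensitive spans with typed placeholders,
    so a reviewer can still see what kind of data was removed."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Typed placeholders (rather than blank redaction) are what make the smart pre-selection reviewable: a human can confirm or reject each detected span quickly.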
Llamagator
An LLM aggregator designed as a multi-LLM prompt testing tool. It enables rapid comparison, evaluation, and optimization of prompts across different large language models within a unified interface.
Our contribution
  • Test prompts against multiple LLMs or LLM versions
  • Observe the relative performance of generated responses
  • Run prompts multiple times to assess response reliability
  • Support for both local and API-based access to LLMs
  • Licensed under the Apache License, Version 2.0
Core technologies
Ollama, llama.cpp, OpenAI API
Blame.ai
A demo application built to test AI infrastructure components, from large language models to inference engines. It provides a controlled environment for validating performance, compatibility, and deployment workflows.
Our contribution
  • Building an open dataset
  • Integrating local AI infrastructure with low-level application code
  • Providing a reference implementation that showcases local AI infrastructure for developers
Core technologies
Ruby/Rails
React.js
PostgreSQL
Python
NekkoAPI
A web API server compatible with OpenAI’s API specifications, built to interface with llama.cpp. It enables seamless integration of local LLMs into existing applications using familiar API calls.
Our contribution
  • Low-level access to C API via ctypes interface
  • High-level Python API for text completion
  • OpenAI-like API with LangChain compatibility
  • LlamaIndex compatibility
  • OpenAI-compatible web server
  • Function calling support
  • Vision API support
  • Support for multiple models
Core technologies
Ruby/Rails, React.js, PostgreSQL, Python
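Because the server speaks the OpenAI wire format, clients only need to build a standard chat-completion request body. A minimal sketch, including a function-calling tool definition (the model name, endpoint, and tool schema are illustrative):

```python
def chat_request(model, user_message, tools=None):
    """Build an OpenAI-style chat-completion request body."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    if tools:
        body["tools"] = tools
    return body

# Hypothetical tool definition in the OpenAI function-calling schema.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# This body would be POSTed to the local server's /v1/chat/completions
# endpoint, or sent through the official OpenAI client with its
# base_url pointed at the local server.
```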
Waterman AI
A demo application for analyzing UK legislation documents using AI. The system scrapes content from UK legislation URLs and processes it through configurable AI workflows with multi-step prompt chains, enabling intelligent document analysis and topic-based organization.
Our contribution
  • Building a legal document analysis platform with automated extraction of UK legislation content
  • Integration with the OpenAI API for topic-based organization and AI analysis
  • Configurable multi-step prompt chains with versioning and real-time streaming
Core technologies
Ruby/Rails
React.js
PostgreSQL
Python
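A multi-step prompt chain of the kind described above threads each step's output into the next step's template. A minimal sketch with the LLM call stubbed out and illustrative step templates:

```python
def run_chain(document, steps, llm):
    """Run `document` through an ordered list of prompt templates,
    feeding each step's output into the next step's {input} slot."""
    output = document
    for template in steps:
        prompt = template.format(input=output)
        output = llm(prompt)
    return output

# Hypothetical two-step chain for legislation analysis:
steps = [
    "Summarise this legislation: {input}",
    "List the main topics in: {input}",
]
```

Because the chain is just ordered templates, versioning a workflow amounts to storing and diffing the template list, which is what makes the chains configurable per analysis.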
Turtlenekko
A Go-based benchmarking tool for evaluating the performance of local large language models via chat completion endpoints. It delivers detailed analysis of token generation speeds and overall responsiveness, supporting optimization and performance tuning.
  • Benchmark local LLM endpoints
  • Measure response times and token generation speeds
  • Compare performance across different models and configurations
  • Generate performance reports
Core technologies
Go, Python, PyTorch
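The headline metric in this kind of benchmark is tokens per second, computed from timed chat-completion calls. A sketch of the calculation (Turtlenekko itself is written in Go; this Python version and its timing values are only illustrative):

```python
def throughput(completion_tokens, first_token_s, total_s):
    """Tokens per second during generation, excluding time-to-first-token
    so that prompt-processing latency is reported separately."""
    generation_time = total_s - first_token_s
    return completion_tokens / generation_time if generation_time > 0 else 0.0

def report(runs):
    """Summarise a list of (tokens, first_token_s, total_s) benchmark runs."""
    speeds = [throughput(*run) for run in runs]
    return {
        "mean_tok_per_s": sum(speeds) / len(speeds),
        "best_tok_per_s": max(speeds),
    }
```

Separating time-to-first-token from generation speed is what lets the tool compare prompt-processing cost and steady-state throughput across models and configurations.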


Ready to Start the Right Way?
Let’s map your product together. Get a complete prototype and plan in 2–4 weeks.