Future Blog Post
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
Published:
LLMs often return structured data buried inside unstructured text. Instead of writing custom regex or manual parsing, you can now use LLM Output Parser to instantly extract the most relevant JSON/XML structures with just one function call.
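To make the "one function call" idea concrete, here is a minimal, standard-library sketch of the kind of extraction that would otherwise need custom parsing; the helper name extract_first_json is illustrative and is not part of the library's API.

```python
# Illustration of the problem LLM Output Parser addresses: pulling a JSON
# object out of surrounding free text. Standard library only.
import json
import re

def extract_first_json(text: str):
    """Return the first parseable JSON object embedded in `text`, or None."""
    for match in re.finditer(r"\{", text):
        start = match.start()
        depth = 0
        for end in range(start, len(text)):
            if text[end] == "{":
                depth += 1
            elif text[end] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(text[start:end + 1])
                    except json.JSONDecodeError:
                        break  # malformed candidate; try the next '{'
    return None

reply = 'Sure! Here is the result: {"name": "Ada", "score": 0.92} Hope that helps.'
print(extract_first_json(reply))  # {'name': 'Ada', 'score': 0.92}
```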
Published:
AgentFlow is a Python library that automates the orchestration of multi-step agent workflows by integrating intelligent planning, routing, and execution of specialized operations.
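A conceptual sketch of what "planning, routing, and execution" can look like; the function names and handler registry below are hypothetical and do not reflect AgentFlow's actual API.

```python
# Conceptual plan -> route -> execute loop: a planner breaks a goal into
# steps, a router picks a specialized handler per step, results accumulate.
from typing import Callable, Dict, List

def plan(goal: str) -> List[str]:
    # Hypothetical planner; in practice this step would be LLM-driven.
    return [f"research: {goal}", f"summarize: {goal}"]

HANDLERS: Dict[str, Callable[[str], str]] = {
    "research": lambda task: f"notes on '{task}'",
    "summarize": lambda task: f"summary of '{task}'",
}

def run(goal: str) -> List[str]:
    results = []
    for step in plan(goal):
        kind, _, payload = step.partition(": ")
        results.append(HANDLERS[kind](payload))  # route to the matching handler
    return results

print(run("vector databases"))
```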
Published:
Introducing our new 24.5M-parameter BERT-based language identification model! Trained on 121M sentences across 200 languages, this model is lightweight, CPU-friendly, and designed for real-time language identification tasks.
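A usage sketch with the Hugging Face transformers pipeline; the model id below is a placeholder, since the checkpoint name is not given in this excerpt, and should be replaced with the released model.

```python
# Loading a text-classification (language-ID) model with transformers.
from transformers import pipeline

# device=-1 keeps inference on CPU, where a 24.5M-parameter model runs comfortably.
# Placeholder id: swap in the actual checkpoint from the post.
lang_id = pipeline(
    "text-classification",
    model="<username>/language-identification",
    device=-1,
)

print(lang_id("Bonjour tout le monde"))  # e.g. [{'label': 'fra', 'score': ...}]
```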
Published:
GraphRAG-Tagger is an end-to-end lightweight toolkit for extracting topics from PDFs and visualizing their connections using graphs.
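As a rough illustration of the topic-graph idea (not GraphRAG-Tagger's own API), topics tagged per PDF chunk can be linked whenever they co-occur in the same chunk:

```python
# Topics become nodes; an edge is added between topics tagged in the same chunk.
import itertools
import networkx as nx

chunk_topics = [          # hypothetical output of a topic-tagging step
    ["retrieval", "embeddings"],
    ["embeddings", "clustering"],
    ["retrieval", "clustering", "evaluation"],
]

graph = nx.Graph()
for topics in chunk_topics:
    graph.add_edges_from(itertools.combinations(topics, 2))  # link co-occurring topics

print(sorted(graph.edges()))
```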
Published:
Medivocate is a Retrieval-Augmented Generation (RAG) application, deployed as a Space on Hugging Face, dedicated to exploring the history and cultural heritage of Africa, including its traditional medicinal practices.
Published:
Dikoka is an AI-powered document analyzer that helps you navigate complex historical records. It extracts key insights, generates concise summaries, and suggests follow-up questions for deeper understanding.
Published:
Discursia is a dynamic language-learning app that fosters conversational skills through interactive discussions. It blends personalized learning with robust AI capabilities to create an immersive and effective language development experience.
Published:
Named Entity Recognition (NER) is an essential task in natural language processing (NLP) for identifying key information within text, such as locations, organizations, and people. This project focuses on fine-tuning GLiNER, a pre-trained model specifically designed for NER, to enhance its performance in Location Mention Recognition (LMR).
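For reference, this is the general shape of GLiNER inference with the gliner package; the checkpoint shown is a public base model, not the fine-tuned LMR model described here, and the threshold is an arbitrary example value.

```python
# GLiNER is prompted with arbitrary entity labels, here just "location".
from gliner import GLiNER

model = GLiNER.from_pretrained("urchade/gliner_base")

text = "Flooding was reported near Blantyre and along the Shire River."
entities = model.predict_entities(text, labels=["location"], threshold=0.5)

for ent in entities:
    print(ent["text"], "->", ent["label"])
```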
Published:
Large Language Models (LLMs) have become highly proficient in text generation, comprehension, and interaction. Despite their successes across various sectors, their application in the telecommunications industry remains limited. This project focuses on optimizing LLMs for telecom-specific knowledge tasks.
Published:
The people of Malawi have faced numerous natural disasters and climatic shocks in recent years, such as droughts, floods, and landslides. These events, compounded by the impacts of Covid-19 and other global issues, have severely affected the health and well-being of most Malawians. Rural areas, where more than 80% of the population resides, have been particularly hard-hit.
Published:
Self-supervised learning aims to learn useful representations of input data without relying on human annotations. When trained offline with enormous amounts of unlabeled data, self-supervised models have been found to provide visual representations that are equivalent to or better than those of supervised models. However, in continual learning (CL) settings, where data is fed to the model sequentially, their efficacy is drastically diminished.
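As a generic illustration of a self-supervised objective (not the specific method evaluated in this work), a SimSiam-style loss compares two augmented views of the same input without using any labels:

```python
# Negative cosine similarity between predictor outputs of one view and
# stop-gradient projector outputs of the other view.
import torch
import torch.nn.functional as F

def simsiam_loss(p1, p2, z1, z2):
    """p*: predictor outputs, z*: projector outputs (stop-gradient applied)."""
    def d(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)

# Toy tensors standing in for encoder outputs of two augmented views.
p1, p2, z1, z2 = (torch.randn(8, 128) for _ in range(4))
print(simsiam_loss(p1, p2, z1, z2))
```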