Users’ understanding queries
Jul 2020 - Sep 2020: Users’ understanding queries
Apr 2021 - Jun 2021: NER for command extraction
Feb 2021 - Sep 2021: Financial Data Generation
Oct 2021 - Dec 2021: Financial Data Generation
Self-supervised learning provides strong visual representations from unlabeled data in the offline setting, but it struggles under continual learning. This study addresses that gap, using distillation, proofreading, and a prediction layer to prevent forgetting.
Semi-Self-Supervised Learning: improving the performance of self-supervised learning models, especially in scenarios where only a small amount of labeled data is available.
Developing a Machine Learning Algorithm for Accurate Counting of Roof Types in Rural Malawi Using Aerial Imagery
Enhancing the Accuracy of Falcon 7.5B and Phi-2 on Telecom Knowledge Using the TeleQnA Dataset
This project fine-tunes the GLiNER model to improve the recognition and classification of location mentions in text, with the aim of strengthening disaster response and other location-based tasks.
An AI-powered platform exploring African history, culture, and traditional medicine, fostering understanding and appreciation of the continent’s rich heritage.
Published in -, 2022
This work proposes an efficient approach to detecting and filtering misinformation on social networks, targeting misinformation spreaders on Twitter during the COVID-19 crisis. A Bidirectional GRU model achieves a 95.3% F1-score on a COVID-19 misinformation dataset, surpassing state-of-the-art results.
Recommended citation: Alex Kameni, 2022
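The bidirectional GRU at the core of this detector can be illustrated with a minimal, dependency-free sketch of a single GRU step. The scalar weights, the parameter layout, and the helper names below are illustrative assumptions, not the paper's actual implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step for a scalar input and hidden state (illustrative only).
    W, U, b are dicts keyed by gate: 'z' (update), 'r' (reset), 'h' (candidate)."""
    z = sigmoid(W['z'] * x + U['z'] * h + b['z'])                 # update gate
    r = sigmoid(W['r'] * x + U['r'] * h + b['r'])                 # reset gate
    h_tilde = math.tanh(W['h'] * x + U['h'] * (r * h) + b['h'])   # candidate state
    return (1.0 - z) * h + z * h_tilde                            # new hidden state

def bigru_encode(seq, params_fwd, params_bwd):
    """A bidirectional layer runs one GRU left-to-right and another
    right-to-left, then uses both final states for the classifier head."""
    hf = hb = 0.0
    for x in seq:
        hf = gru_step(x, hf, *params_fwd)
    for x in reversed(seq):
        hb = gru_step(x, hb, *params_bwd)
    return (hf, hb)
```

In practice the inputs would be word-embedding vectors and the gates matrix-valued, but the gating logic, and the forward/backward pass pair that makes the model bidirectional, is exactly what is sketched here.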
Published in -, 2022
This study presents a framework for continual self-supervised learning of visual representations that prevents forgetting by combining distillation and proofreading techniques, improving the quality of learned representations even when data arrives sequentially.
Recommended citation: Alex Kameni, 2022
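The distillation idea behind this framework can be sketched in a few lines: a frozen copy of the model trained on earlier data acts as a teacher, and the current student is penalized, through a small prediction layer, for drifting away from the teacher's representations. Everything below, including the feature dimensions and the linear predictor, is an illustrative assumption rather than the paper's code:

```python
def predict(features, weights):
    """Hypothetical linear prediction layer mapping student features
    into the frozen teacher's representation space."""
    return [sum(w * f for w, f in zip(row, features)) for row in weights]

def distillation_loss(student_feats, teacher_feats, weights):
    """Mean squared error between the predicted student representation
    and the teacher's representation of the same input. Minimizing this
    alongside the self-supervised objective discourages forgetting."""
    projected = predict(student_feats, weights)
    return sum((p - t) ** 2 for p, t in zip(projected, teacher_feats)) / len(teacher_feats)
```

Because the penalty is applied through a learned predictor rather than directly on the features, the student keeps some freedom to adapt its representation to new data while remaining recoverable from the teacher's space.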