🚀 Question Answering Model Based on Fine-tuned huBert
This project presents a question answering model that is a fine-tuned version of [mcsabai/huBert-fine-tuned-hungarian-squadv1](https://huggingface.co/mcsabai/huBert-fine-tuned-hungarian-squadv1) on the milqa dataset. It can effectively answer various questions based on given contexts, such as questions about the Danube River.
🚀 Quick Start
The model can be used directly on the Hugging Face platform. You can input questions and corresponding contexts in the widget area, and the model will provide answers. For example:
- text: "What is the second longest river in Europe?"
context: "The Danube is the longest river in Europe after the Russian Volga. In Germany, in the Black Forest, two small streams, the Breg and the Brigach, merge at Donaueschingen, and from there it travels 2850 kilometers southeast to the Black Sea. The whole territory of Hungary lies within the catchment area of this river. The length of its main branch here is 417 km, so it is a defining element of the country's water system.
The formation of the river began in the Pliocene era. At the end of the Pliocene, the Danube reached the Little Hungarian Plain. At that time, it flowed from north to south here instead of the current west-east direction. Its section in the Little Hungarian Plain was only formed in the Pleistocene era. The youngest part of the river is its south-north flow on the western side of Dobruja, which was formed only at the end of the Pleistocene era.
Today, it is an important international waterway. Since the completion of the German Rhine-Main-Danube Canal in 1992, it has been part of a 3500-km trans-European waterway that extends from Rotterdam near the North Sea to Sulina near the Black Sea. The total tonnage of goods transported on the Danube reached 100 million tons in 1987."
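Beyond the widget, the model can be called programmatically with the 🤗 Transformers `question-answering` pipeline. The sketch below uses the base checkpoint named in this card as a stand-in model id; substitute the id of your own fine-tuned version once it is published.

```python
from transformers import pipeline

# Base checkpoint named in this card; replace with your fine-tuned model id.
MODEL_ID = "mcsabai/huBert-fine-tuned-hungarian-squadv1"

def answer(question: str, context: str) -> str:
    """Extractive QA: return the span of `context` that answers `question`."""
    qa = pipeline("question-answering", model=MODEL_ID)
    return qa(question=question, context=context)["answer"]

if __name__ == "__main__":
    print(
        answer(
            "What is the second longest river in Europe?",
            "The Danube is the longest river in Europe after the Russian Volga.",
        )
    )
```

Note that the pipeline downloads model weights on first use, so the initial call may take a while.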
✨ Features
- Accurate Question Answering: Capable of providing accurate answers based on the given context, whether it's about geographical information, historical facts, or other aspects.
- Multilingual Adaptability: Although the original data is in Hungarian, the underlying model has the potential to be adapted for other languages with further fine-tuning.
📚 Documentation
The model is fine-tuned on the milqa dataset. The dataset contains a large number of question-context pairs, which helps the model learn the relationship between questions and answers. The original model [mcsabai/huBert-fine-tuned-hungarian-squadv1](https://huggingface.co/mcsabai/huBert-fine-tuned-hungarian-squadv1) provides a good foundation for this fine-tuning process.
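For readers unfamiliar with how such question-context pairs are stored, the sketch below illustrates a SQuAD-style record (the field names are an assumption, mirroring SQuAD v1; the actual milqa schema may differ): each example pairs a question with a context and labels the answer as a character offset into that context, which the sanity check verifies.

```python
# Hypothetical SQuAD-v1-style record; field names assumed, not taken from milqa.
record = {
    "question": "What is the second longest river in Europe?",
    "context": "The Danube is the longest river in Europe after the Russian Volga.",
    "answers": {"text": ["The Danube"], "answer_start": [0]},
}

def answer_span_is_consistent(example: dict) -> bool:
    """Check that every labelled answer actually appears at its stated offset."""
    ctx = example["context"]
    return all(
        ctx[start : start + len(text)] == text
        for text, start in zip(
            example["answers"]["text"], example["answers"]["answer_start"]
        )
    )

print(answer_span_is_consistent(record))  # True
```

During fine-tuning, these character offsets are mapped to token positions so the model can be trained to predict the start and end tokens of the answer span.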
📄 License
There is no specific license information provided in the original document. If you plan to use this model, it is recommended to check the license of the original model [mcsabai/huBert-fine-tuned-hungarian-squadv1](https://huggingface.co/mcsabai/huBert-fine-tuned-hungarian-squadv1) and the milqa dataset.