# English Extractive Question Answering Model
This model performs English extractive question answering: given a question and a context passage, it uses the BERT architecture to extract the answer span directly from the passage.
## 🚀 Quick Start
You can use this model directly from the 🤗 Transformers library with a pipeline. Here's how:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-en-bert-base")
model = AutoModelForQuestionAnswering.from_pretrained("zhufy/squad-en-bert-base")

nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)

context = ("A problem is regarded as inherently difficult if its "
           "solution requires significant resources, whatever the "
           "algorithm used. The theory formalizes this intuition, "
           "by introducing mathematical models of computation to "
           "study these problems and quantifying the amount of "
           "resources needed to solve them, such as time and storage. "
           "Other complexity measures are also used, such as the "
           "amount of communication (used in communication complexity), "
           "the number of gates in a circuit (used in circuit "
           "complexity) and the number of processors (used in parallel "
           "computing). One of the roles of computational complexity "
           "theory is to determine the practical limits on what "
           "computers can and cannot do.")

# The question is kept verbatim from SQuAD, including its original
# "guage" (gauge) typo, so the output below matches exactly.
question = "What are two basic primary resources used to guage complexity?"

inputs = {"question": question, "context": context}
nlp(inputs)
```

This returns:

```python
{'score': 0.8589141368865967,
 'start': 305,
 'end': 321,
 'answer': 'time and storage'}
```
## ✨ Features
- English Extractive Question Answering: given a question and a context passage, the model extracts the answer as a span of the passage rather than generating free text.
- Case-Sensitive: Based on the bert-base-cased model, it differentiates between lowercase and uppercase words.
## 📦 Installation
No specific installation steps are provided in the original document; the Quick Start above assumes the 🤗 Transformers library is available (e.g. `pip install transformers`).
## 💻 Usage Examples

### Basic Usage
The above code example demonstrates the basic usage of the model for question answering. You can provide a context and a question, and the model will return the answer along with its score and position in the context.
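The `start` and `end` values are character offsets into the context string, so slicing the context recovers the answer span. A minimal sketch, reusing the `nlp`, `question`, and `context` objects from the Quick Start:

```python
result = nlp({"question": question, "context": context})

# 'start' and 'end' index into the original context string,
# so slicing it recovers exactly the extracted answer span.
assert context[result["start"]:result["end"]] == result["answer"]

print(result["answer"])  # time and storage
```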
## 📚 Documentation

### Model Description
This model performs English extractive question answering. It is based on the bert-base-cased model and is case-sensitive: it makes a difference between english and English.
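To see the case handling in practice, you can compare how the tokenizer treats the two capitalizations; a small sketch using the tokenizer from the Quick Start:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zhufy/squad-en-bert-base")

# A cased tokenizer does not lowercase its input, so the two strings
# yield different token sequences (an uncased model would collapse them).
print(tokenizer.tokenize("English"))
print(tokenizer.tokenize("english"))
```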
### Training data
English SQuAD v2.0
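For reference, SQuAD v2.0 is available on the 🤗 Hub; a minimal sketch of loading it with the `datasets` library, assuming the standard `squad_v2` dataset id:

```python
from datasets import load_dataset

# SQuAD v2.0 as distributed on the Hugging Face Hub.
squad = load_dataset("squad_v2")

# Each example carries a question, a context passage, and gold answers
# (the 'answers' lists are empty for unanswerable questions).
print(squad["train"][0]["question"])
print(squad["train"][0]["answers"])
```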
## 🔧 Technical Details
The model uses the BERT architecture, a pre-trained language model, and has been fine-tuned on the English SQuAD v2.0 dataset for extractive question answering. Because the underlying checkpoint is cased, distinctions such as proper nouns and acronyms are preserved in the input, which can help the model locate the correct answer span.
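SQuAD v2.0 also contains questions that are unanswerable from their context. The 🤗 Transformers question-answering pipeline exposes a `handle_impossible_answer` flag for this case; a brief sketch, reusing `nlp` and `context` from the Quick Start (the question here is a made-up unanswerable one):

```python
# Allow the pipeline to return "no answer" when the model judges the
# question unanswerable from the given context (a SQuAD v2.0 behaviour).
result = nlp(
    {"question": "What colour is the sky?", "context": context},
    handle_impossible_answer=True,
)

# An empty 'answer' string signals that no answer span was found.
print(result)
```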
## 📄 License
No license information is provided in the original document.