🚀 NeuroBERT-Mini — Fast BERT for Edge AI, IoT & On-Device NLP
Built for low-latency, lightweight NLP tasks — perfect for smart assistants, microcontrollers, and embedded apps!
🚀 Quick Start
NeuroBERT-Mini is a lightweight NLP model derived from google/bert-base-uncased, optimized for real-time inference on edge and IoT devices. With a quantized size of ~35MB and ~10M parameters, it delivers efficient contextual language understanding for resource-constrained environments like mobile apps, wearables, microcontrollers, and smart home devices. Designed for low-latency and offline operation, it's ideal for privacy-first applications with limited connectivity.
- Model Name: NeuroBERT-Mini
- Size: ~35MB (quantized)
- Parameters: ~10M
- Architecture: Lightweight BERT (2 layers, hidden size 256, 4 attention heads)
- Description: Lightweight 2-layer, 256-hidden BERT optimized for edge and IoT NLP
- License: MIT — free for commercial and personal use
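You can sanity-check the parameter count locally by loading the checkpoint and summing tensor sizes; a quick sketch (the exact figure depends on the released weights):
from transformers import AutoModelForMaskedLM

# Load the checkpoint and count trainable parameters
model = AutoModelForMaskedLM.from_pretrained("boltuix/NeuroBERT-Mini")
total = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total / 1e6:.1f}M")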
✨ Features
- Lightweight: ~35MB footprint fits devices with limited storage.
- Contextual Understanding: Captures semantic relationships with a compact architecture.
- Offline Capability: Fully functional without internet access.
- Real-Time Inference: Optimized for CPUs, mobile NPUs, and microcontrollers.
- Versatile Applications: Supports masked language modeling (MLM), intent detection, text classification, and named entity recognition (NER).
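Note that the NER use case listed above needs a token-classification head, which is not part of the base checkpoint; a minimal sketch of how one might set the model up for NER fine-tuning (the entity label scheme below is a hypothetical example):
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical IoT-flavored entity labels; replace with your own scheme
labels = ["O", "B-DEVICE", "I-DEVICE", "B-LOCATION", "I-LOCATION"]

tokenizer = AutoTokenizer.from_pretrained("boltuix/NeuroBERT-Mini")
model = AutoModelForTokenClassification.from_pretrained(
    "boltuix/NeuroBERT-Mini",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
# The token-classification head is randomly initialized; fine-tune it on
# labeled NER data before using it for predictions.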
📦 Installation
Install the required dependencies:
pip install transformers torch
Ensure your environment supports Python 3.6+ and has ~35MB of storage for model weights.
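A quick way to confirm the environment is ready (a minimal sketch):
import torch
import transformers

# Print installed versions; CPU-only is fine for this model
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())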
💻 Usage Examples
Basic Usage
Quickstart: Masked Language Modeling
Predict missing words in IoT-related sentences with masked language modeling:
from transformers import pipeline
# Load the fill-mask pipeline with NeuroBERT-Mini
mlm_pipeline = pipeline("fill-mask", model="boltuix/NeuroBERT-Mini")
# Predict the masked word
result = mlm_pipeline("Please [MASK] the door before leaving.")
print(result[0]["sequence"]) # Output: "Please open the door before leaving."
Quickstart: Text Classification
Perform intent detection or text classification for IoT commands:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load tokenizer and classification model
model_name = "boltuix/NeuroBERT-Mini"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()
# Example input
text = "Turn off the fan"
# Tokenize the input
inputs = tokenizer(text, return_tensors="pt")
# Get prediction
with torch.no_grad():
    outputs = model(**inputs)
probs = torch.softmax(outputs.logits, dim=1)
pred = torch.argmax(probs, dim=1).item()
# Define labels
labels = ["OFF", "ON"]
# Print result
print(f"Text: {text}")
print(f"Predicted intent: {labels[pred]} (Confidence: {probs[0][pred]:.4f})")
Output:
Text: Turn off the fan
Predicted intent: OFF (Confidence: 0.5328)
Note: Fine-tune the model for specific classification tasks to improve accuracy.
Advanced Usage
Evaluation
NeuroBERT-Mini was evaluated on a masked language modeling task using 10 IoT-related sentences. The model predicts the top-5 tokens for each masked word, and a test passes if the expected word is in the top-5 predictions.
Test Sentences
Sentence | Expected Word |
---|---|
She is a [MASK] at the local hospital. | nurse |
Please [MASK] the door before leaving. | shut |
The drone collects data using onboard [MASK]. | sensors |
The fan will turn [MASK] when the room is empty. | off |
Turn [MASK] the coffee machine at 7 AM. | on |
The hallway light switches on during the [MASK]. | night |
The air purifier turns on due to poor [MASK] quality. | air |
The AC will not run if the door is [MASK]. | open |
Turn off the lights after [MASK] minutes. | five |
The music pauses when someone [MASK] the room. | enters |
Evaluation Code
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch
# Load model and tokenizer
model_name = "boltuix/NeuroBERT-Mini"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()
# Test data
tests = [
("She is a [MASK] at the local hospital.", "nurse"),
("Please [MASK] the door before leaving.", "shut"),
("The drone collects data using onboard [MASK].", "sensors"),
("The fan will turn [MASK] when the room is empty.", "off"),
("Turn [MASK] the coffee machine at 7 AM.", "on"),
("The hallway light switches on during the [MASK].", "night"),
("The air purifier turns on due to poor [MASK] quality.", "air"),
("The AC will not run if the door is [MASK].", "open"),
("Turn off the lights after [MASK] minutes.", "five"),
("The music pauses when someone [MASK] the room.", "enters")
]
results = []
# Run tests
for text, answer in tests:
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    with torch.no_grad():
        outputs = model(**inputs)
    logits = outputs.logits[0, mask_pos, :]
    topk = logits.topk(5, dim=1)
    top_ids = topk.indices[0]
    top_scores = torch.softmax(topk.values, dim=1)[0]
    guesses = [(tokenizer.decode([i]).strip().lower(), float(score)) for i, score in zip(top_ids, top_scores)]
    results.append({
        "sentence": text,
        "expected": answer,
        "predictions": guesses,
        "pass": answer.lower() in [g[0] for g in guesses]
    })
# Print results
for r in results:
    status = "✓ PASS" if r["pass"] else "✗ FAIL"
    print(f"\n {r['sentence']}")
    print(f" Expected: {r['expected']}")
    print(" Top-5 Predictions (word : confidence):")
    for word, score in r['predictions']:
        print(f" - {word:12} | {score:.4f}")
    print(status)
# Summary
pass_count = sum(r["pass"] for r in results)
print(f"\n Total Passed: {pass_count}/{len(tests)}")
Sample Results (Hypothetical)
- Sentence: She is a [MASK] at the local hospital.
  Expected: nurse
  Top-5: [doctor (0.35), nurse (0.30), surgeon (0.20), technician (0.10), assistant (0.05)]
  Result: ✓ PASS
- Sentence: Turn off the lights after [MASK] minutes.
  Expected: five
  Top-5: [ten (0.40), two (0.25), three (0.20), fifteen (0.10), twenty (0.05)]
  Result: ✗ FAIL
- Total Passed: ~8/10 (depends on fine-tuning).
The model performs well in IoT contexts (e.g., “sensors,” “off,” “open”) but may require fine-tuning for numerical terms like “five.”
📚 Documentation
Evaluation Metrics
Property | Details |
---|---|
Model Type | Text Classification |
Training Data | Custom IoT Dataset: Curated data focused on IoT terminology, smart home commands, and sensor-related contexts (sourced from chatgpt-datasets). |
Metrics | |
Use Cases
NeuroBERT-Mini is designed for edge and IoT scenarios with constrained compute and connectivity. Key applications include the following (a small end-to-end sketch follows the list):
- Smart Home Devices: Parse commands like “Turn [MASK] the coffee machine” (predicts “on”) or “The fan will turn [MASK]” (predicts “off”).
- IoT Sensors: Interpret sensor contexts, e.g., “The drone collects data using onboard [MASK]” (predicts “sensors”).
- Wearables: Real-time intent detection, e.g., “The music pauses when someone [MASK] the room” (predicts “enters”).
- Mobile Apps: Offline chatbots or semantic search, e.g., “She is a [MASK] at the hospital” (predicts “nurse”).
- Voice Assistants: Local command parsing, e.g., “Please [MASK] the door” (predicts “shut”).
- Toy Robotics: Lightweight command understanding for interactive toys.
- Fitness Trackers: Local text feedback processing, e.g., sentiment analysis.
- Car Assistants: Offline command disambiguation without cloud APIs.
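To make the smart home and voice assistant scenarios above concrete, here is a minimal sketch that routes classifier output to a device action. The label set and the send_command helper are hypothetical placeholders, and the classifier should be fine-tuned first (see the Fine-Tuning Guide below):
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "boltuix/NeuroBERT-Mini"  # ideally a fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

labels = ["OFF", "ON"]  # hypothetical intent labels; match your fine-tuned head

def send_command(device: str, action: str) -> None:
    # Hypothetical device hook; replace with MQTT, GPIO, or your own API call
    print(f"-> {device}: {action}")

def handle_command(text: str, device: str = "fan") -> None:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    intent = labels[int(logits.argmax(dim=1))]
    send_command(device, intent)

handle_command("Turn off the fan")  # routes the predicted intent to the device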
Hardware Requirements
- Processors: CPUs, mobile NPUs, or microcontrollers (e.g., ESP32, Raspberry Pi)
- Storage: ~35MB for model weights (quantized for reduced footprint)
- Memory: ~80MB RAM for inference
- Environment: Offline or low-connectivity settings
Quantization ensures efficient memory usage, making it suitable for microcontrollers.
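The released weights ship pre-quantized, but if you fine-tune your own checkpoint you can apply PyTorch dynamic quantization to shrink it again before deployment; a sketch, not necessarily the recipe used for this release:
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("boltuix/NeuroBERT-Mini")

# Quantize Linear layers to int8 weights; activations stay in float (dynamic quantization)
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

# Save the compact state dict for edge deployment
torch.save(quantized.state_dict(), "neurobert-mini-int8.pt")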
Trained On
- Custom IoT Dataset: Curated data focused on IoT terminology, smart home commands, and sensor-related contexts (sourced from chatgpt-datasets). This enhances performance on tasks like command parsing and device control.
Fine-tuning on domain-specific data is recommended for optimal results.
Fine-Tuning Guide
To adapt NeuroBERT-Mini for custom IoT tasks (e.g., specific smart home commands):
- Prepare Dataset: Collect labeled data (e.g., commands with intents or masked sentences).
- Fine-Tune with Hugging Face:
#!pip uninstall -y transformers torch datasets
#!pip install transformers==4.44.2 torch==2.4.1 datasets==3.0.1
import torch
from transformers import BertTokenizer, BertForSequenceClassification, Trainer, TrainingArguments
from datasets import Dataset
import pandas as pd
# 1. Prepare the sample IoT dataset
data = {
    "text": [
        "Turn on the fan",
        "Switch off the light",
        "Invalid command",
        "Activate the air conditioner",
        "Turn off the heater",
        "Gibberish input"
    ],
    "label": [1, 1, 0, 1, 1, 0]  # 1 for valid IoT commands, 0 for invalid
}
df = pd.DataFrame(data)
dataset = Dataset.from_pandas(df)
# 2. Load tokenizer and model
model_name = "boltuix/NeuroBERT-Mini"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)
# 3. Tokenize the dataset so the Trainer receives input_ids and attention_mask
def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=64)

dataset = dataset.map(tokenize, batched=True)
# 4. Define training arguments
training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=64,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for the learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=10,
)
# 5. Create Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
)
# 6. Fine-tune the model
trainer.train()
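After training completes, save the fine-tuned model and tokenizer so they can be reloaded on the target device; a minimal sketch (the output directory name is arbitrary):
# 7. Save the fine-tuned model and tokenizer
save_dir = "./neurobert-mini-iot"
trainer.save_model(save_dir)
tokenizer.save_pretrained(save_dir)

# Reload for inference
from transformers import pipeline
classifier = pipeline("text-classification", model=save_dir, tokenizer=save_dir)
print(classifier("Turn on the fan"))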
📄 License
This project is licensed under the MIT License - see the LICENSE page for details.

