
SimCSE Chinese RoBERTa WWM Ext

Developed by cyclone
A simplified Chinese sentence embedding model trained with simple contrastive learning (SimCSE), using Chinese RoBERTa WWM Ext as the pre-trained base model.
Downloads 188
Release Time: 3/2/2022

Model Overview

This model encodes simplified Chinese sentences into dense embeddings, primarily used for tasks such as text similarity calculation and semantic retrieval.
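A minimal usage sketch with the Hugging Face transformers library is shown below. The model ID cyclone/simcse-chinese-roberta-wwm-ext and the use of the [CLS] token as the sentence vector are assumptions based on the usual SimCSE setup, not confirmed details from this page.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed Hugging Face model ID for this checkpoint.
MODEL_ID = "cyclone/simcse-chinese-roberta-wwm-ext"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

def encode(sentences):
    """Encode a list of Chinese sentences into fixed-dimensional vectors."""
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Take the [CLS] token representation as the sentence embedding
    # (the common SimCSE convention); mean pooling is a frequent alternative.
    return outputs.last_hidden_state[:, 0]

embeddings = encode(["今天天气真好", "我想去图书馆借书"])
print(embeddings.shape)  # (2, 768) for a BERT-base-sized encoder
```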

Model Features

Contrastive Learning
Trained with the simple contrastive learning (SimCSE) method, which helps the embeddings capture sentence-level semantics
Chinese Optimization
Built on Chinese RoBERTa WWM Ext (whole word masking), which is pre-trained specifically for Chinese text
Sentence Embedding
Encodes Chinese sentences (up to the encoder's maximum sequence length) into fixed-dimensional semantic vectors
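To illustrate how these fixed-dimensional embeddings are used in practice, the sketch below compares a related and an unrelated sentence pair by cosine similarity. As above, the model ID and [CLS] pooling are assumptions; the example sentences are purely illustrative.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "cyclone/simcse-chinese-roberta-wwm-ext"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

def encode(sentences):
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0]  # [CLS] pooling (assumed)

# A semantically close pair and an unrelated pair.
emb = encode(["我喜欢看电影", "我爱看电影", "今天股市大跌"])
sim_close = F.cosine_similarity(emb[0], emb[1], dim=0)
sim_far = F.cosine_similarity(emb[0], emb[2], dim=0)
print(f"related pair:   {sim_close.item():.4f}")
print(f"unrelated pair: {sim_far.item():.4f}")
```

With a contrastively trained encoder, the related pair should score noticeably higher than the unrelated one.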

Model Capabilities

Chinese Sentence Encoding
Semantic Similarity Calculation
Text Retrieval

Use Cases

Information Retrieval
Semantic Search
Used to build search engines based on semantics rather than keywords
Text Similarity
Question-Answer Matching
Calculate the semantic similarity between questions and candidate answers
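The following sketch shows how the semantic search and question-answer matching use cases can be built on the same embeddings: candidate texts are encoded once, and an incoming query is ranked against them by cosine similarity. The model ID and pooling choice remain assumptions, and the candidate sentences are made up for illustration.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "cyclone/simcse-chinese-roberta-wwm-ext"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

def encode(sentences):
    inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    emb = outputs.last_hidden_state[:, 0]   # [CLS] pooling (assumed)
    return F.normalize(emb, p=2, dim=1)     # unit vectors, so dot product = cosine

# Encode the candidate answers once and reuse the matrix for every query.
candidates = [
    "北京是中国的首都。",
    "长城全长两万多公里。",
    "熊猫主要以竹子为食。",
]
candidate_emb = encode(candidates)

question = "中国的首都是哪里？"
question_emb = encode([question])

scores = question_emb @ candidate_emb.T     # cosine similarities, shape (1, 3)
best = scores.argmax(dim=1).item()
print(f"best match ({scores[0, best].item():.4f}): {candidates[best]}")
```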