Screenshot to HTML/CSS Model
This model converts screenshots of website components into HTML/CSS code, offering a new way to turn visual content into code.
Try out the demo!
Quick Start
This model converts screenshots of website components into HTML/CSS code. It is based on a very early checkpoint of our forthcoming vision-language foundation model, which has been fine-tuned on the Websight dataset. This is very much an alpha version. The goal is to kick off an effort to develop improved models capable of converting a website screenshot into actual code.
Usage Examples
Basic Usage
import os

import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor
from transformers.image_utils import to_numpy_array, PILImageResampling, ChannelDimension
from transformers.image_transforms import resize, to_channel_dimension_format

# Your Hugging Face access token; here it is read from the environment,
# but any way of supplying the token works.
API_TOKEN = os.environ.get("HF_TOKEN")

DEVICE = torch.device("cuda")
PROCESSOR = AutoProcessor.from_pretrained(
    "HuggingFaceM4/VLM_WebSight_finetuned",
    token=API_TOKEN,
)
MODEL = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceM4/VLM_WebSight_finetuned",
    token=API_TOKEN,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).to(DEVICE)

# Number of <image> placeholder tokens the perceiver resampler produces per image.
image_seq_len = MODEL.config.perceiver_config.resampler_n_latents
BOS_TOKEN = PROCESSOR.tokenizer.bos_token
# Token ids the model must never generate: the image placeholder tokens.
BAD_WORDS_IDS = PROCESSOR.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
def convert_to_rgb(image):
    # Composite transparent images onto a white background so RGBA/LA
    # inputs become plain RGB, which the image processor expects.
    if image.mode == "RGB":
        return image
    image_rgba = image.convert("RGBA")
    background = Image.new("RGBA", image_rgba.size, (255, 255, 255))
    alpha_composite = Image.alpha_composite(background, image_rgba)
    alpha_composite = alpha_composite.convert("RGB")
    return alpha_composite
def custom_transform(x):
    # Preprocess a screenshot: convert to RGB, resize to 960x960,
    # rescale to [0, 1], normalize, and move channels first.
    x = convert_to_rgb(x)
    x = to_numpy_array(x)
    x = resize(x, (960, 960), resample=PILImageResampling.BILINEAR)
    x = PROCESSOR.image_processor.rescale(x, scale=1 / 255)
    x = PROCESSOR.image_processor.normalize(
        x,
        mean=PROCESSOR.image_processor.image_mean,
        std=PROCESSOR.image_processor.image_std,
    )
    x = to_channel_dimension_format(x, ChannelDimension.FIRST)
    x = torch.tensor(x)
    return x
# Load the screenshot to convert; replace the path with your own image.
image = Image.open("screenshot.png")

# Build the text prompt: BOS followed by the image placeholder tokens.
inputs = PROCESSOR.tokenizer(
    f"{BOS_TOKEN}<fake_token_around_image>{'<image>' * image_seq_len}<fake_token_around_image>",
    return_tensors="pt",
    add_special_tokens=False,
)
inputs["pixel_values"] = PROCESSOR.image_processor([image], transform=custom_transform)
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

generated_ids = MODEL.generate(**inputs, bad_words_ids=BAD_WORDS_IDS, max_length=4096)
generated_text = PROCESSOR.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
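Because skip_special_tokens=True strips the BOS and image placeholder tokens, the decoded string is essentially the generated markup. A minimal sketch for inspecting the result, assuming you want to write it to disk and preview it in a browser (the output filename is an arbitrary placeholder):

import webbrowser
from pathlib import Path

# Save the generated markup; "generated_page.html" is a placeholder path.
output_path = Path("generated_page.html")
output_path.write_text(generated_text, encoding="utf-8")

# Open the reconstruction in the default browser to compare it
# against the original screenshot.
webbrowser.open(output_path.resolve().as_uri())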
Documentation
Model Details
| Property | Details |
| --- | --- |
| Developed by | Hugging Face |
| Model Type | Multi-modal model (screenshot of website component to HTML/CSS code) |
| Language(s) (NLP) | en |
| License | See the License section |
| Parent Models | SigLIP and mistralai/Mistral-7B-v0.1 |
| Resources for more information | Websight dataset: Dataset card; Websight technical report: Report |
License
The model is built on top of two pre-trained models: SigLIP and mistralai/Mistral-7B-v0.1, both delivered under an Apache-2.0 license. As such, users should comply with the licenses of these models.
The two pre-trained models are connected to each other with newly initialized parameters that we train. These parameters are not derived from either of the two frozen base models forming the composite model. We release the additional weights we trained under an Apache-2.0 license.