🚀 UIClip
UIClip is a model designed to quantify the design quality and relevance of a user interface (UI) screenshot given a textual description. It can also generate natural language design suggestions.
🚀 Quick Start
UIClip quantifies the design quality and relevance of a user interface (UI) screenshot given a textual description, and it can also be used to generate natural-language design suggestions (see the paper). It was introduced in "UIClip: A Data-driven Model for Assessing User Interface Design", published at UIST 2024 (https://arxiv.org/abs/2404.12500).
User interface (UI) design is a challenging yet crucial task for ensuring the usability, accessibility, and aesthetic qualities of applications. The paper develops a machine-learned model, UIClip, that assesses the design quality and visual relevance of a UI from its screenshot and a natural-language description. To train it, a large-scale dataset of UIs was constructed through a combination of automated crawling, synthetic augmentation, and human ratings, collated by description and ranked by design quality. Through training on this dataset, UIClip implicitly learns the properties of good and bad designs by:
- Assigning a numerical score representing a UI design's relevance and quality.
- Offering design suggestions.
In an evaluation comparing UIClip's outputs and other baselines against UIs rated by 12 human designers, UIClip achieved the highest agreement with the ground-truth rankings. Three example applications demonstrate how UIClip can facilitate downstream tasks that rely on instantaneous assessment of UI design quality:
- UI code generation.
- UI design tips generation (a rough sketch follows this list).
- Quality-aware UI example search.
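The Usage Examples section below walks through the quality-scoring use case. As a rough, illustrative sketch of the design-suggestion idea (this is not the procedure from the paper; the candidate comment phrases, screenshot path, and whole-image preprocessing here are assumptions made for brevity), one could rank candidate design-feedback prompts by their similarity to a screenshot embedding:

```python
# Illustrative sketch only: rank hypothetical design-feedback prompts by CLIP similarity.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Checkpoint and processor names as used in the Usage Examples below.
model = CLIPModel.from_pretrained("uiclip_jitteredwebsites-2-224-paraphrased").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical candidate comments following the "ui screenshot. ..." prompt style.
candidate_comments = [
    "ui screenshot. poor design. cluttered layout.",
    "ui screenshot. poor design. low text contrast.",
    "ui screenshot. poor design. inconsistent alignment.",
]

screenshot = Image.open("screenshot.png").convert("RGB")  # placeholder path

with torch.no_grad():
    text_inputs = processor(text=candidate_comments, return_tensors="pt", padding=True)
    image_inputs = processor(images=[screenshot], return_tensors="pt")
    text_emb = model.get_text_features(**text_inputs)
    img_emb = model.get_image_features(**image_inputs)

# Cosine similarity between the screenshot and each candidate comment.
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
similarities = (img_emb @ text_emb.T).squeeze(0)

# Higher-scoring comments are candidate suggestions to surface to a designer.
for score, comment in sorted(zip(similarities.tolist(), candidate_comments), reverse=True):
    print(f"{score:.3f}  {comment}")
```

For the windowed image preprocessing used by the released scoring code, see the `compute_image_embeddings` and `slide_window_over_image` functions under Basic Usage.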
✨ Features
- Developed by: BigLab
- Model type: CLIP-style Multi-modal Dual-encoder Transformer
- Language(s) (NLP): English
- License: MIT
| Property | Details |
|----------|---------|
| Model Type | CLIP-style Multi-modal Dual-encoder Transformer |
| Language (NLP) | English |
| License | MIT |
💻 Usage Examples
Basic Usage
```python
import torch
from transformers import CLIPProcessor, CLIPModel

IMG_SIZE = 224
DEVICE = "cpu"  # set to "cuda" to run on a GPU
LOGIT_SCALE = 100
NORMALIZE_SCORING = True

model_path = "uiclip_jitteredwebsites-2-224-paraphrased"
processor_path = "openai/clip-vit-base-patch32"

# Load the UIClip checkpoint and the base CLIP processor.
model = CLIPModel.from_pretrained(model_path)
model = model.eval()
model = model.to(DEVICE)
processor = CLIPProcessor.from_pretrained(processor_path)


def compute_quality_scores(input_list):
    # input_list: list of (description string, PIL.Image screenshot) pairs.
    description_list = ["ui screenshot. well-designed. " + input_item[0] for input_item in input_list]
    img_list = [input_item[1] for input_item in input_list]
    text_embeddings_tensor = compute_description_embeddings(description_list)  # B x D
    img_embeddings_tensor = compute_image_embeddings(img_list)  # B x D

    # Normalize embeddings so dot products become cosine similarities.
    text_embeddings_tensor /= text_embeddings_tensor.norm(dim=-1, keepdim=True)
    img_embeddings_tensor /= img_embeddings_tensor.norm(dim=-1, keepdim=True)

    if NORMALIZE_SCORING:
        # Contrast the "well-designed" prompt with a matching "poor design" prompt.
        text_embeddings_tensor_poor = compute_description_embeddings(
            [d.replace("well-designed. ", "poor design. ") for d in description_list]
        )
        text_embeddings_tensor_poor /= text_embeddings_tensor_poor.norm(dim=-1, keepdim=True)
        text_embeddings_tensor_all = torch.stack((text_embeddings_tensor, text_embeddings_tensor_poor), dim=1)  # B x 2 x D
    else:
        text_embeddings_tensor_all = text_embeddings_tensor.unsqueeze(1)  # B x 1 x D

    img_embeddings_tensor = img_embeddings_tensor.unsqueeze(1)  # B x 1 x D

    scores = (LOGIT_SCALE * img_embeddings_tensor @ text_embeddings_tensor_all.permute(0, 2, 1)).squeeze(1)

    if NORMALIZE_SCORING:
        # Softmax over ("well-designed", "poor design") yields a score in [0, 1].
        scores = scores.softmax(dim=-1)

    return scores[:, 0]


def compute_description_embeddings(descriptions):
    inputs = processor(text=descriptions, return_tensors="pt", padding=True)
    inputs['input_ids'] = inputs['input_ids'].to(DEVICE)
    inputs['attention_mask'] = inputs['attention_mask'].to(DEVICE)
    text_embedding = model.get_text_features(**inputs)
    return text_embedding


def compute_image_embeddings(image_list):
    # Slide a square window over each screenshot and average the per-crop embeddings.
    windowed_batch = [slide_window_over_image(img, IMG_SIZE) for img in image_list]
    inds = []
    for imgi in range(len(windowed_batch)):
        inds.append([imgi for _ in windowed_batch[imgi]])

    processed_batch = [item for sublist in windowed_batch for item in sublist]
    inputs = processor(images=processed_batch, return_tensors="pt")
    inputs['pixel_values'] = inputs['pixel_values'].to(DEVICE)

    with torch.no_grad():
        image_features = model.get_image_features(**inputs)

    # Average the crop embeddings belonging to each original screenshot.
    processed_batch_inds = torch.tensor([item for sublist in inds for item in sublist]).long().to(image_features.device)
    embed_list = []
    for i in range(len(image_list)):
        mask = processed_batch_inds == i
        embed_list.append(image_features[mask].mean(dim=0))
    image_embedding = torch.stack(embed_list, dim=0)
    return image_embedding


def preresize_image(image, image_size):
    # Resize so the shorter side equals image_size while preserving the aspect ratio.
    aspect_ratio = image.width / image.height
    if aspect_ratio > 1:
        image = image.resize((int(aspect_ratio * image_size), image_size))
    else:
        image = image.resize((image_size, int(image_size / aspect_ratio)))
    return image


def slide_window_over_image(input_image, img_size):
    # Cut the resized screenshot into square img_size x img_size crops along its longer dimension.
    input_image = preresize_image(input_image, img_size)
    width, height = input_image.size
    square_size = min(width, height)
    longer_dimension = max(width, height)
    num_steps = (longer_dimension + square_size - 1) // square_size

    if num_steps > 1:
        step_size = (longer_dimension - square_size) // (num_steps - 1)
    else:
        step_size = square_size

    cropped_images = []
    for y in range(0, height - square_size + 1, step_size if height > width else square_size):
        for x in range(0, width - square_size + 1, step_size if width > height else square_size):
            left = x
            upper = y
            right = x + square_size
            lower = y + square_size
            cropped_image = input_image.crop((left, upper, right, lower))
            cropped_images.append(cropped_image)

    return cropped_images


# test_descriptions: list of description strings; test_images: list of PIL.Image screenshots.
prediction_scores = compute_quality_scores(list(zip(test_descriptions, test_images)))
```
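The listing above assumes `test_descriptions` and `test_images` are already defined: parallel lists of description strings and PIL screenshots. A minimal, illustrative driver might look like the following (the file paths and descriptions are placeholders, not files shipped with the model):

```python
from PIL import Image

# Hypothetical inputs: replace the paths and descriptions with your own data.
test_descriptions = [
    "login screen for a mobile banking app",
    "product detail page for an online bookstore",
]
test_images = [
    Image.open("banking_login.png").convert("RGB"),
    Image.open("bookstore_product.png").convert("RGB"),
]

# One score per (description, screenshot) pair; with NORMALIZE_SCORING = True the
# scores fall in [0, 1], and higher values indicate a better-rated design.
prediction_scores = compute_quality_scores(list(zip(test_descriptions, test_images)))
print(prediction_scores)
```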
📄 License
The model is released under the MIT license.