Commit cf2772fa authored by AUTOMATIC1111

Merge branch 'release_candidate'

parents 4afaaf8a 0dfffe53
@@ -74,6 +74,7 @@ module.exports = {
        create_submit_args: "readonly",
        restart_reload: "readonly",
        updateInput: "readonly",
        onEdit: "readonly",
        //extraNetworks.js
        requestGet: "readonly",
        popup: "readonly",
......
name: Bug Report
description: You think something is broken in the UI
title: "[Bug]: "
labels: ["bug-report"]
body:
  - type: markdown
    attributes:
      value: |
        > The title of the bug report should be short and descriptive.
        > Use relevant keywords for searchability.
        > Do not leave it blank, but also do not put an entire error log in it.
  - type: checkboxes
    attributes:
      label: Checklist
      description: |
        Please perform basic debugging to see if extensions or configuration is the cause of the issue.
        Basic debug procedure:
        1. Disable all third-party extensions - check if an extension is the cause
        2. Update extensions and webui - sometimes things just need to be updated
        3. Backup and remove your config.json and ui-config.json - check if the issue is caused by bad configuration
        4. Delete venv with third-party extensions disabled - sometimes extensions might cause wrong libraries to be installed
        5. Try a fresh installation of webui in a different directory - see if a clean installation solves the issue
        Before making an issue report, please check that the issue hasn't been reported recently.
      options:
        - label: The issue exists after disabling all extensions
        - label: The issue exists on a clean installation of webui
        - label: The issue is caused by an extension, but I believe it is caused by a bug in the webui
        - label: The issue exists in the current version of the webui
        - label: The issue has not been reported recently
        - label: The issue has been reported before but has not been fixed yet
  - type: markdown
    attributes:
      value: |
        > Please fill this form with as much information as possible. Don't forget to upload the sysinfo file, answer the "What browsers" question, and provide screenshots if possible.
  - type: textarea
    id: what-did
    attributes:
      label: What happened?
      description: Tell us what happened in a very clear and simple way
      placeholder: |
        txt2img is not working as intended.
    validations:
      required: true
  - type: textarea
@@ -27,9 +47,9 @@ body:
    attributes:
      label: Steps to reproduce the problem
      description: Please provide us with precise step by step instructions on how to reproduce the bug
      placeholder: |
        1. Go to ...
        2. Press ...
        3. ...
    validations:
      required: true
@@ -38,13 +58,8 @@ body:
    attributes:
      label: What should have happened?
      description: Tell us what you think the normal behavior should be
      placeholder: |
        WebUI should ...
    validations:
      required: true
  - type: dropdown
@@ -58,12 +73,25 @@ body:
        - Brave
        - Apple Safari
        - Microsoft Edge
        - Android
        - iOS
        - Other
  - type: textarea
    id: sysinfo
    attributes:
      label: Sysinfo
      description: System info file, generated by WebUI. You can generate it in settings, on the Sysinfo page. Drag the file into the field to upload it. If you submit your report without including the sysinfo file, the report will be closed. If needed, review the report to make sure it includes no personal information you don't want to share. If you can't start WebUI, you can use --dump-sysinfo commandline argument to generate the file.
      placeholder: |
        1. Go to WebUI Settings -> Sysinfo -> Download system info.
        If WebUI fails to launch, use --dump-sysinfo commandline argument to generate the file
        2. Upload the Sysinfo as an attached file. Do NOT paste it in as plain text.
    validations:
      required: true
  - type: textarea
    id: logs
    attributes:
      label: Console logs
      description: Please provide **full** cmd/terminal logs from the moment you started UI to the end of it, after the bug occurred. If it's very long, provide a link to pastebin or similar service.
      render: Shell
    validations:
      required: true
@@ -71,4 +99,7 @@ body:
    id: misc
    attributes:
      label: Additional information
      description: |
        Please provide us with any relevant additional info or context.
        Examples:
        I have updated my GPU driver recently.
@@ -20,7 +20,7 @@ jobs:
      # not to have GHA download an (at the time of writing) 4 GB cache
      # of PyTorch and other dependencies.
      - name: Install Ruff
        run: pip install ruff==0.1.6
      - name: Run Ruff
        run: ruff .
  lint-js:
......
This diff is collapsed.
@@ -88,9 +88,10 @@ A browser interface based on Gradio library for Stable Diffusion.
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimensions must be multiples of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
- [Segmind Stable Diffusion](https://huggingface.co/segmind/SSD-1B) support

## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for:
@@ -103,7 +104,7 @@ Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)

### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
@@ -120,7 +121,9 @@ Alternatively, use online services (like Google Colab):
# Debian-based:
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
# Red Hat-based:
sudo dnf install wget git python3 gperftools-libs libglvnd-glx
# openSUSE-based:
sudo zypper install wget git python3 libtcmalloc4 libglvnd
# Arch-based:
sudo pacman -S wget git python3
```
@@ -146,7 +149,7 @@ For the purposes of getting Google and other search engines to crawl the wiki, h
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.

- Stable Diffusion - https://github.com/Stability-AI/stablediffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
@@ -173,5 +176,6 @@ Licenses for borrowed code can be found in `Settings -> Licenses` screen, and al
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Restart sampling - lambertae - https://github.com/Newbeeer/diffusion_restart_sampling
- Hypertile - tfernd - https://github.com/tfernd/HyperTile
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
model:
  base_learning_rate: 1.0e-04
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: "jpg"
    cond_stage_key: "txt"
    image_size: 64
    channels: 4
    cond_stage_trainable: false # Note: different from the one we trained before
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: False

    scheduler_config: # 10000 warmup steps
      target: ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 10000 ]
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 4
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_head_channels: 64
        use_spatial_transformer: True
        use_linear_in_transformer: True
        transformer_depth: 1
        context_dim: 1024
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: modules.xlmr_m18.BertSeriesModelWithTransformation
      params:
        name: "XLMR-Large"
import sys
import copy
import logging


class ColoredFormatter(logging.Formatter):
    COLORS = {
        "DEBUG": "\033[0;36m",  # CYAN
        "INFO": "\033[0;32m",  # GREEN
        "WARNING": "\033[0;33m",  # YELLOW
        "ERROR": "\033[0;31m",  # RED
        "CRITICAL": "\033[0;37;41m",  # WHITE ON RED
        "RESET": "\033[0m",  # RESET COLOR
    }

    def format(self, record):
        colored_record = copy.copy(record)
        levelname = colored_record.levelname
        seq = self.COLORS.get(levelname, self.COLORS["RESET"])
        colored_record.levelname = f"{seq}{levelname}{self.COLORS['RESET']}"
        return super().format(colored_record)


logger = logging.getLogger("lora")
logger.propagate = False


if not logger.handlers:
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(
        ColoredFormatter("[%(name)s]-%(levelname)s: %(message)s")
    )
    logger.addHandler(handler)
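A minimal usage sketch for this logger; the call site is hypothetical, but the import matches how networks.py pulls it in below:

# Sketch: emit records through the colored "lora" logger defined above.
from lora_logger import logger

logger.setLevel("DEBUG")           # default effective level is WARNING
logger.debug("probing LoRA keys")  # prints [lora]-DEBUG: ... with DEBUG in cyan
logger.warning("fallback weight dtype used")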
@@ -19,3 +19,50 @@ def rebuild_cp_decomposition(up, down, mid):
    up = up.reshape(up.size(0), -1)
    down = down.reshape(down.size(0), -1)
    return torch.einsum('n m k l, i n, m j -> i j k l', mid, up, down)
# copied from https://github.com/KohakuBlueleaf/LyCORIS/blob/dev/lycoris/modules/lokr.py
def factorization(dimension: int, factor: int = -1) -> tuple[int, int]:
    '''
    Return a tuple of two values: the input dimension decomposed by the number closest to factor;
    the second value is greater than or equal to the first value.

    In LoRA with Kronecker product, the first value is the value for the weight scale
    and the second value is the value for the weight.

    Because of the non-commutative property, A⊗B ≠ B⊗A, the meaning of the two matrices is slightly different.

    examples)
    factor
         -1              2                4               8               16 ...
    127 -> 1, 127   127 -> 1, 127    127 -> 1, 127   127 -> 1, 127   127 -> 1, 127
    128 -> 8, 16    128 -> 2, 64     128 -> 4, 32    128 -> 8, 16    128 -> 8, 16
    250 -> 10, 25   250 -> 2, 125    250 -> 2, 125   250 -> 5, 50    250 -> 10, 25
    360 -> 8, 45    360 -> 2, 180    360 -> 4, 90    360 -> 8, 45    360 -> 12, 30
    512 -> 16, 32   512 -> 2, 256    512 -> 4, 128   512 -> 8, 64    512 -> 16, 32
    1024 -> 32, 32  1024 -> 2, 512   1024 -> 4, 256  1024 -> 8, 128  1024 -> 16, 64
    '''
    if factor > 0 and (dimension % factor) == 0:
        m = factor
        n = dimension // factor
        if m > n:
            n, m = m, n
        return m, n
    if factor < 0:
        factor = dimension
    m, n = 1, dimension
    length = m + n
    while m < n:
        new_m = m + 1
        while dimension % new_m != 0:
            new_m += 1
        new_n = dimension // new_m
        if new_m + new_n > length or new_m > factor:
            break
        else:
            m, n = new_m, new_n
    if m > n:
        n, m = m, n
    return m, n
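A few sanity checks of factorization's behavior, mirroring the docstring table above (illustrative only):

# Sketch: m <= n always holds; with factor=-1 the most square-like split wins.
assert factorization(1024) == (32, 32)   # factor=-1: closest to a square
assert factorization(128, 4) == (4, 32)  # factor divides the dimension exactly
assert factorization(250, 8) == (5, 50)  # 8 does not divide 250; best divisor <= 8
assert factorization(127) == (1, 127)    # primes only decompose trivially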
@@ -93,6 +93,7 @@ class Network: # LoraModule
        self.unet_multiplier = 1.0
        self.dyn_dim = None
        self.modules = {}
        self.bundle_embeddings = {}
        self.mtime = None
        self.mentioned_name = None
......
import network


class ModuleTypeGLora(network.ModuleType):
    def create_module(self, net: network.Network, weights: network.NetworkWeights):
        if all(x in weights.w for x in ["a1.weight", "a2.weight", "alpha", "b1.weight", "b2.weight"]):
            return NetworkModuleGLora(net, weights)

        return None


# adapted from https://github.com/KohakuBlueleaf/LyCORIS
class NetworkModuleGLora(network.NetworkModule):
    def __init__(self, net: network.Network, weights: network.NetworkWeights):
        super().__init__(net, weights)

        if hasattr(self.sd_module, 'weight'):
            self.shape = self.sd_module.weight.shape

        self.w1a = weights.w["a1.weight"]
        self.w1b = weights.w["b1.weight"]
        self.w2a = weights.w["a2.weight"]
        self.w2b = weights.w["b2.weight"]

    def calc_updown(self, orig_weight):
        w1a = self.w1a.to(orig_weight.device, dtype=orig_weight.dtype)
        w1b = self.w1b.to(orig_weight.device, dtype=orig_weight.dtype)
        w2a = self.w2a.to(orig_weight.device, dtype=orig_weight.dtype)
        w2b = self.w2b.to(orig_weight.device, dtype=orig_weight.dtype)

        output_shape = [w1a.size(0), w1b.size(1)]
        updown = ((w2b @ w1b) + ((orig_weight @ w2a) @ w1a))

        return self.finalize_updown(updown, orig_weight, output_shape)
import torch
import network
from lyco_helpers import factorization
from einops import rearrange


class ModuleTypeOFT(network.ModuleType):
    def create_module(self, net: network.Network, weights: network.NetworkWeights):
        if all(x in weights.w for x in ["oft_blocks"]) or all(x in weights.w for x in ["oft_diag"]):
            return NetworkModuleOFT(net, weights)

        return None


# Supports both kohya-ss' implementation of COFT https://github.com/kohya-ss/sd-scripts/blob/main/networks/oft.py
# and KohakuBlueleaf's implementation of OFT/COFT https://github.com/KohakuBlueleaf/LyCORIS/blob/dev/lycoris/modules/diag_oft.py
class NetworkModuleOFT(network.NetworkModule):
    def __init__(self, net: network.Network, weights: network.NetworkWeights):
        super().__init__(net, weights)

        self.lin_module = None
        self.org_module: list[torch.nn.Module] = [self.sd_module]

        self.scale = 1.0

        # kohya-ss
        if "oft_blocks" in weights.w.keys():
            self.is_kohya = True
            self.oft_blocks = weights.w["oft_blocks"]  # (num_blocks, block_size, block_size)
            self.alpha = weights.w["alpha"]  # alpha is constraint
            self.dim = self.oft_blocks.shape[0]  # lora dim
        # LyCORIS
        elif "oft_diag" in weights.w.keys():
            self.is_kohya = False
            self.oft_blocks = weights.w["oft_diag"]
            # self.alpha is unused
            self.dim = self.oft_blocks.shape[1]  # (num_blocks, block_size, block_size)

        is_linear = type(self.sd_module) in [torch.nn.Linear, torch.nn.modules.linear.NonDynamicallyQuantizableLinear]
        is_conv = type(self.sd_module) in [torch.nn.Conv2d]
        is_other_linear = type(self.sd_module) in [torch.nn.MultiheadAttention]  # unsupported

        if is_linear:
            self.out_dim = self.sd_module.out_features
        elif is_conv:
            self.out_dim = self.sd_module.out_channels
        elif is_other_linear:
            self.out_dim = self.sd_module.embed_dim

        if self.is_kohya:
            self.constraint = self.alpha * self.out_dim
            self.num_blocks = self.dim
            self.block_size = self.out_dim // self.dim
        else:
            self.constraint = None
            self.block_size, self.num_blocks = factorization(self.out_dim, self.dim)

    def calc_updown(self, orig_weight):
        oft_blocks = self.oft_blocks.to(orig_weight.device, dtype=orig_weight.dtype)
        eye = torch.eye(self.block_size, device=self.oft_blocks.device)

        if self.is_kohya:
            block_Q = oft_blocks - oft_blocks.transpose(1, 2)  # ensure skew-symmetric orthogonal matrix
            norm_Q = torch.norm(block_Q.flatten())
            new_norm_Q = torch.clamp(norm_Q, max=self.constraint)
            block_Q = block_Q * ((new_norm_Q + 1e-8) / (norm_Q + 1e-8))
            oft_blocks = torch.matmul(eye + block_Q, (eye - block_Q).float().inverse())

        R = oft_blocks.to(orig_weight.device, dtype=orig_weight.dtype)

        # This errors out for MultiheadAttention, might need to be handled up-stream
        merged_weight = rearrange(orig_weight, '(k n) ... -> k n ...', k=self.num_blocks, n=self.block_size)
        merged_weight = torch.einsum(
            'k n m, k n ... -> k m ...',
            R,
            merged_weight
        )
        merged_weight = rearrange(merged_weight, 'k m ... -> (k m) ...')

        updown = merged_weight.to(orig_weight.device, dtype=orig_weight.dtype) - orig_weight
        output_shape = orig_weight.shape
        return self.finalize_updown(updown, orig_weight, output_shape)
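The kohya-ss branch above builds each block R via a Cayley transform, which is what keeps the per-block weight update a pure rotation; a standalone sketch of that property (illustrative, not part of the extension):

import torch

# Sketch: a random skew-symmetric Q mapped through the Cayley transform
# R = (I + Q) @ inv(I - Q) yields an orthogonal matrix (R @ R^T == I),
# so applying R to a weight block rotates it without changing its norm.
A = torch.randn(8, 8)
Q = A - A.T                                    # skew-symmetric: Q^T == -Q
I = torch.eye(8)
R = (I + Q) @ torch.linalg.inv(I - Q)          # I - Q is always invertible here
print(torch.allclose(R @ R.T, I, atol=1e-5))   # expected: True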
@@ -5,16 +5,21 @@ import re
import lora_patches
import network
import network_lora
import network_glora
import network_hada
import network_ia3
import network_lokr
import network_full
import network_norm
import network_oft

import torch
from typing import Union

from modules import shared, devices, sd_models, errors, scripts, sd_hijack
import modules.textual_inversion.textual_inversion as textual_inversion

from lora_logger import logger

module_types = [
    network_lora.ModuleTypeLora(),
@@ -23,6 +28,8 @@ module_types = [
    network_lokr.ModuleTypeLokr(),
    network_full.ModuleTypeFull(),
    network_norm.ModuleTypeNorm(),
    network_glora.ModuleTypeGLora(),
    network_oft.ModuleTypeOFT(),
]
@@ -149,9 +156,20 @@ def load_network(name, network_on_disk):
    is_sd2 = 'model_transformer_resblocks' in shared.sd_model.network_layer_mapping

    matched_networks = {}
    bundle_embeddings = {}

    for key_network, weight in sd.items():
        key_network_without_network_parts, _, network_part = key_network.partition(".")

        if key_network_without_network_parts == "bundle_emb":
            emb_name, vec_name = network_part.split(".", 1)
            emb_dict = bundle_embeddings.get(emb_name, {})
            if vec_name.split('.')[0] == 'string_to_param':
                _, k2 = vec_name.split('.', 1)
                emb_dict['string_to_param'] = {k2: weight}
            else:
                emb_dict[vec_name] = weight
            bundle_embeddings[emb_name] = emb_dict

        key = convert_diffusers_name_to_compvis(key_network_without_network_parts, is_sd2)
        sd_module = shared.sd_model.network_layer_mapping.get(key, None)
@@ -174,6 +192,17 @@ def load_network(name, network_on_disk):
            key = key_network_without_network_parts.replace("lora_te1_text_model", "transformer_text_model")
            sd_module = shared.sd_model.network_layer_mapping.get(key, None)

        # kohya_ss OFT module
        elif sd_module is None and "oft_unet" in key_network_without_network_parts:
            key = key_network_without_network_parts.replace("oft_unet", "diffusion_model")
            sd_module = shared.sd_model.network_layer_mapping.get(key, None)

        # KohakuBlueLeaf OFT module
        if sd_module is None and "oft_diag" in key:
            key = key_network_without_network_parts.replace("lora_unet", "diffusion_model")
            key = key_network_without_network_parts.replace("lora_te1_text_model", "0_transformer_text_model")
            sd_module = shared.sd_model.network_layer_mapping.get(key, None)

        if sd_module is None:
            keys_failed_to_match[key_network] = key
            continue
@@ -195,6 +224,14 @@ def load_network(name, network_on_disk):

        net.modules[key] = net_module

    embeddings = {}
    for emb_name, data in bundle_embeddings.items():
        embedding = textual_inversion.create_embedding_from_data(data, emb_name, filename=network_on_disk.filename + "/" + emb_name)
        embedding.loaded = None
        embeddings[emb_name] = embedding

    net.bundle_embeddings = embeddings

    if keys_failed_to_match:
        logging.debug(f"Network {network_on_disk.filename} didn't match keys: {keys_failed_to_match}")
@@ -210,11 +247,15 @@ def purge_networks_from_memory():

def load_networks(names, te_multipliers=None, unet_multipliers=None, dyn_dims=None):
    emb_db = sd_hijack.model_hijack.embedding_db
    already_loaded = {}

    for net in loaded_networks:
        if net.name in names:
            already_loaded[net.name] = net
            for emb_name, embedding in net.bundle_embeddings.items():
                if embedding.loaded:
                    emb_db.register_embedding_by_name(None, shared.sd_model, emb_name)

    loaded_networks.clear()
@@ -257,6 +298,21 @@ def load_networks(names, te_multipliers=None, unet_multipliers=None, dyn_dims=No
        net.dyn_dim = dyn_dims[i] if dyn_dims else 1.0
        loaded_networks.append(net)

        for emb_name, embedding in net.bundle_embeddings.items():
            if embedding.loaded is None and emb_name in emb_db.word_embeddings:
                logger.warning(
                    f'Skip bundle embedding: "{emb_name}"'
                    ' as it was already loaded from embeddings folder'
                )
                continue

            embedding.loaded = False
            if emb_db.expected_shape == -1 or emb_db.expected_shape == embedding.shape:
                embedding.loaded = True
                emb_db.register_embedding(embedding, shared.sd_model)
            else:
                emb_db.skipped_embeddings[emb_name] = embedding

    if failed_to_load_networks:
        sd_hijack.model_hijack.comments.append("Networks not found: " + ", ".join(failed_to_load_networks))
@@ -418,6 +474,7 @@ def network_forward(module, input, original_forward):
def network_reset_cached_weight(self: Union[torch.nn.Conv2d, torch.nn.Linear]):
    self.network_current_names = ()
    self.network_weights_backup = None
    self.network_bias_backup = None
def network_Linear_forward(self, input):
@@ -564,6 +621,7 @@ extra_network_lora = None
available_networks = {}
available_network_aliases = {}
loaded_networks = []
loaded_bundle_embeddings = {}
networks_in_memory = {}
available_network_hash_lookup = {}
forbidden_network_aliases = {}
......
@@ -17,6 +17,8 @@ class ExtraNetworksPageLora(ui_extra_networks.ExtraNetworksPage):
    def create_item(self, name, index=None, enable_filter=True):
        lora_on_disk = networks.available_networks.get(name)
        if lora_on_disk is None:
            return

        path, ext = os.path.splitext(lora_on_disk.filename)
@@ -66,9 +68,10 @@ class ExtraNetworksPageLora(ui_extra_networks.ExtraNetworksPage):
        return item

    def list_items(self):
        # instantiate a list to protect against concurrent modification
        names = list(networks.available_networks)
        for index, name in enumerate(names):
            item = self.create_item(name, index)
            if item is not None:
                yield item
......
@@ -23,11 +23,12 @@ class ExtraOptionsSection(scripts.Script):
        self.setting_names = []
        self.infotext_fields = []
        extra_options = shared.opts.extra_options_img2img if is_img2img else shared.opts.extra_options_txt2img
        elem_id_tabname = "extra_options_" + ("img2img" if is_img2img else "txt2img")

        mapping = {k: v for v, k in generation_parameters_copypaste.infotext_to_setting_name_mapping}

        with gr.Blocks() as interface:
            with gr.Accordion("Options", open=False, elem_id=elem_id_tabname) if shared.opts.extra_options_accordion and extra_options else gr.Group(elem_id=elem_id_tabname):

                row_count = math.ceil(len(extra_options) / shared.opts.extra_options_cols)
@@ -64,11 +65,14 @@ class ExtraOptionsSection(scripts.Script):
                p.override_settings[name] = value

shared.options_templates.update(shared.options_section(('settings_in_ui', "Settings in UI", "ui"), {
    "settings_in_ui": shared.OptionHTML("""
This page allows you to add some settings to the main interface of txt2img and img2img tabs.
"""),
    "extra_options_txt2img": shared.OptionInfo([], "Settings for txt2img", ui_components.DropdownMulti, lambda: {"choices": list(shared.opts.data_labels.keys())}).js("info", "settingsHintsShowQuicksettings").info("setting entries that also appear in txt2img interfaces").needs_reload_ui(),
    "extra_options_img2img": shared.OptionInfo([], "Settings for img2img", ui_components.DropdownMulti, lambda: {"choices": list(shared.opts.data_labels.keys())}).js("info", "settingsHintsShowQuicksettings").info("setting entries that also appear in img2img interfaces").needs_reload_ui(),
    "extra_options_cols": shared.OptionInfo(1, "Number of columns for added settings", gr.Slider, {"step": 1, "minimum": 1, "maximum": 20}).info("displayed amount will depend on the actual browser window width").needs_reload_ui(),
    "extra_options_accordion": shared.OptionInfo(False, "Place added settings into an accordion").needs_reload_ui()
}))
This diff is collapsed.
import hypertile
from modules import scripts, script_callbacks, shared

from scripts.hypertile_xyz import add_axis_options


class ScriptHypertile(scripts.Script):
    name = "Hypertile"

    def title(self):
        return self.name

    def show(self, is_img2img):
        return scripts.AlwaysVisible

    def process(self, p, *args):
        hypertile.set_hypertile_seed(p.all_seeds[0])

        configure_hypertile(p.width, p.height, enable_unet=shared.opts.hypertile_enable_unet)

        self.add_infotext(p)

    def before_hr(self, p, *args):
        enable = shared.opts.hypertile_enable_unet_secondpass or shared.opts.hypertile_enable_unet

        # exclusive hypertile seed for the second pass
        if enable:
            hypertile.set_hypertile_seed(p.all_seeds[0])

        configure_hypertile(p.hr_upscale_to_x, p.hr_upscale_to_y, enable_unet=enable)

        if enable and not shared.opts.hypertile_enable_unet:
            p.extra_generation_params["Hypertile U-Net second pass"] = True

            self.add_infotext(p, add_unet_params=True)

    def add_infotext(self, p, add_unet_params=False):
        def option(name):
            value = getattr(shared.opts, name)
            default_value = shared.opts.get_default(name)
            return None if value == default_value else value

        if shared.opts.hypertile_enable_unet:
            p.extra_generation_params["Hypertile U-Net"] = True

        if shared.opts.hypertile_enable_unet or add_unet_params:
            p.extra_generation_params["Hypertile U-Net max depth"] = option('hypertile_max_depth_unet')
            p.extra_generation_params["Hypertile U-Net max tile size"] = option('hypertile_max_tile_unet')
            p.extra_generation_params["Hypertile U-Net swap size"] = option('hypertile_swap_size_unet')

        if shared.opts.hypertile_enable_vae:
            p.extra_generation_params["Hypertile VAE"] = True
            p.extra_generation_params["Hypertile VAE max depth"] = option('hypertile_max_depth_vae')
            p.extra_generation_params["Hypertile VAE max tile size"] = option('hypertile_max_tile_vae')
            p.extra_generation_params["Hypertile VAE swap size"] = option('hypertile_swap_size_vae')


def configure_hypertile(width, height, enable_unet=True):
    hypertile.hypertile_hook_model(
        shared.sd_model.first_stage_model,
        width,
        height,
        swap_size=shared.opts.hypertile_swap_size_vae,
        max_depth=shared.opts.hypertile_max_depth_vae,
        tile_size_max=shared.opts.hypertile_max_tile_vae,
        enable=shared.opts.hypertile_enable_vae,
    )

    hypertile.hypertile_hook_model(
        shared.sd_model.model,
        width,
        height,
        swap_size=shared.opts.hypertile_swap_size_unet,
        max_depth=shared.opts.hypertile_max_depth_unet,
        tile_size_max=shared.opts.hypertile_max_tile_unet,
        enable=enable_unet,
        is_sdxl=shared.sd_model.is_sdxl
    )


def on_ui_settings():
    import gradio as gr

    options = {
        "hypertile_explanation": shared.OptionHTML("""
    <a href='https://github.com/tfernd/HyperTile'>Hypertile</a> optimizes the self-attention layer within U-Net and VAE models,
    resulting in a reduction in computation time ranging from 1 to 4 times. The larger the generated image is, the greater the
    benefit.
    """),

        "hypertile_enable_unet": shared.OptionInfo(False, "Enable Hypertile U-Net", infotext="Hypertile U-Net").info("enables hypertile for all modes, including hires fix second pass; noticeable change in details of the generated picture"),
        "hypertile_enable_unet_secondpass": shared.OptionInfo(False, "Enable Hypertile U-Net for hires fix second pass", infotext="Hypertile U-Net second pass").info("enables hypertile just for hires fix second pass - regardless of whether the above setting is enabled"),
        "hypertile_max_depth_unet": shared.OptionInfo(3, "Hypertile U-Net max depth", gr.Slider, {"minimum": 0, "maximum": 3, "step": 1}, infotext="Hypertile U-Net max depth").info("larger = more neural network layers affected; minor effect on performance"),
        "hypertile_max_tile_unet": shared.OptionInfo(256, "Hypertile U-Net max tile size", gr.Slider, {"minimum": 0, "maximum": 512, "step": 16}, infotext="Hypertile U-Net max tile size").info("larger = worse performance"),
        "hypertile_swap_size_unet": shared.OptionInfo(3, "Hypertile U-Net swap size", gr.Slider, {"minimum": 0, "maximum": 64, "step": 1}, infotext="Hypertile U-Net swap size"),
        "hypertile_enable_vae": shared.OptionInfo(False, "Enable Hypertile VAE", infotext="Hypertile VAE").info("minimal change in the generated picture"),
        "hypertile_max_depth_vae": shared.OptionInfo(3, "Hypertile VAE max depth", gr.Slider, {"minimum": 0, "maximum": 3, "step": 1}, infotext="Hypertile VAE max depth"),
        "hypertile_max_tile_vae": shared.OptionInfo(128, "Hypertile VAE max tile size", gr.Slider, {"minimum": 0, "maximum": 512, "step": 16}, infotext="Hypertile VAE max tile size"),
        "hypertile_swap_size_vae": shared.OptionInfo(3, "Hypertile VAE swap size", gr.Slider, {"minimum": 0, "maximum": 64, "step": 1}, infotext="Hypertile VAE swap size"),
    }

    for name, opt in options.items():
        opt.section = ('hypertile', "Hypertile")
        shared.opts.add_option(name, opt)


script_callbacks.on_ui_settings(on_ui_settings)
script_callbacks.on_before_ui(add_axis_options)
from modules import scripts
from modules.shared import opts

xyz_grid = [x for x in scripts.scripts_data if x.script_class.__module__ == "xyz_grid.py"][0].module


def int_applier(value_name: str, min_range: int = -1, max_range: int = -1):
    """
    Returns a function that applies the given value to the given value_name in opts.data.
    """
    def validate(value_name: str, value: str):
        value = int(value)
        # validate value
        if not min_range == -1:
            assert value >= min_range, f"Value {value} for {value_name} must be greater than or equal to {min_range}"
        if not max_range == -1:
            assert value <= max_range, f"Value {value} for {value_name} must be less than or equal to {max_range}"

    def apply_int(p, x, xs):
        validate(value_name, x)
        opts.data[value_name] = int(x)
    return apply_int


def bool_applier(value_name: str):
    """
    Returns a function that applies the given value to the given value_name in opts.data.
    """
    def validate(value_name: str, value: str):
        assert value.lower() in ["true", "false"], f"Value {value} for {value_name} must be either true or false"

    def apply_bool(p, x, xs):
        validate(value_name, x)
        value_boolean = x.lower() == "true"
        opts.data[value_name] = value_boolean
    return apply_bool


def add_axis_options():
    extra_axis_options = [
        xyz_grid.AxisOption("[Hypertile] Unet First pass Enabled", str, bool_applier("hypertile_enable_unet"), choices=xyz_grid.boolean_choice(reverse=True)),
        xyz_grid.AxisOption("[Hypertile] Unet Second pass Enabled", str, bool_applier("hypertile_enable_unet_secondpass"), choices=xyz_grid.boolean_choice(reverse=True)),
        xyz_grid.AxisOption("[Hypertile] Unet Max Depth", int, int_applier("hypertile_max_depth_unet", 0, 3), choices=lambda: [str(x) for x in range(4)]),
        xyz_grid.AxisOption("[Hypertile] Unet Max Tile Size", int, int_applier("hypertile_max_tile_unet", 0, 512)),
        xyz_grid.AxisOption("[Hypertile] Unet Swap Size", int, int_applier("hypertile_swap_size_unet", 0, 64)),
        xyz_grid.AxisOption("[Hypertile] VAE Enabled", str, bool_applier("hypertile_enable_vae"), choices=xyz_grid.boolean_choice(reverse=True)),
        xyz_grid.AxisOption("[Hypertile] VAE Max Depth", int, int_applier("hypertile_max_depth_vae", 0, 3), choices=lambda: [str(x) for x in range(4)]),
        xyz_grid.AxisOption("[Hypertile] VAE Max Tile Size", int, int_applier("hypertile_max_tile_vae", 0, 512)),
        xyz_grid.AxisOption("[Hypertile] VAE Swap Size", int, int_applier("hypertile_swap_size_vae", 0, 64)),
    ]
    set_a = {opt.label for opt in xyz_grid.axis_options}
    set_b = {opt.label for opt in extra_axis_options}
    if set_a.intersection(set_b):
        return

    xyz_grid.axis_options.extend(extra_axis_options)
@@ -12,6 +12,8 @@ function isMobile() {
}

function reportWindowSize() {
    if (gradioApp().querySelector('.toprow-compact-tools')) return; // not applicable for compact prompt layout

    var currentlyMobile = isMobile();
    if (currentlyMobile == isSetupForMobile) return;

    isSetupForMobile = currentlyMobile;
......
@@ -119,7 +119,7 @@ window.addEventListener('paste', e => {
    }

    const firstFreeImageField = visibleImageFields
        .filter(el => !el.querySelector('img'))?.[0];

    dropReplaceImage(
        firstFreeImageField ?
......
@@ -18,37 +18,43 @@ function keyupEditAttention(event) {
        const before = text.substring(0, selectionStart);
        let beforeParen = before.lastIndexOf(OPEN);
        if (beforeParen == -1) return false;

        let beforeClosingParen = before.lastIndexOf(CLOSE);
        if (beforeClosingParen != -1 && beforeClosingParen > beforeParen) return false;

        // Find closing parenthesis around current cursor
        const after = text.substring(selectionStart);
        let afterParen = after.indexOf(CLOSE);
        if (afterParen == -1) return false;

        let afterOpeningParen = after.indexOf(OPEN);
        if (afterOpeningParen != -1 && afterOpeningParen < afterParen) return false;

        // Set the selection to the text between the parenthesis
        const parenContent = text.substring(beforeParen + 1, selectionStart + afterParen);
        if (/.*:-?[\d.]+/s.test(parenContent)) {
            const lastColon = parenContent.lastIndexOf(":");
            selectionStart = beforeParen + 1;
            selectionEnd = selectionStart + lastColon;
        } else {
            selectionStart = beforeParen + 1;
            selectionEnd = selectionStart + parenContent.length;
        }

        target.setSelectionRange(selectionStart, selectionEnd);
        return true;
    }

    function selectCurrentWord() {
        if (selectionStart !== selectionEnd) return false;
        const whitespace_delimiters = {"Tab": "\t", "Carriage Return": "\r", "Line Feed": "\n"};
        let delimiters = opts.keyedit_delimiters;

        for (let i of opts.keyedit_delimiters_whitespace) {
            delimiters += whitespace_delimiters[i];
        }

        // seek backward to find beginning
        while (!delimiters.includes(text[selectionStart - 1]) && selectionStart > 0) {
            selectionStart--;
        }
@@ -63,7 +69,7 @@ function keyupEditAttention(event) {
    }

    // If the user hasn't selected anything, let's select their current parenthesis block or word
    if (!selectCurrentParenthesisBlock('<', '>') && !selectCurrentParenthesisBlock('(', ')') && !selectCurrentParenthesisBlock('[', ']')) {
        selectCurrentWord();
    }
@@ -71,33 +77,54 @@ function keyupEditAttention(event) {
    var closeCharacter = ')';
    var delta = opts.keyedit_precision_attention;
    var start = selectionStart > 0 ? text[selectionStart - 1] : "";
    var end = text[selectionEnd];

    if (start == '<') {
        closeCharacter = '>';
        delta = opts.keyedit_precision_extra;
    } else if (start == '(' && end == ')' || start == '[' && end == ']') { // convert old-style (((emphasis)))
        let numParen = 0;

        while (text[selectionStart - numParen - 1] == start && text[selectionEnd + numParen] == end) {
            numParen++;
        }

        if (start == "[") {
            weight = (1 / 1.1) ** numParen;
        } else {
            weight = 1.1 ** numParen;
        }

        weight = Math.round(weight / opts.keyedit_precision_attention) * opts.keyedit_precision_attention;

        text = text.slice(0, selectionStart - numParen) + "(" + text.slice(selectionStart, selectionEnd) + ":" + weight + ")" + text.slice(selectionEnd + numParen);
        selectionStart -= numParen - 1;
        selectionEnd -= numParen - 1;
    } else if (start != '(') {

        // do not include spaces at the end
        while (selectionEnd > selectionStart && text[selectionEnd - 1] == ' ') {
            selectionEnd--;
        }

        if (selectionStart == selectionEnd) {
            return;
        }

        text = text.slice(0, selectionStart) + "(" + text.slice(selectionStart, selectionEnd) + ":1.0)" + text.slice(selectionEnd);

        selectionStart++;
        selectionEnd++;
    }

    if (text[selectionEnd] != ':') return;
    var weightLength = text.slice(selectionEnd + 1).indexOf(closeCharacter) + 1;
    var weight = parseFloat(text.slice(selectionEnd + 1, selectionEnd + weightLength));
    if (isNaN(weight)) return;

    weight += isPlus ? delta : -delta;
    weight = parseFloat(weight.toPrecision(12));
    if (Number.isInteger(weight)) weight += ".0";

    if (closeCharacter == ')' && weight == 1) {
        var endParenPos = text.substring(selectionEnd).indexOf(')');
@@ -105,7 +132,7 @@ function keyupEditAttention(event) {
        selectionStart--;
        selectionEnd--;
    } else {
        text = text.slice(0, selectionEnd + 1) + weight + text.slice(selectionEnd + weightLength);
    }

    target.focus();
......
@@ -26,8 +26,9 @@ function setupExtraNetworksForTab(tabname) {
    var refresh = gradioApp().getElementById(tabname + '_extra_refresh');
    var showDirsDiv = gradioApp().getElementById(tabname + '_extra_show_dirs');
    var showDirs = gradioApp().querySelector('#' + tabname + '_extra_show_dirs input');
    var promptContainer = gradioApp().querySelector('.prompt-container-compact#' + tabname + '_prompt_container');
    var negativePrompt = gradioApp().querySelector('#' + tabname + '_neg_prompt');

    tabs.appendChild(searchDiv);
    tabs.appendChild(sort);
    tabs.appendChild(sortOrder);
@@ -49,20 +50,23 @@ function setupExtraNetworksForTab(tabname) {
            elem.style.display = visible ? "" : "none";
        });

        applySort();
    };

    var applySort = function() {
        var cards = gradioApp().querySelectorAll('#' + tabname + '_extra_tabs div.card');
        var reverse = sortOrder.classList.contains("sortReverse");
        var sortKey = sort.querySelector("input").value.toLowerCase().replace("sort", "").replaceAll(" ", "_").replace(/_+$/, "").trim() || "name";
        sortKey = "sort" + sortKey.charAt(0).toUpperCase() + sortKey.slice(1);
        var sortKeyStore = sortKey + "-" + (reverse ? "Descending" : "Ascending") + "-" + cards.length;

        if (sortKeyStore == sort.dataset.sortkey) {
            return;
        }
        sort.dataset.sortkey = sortKeyStore;

        cards.forEach(function(card) {
            card.originalParentElement = card.parentElement;
        });
@@ -88,15 +92,13 @@ function setupExtraNetworksForTab(tabname) {
    };

    search.addEventListener("input", applyFilter);
    sortOrder.addEventListener("click", function() {
        sortOrder.classList.toggle("sortReverse");
        applySort();
    });
    applyFilter();

    extraNetworksApplySort[tabname] = applySort;
    extraNetworksApplyFilter[tabname] = applyFilter;

    var showDirsUpdate = function() {
@@ -109,11 +111,51 @@ function setupExtraNetworksForTab(tabname) {
    showDirsUpdate();
}
function extraNetworksMovePromptToTab(tabname, id, showPrompt, showNegativePrompt) {
    if (!gradioApp().querySelector('.toprow-compact-tools')) return; // only applicable for compact prompt layout

    var promptContainer = gradioApp().getElementById(tabname + '_prompt_container');
    var prompt = gradioApp().getElementById(tabname + '_prompt_row');
    var negPrompt = gradioApp().getElementById(tabname + '_neg_prompt_row');
    var elem = id ? gradioApp().getElementById(id) : null;

    if (showNegativePrompt && elem) {
        elem.insertBefore(negPrompt, elem.firstChild);
    } else {
        promptContainer.insertBefore(negPrompt, promptContainer.firstChild);
    }

    if (showPrompt && elem) {
        elem.insertBefore(prompt, elem.firstChild);
    } else {
        promptContainer.insertBefore(prompt, promptContainer.firstChild);
    }

    if (elem) {
        elem.classList.toggle('extra-page-prompts-active', showNegativePrompt || showPrompt);
    }
}


function extraNetworksUrelatedTabSelected(tabname) { // called from python when user selects an unrelated tab (generate)
    extraNetworksMovePromptToTab(tabname, '', false, false);
}

function extraNetworksTabSelected(tabname, id, showPrompt, showNegativePrompt) { // called from python when user selects an extra networks tab
    extraNetworksMovePromptToTab(tabname, id, showPrompt, showNegativePrompt);
}

function applyExtraNetworkFilter(tabname) {
    setTimeout(extraNetworksApplyFilter[tabname], 1);
}

function applyExtraNetworkSort(tabname) {
    setTimeout(extraNetworksApplySort[tabname], 1);
}

var extraNetworksApplyFilter = {};
var extraNetworksApplySort = {};
var activePromptTextarea = {};

function setupExtraNetworks() {
@@ -140,14 +182,15 @@ function setupExtraNetworks() {
onUiLoaded(setupExtraNetworks);

var re_extranet = /<([^:^>]+:[^:]+):[\d.]+>(.*)/;
var re_extranet_g = /<([^:^>]+:[^:]+):[\d.]+>/g;

function tryToRemoveExtraNetworkFromPrompt(textarea, text) {
    var m = text.match(re_extranet);
    var replaced = false;
    var newTextareaText;
    if (m) {
        var extraTextBeforeNet = opts.extra_networks_add_text_separator;
        var extraTextAfterNet = m[2];
        var partToSearch = m[1];
        var foundAtPosition = -1;
@@ -161,8 +204,13 @@ function tryToRemoveExtraNetworkFromPrompt(textarea, text) {
            return found;
        });

        if (foundAtPosition >= 0) {
            if (newTextareaText.substr(foundAtPosition, extraTextAfterNet.length) == extraTextAfterNet) {
                newTextareaText = newTextareaText.substr(0, foundAtPosition) + newTextareaText.substr(foundAtPosition + extraTextAfterNet.length);
            }
            if (newTextareaText.substr(foundAtPosition - extraTextBeforeNet.length, extraTextBeforeNet.length) == extraTextBeforeNet) {
                newTextareaText = newTextareaText.substr(0, foundAtPosition - extraTextBeforeNet.length) + newTextareaText.substr(foundAtPosition);
            }
        }
    } else {
        newTextareaText = textarea.value.replaceAll(new RegExp(text, "g"), function(found) {
@@ -216,27 +264,24 @@ function extraNetworksSearchButton(tabs_id, event) {
var globalPopup = null;
var globalPopupInner = null;

function closePopup() {
    if (!globalPopup) return;
    globalPopup.style.display = "none";
}

function popup(contents) {
    if (!globalPopup) {
        globalPopup = document.createElement('div');
        globalPopup.classList.add('global-popup');

        var close = document.createElement('div');
        close.classList.add('global-popup-close');
        close.addEventListener("click", closePopup);
        close.title = "Close";
        globalPopup.appendChild(close);

        globalPopupInner = document.createElement('div');
        globalPopupInner.classList.add('global-popup-inner');
        globalPopup.appendChild(globalPopupInner);
@@ -335,7 +380,7 @@ function extraNetworksEditUserMetadata(event, tabname, extraPage, cardName) {
function extraNetworksRefreshSingleCard(page, tabname, name) {
    requestGet("./sd_extra_networks/get-single-card", {page: page, tabname: tabname, name: name}, function(data) {
        if (data && data.html) {
            var card = gradioApp().querySelector(`#${tabname}_${page.replace(" ", "_")}_cards > .card[data-name="${name}"]`);

            var newDiv = document.createElement('DIV');
            newDiv.innerHTML = data.html;
@@ -347,3 +392,9 @@ function extraNetworksRefreshSingleCard(page, tabname, name) {
        }
    });
}
window.addEventListener("keydown", function(event) {
if (event.key == "Escape") {
closePopup();
}
});
@@ -33,8 +33,11 @@ function updateOnBackgroundChange() {
     const modalImage = gradioApp().getElementById("modalImage");
     if (modalImage && modalImage.offsetParent) {
         let currentButton = selected_gallery_button();
+        let preview = gradioApp().querySelectorAll('.livePreview > img');
-        if (currentButton?.children?.length > 0 && modalImage.src != currentButton.children[0].src) {
+        if (opts.js_live_preview_in_modal_lightbox && preview.length > 0) {
+            // show preview image if available
+            modalImage.src = preview[preview.length - 1].src;
+        } else if (currentButton?.children?.length > 0 && modalImage.src != currentButton.children[0].src) {
             modalImage.src = currentButton.children[0].src;
             if (modalImage.style.display === 'none') {
                 const modal = gradioApp().getElementById("lightboxModal");
...
-var observerAccordionOpen = new MutationObserver(function(mutations) {
-    mutations.forEach(function(mutationRecord) {
-        var elem = mutationRecord.target;
-        var open = elem.classList.contains('open');
-
-        var accordion = elem.parentNode;
-        accordion.classList.toggle('input-accordion-open', open);
-
-        var checkbox = gradioApp().querySelector('#' + accordion.id + "-checkbox input");
-        checkbox.checked = open;
-        updateInput(checkbox);
-
-        var extra = gradioApp().querySelector('#' + accordion.id + "-extra");
-        if (extra) {
-            extra.style.display = open ? "" : "none";
-        }
-    });
-});
-
-function inputAccordionChecked(id, checked) {
-    var label = gradioApp().querySelector('#' + id + " .label-wrap");
-    if (label.classList.contains('open') != checked) {
-        label.click();
-    }
-}
+function inputAccordionChecked(id, checked) {
+    var accordion = gradioApp().getElementById(id);
+    accordion.visibleCheckbox.checked = checked;
+    accordion.onVisibleCheckboxChange();
+}
+
+function setupAccordion(accordion) {
+    var labelWrap = accordion.querySelector('.label-wrap');
+    var gradioCheckbox = gradioApp().querySelector('#' + accordion.id + "-checkbox input");
+    var extra = gradioApp().querySelector('#' + accordion.id + "-extra");
+    var span = labelWrap.querySelector('span');
+    var linked = true;
+
+    var isOpen = function() {
+        return labelWrap.classList.contains('open');
+    };
+
+    var observerAccordionOpen = new MutationObserver(function(mutations) {
+        mutations.forEach(function(mutationRecord) {
+            accordion.classList.toggle('input-accordion-open', isOpen());
+
+            if (linked) {
+                accordion.visibleCheckbox.checked = isOpen();
+                accordion.onVisibleCheckboxChange();
+            }
+        });
+    });
+    observerAccordionOpen.observe(labelWrap, {attributes: true, attributeFilter: ['class']});
+
+    if (extra) {
+        labelWrap.insertBefore(extra, labelWrap.lastElementChild);
+    }
+
+    accordion.onChecked = function(checked) {
+        if (isOpen() != checked) {
+            labelWrap.click();
+        }
+    };
+
+    var visibleCheckbox = document.createElement('INPUT');
+    visibleCheckbox.type = 'checkbox';
+    visibleCheckbox.checked = isOpen();
+    visibleCheckbox.id = accordion.id + "-visible-checkbox";
+    visibleCheckbox.className = gradioCheckbox.className + " input-accordion-checkbox";
+    span.insertBefore(visibleCheckbox, span.firstChild);
+
+    accordion.visibleCheckbox = visibleCheckbox;
+    accordion.onVisibleCheckboxChange = function() {
+        if (linked && isOpen() != visibleCheckbox.checked) {
+            labelWrap.click();
+        }
+
+        gradioCheckbox.checked = visibleCheckbox.checked;
+        updateInput(gradioCheckbox);
+    };
+
+    visibleCheckbox.addEventListener('click', function(event) {
+        linked = false;
+        event.stopPropagation();
+    });
+    visibleCheckbox.addEventListener('input', accordion.onVisibleCheckboxChange);
+}
 
 onUiLoaded(function() {
     for (var accordion of gradioApp().querySelectorAll('.input-accordion')) {
-        var labelWrap = accordion.querySelector('.label-wrap');
-        observerAccordionOpen.observe(labelWrap, {attributes: true, attributeFilter: ['class']});
-
-        var extra = gradioApp().querySelector('#' + accordion.id + "-extra");
-        if (extra) {
-            labelWrap.insertBefore(extra, labelWrap.lastElementChild);
-        }
+        setupAccordion(accordion);
     }
 });
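For reference, a sketch of how other scripts are expected to drive the rewritten accordion API; the element id 'txt2img_hr' is hypothetical, and this assumes setupAccordion has already run on that accordion:

    // Hedged usage sketch: toggle an input-accordion programmatically.
    // The visible checkbox, label-wrap state and hidden gradio checkbox all stay in sync.
    inputAccordionChecked('txt2img_hr', true);   // opens the accordion and ticks its visible checkbox
    inputAccordionChecked('txt2img_hr', false);  // collapses it again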
@@ -26,7 +26,11 @@ onAfterUiUpdate(function() {
     lastHeadImg = headImg;
 
     // play notification sound if available
-    gradioApp().querySelector('#audio_notification audio')?.play();
+    const notificationAudio = gradioApp().querySelector('#audio_notification audio');
+    if (notificationAudio) {
+        notificationAudio.volume = opts.notification_volume / 100.0 || 1.0;
+        notificationAudio.play();
+    }
 
     if (document.hasFocus()) return;
...
+let settingsExcludeTabsFromShowAll = {
+    settings_tab_defaults: 1,
+    settings_tab_sysinfo: 1,
+    settings_tab_actions: 1,
+    settings_tab_licenses: 1,
+};
+
+function settingsShowAllTabs() {
+    gradioApp().querySelectorAll('#settings > div').forEach(function(elem) {
+        if (settingsExcludeTabsFromShowAll[elem.id]) return;
+
+        elem.style.display = "block";
+    });
+}
+
+function settingsShowOneTab() {
+    gradioApp().querySelector('#settings_show_one_page').click();
+}
+
+onUiLoaded(function() {
+    var edit = gradioApp().querySelector('#settings_search');
+    var editTextarea = gradioApp().querySelector('#settings_search > label > input');
+    var buttonShowAllPages = gradioApp().getElementById('settings_show_all_pages');
+    var settings_tabs = gradioApp().querySelector('#settings div');
+
+    onEdit('settingsSearch', editTextarea, 250, function() {
+        var searchText = (editTextarea.value || "").trim().toLowerCase();
+
+        gradioApp().querySelectorAll('#settings > div[id^=settings_] div[id^=column_settings_] > *').forEach(function(elem) {
+            var visible = elem.textContent.trim().toLowerCase().indexOf(searchText) != -1;
+            elem.style.display = visible ? "" : "none";
+        });
+
+        if (searchText != "") {
+            settingsShowAllTabs();
+        } else {
+            settingsShowOneTab();
+        }
+    });
+
+    settings_tabs.insertBefore(edit, settings_tabs.firstChild);
+    settings_tabs.appendChild(buttonShowAllPages);
+    buttonShowAllPages.addEventListener("click", settingsShowAllTabs);
+});
+
+onOptionsChanged(function() {
+    if (gradioApp().querySelector('#settings .settings-category')) return;
+
+    var sectionMap = {};
+    gradioApp().querySelectorAll('#settings > div > button').forEach(function(x) {
+        sectionMap[x.textContent.trim()] = x;
+    });
+
+    opts._categories.forEach(function(x) {
+        var section = x[0];
+        var category = x[1];
+
+        var span = document.createElement('SPAN');
+        span.textContent = category;
+        span.className = 'settings-category';
+
+        var sectionElem = sectionMap[section];
+        if (!sectionElem) return;
+
+        sectionElem.parentElement.insertBefore(span, sectionElem);
+    });
+});
-let promptTokenCountDebounceTime = 800;
-let promptTokenCountTimeouts = {};
-var promptTokenCountUpdateFunctions = {};
+let promptTokenCountUpdateFunctions = {};
 
 function update_txt2img_tokens(...args) {
     // Called from Gradio
     update_token_counter("txt2img_token_button");
+    update_token_counter("txt2img_negative_token_button");
     if (args.length == 2) {
         return args[0];
     }
@@ -14,6 +13,7 @@ function update_txt2img_tokens(...args) {
 function update_img2img_tokens(...args) {
     // Called from Gradio
     update_token_counter("img2img_token_button");
+    update_token_counter("img2img_negative_token_button");
     if (args.length == 2) {
         return args[0];
     }
@@ -21,16 +21,7 @@ function update_img2img_tokens(...args) {
 }
 
 function update_token_counter(button_id) {
-    if (opts.disable_token_counters) {
-        return;
-    }
-
-    if (promptTokenCountTimeouts[button_id]) {
-        clearTimeout(promptTokenCountTimeouts[button_id]);
-    }
-
-    promptTokenCountTimeouts[button_id] = setTimeout(
-        () => gradioApp().getElementById(button_id)?.click(),
-        promptTokenCountDebounceTime,
-    );
+    promptTokenCountUpdateFunctions[button_id]?.();
 }
@@ -69,10 +60,11 @@ function setupTokenCounting(id, id_counter, id_button) {
     prompt.parentElement.insertBefore(counter, prompt);
     prompt.parentElement.style.position = "relative";
 
-    promptTokenCountUpdateFunctions[id] = function() {
-        update_token_counter(id_button);
-    };
-    textarea.addEventListener("input", promptTokenCountUpdateFunctions[id]);
+    var func = onEdit(id, textarea, 800, function() {
+        gradioApp().getElementById(id_button)?.click();
+    });
+    promptTokenCountUpdateFunctions[id] = func;
+    promptTokenCountUpdateFunctions[id_button] = func;
 }
 
 function setupTokenCounters() {
...
@@ -170,6 +170,23 @@ function submit_img2img() {
     return res;
 }
 
+function submit_extras() {
+    showSubmitButtons('extras', false);
+
+    var id = randomId();
+
+    requestProgress(id, gradioApp().getElementById('extras_gallery_container'), gradioApp().getElementById('extras_gallery'), function() {
+        showSubmitButtons('extras', true);
+    });
+
+    var res = create_submit_args(arguments);
+
+    res[0] = id;
+
+    return res;
+}
+
 function restoreProgressTxt2img() {
     showRestoreProgressButton("txt2img", false);
     var id = localGet("txt2img_task_id");
@@ -198,9 +215,33 @@ function restoreProgressImg2img() {
 }
+/**
+ * Configure the width and height elements on `tabname` to accept
+ * pasting of resolutions in the form of "width x height".
+ */
+function setupResolutionPasting(tabname) {
+    var width = gradioApp().querySelector(`#${tabname}_width input[type=number]`);
+    var height = gradioApp().querySelector(`#${tabname}_height input[type=number]`);
+    for (const el of [width, height]) {
+        el.addEventListener('paste', function(event) {
+            var pasteData = event.clipboardData.getData('text/plain');
+            var parsed = pasteData.match(/^\s*(\d+)\D+(\d+)\s*$/);
+            if (parsed) {
+                width.value = parsed[1];
+                height.value = parsed[2];
+                updateInput(width);
+                updateInput(height);
+                event.preventDefault();
+            }
+        });
+    }
+}
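To illustrate what the paste handler above accepts, here is the same regex exercised against a few sample strings (the samples are hypothetical):

    // Hedged sketch: strings the resolution-paste regex does and doesn't match.
    var re = /^\s*(\d+)\D+(\d+)\s*$/;
    console.log("512x768".match(re) != null);        // true - plain "width x height"
    console.log(" 1920 × 1080 ".match(re) != null);  // true - the non-digit run between numbers may be anything
    console.log("512x768, hires".match(re) != null); // false - trailing text is rejected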
 onUiLoaded(function() {
     showRestoreProgressButton('txt2img', localGet("txt2img_task_id"));
     showRestoreProgressButton('img2img', localGet("img2img_task_id"));
+    setupResolutionPasting('txt2img');
+    setupResolutionPasting('img2img');
 });
@@ -263,21 +304,6 @@ onAfterUiUpdate(function() {
     json_elem.parentElement.style.display = "none";
 
     setupTokenCounters();
-
-    var show_all_pages = gradioApp().getElementById('settings_show_all_pages');
-    var settings_tabs = gradioApp().querySelector('#settings div');
-    if (show_all_pages && settings_tabs) {
-        settings_tabs.appendChild(show_all_pages);
-        show_all_pages.onclick = function() {
-            gradioApp().querySelectorAll('#settings > div').forEach(function(elem) {
-                if (elem.id == "settings_tab_licenses") {
-                    return;
-                }
-
-                elem.style.display = "block";
-            });
-        };
-    }
 });
 
 onOptionsChanged(function() {
@@ -366,3 +392,20 @@ function switchWidthHeight(tabname) {
     updateInput(height);
     return [];
 }
+
+var onEditTimers = {};
+
+// calls func after afterMs milliseconds have passed since the input elem was last edited by the user
+function onEdit(editId, elem, afterMs, func) {
+    var edited = function() {
+        var existingTimer = onEditTimers[editId];
+        if (existingTimer) clearTimeout(existingTimer);
+
+        onEditTimers[editId] = setTimeout(func, afterMs);
+    };
+
+    elem.addEventListener("input", edited);
+
+    return edited;
+}
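A minimal usage sketch for the onEdit helper above; the element id and delay are hypothetical. The returned function can also be called directly to trigger the same debounced path, which is how token-counters.js wires it up:

    // Hedged usage sketch for onEdit: run a callback 500ms after typing stops.
    var box = gradioApp().querySelector('#my_prompt textarea');  // hypothetical element
    var refresh = onEdit('myPromptChanged', box, 500, function() {
        console.log('value settled:', box.value);
    });
    refresh();  // may also be invoked manually; it goes through the same 500ms debounce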
@@ -17,19 +17,17 @@ from fastapi.encoders import jsonable_encoder
 from secrets import compare_digest
 
 import modules.shared as shared
-from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing, errors, restart, shared_items
+from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing, errors, restart, shared_items, script_callbacks, generation_parameters_copypaste, sd_models
 from modules.api import models
 from modules.shared import opts
 from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img, process_images
 from modules.textual_inversion.textual_inversion import create_embedding, train_embedding
-from modules.textual_inversion.preprocess import preprocess
 from modules.hypernetworks.hypernetwork import create_hypernetwork, train_hypernetwork
-from PIL import PngImagePlugin,Image
-from modules.sd_models import unload_model_weights, reload_model_weights, checkpoint_aliases
+from PIL import PngImagePlugin, Image
 from modules.sd_models_config import find_checkpoint_config_near_filename
 from modules.realesrgan_model import get_realesrgan_models
 from modules import devices
-from typing import Dict, List, Any
+from typing import Any
 import piexif
 import piexif.helper
 from contextlib import closing
@@ -103,7 +101,8 @@ def decode_base64_to_image(encoding):
 def encode_pil_to_base64(image):
     with io.BytesIO() as output_bytes:
+        if isinstance(image, str):
+            return image
         if opts.samples_format.lower() == 'png':
             use_metadata = False
             metadata = PngImagePlugin.PngInfo()
@@ -221,28 +220,28 @@ class Api:
         self.add_api_route("/sdapi/v1/options", self.get_config, methods=["GET"], response_model=models.OptionsModel)
         self.add_api_route("/sdapi/v1/options", self.set_config, methods=["POST"])
         self.add_api_route("/sdapi/v1/cmd-flags", self.get_cmd_flags, methods=["GET"], response_model=models.FlagsModel)
-        self.add_api_route("/sdapi/v1/samplers", self.get_samplers, methods=["GET"], response_model=List[models.SamplerItem])
-        self.add_api_route("/sdapi/v1/upscalers", self.get_upscalers, methods=["GET"], response_model=List[models.UpscalerItem])
-        self.add_api_route("/sdapi/v1/latent-upscale-modes", self.get_latent_upscale_modes, methods=["GET"], response_model=List[models.LatentUpscalerModeItem])
-        self.add_api_route("/sdapi/v1/sd-models", self.get_sd_models, methods=["GET"], response_model=List[models.SDModelItem])
-        self.add_api_route("/sdapi/v1/sd-vae", self.get_sd_vaes, methods=["GET"], response_model=List[models.SDVaeItem])
-        self.add_api_route("/sdapi/v1/hypernetworks", self.get_hypernetworks, methods=["GET"], response_model=List[models.HypernetworkItem])
-        self.add_api_route("/sdapi/v1/face-restorers", self.get_face_restorers, methods=["GET"], response_model=List[models.FaceRestorerItem])
-        self.add_api_route("/sdapi/v1/realesrgan-models", self.get_realesrgan_models, methods=["GET"], response_model=List[models.RealesrganItem])
-        self.add_api_route("/sdapi/v1/prompt-styles", self.get_prompt_styles, methods=["GET"], response_model=List[models.PromptStyleItem])
+        self.add_api_route("/sdapi/v1/samplers", self.get_samplers, methods=["GET"], response_model=list[models.SamplerItem])
+        self.add_api_route("/sdapi/v1/upscalers", self.get_upscalers, methods=["GET"], response_model=list[models.UpscalerItem])
+        self.add_api_route("/sdapi/v1/latent-upscale-modes", self.get_latent_upscale_modes, methods=["GET"], response_model=list[models.LatentUpscalerModeItem])
+        self.add_api_route("/sdapi/v1/sd-models", self.get_sd_models, methods=["GET"], response_model=list[models.SDModelItem])
+        self.add_api_route("/sdapi/v1/sd-vae", self.get_sd_vaes, methods=["GET"], response_model=list[models.SDVaeItem])
+        self.add_api_route("/sdapi/v1/hypernetworks", self.get_hypernetworks, methods=["GET"], response_model=list[models.HypernetworkItem])
+        self.add_api_route("/sdapi/v1/face-restorers", self.get_face_restorers, methods=["GET"], response_model=list[models.FaceRestorerItem])
+        self.add_api_route("/sdapi/v1/realesrgan-models", self.get_realesrgan_models, methods=["GET"], response_model=list[models.RealesrganItem])
+        self.add_api_route("/sdapi/v1/prompt-styles", self.get_prompt_styles, methods=["GET"], response_model=list[models.PromptStyleItem])
         self.add_api_route("/sdapi/v1/embeddings", self.get_embeddings, methods=["GET"], response_model=models.EmbeddingsResponse)
         self.add_api_route("/sdapi/v1/refresh-checkpoints", self.refresh_checkpoints, methods=["POST"])
         self.add_api_route("/sdapi/v1/refresh-vae", self.refresh_vae, methods=["POST"])
         self.add_api_route("/sdapi/v1/create/embedding", self.create_embedding, methods=["POST"], response_model=models.CreateResponse)
         self.add_api_route("/sdapi/v1/create/hypernetwork", self.create_hypernetwork, methods=["POST"], response_model=models.CreateResponse)
-        self.add_api_route("/sdapi/v1/preprocess", self.preprocess, methods=["POST"], response_model=models.PreprocessResponse)
         self.add_api_route("/sdapi/v1/train/embedding", self.train_embedding, methods=["POST"], response_model=models.TrainResponse)
         self.add_api_route("/sdapi/v1/train/hypernetwork", self.train_hypernetwork, methods=["POST"], response_model=models.TrainResponse)
         self.add_api_route("/sdapi/v1/memory", self.get_memory, methods=["GET"], response_model=models.MemoryResponse)
         self.add_api_route("/sdapi/v1/unload-checkpoint", self.unloadapi, methods=["POST"])
         self.add_api_route("/sdapi/v1/reload-checkpoint", self.reloadapi, methods=["POST"])
         self.add_api_route("/sdapi/v1/scripts", self.get_scripts_list, methods=["GET"], response_model=models.ScriptsList)
-        self.add_api_route("/sdapi/v1/script-info", self.get_script_info, methods=["GET"], response_model=List[models.ScriptInfo])
+        self.add_api_route("/sdapi/v1/script-info", self.get_script_info, methods=["GET"], response_model=list[models.ScriptInfo])
+        self.add_api_route("/sdapi/v1/extensions", self.get_extensions_list, methods=["GET"], response_model=list[models.ExtensionItem])
 
         if shared.cmd_opts.api_server_stop:
             self.add_api_route("/sdapi/v1/server-kill", self.kill_webui, methods=["POST"])
@@ -473,9 +472,6 @@ class Api:
         return models.ExtrasBatchImagesResponse(images=list(map(encode_pil_to_base64, result[0])), html_info=result[1])
 
     def pnginfoapi(self, req: models.PNGInfoRequest):
-        if(not req.image.strip()):
-            return models.PNGInfoResponse(info="")
-
         image = decode_base64_to_image(req.image.strip())
         if image is None:
             return models.PNGInfoResponse(info="")
@@ -484,9 +480,10 @@ class Api:
         if geninfo is None:
             geninfo = ""
 
-        items = {**{'parameters': geninfo}, **items}
+        params = generation_parameters_copypaste.parse_generation_parameters(geninfo)
+        script_callbacks.infotext_pasted_callback(geninfo, params)
 
-        return models.PNGInfoResponse(info=geninfo, items=items)
+        return models.PNGInfoResponse(info=geninfo, items=items, parameters=params)
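A hedged client-side sketch of the new parameters field; it assumes the handler above is mounted at the usual /sdapi/v1/png-info path on a default local install, and the base64 payload is a placeholder:

    // Hedged sketch: reading the parsed generation parameters from png-info.
    // "iVBORw0..." stands in for a real base64-encoded PNG.
    fetch("http://127.0.0.1:7860/sdapi/v1/png-info", {
        method: "POST",
        headers: {"Content-Type": "application/json"},
        body: JSON.stringify({image: "iVBORw0..."}),
    }).then(r => r.json()).then(data => {
        console.log(data.info);        // raw infotext string
        console.log(data.parameters);  // same infotext parsed into a field dictionary
    });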
     def progressapi(self, req: models.ProgressRequest = Depends()):
         # copy from check_progress_call of ui.py
@@ -541,12 +538,12 @@ class Api:
         return {}
 
     def unloadapi(self):
-        unload_model_weights()
+        sd_models.unload_model_weights()
 
         return {}
 
     def reloadapi(self):
-        reload_model_weights()
+        sd_models.send_model_to_device(shared.sd_model)
 
         return {}
@@ -564,9 +561,9 @@ class Api:
 
         return options
 
-    def set_config(self, req: Dict[str, Any]):
+    def set_config(self, req: dict[str, Any]):
         checkpoint_name = req.get("sd_model_checkpoint", None)
-        if checkpoint_name is not None and checkpoint_name not in checkpoint_aliases:
+        if checkpoint_name is not None and checkpoint_name not in sd_models.checkpoint_aliases:
             raise RuntimeError(f"model {checkpoint_name!r} not found")
 
         for k, v in req.items():
@@ -676,19 +673,6 @@ class Api:
         finally:
             shared.state.end()
 
-    def preprocess(self, args: dict):
-        try:
-            shared.state.begin(job="preprocess")
-            preprocess(**args)  # quick operation unless blip/booru interrogation is enabled
-            shared.state.end()
-            return models.PreprocessResponse(info='preprocess complete')
-        except KeyError as e:
-            return models.PreprocessResponse(info=f"preprocess error: invalid token: {e}")
-        except Exception as e:
-            return models.PreprocessResponse(info=f"preprocess error: {e}")
-        finally:
-            shared.state.end()
-
     def train_embedding(self, args: dict):
         try:
             shared.state.begin(job="train_embedding")
@@ -770,6 +754,25 @@ class Api:
             cuda = {'error': f'{err}'}
         return models.MemoryResponse(ram=ram, cuda=cuda)
 
+    def get_extensions_list(self):
+        from modules import extensions
+        extensions.list_extensions()
+        ext_list = []
+        for ext in extensions.extensions:
+            ext: extensions.Extension
+            ext.read_info_from_repo()
+            if ext.remote is not None:
+                ext_list.append({
+                    "name": ext.name,
+                    "remote": ext.remote,
+                    "branch": ext.branch,
+                    "commit_hash": ext.commit_hash,
+                    "commit_date": ext.commit_date,
+                    "version": ext.version,
+                    "enabled": ext.enabled
+                })
+
+        return ext_list
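A sketch of calling the route registered above from a client; the default local host/port are assumed, and the fields mirror models.ExtensionItem:

    // Hedged sketch: listing installed extensions via the new GET /sdapi/v1/extensions route.
    fetch("http://127.0.0.1:7860/sdapi/v1/extensions")
        .then(r => r.json())
        .then(exts => {
            for (const ext of exts) {
                console.log(`${ext.name} @ ${ext.branch} (${ext.commit_hash}) enabled=${ext.enabled}`);
            }
        });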
     def launch(self, server_name, port, root_path):
         self.app.include_router(self.router)
         uvicorn.run(self.app, host=server_name, port=port, timeout_keep_alive=shared.cmd_opts.timeout_keep_alive, root_path=root_path)
...
 import inspect
 
 from pydantic import BaseModel, Field, create_model
-from typing import Any, Optional
-from typing_extensions import Literal
+from typing import Any, Optional, Literal
 from inflection import underscore
 from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img
 from modules.shared import sd_upscalers, opts, parser
-from typing import Dict, List
 
 API_NOT_ALLOWED = [
     "self",
@@ -130,12 +128,12 @@ StableDiffusionImg2ImgProcessingAPI = PydanticModelGenerator(
 ).generate_model()
 
 class TextToImageResponse(BaseModel):
-    images: List[str] = Field(default=None, title="Image", description="The generated image in base64 format.")
+    images: list[str] = Field(default=None, title="Image", description="The generated image in base64 format.")
     parameters: dict
     info: str
 
 class ImageToImageResponse(BaseModel):
-    images: List[str] = Field(default=None, title="Image", description="The generated image in base64 format.")
+    images: list[str] = Field(default=None, title="Image", description="The generated image in base64 format.")
     parameters: dict
     info: str
@@ -168,17 +166,18 @@ class FileData(BaseModel):
     name: str = Field(title="File name")
 
 class ExtrasBatchImagesRequest(ExtrasBaseRequest):
-    imageList: List[FileData] = Field(title="Images", description="List of images to work on. Must be Base64 strings")
+    imageList: list[FileData] = Field(title="Images", description="List of images to work on. Must be Base64 strings")
 
 class ExtrasBatchImagesResponse(ExtraBaseResponse):
-    images: List[str] = Field(title="Images", description="The generated images in base64 format.")
+    images: list[str] = Field(title="Images", description="The generated images in base64 format.")
 
 class PNGInfoRequest(BaseModel):
     image: str = Field(title="Image", description="The base64 encoded PNG image")
 
 class PNGInfoResponse(BaseModel):
     info: str = Field(title="Image info", description="A string with the parameters used to generate the image")
-    items: dict = Field(title="Items", description="An object containing all the info the image had")
+    items: dict = Field(title="Items", description="A dictionary containing all the other fields the image had")
+    parameters: dict = Field(title="Parameters", description="A dictionary with parsed generation info fields")
 
 class ProgressRequest(BaseModel):
     skip_current_image: bool = Field(default=False, title="Skip current image", description="Skip current image serialization")
@@ -203,9 +202,6 @@ class TrainResponse(BaseModel):
 class CreateResponse(BaseModel):
     info: str = Field(title="Create info", description="Response string from create embedding or hypernetwork task.")
 
-class PreprocessResponse(BaseModel):
-    info: str = Field(title="Preprocess info", description="Response string from preprocessing task.")
-
 fields = {}
 for key, metadata in opts.data_labels.items():
     value = opts.data.get(key)
@@ -232,8 +228,8 @@ FlagsModel = create_model("Flags", **flags)
 class SamplerItem(BaseModel):
     name: str = Field(title="Name")
-    aliases: List[str] = Field(title="Aliases")
-    options: Dict[str, str] = Field(title="Options")
+    aliases: list[str] = Field(title="Aliases")
+    options: dict[str, str] = Field(title="Options")
 
 class UpscalerItem(BaseModel):
     name: str = Field(title="Name")
@@ -284,8 +280,8 @@ class EmbeddingItem(BaseModel):
     vectors: int = Field(title="Vectors", description="The number of vectors in the embedding")
 
 class EmbeddingsResponse(BaseModel):
-    loaded: Dict[str, EmbeddingItem] = Field(title="Loaded", description="Embeddings loaded for the current model")
-    skipped: Dict[str, EmbeddingItem] = Field(title="Skipped", description="Embeddings skipped for the current model (likely due to architecture incompatibility)")
+    loaded: dict[str, EmbeddingItem] = Field(title="Loaded", description="Embeddings loaded for the current model")
+    skipped: dict[str, EmbeddingItem] = Field(title="Skipped", description="Embeddings skipped for the current model (likely due to architecture incompatibility)")
 
 class MemoryResponse(BaseModel):
     ram: dict = Field(title="RAM", description="System memory stats")
@@ -303,11 +299,20 @@ class ScriptArg(BaseModel):
     minimum: Optional[Any] = Field(default=None, title="Minimum", description="Minimum allowed value for the argument in UI")
     maximum: Optional[Any] = Field(default=None, title="Maximum", description="Maximum allowed value for the argument in UI")
     step: Optional[Any] = Field(default=None, title="Step", description="Step for changing value of the argument in UI")
-    choices: Optional[List[str]] = Field(default=None, title="Choices", description="Possible values for the argument")
+    choices: Optional[list[str]] = Field(default=None, title="Choices", description="Possible values for the argument")
 
 class ScriptInfo(BaseModel):
     name: str = Field(default=None, title="Name", description="Script name")
     is_alwayson: bool = Field(default=None, title="IsAlwayson", description="Flag specifying whether this script is an alwayson script")
     is_img2img: bool = Field(default=None, title="IsImg2img", description="Flag specifying whether this script is an img2img script")
-    args: List[ScriptArg] = Field(title="Arguments", description="List of script's arguments")
+    args: list[ScriptArg] = Field(title="Arguments", description="List of script's arguments")
 
+class ExtensionItem(BaseModel):
+    name: str = Field(title="Name", description="Extension name")
+    remote: str = Field(title="Remote", description="Extension Repository URL")
+    branch: str = Field(title="Branch", description="Extension Repository Branch")
+    commit_hash: str = Field(title="Commit Hash", description="Extension Repository Commit Hash")
+    version: str = Field(title="Version", description="Extension Version")
+    commit_date: str = Field(title="Commit Date", description="Extension Repository Commit Date")
+    enabled: bool = Field(title="Enabled", description="Flag specifying whether this extension is enabled")
@@ -32,7 +32,7 @@ def dump_cache():
     with cache_lock:
         cache_filename_tmp = cache_filename + "-"
         with open(cache_filename_tmp, "w", encoding="utf8") as file:
-            json.dump(cache_data, file, indent=4)
+            json.dump(cache_data, file, indent=4, ensure_ascii=False)
 
         os.replace(cache_filename_tmp, cache_filename)
...
@@ -70,6 +70,7 @@ parser.add_argument("--opt-sdp-no-mem-attention", action='store_true', help="pre
 parser.add_argument("--disable-opt-split-attention", action='store_true', help="prefer no cross-attention layer optimization for automatic choice of optimization")
 parser.add_argument("--disable-nan-check", action='store_true', help="do not check if produced images/latent spaces have nans; useful for running without a checkpoint in CI")
 parser.add_argument("--use-cpu", nargs='+', help="use CPU as torch device for specified modules", default=[], type=str.lower)
+parser.add_argument("--use-ipex", action="store_true", help="use Intel XPU as torch device")
 parser.add_argument("--disable-model-loading-ram-optimization", action='store_true', help="disable an optimization that reduces RAM use when loading a model")
 parser.add_argument("--listen", action='store_true', help="launch gradio with 0.0.0.0 as server name, allowing to respond to network requests")
 parser.add_argument("--port", type=int, help="launch gradio with given server port, you need root/admin rights for ports < 1024, defaults to 7860 if available", default=None)
@@ -90,7 +91,7 @@ parser.add_argument("--autolaunch", action='store_true', help="open the webui UR
 parser.add_argument("--theme", type=str, help="launches the UI with light or dark theme", default=None)
 parser.add_argument("--use-textbox-seed", action='store_true', help="use textbox for seeds in UI (no up/down, but possible to input long seeds)", default=False)
 parser.add_argument("--disable-console-progressbars", action='store_true', help="do not output progressbars to console", default=False)
-parser.add_argument("--enable-console-prompts", action='store_true', help="print prompts to console when generating with txt2img and img2img", default=False)
+parser.add_argument("--enable-console-prompts", action='store_true', help="does not do anything", default=False)  # legacy compatibility; used as the default value for shared.opts.enable_console_prompts
 parser.add_argument('--vae-path', type=str, help='Checkpoint to use as VAE; setting this argument disables all settings related to VAE', default=None)
 parser.add_argument("--disable-safe-unpickle", action='store_true', help="disable checking pytorch models for malicious code", default=False)
 parser.add_argument("--api", action='store_true', help="use api=True to launch the API together with the webui (use --nowebui instead for only the API)")
@@ -107,13 +108,14 @@ parser.add_argument("--tls-certfile", type=str, help="Partially enables TLS, req
 parser.add_argument("--disable-tls-verify", action="store_false", help="When passed, enables the use of self-signed certificates.", default=None)
 parser.add_argument("--server-name", type=str, help="Sets hostname of server", default=None)
 parser.add_argument("--gradio-queue", action='store_true', help="does not do anything", default=True)
-parser.add_argument("--no-gradio-queue", action='store_true', help="Disables gradio queue; causes the webpage to use http requests instead of websockets; was the defaul in earlier versions")
+parser.add_argument("--no-gradio-queue", action='store_true', help="Disables gradio queue; causes the webpage to use http requests instead of websockets; was the default in earlier versions")
 parser.add_argument("--skip-version-check", action='store_true', help="Do not check versions of torch and xformers")
 parser.add_argument("--no-hashing", action='store_true', help="disable sha256 hashing of checkpoints to help loading performance", default=False)
 parser.add_argument("--no-download-sd-model", action='store_true', help="don't download SD1.5 model even if no model is found in --ckpt-dir", default=False)
 parser.add_argument('--subpath', type=str, help='customize the subpath for gradio, use with reverse proxy')
-parser.add_argument('--add-stop-route', action='store_true', help='add /_stop route to stop server')
+parser.add_argument('--add-stop-route', action='store_true', help='does not do anything')
 parser.add_argument('--api-server-stop', action='store_true', help='enable server stop/restart/kill via api')
 parser.add_argument('--timeout-keep-alive', type=int, default=30, help='set timeout_keep_alive for uvicorn')
 parser.add_argument("--disable-all-extensions", action='store_true', help="prevent all extensions from running regardless of any other settings", default=False)
-parser.add_argument("--disable-extra-extensions", action='store_true', help=" prevent all extensions except built-in from running regardless of any other settings", default=False)
+parser.add_argument("--disable-extra-extensions", action='store_true', help="prevent all extensions except built-in from running regardless of any other settings", default=False)
+parser.add_argument("--skip-load-model-at-start", action='store_true', help="do not load the SD model at web start; only takes effect with --nowebui")
@@ -4,7 +4,6 @@ Supports saving and restoring webui and extensions from a known working set of c
 
 import os
 import json
-import time
 import tqdm
 
 from datetime import datetime
@@ -38,7 +37,7 @@ def list_config_states():
     config_states = sorted(config_states, key=lambda cs: cs["created_at"], reverse=True)
 
     for cs in config_states:
-        timestamp = time.asctime(time.gmtime(cs["created_at"]))
+        timestamp = datetime.fromtimestamp(cs["created_at"]).strftime('%Y-%m-%d %H:%M:%S')
         name = cs.get("name", "Config")
         full_name = f"{name}: {timestamp}"
         all_config_states[full_name] = cs
...
@@ -8,6 +8,13 @@ from modules import errors, shared
 if sys.platform == "darwin":
     from modules import mac_specific
 
+if shared.cmd_opts.use_ipex:
+    from modules import xpu_specific
+
+
+def has_xpu() -> bool:
+    return shared.cmd_opts.use_ipex and xpu_specific.has_xpu
+
 
 def has_mps() -> bool:
     if sys.platform != "darwin":
@@ -30,6 +37,9 @@ def get_optimal_device_name():
     if has_mps():
         return "mps"
 
+    if has_xpu():
+        return xpu_specific.get_xpu_device_string()
+
     return "cpu"
@@ -38,7 +48,7 @@ def get_optimal_device():
 def get_device_for(task):
-    if task in shared.cmd_opts.use_cpu:
+    if task in shared.cmd_opts.use_cpu or "all" in shared.cmd_opts.use_cpu:
         return cpu
 
     return get_optimal_device()
@@ -54,13 +64,17 @@ def torch_gc():
     if has_mps():
         mac_specific.torch_mps_gc()
 
+    if has_xpu():
+        xpu_specific.torch_xpu_gc()
+
 
 def enable_tf32():
     if torch.cuda.is_available():
 
         # enabling benchmark option seems to enable a range of cards to do fp16 when they otherwise can't
         # see https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4407
-        if any(torch.cuda.get_device_capability(devid) == (7, 5) for devid in range(0, torch.cuda.device_count())):
+        device_id = (int(shared.cmd_opts.device_id) if shared.cmd_opts.device_id is not None and shared.cmd_opts.device_id.isdigit() else 0) or torch.cuda.current_device()
+        if torch.cuda.get_device_capability(device_id) == (7, 5) and torch.cuda.get_device_name(device_id).startswith("NVIDIA GeForce GTX 16"):
             torch.backends.cudnn.benchmark = True
 
         torch.backends.cuda.matmul.allow_tf32 = True
...
@@ -6,6 +6,21 @@ import traceback
 
 exception_records = []
 
+
+def format_traceback(tb):
+    return [[f"{x.filename}, line {x.lineno}, {x.name}", x.line] for x in traceback.extract_tb(tb)]
+
+
+def format_exception(e, tb):
+    return {"exception": str(e), "traceback": format_traceback(tb)}
+
+
+def get_exceptions():
+    try:
+        return list(reversed(exception_records))
+    except Exception as e:
+        return str(e)
+
 
 def record_exception():
     _, e, tb = sys.exc_info()
     if e is None:
@@ -14,8 +29,7 @@ def record_exception():
     if exception_records and exception_records[-1] == e:
         return
 
-    from modules import sysinfo
-    exception_records.append(sysinfo.format_exception(e, tb))
+    exception_records.append(format_exception(e, tb))
 
     if len(exception_records) > 5:
         exception_records.pop(0)
...
+from __future__ import annotations
+
+import configparser
 import os
 import threading
+import re
 
 from modules import shared, errors, cache, scripts
 from modules.gitpython_hack import Repo
 from modules.paths_internal import extensions_dir, extensions_builtin_dir, script_path  # noqa: F401
 
-extensions = []
-
 os.makedirs(extensions_dir, exist_ok=True)
@@ -19,11 +22,55 @@ def active():
     return [x for x in extensions if x.enabled]
 
 
+class ExtensionMetadata:
+    filename = "metadata.ini"
+    config: configparser.ConfigParser
+    canonical_name: str
+    requires: list
+
+    def __init__(self, path, canonical_name):
+        self.config = configparser.ConfigParser()
+
+        filepath = os.path.join(path, self.filename)
+        if os.path.isfile(filepath):
+            try:
+                self.config.read(filepath)
+            except Exception:
+                errors.report(f"Error reading {self.filename} for extension {canonical_name}.", exc_info=True)
+
+        self.canonical_name = self.config.get("Extension", "Name", fallback=canonical_name)
+        self.canonical_name = self.canonical_name.lower().strip()
+
+        self.requires = self.get_script_requirements("Requires", "Extension")
+
+    def get_script_requirements(self, field, section, extra_section=None):
+        """reads a list of requirements from the config; field is the name of the field in the ini file,
+        like Requires or Before, and section is the name of the [section] in the ini file; additionally,
+        reads more requirements from [extra_section] if specified."""
+
+        x = self.config.get(section, field, fallback='')
+
+        if extra_section:
+            x = x + ', ' + self.config.get(extra_section, field, fallback='')
+
+        return self.parse_list(x.lower())
+
+    def parse_list(self, text):
+        """converts a line from config ("ext1 ext2, ext3  ") into a python list (["ext1", "ext2", "ext3"])"""
+        if not text:
+            return []
+
+        # both "," and " " are accepted as separator
+        return [x for x in re.split(r"[,\s]+", text.strip()) if x]
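For illustration, a hypothetical metadata.ini that exercises the fields parsed above (both extension names are made up):

    [Extension]
    Name = my-extension
    Requires = sd-webui-some-dependency, another-extension

With that file in place, the Name value becomes the extension's canonical name, and parse_list would yield ["sd-webui-some-dependency", "another-extension"] for the Requires field.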
 class Extension:
     lock = threading.Lock()
     cached_fields = ['remote', 'commit_date', 'branch', 'commit_hash', 'version']
+    metadata: ExtensionMetadata
 
-    def __init__(self, name, path, enabled=True, is_builtin=False):
+    def __init__(self, name, path, enabled=True, is_builtin=False, metadata=None):
         self.name = name
         self.path = path
         self.enabled = enabled
@@ -36,6 +83,8 @@ class Extension:
         self.branch = None
         self.remote = None
         self.have_info_from_repo = False
+        self.metadata = metadata if metadata else ExtensionMetadata(self.path, name.lower())
+        self.canonical_name = self.metadata.canonical_name
 
     def to_dict(self):
         return {x: getattr(self, x) for x in self.cached_fields}
@@ -56,6 +105,7 @@ class Extension:
             self.do_read_info_from_repo()
 
             return self.to_dict()
+
         try:
             d = cache.cached_data_for_file('extensions-git', self.name, os.path.join(self.path, ".git"), read_from_repo)
             self.from_dict(d)
@@ -136,9 +186,6 @@ class Extension:
 def list_extensions():
     extensions.clear()
 
-    if not os.path.isdir(extensions_dir):
-        return
-
     if shared.cmd_opts.disable_all_extensions:
         print("*** \"--disable-all-extensions\" arg was used, will not load any extensions ***")
     elif shared.opts.disable_all_extensions == "all":
@@ -148,18 +195,43 @@ def list_extensions():
     elif shared.opts.disable_all_extensions == "extra":
         print("*** \"Disable all extensions\" option was set, will only load built-in extensions ***")
 
-    extension_paths = []
-    for dirname in [extensions_dir, extensions_builtin_dir]:
+    loaded_extensions = {}
+
+    # scan through extensions directory and load metadata
+    for dirname in [extensions_builtin_dir, extensions_dir]:
         if not os.path.isdir(dirname):
-            return
+            continue
 
         for extension_dirname in sorted(os.listdir(dirname)):
             path = os.path.join(dirname, extension_dirname)
             if not os.path.isdir(path):
                 continue
 
-            extension_paths.append((extension_dirname, path, dirname == extensions_builtin_dir))
+            canonical_name = extension_dirname
+            metadata = ExtensionMetadata(path, canonical_name)
+
+            # check for duplicated canonical names
+            already_loaded_extension = loaded_extensions.get(metadata.canonical_name)
+            if already_loaded_extension is not None:
+                errors.report(f'Duplicate canonical name "{canonical_name}" found in extensions "{extension_dirname}" and "{already_loaded_extension.name}". Former will be discarded.', exc_info=False)
+                continue
+
+            is_builtin = dirname == extensions_builtin_dir
+            extension = Extension(name=extension_dirname, path=path, enabled=extension_dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin, metadata=metadata)
+            extensions.append(extension)
+            loaded_extensions[canonical_name] = extension
+
+    # check for requirements
+    for extension in extensions:
+        for req in extension.metadata.requires:
+            required_extension = loaded_extensions.get(req)
+            if required_extension is None:
+                errors.report(f'Extension "{extension.name}" requires "{req}" which is not installed.', exc_info=False)
+                continue
+
+            if not extension.enabled:
+                errors.report(f'Extension "{extension.name}" requires "{required_extension.name}" which is disabled.', exc_info=False)
+                continue
 
-    for dirname, path, is_builtin in extension_paths:
-        extension = Extension(name=dirname, path=path, enabled=dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin)
-        extensions.append(extension)
+
+extensions: list[Extension] = []
from __future__ import annotations
import base64 import base64
import io import io
import json import json
...@@ -9,15 +10,12 @@ from modules.paths import data_path ...@@ -9,15 +10,12 @@ from modules.paths import data_path
from modules import shared, ui_tempdir, script_callbacks, processing from modules import shared, ui_tempdir, script_callbacks, processing
from PIL import Image from PIL import Image
re_param_code = r'\s*([\w ]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)' re_param_code = r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)'
re_param = re.compile(re_param_code) re_param = re.compile(re_param_code)
re_imagesize = re.compile(r"^(\d+)x(\d+)$") re_imagesize = re.compile(r"^(\d+)x(\d+)$")
re_hypernet_hash = re.compile("\(([0-9a-f]+)\)$") re_hypernet_hash = re.compile("\(([0-9a-f]+)\)$")
type_of_gr_update = type(gr.update()) type_of_gr_update = type(gr.update())
paste_fields = {}
registered_param_bindings = []
class ParamBinding: class ParamBinding:
def __init__(self, paste_button, tabname, source_text_component=None, source_image_component=None, source_tabname=None, override_settings_component=None, paste_field_names=None): def __init__(self, paste_button, tabname, source_text_component=None, source_image_component=None, source_tabname=None, override_settings_component=None, paste_field_names=None):
...@@ -30,6 +28,10 @@ class ParamBinding: ...@@ -30,6 +28,10 @@ class ParamBinding:
self.paste_field_names = paste_field_names or [] self.paste_field_names = paste_field_names or []
paste_fields: dict[str, dict] = {}
registered_param_bindings: list[ParamBinding] = []
def reset(): def reset():
paste_fields.clear() paste_fields.clear()
registered_param_bindings.clear() registered_param_bindings.clear()
...@@ -113,7 +115,6 @@ def register_paste_params_button(binding: ParamBinding): ...@@ -113,7 +115,6 @@ def register_paste_params_button(binding: ParamBinding):
def connect_paste_params_buttons(): def connect_paste_params_buttons():
binding: ParamBinding
for binding in registered_param_bindings: for binding in registered_param_bindings:
destination_image_component = paste_fields[binding.tabname]["init_img"] destination_image_component = paste_fields[binding.tabname]["init_img"]
fields = paste_fields[binding.tabname]["fields"] fields = paste_fields[binding.tabname]["fields"]
...@@ -313,6 +314,9 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model ...@@ -313,6 +314,9 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
if "VAE Decoder" not in res: if "VAE Decoder" not in res:
res["VAE Decoder"] = "Full" res["VAE Decoder"] = "Full"
skip = set(shared.opts.infotext_skip_pasting)
res = {k: v for k, v in res.items() if k not in skip}
return res return res
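A small sketch of what the new infotext_skip_pasting filter does to the parsed parameters; the option value here is hypothetical:

res = {"Steps": "20", "Sampler": "Euler a", "Seed": "965400086", "Size": "512x512"}
skip = {"Seed", "Size"}  # stands in for shared.opts.infotext_skip_pasting
res = {k: v for k, v in res.items() if k not in skip}
print(res)  # {'Steps': '20', 'Sampler': 'Euler a'}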
...@@ -443,3 +447,4 @@ def connect_paste(button, paste_fields, input_comp, override_settings_component, ...@@ -443,3 +447,4 @@ def connect_paste(button, paste_fields, input_comp, override_settings_component,
outputs=[], outputs=[],
show_progress=False, show_progress=False,
) )
...@@ -9,6 +9,7 @@ from modules import paths, shared, devices, modelloader, errors ...@@ -9,6 +9,7 @@ from modules import paths, shared, devices, modelloader, errors
model_dir = "GFPGAN" model_dir = "GFPGAN"
user_path = None user_path = None
model_path = os.path.join(paths.models_path, model_dir) model_path = os.path.join(paths.models_path, model_dir)
model_file_path = None
model_url = "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth" model_url = "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth"
have_gfpgan = False have_gfpgan = False
loaded_gfpgan_model = None loaded_gfpgan_model = None
...@@ -17,6 +18,7 @@ loaded_gfpgan_model = None ...@@ -17,6 +18,7 @@ loaded_gfpgan_model = None
def gfpgann(): def gfpgann():
global loaded_gfpgan_model global loaded_gfpgan_model
global model_path global model_path
global model_file_path
if loaded_gfpgan_model is not None: if loaded_gfpgan_model is not None:
loaded_gfpgan_model.gfpgan.to(devices.device_gfpgan) loaded_gfpgan_model.gfpgan.to(devices.device_gfpgan)
return loaded_gfpgan_model return loaded_gfpgan_model
...@@ -24,17 +26,24 @@ def gfpgann(): ...@@ -24,17 +26,24 @@ def gfpgann():
if gfpgan_constructor is None: if gfpgan_constructor is None:
return None return None
models = modelloader.load_models(model_path, model_url, user_path, ext_filter="GFPGAN") models = modelloader.load_models(model_path, model_url, user_path, ext_filter=['.pth'])
if len(models) == 1 and models[0].startswith("http"): if len(models) == 1 and models[0].startswith("http"):
model_file = models[0] model_file = models[0]
elif len(models) != 0: elif len(models) != 0:
latest_file = max(models, key=os.path.getctime)
gfp_models = []
for item in models:
if 'GFPGAN' in os.path.basename(item):
gfp_models.append(item)
latest_file = max(gfp_models, key=os.path.getctime)
model_file = latest_file model_file = latest_file
else: else:
print("Unable to load gfpgan model!") print("Unable to load gfpgan model!")
return None return None
if hasattr(facexlib.detection.retinaface, 'device'): if hasattr(facexlib.detection.retinaface, 'device'):
facexlib.detection.retinaface.device = devices.device_gfpgan facexlib.detection.retinaface.device = devices.device_gfpgan
model_file_path = model_file
model = gfpgan_constructor(model_path=model_file, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=devices.device_gfpgan) model = gfpgan_constructor(model_path=model_file, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=devices.device_gfpgan)
loaded_gfpgan_model = model loaded_gfpgan_model = model
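The selection logic above, standalone: with the looser '.pth' filter, facexlib weights can end up in the same directory, so only files with GFPGAN in the basename remain candidates. Paths below are illustrative:

import os

models = [
    "models/GFPGAN/GFPGANv1.4.pth",
    "models/GFPGAN/detection_Resnet50_Final.pth",  # facexlib weight, must be skipped
]
gfp_models = [m for m in models if "GFPGAN" in os.path.basename(m)]
print(gfp_models)  # ['models/GFPGAN/GFPGANv1.4.pth']
# on real files, the newest candidate wins:
# latest_file = max(gfp_models, key=os.path.getctime)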
...@@ -77,19 +86,25 @@ def setup_model(dirname): ...@@ -77,19 +86,25 @@ def setup_model(dirname):
global user_path global user_path
global have_gfpgan global have_gfpgan
global gfpgan_constructor global gfpgan_constructor
global model_file_path
facexlib_path = model_path
if dirname is not None:
facexlib_path = dirname
load_file_from_url_orig = gfpgan.utils.load_file_from_url load_file_from_url_orig = gfpgan.utils.load_file_from_url
facex_load_file_from_url_orig = facexlib.detection.load_file_from_url facex_load_file_from_url_orig = facexlib.detection.load_file_from_url
facex_load_file_from_url_orig2 = facexlib.parsing.load_file_from_url facex_load_file_from_url_orig2 = facexlib.parsing.load_file_from_url
def my_load_file_from_url(**kwargs): def my_load_file_from_url(**kwargs):
return load_file_from_url_orig(**dict(kwargs, model_dir=model_path)) return load_file_from_url_orig(**dict(kwargs, model_dir=model_file_path))
def facex_load_file_from_url(**kwargs): def facex_load_file_from_url(**kwargs):
return facex_load_file_from_url_orig(**dict(kwargs, save_dir=model_path, model_dir=None)) return facex_load_file_from_url_orig(**dict(kwargs, save_dir=facexlib_path, model_dir=None))
def facex_load_file_from_url2(**kwargs): def facex_load_file_from_url2(**kwargs):
return facex_load_file_from_url_orig2(**dict(kwargs, save_dir=model_path, model_dir=None)) return facex_load_file_from_url_orig2(**dict(kwargs, save_dir=facexlib_path, model_dir=None))
gfpgan.utils.load_file_from_url = my_load_file_from_url gfpgan.utils.load_file_from_url = my_load_file_from_url
facexlib.detection.load_file_from_url = facex_load_file_from_url facexlib.detection.load_file_from_url = facex_load_file_from_url
......
...@@ -23,7 +23,7 @@ class Git(git.Git): ...@@ -23,7 +23,7 @@ class Git(git.Git):
) )
return self._parse_object_header(ret) return self._parse_object_header(ret)
def stream_object_data(self, ref: str) -> tuple[str, str, int, "Git.CatFileContentStream"]: def stream_object_data(self, ref: str) -> tuple[str, str, int, Git.CatFileContentStream]:
# Not really streaming, per se; this buffers the entire object in memory. # Not really streaming, per se; this buffers the entire object in memory.
# Shouldn't be a problem for our use case, since we're only using this for # Shouldn't be a problem for our use case, since we're only using this for
# object headers (commit objects). # object headers (commit objects).
......
...@@ -47,10 +47,20 @@ def Block_get_config(self): ...@@ -47,10 +47,20 @@ def Block_get_config(self):
def BlockContext_init(self, *args, **kwargs): def BlockContext_init(self, *args, **kwargs):
if scripts.scripts_current is not None:
scripts.scripts_current.before_component(self, **kwargs)
scripts.script_callbacks.before_component_callback(self, **kwargs)
res = original_BlockContext_init(self, *args, **kwargs) res = original_BlockContext_init(self, *args, **kwargs)
add_classes_to_gradio_component(self) add_classes_to_gradio_component(self)
scripts.script_callbacks.after_component_callback(self, **kwargs)
if scripts.scripts_current is not None:
scripts.scripts_current.after_component(self, **kwargs)
return res return res
......
...@@ -468,7 +468,7 @@ def create_hypernetwork(name, enable_sizes, overwrite_old, layer_structure=None, ...@@ -468,7 +468,7 @@ def create_hypernetwork(name, enable_sizes, overwrite_old, layer_structure=None,
shared.reload_hypernetworks() shared.reload_hypernetworks()
def train_hypernetwork(id_task, hypernetwork_name, learn_rate, batch_size, gradient_step, data_root, log_directory, training_width, training_height, varsize, steps, clip_grad_mode, clip_grad_value, shuffle_tags, tag_drop_out, latent_sampling_method, use_weight, create_image_every, save_hypernetwork_every, template_filename, preview_from_txt2img, preview_prompt, preview_negative_prompt, preview_steps, preview_sampler_index, preview_cfg_scale, preview_seed, preview_width, preview_height): def train_hypernetwork(id_task, hypernetwork_name: str, learn_rate: float, batch_size: int, gradient_step: int, data_root: str, log_directory: str, training_width: int, training_height: int, varsize: bool, steps: int, clip_grad_mode: str, clip_grad_value: float, shuffle_tags: bool, tag_drop_out: bool, latent_sampling_method: str, use_weight: bool, create_image_every: int, save_hypernetwork_every: int, template_filename: str, preview_from_txt2img: bool, preview_prompt: str, preview_negative_prompt: str, preview_steps: int, preview_sampler_name: str, preview_cfg_scale: float, preview_seed: int, preview_width: int, preview_height: int):
from modules import images, processing from modules import images, processing
save_hypernetwork_every = save_hypernetwork_every or 0 save_hypernetwork_every = save_hypernetwork_every or 0
...@@ -698,7 +698,7 @@ def train_hypernetwork(id_task, hypernetwork_name, learn_rate, batch_size, gradi ...@@ -698,7 +698,7 @@ def train_hypernetwork(id_task, hypernetwork_name, learn_rate, batch_size, gradi
p.prompt = preview_prompt p.prompt = preview_prompt
p.negative_prompt = preview_negative_prompt p.negative_prompt = preview_negative_prompt
p.steps = preview_steps p.steps = preview_steps
p.sampler_name = sd_samplers.samplers[preview_sampler_index].name p.sampler_name = sd_samplers.samplers_map[preview_sampler_name.lower()]
p.cfg_scale = preview_cfg_scale p.cfg_scale = preview_cfg_scale
p.seed = preview_seed p.seed = preview_seed
p.width = preview_width p.width = preview_width
......
...@@ -561,6 +561,8 @@ def save_image_with_geninfo(image, geninfo, filename, extension=None, existing_p ...@@ -561,6 +561,8 @@ def save_image_with_geninfo(image, geninfo, filename, extension=None, existing_p
}) })
piexif.insert(exif_bytes, filename) piexif.insert(exif_bytes, filename)
elif extension.lower() == ".gif":
image.save(filename, format=image_format, comment=geninfo)
else: else:
image.save(filename, format=image_format, quality=opts.jpeg_quality) image.save(filename, format=image_format, quality=opts.jpeg_quality)
...@@ -661,7 +663,13 @@ def save_image(image, path, basename, seed=None, prompt=None, extension='png', i ...@@ -661,7 +663,13 @@ def save_image(image, path, basename, seed=None, prompt=None, extension='png', i
save_image_with_geninfo(image_to_save, info, temp_file_path, extension, existing_pnginfo=params.pnginfo, pnginfo_section_name=pnginfo_section_name) save_image_with_geninfo(image_to_save, info, temp_file_path, extension, existing_pnginfo=params.pnginfo, pnginfo_section_name=pnginfo_section_name)
os.replace(temp_file_path, filename_without_extension + extension) filename = filename_without_extension + extension
if shared.opts.save_images_replace_action != "Replace":
n = 0
while os.path.exists(filename):
n += 1
filename = f"{filename_without_extension}-{n}{extension}"
os.replace(temp_file_path, filename)
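The numbering behaviour in isolation: when save_images_replace_action is anything other than "Replace", the writer probes for a free name instead of overwriting. A minimal sketch, not the full save path:

import os

def resolve_collision(filename_without_extension, extension):
    # picture.png -> picture-1.png -> picture-2.png ... until a free name is found
    filename = filename_without_extension + extension
    n = 0
    while os.path.exists(filename):
        n += 1
        filename = f"{filename_without_extension}-{n}{extension}"
    return filename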
fullfn_without_extension, extension = os.path.splitext(params.filename) fullfn_without_extension, extension = os.path.splitext(params.filename)
if hasattr(os, 'statvfs'): if hasattr(os, 'statvfs'):
...@@ -718,7 +726,12 @@ def read_info_from_image(image: Image.Image) -> tuple[str | None, dict]: ...@@ -718,7 +726,12 @@ def read_info_from_image(image: Image.Image) -> tuple[str | None, dict]:
geninfo = items.pop('parameters', None) geninfo = items.pop('parameters', None)
if "exif" in items: if "exif" in items:
exif = piexif.load(items["exif"]) exif_data = items["exif"]
try:
exif = piexif.load(exif_data)
except OSError:
# the in-memory exif data was not valid, so piexif tried to read it from a file instead
exif = None
exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'') exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'')
try: try:
exif_comment = piexif.helper.UserComment.load(exif_comment) exif_comment = piexif.helper.UserComment.load(exif_comment)
...@@ -728,6 +741,8 @@ def read_info_from_image(image: Image.Image) -> tuple[str | None, dict]: ...@@ -728,6 +741,8 @@ def read_info_from_image(image: Image.Image) -> tuple[str | None, dict]:
if exif_comment: if exif_comment:
items['exif comment'] = exif_comment items['exif comment'] = exif_comment
geninfo = exif_comment geninfo = exif_comment
elif "comment" in items: # for gif
geninfo = items["comment"].decode('utf8', errors="ignore")
for field in IGNORED_INFO_KEYS: for field in IGNORED_INFO_KEYS:
items.pop(field, None) items.pop(field, None)
......
...@@ -10,6 +10,7 @@ from modules import images as imgutil ...@@ -10,6 +10,7 @@ from modules import images as imgutil
from modules.generation_parameters_copypaste import create_override_settings_dict, parse_generation_parameters from modules.generation_parameters_copypaste import create_override_settings_dict, parse_generation_parameters
from modules.processing import Processed, StableDiffusionProcessingImg2Img, process_images from modules.processing import Processed, StableDiffusionProcessingImg2Img, process_images
from modules.shared import opts, state from modules.shared import opts, state
from modules.sd_models import get_closet_checkpoint_match
import modules.shared as shared import modules.shared as shared
import modules.processing as processing import modules.processing as processing
from modules.ui import plaintext_to_html from modules.ui import plaintext_to_html
...@@ -41,7 +42,10 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal ...@@ -41,7 +42,10 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
cfg_scale = p.cfg_scale cfg_scale = p.cfg_scale
sampler_name = p.sampler_name sampler_name = p.sampler_name
steps = p.steps steps = p.steps
override_settings = p.override_settings
sd_model_checkpoint_override = get_closet_checkpoint_match(override_settings.get("sd_model_checkpoint", None))
batch_results = None
discard_further_results = False
for i, image in enumerate(images): for i, image in enumerate(images):
state.job = f"{i+1} out of {len(images)}" state.job = f"{i+1} out of {len(images)}"
if state.skipped: if state.skipped:
...@@ -104,16 +108,42 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal ...@@ -104,16 +108,42 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
p.sampler_name = parsed_parameters.get("Sampler", sampler_name) p.sampler_name = parsed_parameters.get("Sampler", sampler_name)
p.steps = int(parsed_parameters.get("Steps", steps)) p.steps = int(parsed_parameters.get("Steps", steps))
model_info = get_closet_checkpoint_match(parsed_parameters.get("Model hash", None))
if model_info is not None:
p.override_settings['sd_model_checkpoint'] = model_info.name
elif sd_model_checkpoint_override:
p.override_settings['sd_model_checkpoint'] = sd_model_checkpoint_override
else:
p.override_settings.pop("sd_model_checkpoint", None)
if output_dir:
p.outpath_samples = output_dir
p.override_settings['save_to_dirs'] = False
p.override_settings['save_images_replace_action'] = "Add number suffix"
if p.n_iter > 1 or p.batch_size > 1:
p.override_settings['samples_filename_pattern'] = f'{image_path.stem}-[generation_number]'
else:
p.override_settings['samples_filename_pattern'] = f'{image_path.stem}'
proc = modules.scripts.scripts_img2img.run(p, *args) proc = modules.scripts.scripts_img2img.run(p, *args)
if proc is None: if proc is None:
if output_dir:
    p.outpath_samples = output_dir
    p.override_settings['save_to_dirs'] = False
    if p.n_iter > 1 or p.batch_size > 1:
        p.override_settings['samples_filename_pattern'] = f'{image_path.stem}-[generation_number]'
    else:
        p.override_settings['samples_filename_pattern'] = f'{image_path.stem}'
    process_images(p)
p.override_settings.pop('save_images_replace_action', None)
proc = process_images(p)
if not discard_further_results and proc:
    if batch_results:
        batch_results.images.extend(proc.images)
        batch_results.infotexts.extend(proc.infotexts)
    else:
        batch_results = proc
if 0 <= shared.opts.img2img_batch_show_results_limit < len(batch_results.images):
discard_further_results = True
batch_results.images = batch_results.images[:int(shared.opts.img2img_batch_show_results_limit)]
batch_results.infotexts = batch_results.infotexts[:int(shared.opts.img2img_batch_show_results_limit)]
return batch_results
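The limit comparison reads subtly: 0 <= limit < len(images) means a limit of 0 shows nothing, a positive value caps the gallery, and a negative value disables the cap. A sketch:

def cap_results(images, limit):
    # mirrors the img2img_batch_show_results_limit check above
    if 0 <= limit < len(images):
        return images[:int(limit)]
    return images

print(len(cap_results(list(range(32)), 2)))   # 2  (capped)
print(len(cap_results(list(range(32)), 0)))   # 0  (show nothing)
print(len(cap_results(list(range(32)), -1)))  # 32 (no cap)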
def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, steps: int, sampler_name: str, mask_blur: int, mask_alpha: float, inpainting_fill: int, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, selected_scale_tab: int, height: int, width: int, scale_by: float, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, img2img_batch_use_png_info: bool, img2img_batch_png_info_props: list, img2img_batch_png_info_dir: str, request: gr.Request, *args): def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, steps: int, sampler_name: str, mask_blur: int, mask_alpha: float, inpainting_fill: int, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, selected_scale_tab: int, height: int, width: int, scale_by: float, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, img2img_batch_use_png_info: bool, img2img_batch_png_info_props: list, img2img_batch_png_info_dir: str, request: gr.Request, *args):
...@@ -189,7 +219,7 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s ...@@ -189,7 +219,7 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
p.user = request.username p.user = request.username
if shared.cmd_opts.enable_console_prompts: if shared.opts.enable_console_prompts:
print(f"\nimg2img: {prompt}", file=shared.progress_print_out) print(f"\nimg2img: {prompt}", file=shared.progress_print_out)
if mask: if mask:
...@@ -198,10 +228,10 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s ...@@ -198,10 +228,10 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
with closing(p): with closing(p):
if is_batch: if is_batch:
assert not shared.cmd_opts.hide_ui_dir_config, "Launched with --hide-ui-dir-config, batch img2img disabled" assert not shared.cmd_opts.hide_ui_dir_config, "Launched with --hide-ui-dir-config, batch img2img disabled"
processed = process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args, to_scale=selected_scale_tab == 1, scale_by=scale_by, use_png_info=img2img_batch_use_png_info, png_info_props=img2img_batch_png_info_props, png_info_dir=img2img_batch_png_info_dir)
process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args, to_scale=selected_scale_tab == 1, scale_by=scale_by, use_png_info=img2img_batch_use_png_info, png_info_props=img2img_batch_png_info_props, png_info_dir=img2img_batch_png_info_dir)
if processed is None:
processed = Processed(p, [], p.seed, "")
processed = Processed(p, [], p.seed, "")
else: else:
processed = modules.scripts.scripts_img2img.run(p, *args) processed = modules.scripts.scripts_img2img.run(p, *args)
if processed is None: if processed is None:
......
...@@ -3,3 +3,14 @@ import sys ...@@ -3,3 +3,14 @@ import sys
# this will break any attempt to import xformers which will prevent the stable diffusion repo from trying to use it # this will break any attempt to import xformers which will prevent the stable diffusion repo from trying to use it
if "--xformers" not in "".join(sys.argv): if "--xformers" not in "".join(sys.argv):
sys.modules["xformers"] = None sys.modules["xformers"] = None
# Hack to fix a changed import in torchvision 0.17+, which otherwise breaks
# basicsr; see https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13985
try:
import torchvision.transforms.functional_tensor # noqa: F401
except ImportError:
try:
import torchvision.transforms.functional as functional
sys.modules["torchvision.transforms.functional_tensor"] = functional
except ImportError:
pass # shrug...
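The aliasing trick can be demonstrated with a stand-in module, so torchvision itself is not needed; the module and attribute names below are hypothetical. Once the full dotted name is in sys.modules, imports of the legacy path resolve to the replacement:

import importlib
import sys
import types

replacement = types.ModuleType("replacement")
replacement.rgb_to_grayscale = lambda img: "grayscale"

sys.modules["legacy.functional_tensor"] = replacement
mod = importlib.import_module("legacy.functional_tensor")
print(mod.rgb_to_grayscale(None))  # 'grayscale' - the old import path now works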
...@@ -151,8 +151,8 @@ def initialize_rest(*, reload_script_modules=False): ...@@ -151,8 +151,8 @@ def initialize_rest(*, reload_script_modules=False):
from modules import devices from modules import devices
devices.first_time_calculation() devices.first_time_calculation()
if not shared.cmd_opts.skip_load_model_at_start:
Thread(target=load_model).start() Thread(target=load_model).start()
from modules import shared_items from modules import shared_items
shared_items.reload_hypernetworks() shared_items.reload_hypernetworks()
......
...@@ -150,10 +150,14 @@ def dumpstacks(): ...@@ -150,10 +150,14 @@ def dumpstacks():
def configure_sigint_handler(): def configure_sigint_handler():
# make the program just exit at ctrl+c without waiting for anything # make the program just exit at ctrl+c without waiting for anything
from modules import shared
def sigint_handler(sig, frame): def sigint_handler(sig, frame):
print(f'Interrupted with signal {sig} in {frame}') print(f'Interrupted with signal {sig} in {frame}')
dumpstacks()
if shared.opts.dump_stacks_on_signal:
dumpstacks()
os._exit(0) os._exit(0)
......
...@@ -6,6 +6,7 @@ import os ...@@ -6,6 +6,7 @@ import os
import shutil import shutil
import sys import sys
import importlib.util import importlib.util
import importlib.metadata
import platform import platform
import json import json
from functools import lru_cache from functools import lru_cache
...@@ -64,7 +65,7 @@ Use --skip-python-version-check to suppress this warning. ...@@ -64,7 +65,7 @@ Use --skip-python-version-check to suppress this warning.
@lru_cache() @lru_cache()
def commit_hash(): def commit_hash():
try: try:
return subprocess.check_output([git, "rev-parse", "HEAD"], shell=False, encoding='utf8').strip() return subprocess.check_output([git, "-C", script_path, "rev-parse", "HEAD"], shell=False, encoding='utf8').strip()
except Exception: except Exception:
return "<none>" return "<none>"
...@@ -72,7 +73,7 @@ def commit_hash(): ...@@ -72,7 +73,7 @@ def commit_hash():
@lru_cache() @lru_cache()
def git_tag(): def git_tag():
try: try:
return subprocess.check_output([git, "describe", "--tags"], shell=False, encoding='utf8').strip() return subprocess.check_output([git, "-C", script_path, "describe", "--tags"], shell=False, encoding='utf8').strip()
except Exception: except Exception:
try: try:
...@@ -119,11 +120,16 @@ def run(command, desc=None, errdesc=None, custom_env=None, live: bool = default_ ...@@ -119,11 +120,16 @@ def run(command, desc=None, errdesc=None, custom_env=None, live: bool = default_
def is_installed(package): def is_installed(package):
try: try:
spec = importlib.util.find_spec(package) dist = importlib.metadata.distribution(package)
except ModuleNotFoundError: except importlib.metadata.PackageNotFoundError:
return False
try:
spec = importlib.util.find_spec(package)
except ModuleNotFoundError:
return False
return spec is not None
return spec is not None
return dist is not None
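The switch to importlib.metadata matters because pip distribution names and import names can differ; the opencv-python wheel, for example, installs a module named cv2, so a find_spec-only check misses it. A minimal probe:

import importlib.metadata

def installed_version(distribution_name):
    # look the package up by its pip name, not its import name
    try:
        return importlib.metadata.distribution(distribution_name).version
    except importlib.metadata.PackageNotFoundError:
        return None

print(installed_version("pip"))              # e.g. '23.2.1'
print(installed_version("no-such-package"))  # None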
def repo_dir(name): def repo_dir(name):
...@@ -310,6 +316,26 @@ def requirements_met(requirements_file): ...@@ -310,6 +316,26 @@ def requirements_met(requirements_file):
def prepare_environment(): def prepare_environment():
torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu118") torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu118")
torch_command = os.environ.get('TORCH_COMMAND', f"pip install torch==2.0.1 torchvision==0.15.2 --extra-index-url {torch_index_url}") torch_command = os.environ.get('TORCH_COMMAND', f"pip install torch==2.0.1 torchvision==0.15.2 --extra-index-url {torch_index_url}")
if args.use_ipex:
if platform.system() == "Windows":
# The "Nuullll/intel-extension-for-pytorch" wheels were built from IPEX source for Intel Arc GPU: https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main
# This is NOT an Intel official release so please use it at your own risk!!
# See https://github.com/Nuullll/intel-extension-for-pytorch/releases/tag/v2.0.110%2Bxpu-master%2Bdll-bundle for details.
#
# Strengths (over official IPEX 2.0.110 windows release):
# - AOT build (for Arc GPU only) to eliminate JIT compilation overhead: https://github.com/intel/intel-extension-for-pytorch/issues/399
# - Bundles minimal oneAPI 2023.2 dependencies into the python wheels, so users don't need to install oneAPI for the whole system.
# - Provides a compatible torchvision wheel: https://github.com/intel/intel-extension-for-pytorch/issues/465
# Limitation:
# - Only works for python 3.10
url_prefix = "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.0.110%2Bxpu-master%2Bdll-bundle"
torch_command = os.environ.get('TORCH_COMMAND', f"pip install {url_prefix}/torch-2.0.0a0+gite9ebda2-cp310-cp310-win_amd64.whl {url_prefix}/torchvision-0.15.2a0+fa99a53-cp310-cp310-win_amd64.whl {url_prefix}/intel_extension_for_pytorch-2.0.110+gitc6ea20b-cp310-cp310-win_amd64.whl")
else:
# Using official IPEX release for linux since it's already an AOT build.
# However, users still have to install oneAPI toolkit and activate oneAPI environment manually.
# See https://intel.github.io/intel-extension-for-pytorch/index.html#installation for details.
torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://pytorch-extension.intel.com/release-whl/stable/xpu/us/")
torch_command = os.environ.get('TORCH_COMMAND', f"pip install torch==2.0.0a0 intel-extension-for-pytorch==2.0.110+gitba7f6c1 --extra-index-url {torch_index_url}")
requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt") requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.20') xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.20')
...@@ -352,6 +378,8 @@ def prepare_environment(): ...@@ -352,6 +378,8 @@ def prepare_environment():
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True) run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
startup_timer.record("install torch") startup_timer.record("install torch")
if args.use_ipex:
args.skip_torch_cuda_test = True
if not args.skip_torch_cuda_test and not check_run_python("import torch; assert torch.cuda.is_available()"): if not args.skip_torch_cuda_test and not check_run_python("import torch; assert torch.cuda.is_available()"):
raise RuntimeError( raise RuntimeError(
'Torch is not able to use GPU; ' 'Torch is not able to use GPU; '
...@@ -441,7 +469,7 @@ def dump_sysinfo(): ...@@ -441,7 +469,7 @@ def dump_sysinfo():
import datetime import datetime
text = sysinfo.get() text = sysinfo.get()
filename = f"sysinfo-{datetime.datetime.utcnow().strftime('%Y-%m-%d-%H-%M')}.txt" filename = f"sysinfo-{datetime.datetime.utcnow().strftime('%Y-%m-%d-%H-%M')}.json"
with open(filename, "w", encoding="utf8") as file: with open(filename, "w", encoding="utf8") as file:
file.write(text) file.write(text)
......
...@@ -14,21 +14,24 @@ def list_localizations(dirname): ...@@ -14,21 +14,24 @@ def list_localizations(dirname):
if ext.lower() != ".json": if ext.lower() != ".json":
continue continue
localizations[fn] = os.path.join(dirname, file) localizations[fn] = [os.path.join(dirname, file)]
for file in scripts.list_scripts("localizations", ".json"): for file in scripts.list_scripts("localizations", ".json"):
fn, ext = os.path.splitext(file.filename) fn, ext = os.path.splitext(file.filename)
localizations[fn] = file.path
if fn not in localizations:
localizations[fn] = []
localizations[fn].append(file.path)
def localization_js(current_localization_name: str) -> str: def localization_js(current_localization_name: str) -> str:
fn = localizations.get(current_localization_name, None) fns = localizations.get(current_localization_name, None)
data = {} data = {}
if fn is not None: if fns is not None:
try:
    with open(fn, "r", encoding="utf8") as file:
        data = json.load(file)
except Exception:
    errors.report(f"Error loading localization from {fn}", exc_info=True)
for fn in fns:
    try:
        with open(fn, "r", encoding="utf8") as file:
            data.update(json.load(file))
    except Exception:
        errors.report(f"Error loading localization from {fn}", exc_info=True)
return f"window.localization = {json.dumps(data)}" return f"window.localization = {json.dumps(data)}"
import os import os
import logging import logging
try:
from tqdm.auto import tqdm
class TqdmLoggingHandler(logging.Handler):
def __init__(self, level=logging.INFO):
super().__init__(level)
def emit(self, record):
try:
msg = self.format(record)
tqdm.write(msg)
self.flush()
except Exception:
self.handleError(record)
TQDM_IMPORTED = True
except ImportError:
# tqdm does not exist before first launch
# it will be imported once the UI finishes setting up the environment and reloads.
TQDM_IMPORTED = False
def setup_logging(loglevel): def setup_logging(loglevel):
if loglevel is None: if loglevel is None:
loglevel = os.environ.get("SD_WEBUI_LOG_LEVEL") loglevel = os.environ.get("SD_WEBUI_LOG_LEVEL")
loghandlers = []
if TQDM_IMPORTED:
loghandlers.append(TqdmLoggingHandler())
if loglevel: if loglevel:
log_level = getattr(logging, loglevel.upper(), None) or logging.INFO log_level = getattr(logging, loglevel.upper(), None) or logging.INFO
logging.basicConfig( logging.basicConfig(
level=log_level, level=log_level,
format='%(asctime)s %(levelname)s [%(name)s] %(message)s', format='%(asctime)s %(levelname)s [%(name)s] %(message)s',
datefmt='%Y-%m-%d %H:%M:%S', datefmt='%Y-%m-%d %H:%M:%S',
handlers=loghandlers
) )
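The point of routing records through tqdm.write is that messages print above an active progress bar instead of corrupting it. A small demonstration, assuming tqdm is installed (as it is after first launch):

import time
from tqdm.auto import tqdm

for i in tqdm(range(3), desc="steps"):
    tqdm.write(f"log line {i}")  # appears above the bar; the bar stays intact
    time.sleep(0.1)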
import logging import logging
import torch import torch
from torch import Tensor
import platform import platform
from modules.sd_hijack_utils import CondFunc from modules.sd_hijack_utils import CondFunc
from packaging import version from packaging import version
...@@ -51,6 +52,17 @@ def cumsum_fix(input, cumsum_func, *args, **kwargs): ...@@ -51,6 +52,17 @@ def cumsum_fix(input, cumsum_func, *args, **kwargs):
return cumsum_func(input, *args, **kwargs) return cumsum_func(input, *args, **kwargs)
# MPS workaround for https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14046
def interpolate_with_fp32_fallback(orig_func, *args, **kwargs) -> Tensor:
try:
return orig_func(*args, **kwargs)
except RuntimeError as e:
if "not implemented for" in str(e) and "Half" in str(e):
input_tensor = args[0]
return orig_func(input_tensor.to(torch.float32), *args[1:], **kwargs).to(input_tensor.dtype)
else:
print(f"An unexpected RuntimeError occurred: {str(e)}")
if has_mps: if has_mps:
if platform.mac_ver()[0].startswith("13.2."): if platform.mac_ver()[0].startswith("13.2."):
# MPS workaround for https://github.com/pytorch/pytorch/issues/95188, thanks to danieldk (https://github.com/explosion/curated-transformers/pull/124) # MPS workaround for https://github.com/pytorch/pytorch/issues/95188, thanks to danieldk (https://github.com/explosion/curated-transformers/pull/124)
...@@ -77,6 +89,9 @@ if has_mps: ...@@ -77,6 +89,9 @@ if has_mps:
# MPS workaround for https://github.com/pytorch/pytorch/issues/96113 # MPS workaround for https://github.com/pytorch/pytorch/issues/96113
CondFunc('torch.nn.functional.layer_norm', lambda orig_func, x, normalized_shape, weight, bias, eps, **kwargs: orig_func(x.float(), normalized_shape, weight.float() if weight is not None else None, bias.float() if bias is not None else bias, eps).to(x.dtype), lambda _, input, *args, **kwargs: len(args) == 4 and input.device.type == 'mps') CondFunc('torch.nn.functional.layer_norm', lambda orig_func, x, normalized_shape, weight, bias, eps, **kwargs: orig_func(x.float(), normalized_shape, weight.float() if weight is not None else None, bias.float() if bias is not None else bias, eps).to(x.dtype), lambda _, input, *args, **kwargs: len(args) == 4 and input.device.type == 'mps')
# MPS workaround for https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14046
CondFunc('torch.nn.functional.interpolate', interpolate_with_fp32_fallback, None)
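The fallback pattern in isolation; one deliberate difference from the hunk above is that this sketch re-raises unrecognized errors instead of printing them, which is an assumption about preferred behaviour rather than what the commit does:

import torch

def with_fp32_fallback(fn, x, *args, **kwargs):
    try:
        return fn(x, *args, **kwargs)
    except RuntimeError as e:
        if "not implemented for" in str(e) and "Half" in str(e):
            # retry in float32, then cast the result back to the input dtype
            return fn(x.to(torch.float32), *args, **kwargs).to(x.dtype)
        raise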
# MPS workaround for https://github.com/pytorch/pytorch/issues/92311 # MPS workaround for https://github.com/pytorch/pytorch/issues/92311
if platform.processor() == 'i386': if platform.processor() == 'i386':
for funcName in ['torch.argmax', 'torch.Tensor.argmax']: for funcName in ['torch.argmax', 'torch.Tensor.argmax']:
......
...@@ -24,10 +24,15 @@ from pytorch_lightning.utilities.distributed import rank_zero_only ...@@ -24,10 +24,15 @@ from pytorch_lightning.utilities.distributed import rank_zero_only
from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config
from ldm.modules.ema import LitEma from ldm.modules.ema import LitEma
from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution
from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL from ldm.models.autoencoder import IdentityFirstStage, AutoencoderKL
from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like
from ldm.models.diffusion.ddim import DDIMSampler from ldm.models.diffusion.ddim import DDIMSampler
try:
from ldm.models.autoencoder import VQModelInterface
except Exception:
class VQModelInterface:
pass
__conditioning_keys__ = {'concat': 'c_concat', __conditioning_keys__ = {'concat': 'c_concat',
'crossattn': 'c_crossattn', 'crossattn': 'c_crossattn',
......
import json import json
import sys import sys
from dataclasses import dataclass
import gradio as gr import gradio as gr
...@@ -8,13 +9,14 @@ from modules.shared_cmd_options import cmd_opts ...@@ -8,13 +9,14 @@ from modules.shared_cmd_options import cmd_opts
class OptionInfo: class OptionInfo:
def __init__(self, default=None, label="", component=None, component_args=None, onchange=None, section=None, refresh=None, comment_before='', comment_after='', infotext=None, restrict_api=False): def __init__(self, default=None, label="", component=None, component_args=None, onchange=None, section=None, refresh=None, comment_before='', comment_after='', infotext=None, restrict_api=False, category_id=None):
self.default = default self.default = default
self.label = label self.label = label
self.component = component self.component = component
self.component_args = component_args self.component_args = component_args
self.onchange = onchange self.onchange = onchange
self.section = section self.section = section
self.category_id = category_id
self.refresh = refresh self.refresh = refresh
self.do_not_save = False self.do_not_save = False
...@@ -63,7 +65,11 @@ class OptionHTML(OptionInfo): ...@@ -63,7 +65,11 @@ class OptionHTML(OptionInfo):
def options_section(section_identifier, options_dict): def options_section(section_identifier, options_dict):
for v in options_dict.values(): for v in options_dict.values():
v.section = section_identifier if len(section_identifier) == 2:
v.section = section_identifier
elif len(section_identifier) == 3:
v.section = section_identifier[0:2]
v.category_id = section_identifier[2]
return options_dict return options_dict
...@@ -76,7 +82,7 @@ class Options: ...@@ -76,7 +82,7 @@ class Options:
def __init__(self, data_labels: dict[str, OptionInfo], restricted_opts): def __init__(self, data_labels: dict[str, OptionInfo], restricted_opts):
self.data_labels = data_labels self.data_labels = data_labels
self.data = {k: v.default for k, v in self.data_labels.items()} self.data = {k: v.default for k, v in self.data_labels.items() if not v.do_not_save}
self.restricted_opts = restricted_opts self.restricted_opts = restricted_opts
def __setattr__(self, key, value): def __setattr__(self, key, value):
...@@ -158,7 +164,7 @@ class Options: ...@@ -158,7 +164,7 @@ class Options:
assert not cmd_opts.freeze_settings, "saving settings is disabled" assert not cmd_opts.freeze_settings, "saving settings is disabled"
with open(filename, "w", encoding="utf8") as file: with open(filename, "w", encoding="utf8") as file:
json.dump(self.data, file, indent=4) json.dump(self.data, file, indent=4, ensure_ascii=False)
def same_type(self, x, y): def same_type(self, x, y):
if x is None or y is None: if x is None or y is None:
...@@ -206,21 +212,59 @@ class Options: ...@@ -206,21 +212,59 @@ class Options:
d = {k: self.data.get(k, v.default) for k, v in self.data_labels.items()} d = {k: self.data.get(k, v.default) for k, v in self.data_labels.items()}
d["_comments_before"] = {k: v.comment_before for k, v in self.data_labels.items() if v.comment_before is not None} d["_comments_before"] = {k: v.comment_before for k, v in self.data_labels.items() if v.comment_before is not None}
d["_comments_after"] = {k: v.comment_after for k, v in self.data_labels.items() if v.comment_after is not None} d["_comments_after"] = {k: v.comment_after for k, v in self.data_labels.items() if v.comment_after is not None}
item_categories = {}
for item in self.data_labels.values():
category = categories.mapping.get(item.category_id)
category = "Uncategorized" if category is None else category.label
if category not in item_categories:
item_categories[category] = item.section[1]
# _categories is a list of pairs: [section, category]. Each section (a setting page) will get a special heading above it with the category as text.
d["_categories"] = [[v, k] for k, v in item_categories.items()] + [["Defaults", "Other"]]
return json.dumps(d) return json.dumps(d)
def add_option(self, key, info): def add_option(self, key, info):
self.data_labels[key] = info self.data_labels[key] = info
if key not in self.data and not info.do_not_save:
self.data[key] = info.default
def reorder(self): def reorder(self):
"""reorder settings so that all items related to section always go together""" """Reorder settings so that:
- all items related to section always go together
- all sections belonging to a category go together
- sections inside a category are ordered alphabetically
- categories are ordered by creation order
Category is a superset of sections: for category "postprocessing" there could be multiple sections: "face restoration", "upscaling".
This function also changes items' category_id so that all items belonging to a section have the same category_id.
"""
category_ids = {}
section_categories = {}
section_ids = {}
settings_items = self.data_labels.items() settings_items = self.data_labels.items()
for _, item in settings_items: for _, item in settings_items:
if item.section not in section_ids: if item.section not in section_categories:
section_ids[item.section] = len(section_ids) section_categories[item.section] = item.category_id
for _, item in settings_items:
item.category_id = section_categories.get(item.section)
for category_id in categories.mapping:
if category_id not in category_ids:
category_ids[category_id] = len(category_ids)
self.data_labels = dict(sorted(settings_items, key=lambda x: section_ids[x[1].section]))
def sort_key(x):
item: OptionInfo = x[1]
category_order = category_ids.get(item.category_id, len(category_ids))
section_order = item.section[1]
return category_order, section_order
self.data_labels = dict(sorted(settings_items, key=sort_key))
def cast_value(self, key, value): def cast_value(self, key, value):
"""casts an arbitrary to the same type as this setting's value with key """casts an arbitrary to the same type as this setting's value with key
...@@ -243,3 +287,22 @@ class Options: ...@@ -243,3 +287,22 @@ class Options:
value = expected_type(value) value = expected_type(value)
return value return value
@dataclass
class OptionsCategory:
id: str
label: str
class OptionsCategories:
def __init__(self):
self.mapping = {}
def register_category(self, category_id, label):
if category_id in self.mapping:
return category_id
self.mapping[category_id] = OptionsCategory(category_id, label)
categories = OptionsCategories()
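Usage sketch for the category API introduced above: register a category once, then pass a 3-tuple section identifier to options_section. The identifiers and labels here are illustrative:

categories.register_category("postprocessing", "Postprocessing")

options = options_section(("face_restoration", "Face restoration", "postprocessing"), {
    "face_restoration_model": OptionInfo("GFPGAN", "Face restoration model"),
})
# every OptionInfo in the section now carries category_id == "postprocessing"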
import os import os
import sys import sys
from modules.paths_internal import models_path, script_path, data_path, extensions_dir, extensions_builtin_dir # noqa: F401 from modules.paths_internal import models_path, script_path, data_path, extensions_dir, extensions_builtin_dir, cwd # noqa: F401
import modules.safe # noqa: F401 import modules.safe # noqa: F401
......
...@@ -8,6 +8,7 @@ import shlex ...@@ -8,6 +8,7 @@ import shlex
commandline_args = os.environ.get('COMMANDLINE_ARGS', "") commandline_args = os.environ.get('COMMANDLINE_ARGS', "")
sys.argv += shlex.split(commandline_args) sys.argv += shlex.split(commandline_args)
cwd = os.getcwd()
modules_path = os.path.dirname(os.path.realpath(__file__)) modules_path = os.path.dirname(os.path.realpath(__file__))
script_path = os.path.dirname(modules_path) script_path = os.path.dirname(modules_path)
......
...@@ -29,11 +29,7 @@ def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir, ...@@ -29,11 +29,7 @@ def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir,
image_list = shared.listfiles(input_dir) image_list = shared.listfiles(input_dir)
for filename in image_list: for filename in image_list:
try:
    image = Image.open(filename)
except Exception:
    continue
yield image, filename
yield filename, filename
else: else:
assert image, 'image not selected' assert image, 'image not selected'
yield image, None yield image, None
...@@ -45,43 +41,97 @@ def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir, ...@@ -45,43 +41,97 @@ def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir,
infotext = '' infotext = ''
for image_data, name in get_images(extras_mode, image, image_folder, input_dir):
data_to_process = list(get_images(extras_mode, image, image_folder, input_dir))
shared.state.job_count = len(data_to_process)
for image_placeholder, name in data_to_process:
image_data: Image.Image image_data: Image.Image
shared.state.nextjob()
shared.state.textinfo = name shared.state.textinfo = name
shared.state.skipped = False
if shared.state.interrupted:
break
if isinstance(image_placeholder, str):
try:
image_data = Image.open(image_placeholder)
except Exception:
continue
else:
image_data = image_placeholder
shared.state.assign_current_image(image_data)
parameters, existing_pnginfo = images.read_info_from_image(image_data) parameters, existing_pnginfo = images.read_info_from_image(image_data)
if parameters: if parameters:
existing_pnginfo["parameters"] = parameters existing_pnginfo["parameters"] = parameters
pp = scripts_postprocessing.PostprocessedImage(image_data.convert("RGB")) initial_pp = scripts_postprocessing.PostprocessedImage(image_data.convert("RGB"))
scripts.scripts_postproc.run(pp, args) scripts.scripts_postproc.run(initial_pp, args)
if opts.use_original_name_batch and name is not None:
    basename = os.path.splitext(os.path.basename(name))[0]
else:
    basename = ''
infotext = ", ".join([k if k == v else f'{k}: {generation_parameters_copypaste.quote(v)}' for k, v in pp.info.items() if v is not None])
if opts.enable_pnginfo:
    pp.image.info = existing_pnginfo
    pp.image.info["postprocessing"] = infotext
if save_output:
    images.save_image(pp.image, path=outpath, basename=basename, seed=None, prompt=None, extension=opts.samples_format, info=infotext, short_filename=True, no_prompt=True, grid=False, pnginfo_section_name="extras", existing_info=existing_pnginfo, forced_filename=None)
if extras_mode != 2 or show_extras_results:
    outputs.append(pp.image)
if shared.state.skipped:
    continue
used_suffixes = {}
for pp in [initial_pp, *initial_pp.extra_images]:
    suffix = pp.get_suffix(used_suffixes)
    if opts.use_original_name_batch and name is not None:
        basename = os.path.splitext(os.path.basename(name))[0]
        forced_filename = basename + suffix
    else:
        basename = ''
        forced_filename = None
    infotext = ", ".join([k if k == v else f'{k}: {generation_parameters_copypaste.quote(v)}' for k, v in pp.info.items() if v is not None])
    if opts.enable_pnginfo:
        pp.image.info = existing_pnginfo
        pp.image.info["postprocessing"] = infotext
    if save_output:
        fullfn, _ = images.save_image(pp.image, path=outpath, basename=basename, extension=opts.samples_format, info=infotext, short_filename=True, no_prompt=True, grid=False, pnginfo_section_name="extras", existing_info=existing_pnginfo, forced_filename=forced_filename, suffix=suffix)
if pp.caption:
caption_filename = os.path.splitext(fullfn)[0] + ".txt"
if os.path.isfile(caption_filename):
with open(caption_filename, encoding="utf8") as file:
existing_caption = file.read().strip()
else:
existing_caption = ""
action = shared.opts.postprocessing_existing_caption_action
if action == 'Prepend' and existing_caption:
caption = f"{existing_caption} {pp.caption}"
elif action == 'Append' and existing_caption:
caption = f"{pp.caption} {existing_caption}"
elif action == 'Keep' and existing_caption:
caption = existing_caption
else:
caption = pp.caption
caption = caption.strip()
if caption:
with open(caption_filename, "w", encoding="utf8") as file:
file.write(caption)
if extras_mode != 2 or show_extras_results:
outputs.append(pp.image)
image_data.close() image_data.close()
devices.torch_gc() devices.torch_gc()
shared.state.end()
return outputs, ui_common.plaintext_to_html(infotext), '' return outputs, ui_common.plaintext_to_html(infotext), ''
def run_postprocessing_webui(id_task, *args, **kwargs):
return run_postprocessing(*args, **kwargs)
def run_extras(extras_mode, resize_mode, image, image_folder, input_dir, output_dir, show_extras_results, gfpgan_visibility, codeformer_visibility, codeformer_weight, upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility, upscale_first: bool, save_output: bool = True): def run_extras(extras_mode, resize_mode, image, image_folder, input_dir, output_dir, show_extras_results, gfpgan_visibility, codeformer_visibility, codeformer_weight, upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility, upscale_first: bool, save_output: bool = True):
"""old handler for API""" """old handler for API"""
...@@ -97,9 +147,11 @@ def run_extras(extras_mode, resize_mode, image, image_folder, input_dir, output_ ...@@ -97,9 +147,11 @@ def run_extras(extras_mode, resize_mode, image, image_folder, input_dir, output_
"upscaler_2_visibility": extras_upscaler_2_visibility, "upscaler_2_visibility": extras_upscaler_2_visibility,
}, },
"GFPGAN": { "GFPGAN": {
"enable": True,
"gfpgan_visibility": gfpgan_visibility, "gfpgan_visibility": gfpgan_visibility,
}, },
"CodeFormer": { "CodeFormer": {
"enable": True,
"codeformer_visibility": codeformer_visibility, "codeformer_visibility": codeformer_visibility,
"codeformer_weight": codeformer_weight, "codeformer_weight": codeformer_weight,
}, },
......
...@@ -142,7 +142,7 @@ class StableDiffusionProcessing: ...@@ -142,7 +142,7 @@ class StableDiffusionProcessing:
overlay_images: list = None overlay_images: list = None
eta: float = None eta: float = None
do_not_reload_embeddings: bool = False do_not_reload_embeddings: bool = False
denoising_strength: float = 0 denoising_strength: float = None
ddim_discretize: str = None ddim_discretize: str = None
s_min_uncond: float = None s_min_uncond: float = None
s_churn: float = None s_churn: float = None
...@@ -296,7 +296,7 @@ class StableDiffusionProcessing: ...@@ -296,7 +296,7 @@ class StableDiffusionProcessing:
return conditioning return conditioning
def edit_image_conditioning(self, source_image): def edit_image_conditioning(self, source_image):
conditioning_image = images_tensor_to_samples(source_image*0.5+0.5, approximation_indexes.get(opts.sd_vae_encode_method)) conditioning_image = shared.sd_model.encode_first_stage(source_image).mode()
return conditioning_image return conditioning_image
...@@ -533,6 +533,7 @@ class Processed: ...@@ -533,6 +533,7 @@ class Processed:
self.all_seeds = all_seeds or p.all_seeds or [self.seed] self.all_seeds = all_seeds or p.all_seeds or [self.seed]
self.all_subseeds = all_subseeds or p.all_subseeds or [self.subseed] self.all_subseeds = all_subseeds or p.all_subseeds or [self.subseed]
self.infotexts = infotexts or [info] self.infotexts = infotexts or [info]
self.version = program_version()
def js(self): def js(self):
obj = { obj = {
...@@ -567,6 +568,7 @@ class Processed: ...@@ -567,6 +568,7 @@ class Processed:
"job_timestamp": self.job_timestamp, "job_timestamp": self.job_timestamp,
"clip_skip": self.clip_skip, "clip_skip": self.clip_skip,
"is_using_inpainting_conditioning": self.is_using_inpainting_conditioning, "is_using_inpainting_conditioning": self.is_using_inpainting_conditioning,
"version": self.version,
} }
return json.dumps(obj) return json.dumps(obj)
...@@ -677,8 +679,8 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iter ...@@ -677,8 +679,8 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iter
"Size": f"{p.width}x{p.height}", "Size": f"{p.width}x{p.height}",
"Model hash": p.sd_model_hash if opts.add_model_hash_to_info else None, "Model hash": p.sd_model_hash if opts.add_model_hash_to_info else None,
"Model": p.sd_model_name if opts.add_model_name_to_info else None, "Model": p.sd_model_name if opts.add_model_name_to_info else None,
"VAE hash": p.sd_vae_hash if opts.add_model_hash_to_info else None, "VAE hash": p.sd_vae_hash if opts.add_vae_hash_to_info else None,
"VAE": p.sd_vae_name if opts.add_model_name_to_info else None, "VAE": p.sd_vae_name if opts.add_vae_name_to_info else None,
"Variation seed": (None if p.subseed_strength == 0 else (p.all_subseeds[0] if use_main_prompt else all_subseeds[index])), "Variation seed": (None if p.subseed_strength == 0 else (p.all_subseeds[0] if use_main_prompt else all_subseeds[index])),
"Variation seed strength": (None if p.subseed_strength == 0 else p.subseed_strength), "Variation seed strength": (None if p.subseed_strength == 0 else p.subseed_strength),
"Seed resize from": (None if p.seed_resize_from_w <= 0 or p.seed_resize_from_h <= 0 else f"{p.seed_resize_from_w}x{p.seed_resize_from_h}"), "Seed resize from": (None if p.seed_resize_from_w <= 0 or p.seed_resize_from_h <= 0 else f"{p.seed_resize_from_w}x{p.seed_resize_from_h}"),
...@@ -709,7 +711,7 @@ def process_images(p: StableDiffusionProcessing) -> Processed: ...@@ -709,7 +711,7 @@ def process_images(p: StableDiffusionProcessing) -> Processed:
if p.scripts is not None: if p.scripts is not None:
p.scripts.before_process(p) p.scripts.before_process(p)
stored_opts = {k: opts.data[k] for k in p.override_settings.keys()} stored_opts = {k: opts.data[k] if k in opts.data else opts.get_default(k) for k in p.override_settings.keys() if k in opts.data}
try: try:
# if no checkpoint override or the override checkpoint can't be found, remove override entry and load opts checkpoint # if no checkpoint override or the override checkpoint can't be found, remove override entry and load opts checkpoint
...@@ -797,7 +799,6 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed: ...@@ -797,7 +799,6 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
infotexts = [] infotexts = []
output_images = [] output_images = []
with torch.no_grad(), p.sd_model.ema_scope(): with torch.no_grad(), p.sd_model.ema_scope():
with devices.autocast(): with devices.autocast():
p.init(p.all_prompts, p.all_seeds, p.all_subseeds) p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
...@@ -871,7 +872,6 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed: ...@@ -871,7 +872,6 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
else: else:
if opts.sd_vae_decode_method != 'Full': if opts.sd_vae_decode_method != 'Full':
p.extra_generation_params['VAE Decoder'] = opts.sd_vae_decode_method p.extra_generation_params['VAE Decoder'] = opts.sd_vae_decode_method
x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True) x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
x_samples_ddim = torch.stack(x_samples_ddim).float() x_samples_ddim = torch.stack(x_samples_ddim).float()
...@@ -884,6 +884,8 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed: ...@@ -884,6 +884,8 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
devices.torch_gc() devices.torch_gc()
state.nextjob()
if p.scripts is not None: if p.scripts is not None:
p.scripts.postprocess_batch(p, x_samples_ddim, batch_number=n) p.scripts.postprocess_batch(p, x_samples_ddim, batch_number=n)
...@@ -936,27 +938,27 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed: ...@@ -936,27 +938,27 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
if opts.enable_pnginfo: if opts.enable_pnginfo:
image.info["parameters"] = text image.info["parameters"] = text
output_images.append(image) output_images.append(image)
if save_samples and hasattr(p, 'mask_for_overlay') and p.mask_for_overlay and any([opts.save_mask, opts.save_mask_composite, opts.return_mask, opts.return_mask_composite]):
    image_mask = p.mask_for_overlay.convert('RGB')
    image_mask_composite = Image.composite(image.convert('RGBA').convert('RGBa'), Image.new('RGBa', image.size), images.resize_image(2, p.mask_for_overlay, image.width, image.height).convert('L')).convert('RGBA')
    if opts.save_mask:
        images.save_image(image_mask, p.outpath_samples, "", p.seeds[i], p.prompts[i], opts.samples_format, info=infotext(i), p=p, suffix="-mask")
    if opts.save_mask_composite:
        images.save_image(image_mask_composite, p.outpath_samples, "", p.seeds[i], p.prompts[i], opts.samples_format, info=infotext(i), p=p, suffix="-mask-composite")
    if opts.return_mask:
        output_images.append(image_mask)
    if opts.return_mask_composite:
        output_images.append(image_mask_composite)
if hasattr(p, 'mask_for_overlay') and p.mask_for_overlay:
    if opts.return_mask or opts.save_mask:
        image_mask = p.mask_for_overlay.convert('RGB')
        if save_samples and opts.save_mask:
            images.save_image(image_mask, p.outpath_samples, "", p.seeds[i], p.prompts[i], opts.samples_format, info=infotext(i), p=p, suffix="-mask")
        if opts.return_mask:
            output_images.append(image_mask)
    if opts.return_mask_composite or opts.save_mask_composite:
        image_mask_composite = Image.composite(image.convert('RGBA').convert('RGBa'), Image.new('RGBa', image.size), images.resize_image(2, p.mask_for_overlay, image.width, image.height).convert('L')).convert('RGBA')
        if save_samples and opts.save_mask_composite:
            images.save_image(image_mask_composite, p.outpath_samples, "", p.seeds[i], p.prompts[i], opts.samples_format, info=infotext(i), p=p, suffix="-mask-composite")
        if opts.return_mask_composite:
            output_images.append(image_mask_composite)
del x_samples_ddim del x_samples_ddim
devices.torch_gc() devices.torch_gc()
state.nextjob()
if not infotexts:
infotexts.append(Processed(p, []).infotext(p, 0))
p.color_corrections = None p.color_corrections = None
@@ -1142,6 +1144,7 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
if not self.enable_hr:
    return samples

devices.torch_gc()

if self.latent_scale_mode is None:
    decoded_samples = torch.stack(decode_latent_batch(self.sd_model, samples, target_device=devices.cpu, check_for_nans=True)).to(dtype=torch.float32)
@@ -1151,8 +1154,6 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
with sd_models.SkipWritingToConfig():
    sd_models.reload_model_weights(info=self.hr_checkpoint_info)

return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)

def sample_hr_pass(self, samples, decoded_samples, seeds, subseeds, subseed_strength, prompts):
@@ -1160,7 +1161,6 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
    return samples

self.is_hr_pass = True
target_width = self.hr_upscale_to_x
target_height = self.hr_upscale_to_y
@@ -1249,7 +1249,6 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
decoded_samples = decode_latent_batch(self.sd_model, samples, target_device=devices.cpu, check_for_nans=True)

self.is_hr_pass = False
return decoded_samples

def close(self):
...
@@ -29,8 +29,8 @@ class ScriptSeed(scripts.ScriptBuiltinUI):
else:
    self.seed = gr.Number(label='Seed', value=-1, elem_id=self.elem_id("seed"), min_width=100, precision=0)

random_seed = ToolButton(ui.random_symbol, elem_id=self.elem_id("random_seed"), tooltip="Set seed to -1, which will cause a new random number to be used every time")
reuse_seed = ToolButton(ui.reuse_symbol, elem_id=self.elem_id("reuse_seed"), tooltip="Reuse seed from last generation, mostly useful if it was randomized")
seed_checkbox = gr.Checkbox(label='Extra', elem_id=self.elem_id("subseed_show"), value=False)
...
@@ -2,10 +2,9 @@ from __future__ import annotations
import re
from collections import namedtuple

import lark

# a prompt like this: "fantasy landscape with a [mountain:lake:0.25] and [an oak:a christmas tree:0.75][ in foreground::0.6][: in background:0.25] [shoddy:masterful:0.5]"
# will be represented with prompt_schedule like this (assuming steps=100):
# [25, 'fantasy landscape with a mountain and an oak in foreground shoddy']
# [50, 'fantasy landscape with a lake and an oak in foreground in background shoddy']
@@ -240,14 +239,14 @@ def get_multicond_prompt_list(prompts: SdConditioning | list[str]):
class ComposableScheduledPromptConditioning:
    def __init__(self, schedules, weight=1.0):
        self.schedules: list[ScheduledPromptConditioning] = schedules
        self.weight: float = weight

class MulticondLearnedConditioning:
    def __init__(self, shape, batch):
        self.shape: tuple = shape  # the shape field is needed to send this object to DDIM/PLMS
        self.batch: list[list[ComposableScheduledPromptConditioning]] = batch

def get_multicond_learned_conditioning(model, prompts, steps, hires_steps=None, use_old_scheduling=False) -> MulticondLearnedConditioning:
@@ -278,7 +277,7 @@ class DictWithShape(dict):
    return self["crossattn"].shape

def reconstruct_cond_batch(c: list[list[ScheduledPromptConditioning]], current_step):
    param = c[0][0].cond
    is_dict = isinstance(param, dict)
...
@@ -14,7 +14,9 @@ def is_restartable() -> bool:
def restart_program() -> None:
    """creates file tmp/restart and immediately stops the process, which webui.bat/webui.sh interpret as a command to start webui again"""

    tmpdir = Path(script_path) / "tmp"
    tmpdir.mkdir(parents=True, exist_ok=True)
    (tmpdir / "restart").touch()

    stop_program()
...
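For context, a minimal sketch of the relaunch loop that webui.bat/webui.sh implement (illustrative Python, not the actual launcher; run_webui is a hypothetical stand-in for starting the server process):

import subprocess
from pathlib import Path

def run_webui() -> int:
    # hypothetical helper: launch the server and wait for it to exit
    return subprocess.call(["python", "launch.py"])

while True:
    run_webui()
    restart_flag = Path("tmp") / "restart"
    if not restart_flag.exists():
        break  # normal shutdown, no restart requested
    restart_flag.unlink()  # consume the flag and start webui again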
@@ -110,7 +110,7 @@ class ImageRNG:
    self.is_first = True

def first(self):
    noise_shape = self.shape if self.seed_resize_from_h <= 0 or self.seed_resize_from_w <= 0 else (self.shape[0], int(self.seed_resize_from_h) // 8, int(self.seed_resize_from_w) // 8)

    xs = []
...
import inspect
import os
from collections import namedtuple
from typing import Optional, Any

from fastapi import FastAPI
from gradio import Blocks
@@ -258,7 +258,7 @@ def image_grid_callback(params: ImageGridLoopParams):
    report_exception(c, 'image_grid')

def infotext_pasted_callback(infotext: str, params: dict[str, Any]):
    for c in callback_map['callbacks_infotext_pasted']:
        try:
            c.callback(infotext, params)
@@ -449,7 +449,7 @@ def on_infotext_pasted(callback):
    """register a function to be called before applying an infotext.
    The callback is called with two arguments:
    - infotext: str - raw infotext.
    - result: dict[str, any] - parsed infotext parameters.
    """

    add_callback(callback_map['callbacks_infotext_pasted'], callback)
...
@@ -311,20 +311,113 @@ scripts_data = []
postprocessing_scripts_data = []
ScriptClassData = namedtuple("ScriptClassData", ["script_class", "path", "basedir", "module"])
def topological_sort(dependencies):
"""Accepts a dictionary mapping name to its dependencies, returns a list of names ordered according to dependencies.
Ignores errors relating to missing dependencies or circular dependencies
"""
visited = {}
result = []
def inner(name):
visited[name] = True
for dep in dependencies.get(name, []):
if dep in dependencies and dep not in visited:
inner(dep)
result.append(name)
for depname in dependencies:
if depname not in visited:
inner(depname)
return result
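For illustration, a hypothetical dependency map and the order topological_sort produces for it:

deps = {
    "a.py": ["b.py"],                # a.py wants to load after b.py
    "b.py": [],
    "c.py": ["a.py", "missing.py"],  # unknown dependencies are silently ignored
}
print(topological_sort(deps))  # ['b.py', 'a.py', 'c.py']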
@dataclass
class ScriptWithDependencies:
script_canonical_name: str
file: ScriptFile
requires: list
load_before: list
load_after: list
def list_scripts(scriptdirname, extension, *, include_extensions=True):
scripts = {}
loaded_extensions = {ext.canonical_name: ext for ext in extensions.active()}
loaded_extensions_scripts = {ext.canonical_name: [] for ext in extensions.active()}
# build script dependency map
root_script_basedir = os.path.join(paths.script_path, scriptdirname)
if os.path.exists(root_script_basedir):
for filename in sorted(os.listdir(root_script_basedir)):
if not os.path.isfile(os.path.join(root_script_basedir, filename)):
continue
if os.path.splitext(filename)[1].lower() != extension:
continue
script_file = ScriptFile(paths.script_path, filename, os.path.join(root_script_basedir, filename))
scripts[filename] = ScriptWithDependencies(filename, script_file, [], [], [])
if include_extensions:
for ext in extensions.active():
extension_scripts_list = ext.list_files(scriptdirname, extension)
for extension_script in extension_scripts_list:
if not os.path.isfile(extension_script.path):
continue
script_canonical_name = ("builtin/" if ext.is_builtin else "") + ext.canonical_name + "/" + extension_script.filename
relative_path = scriptdirname + "/" + extension_script.filename
script = ScriptWithDependencies(
script_canonical_name=script_canonical_name,
file=extension_script,
requires=ext.metadata.get_script_requirements("Requires", relative_path, scriptdirname),
load_before=ext.metadata.get_script_requirements("Before", relative_path, scriptdirname),
load_after=ext.metadata.get_script_requirements("After", relative_path, scriptdirname),
)
scripts[script_canonical_name] = script
loaded_extensions_scripts[ext.canonical_name].append(script)
for script_canonical_name, script in scripts.items():
# load before requires inverse dependency
# in this case, append the script name into the load_after list of the specified script
for load_before in script.load_before:
# if this requires an individual script to be loaded before
other_script = scripts.get(load_before)
if other_script:
other_script.load_after.append(script_canonical_name)
# if this requires an extension
other_extension_scripts = loaded_extensions_scripts.get(load_before)
if other_extension_scripts:
for other_script in other_extension_scripts:
other_script.load_after.append(script_canonical_name)
# if After mentions an extension, remove it and instead add all of its scripts
for load_after in list(script.load_after):
if load_after not in scripts and load_after in loaded_extensions_scripts:
script.load_after.remove(load_after)
for other_script in loaded_extensions_scripts.get(load_after, []):
script.load_after.append(other_script.script_canonical_name)
dependencies = {}
for script_canonical_name, script in scripts.items():
for required_script in script.requires:
if required_script not in scripts and required_script not in loaded_extensions:
errors.report(f'Script "{script_canonical_name}" requires "{required_script}" to be loaded, but it is not.', exc_info=False)
dependencies[script_canonical_name] = script.load_after
ordered_scripts = topological_sort(dependencies)
scripts_list = [scripts[script_canonical_name].file for script_canonical_name in ordered_scripts]
return scripts_list
@@ -365,15 +458,9 @@ def load_scripts():
elif issubclass(script_class, scripts_postprocessing.ScriptPostprocessing):
postprocessing_scripts_data.append(ScriptClassData(script_class, scriptfile.path, scriptfile.basedir, module))

# here the scripts_list is already ordered
# processing_script is not considered though
for scriptfile in scripts_list:
try:
if scriptfile.basedir != paths.script_path:
sys.path = [scriptfile.basedir] + sys.path
@@ -473,17 +560,25 @@ class ScriptRunner:
on_after.clear()

def create_script_ui(self, script):
script.args_from = len(self.inputs)
script.args_to = len(self.inputs)
try:
self.create_script_ui_inner(script)
except Exception:
errors.report(f"Error creating UI for {script.name}: ", exc_info=True)
def create_script_ui_inner(self, script):
import modules.api.models as api_models
controls = wrap_call(script.ui, script.filename, "ui", script.is_img2img)

if controls is None:
return

script.name = wrap_call(script.title, script.filename, "title", default=script.filename).lower()

api_args = []
for control in controls:
@@ -491,11 +586,15 @@ class ScriptRunner:
arg_info = api_models.ScriptArg(label=control.label or "")

for field in ("value", "minimum", "maximum", "step"):
v = getattr(control, field, None)
if v is not None:
setattr(arg_info, field, v)
choices = getattr(control, 'choices', None) # as of gradio 3.41, some items in choices are strings, and some are tuples where the first elem is the string
if choices is not None:
arg_info.choices = [x[0] if isinstance(x, tuple) else x for x in choices]
api_args.append(arg_info)

script.api_info = api_models.ScriptInfo(
...
import dataclasses
import os
import gradio as gr
from modules import errors, shared
@dataclasses.dataclass
class PostprocessedImageSharedInfo:
target_width: int = None
target_height: int = None
class PostprocessedImage:
def __init__(self, image):
self.image = image
self.info = {}
self.shared = PostprocessedImageSharedInfo()
self.extra_images = []
self.nametags = []
self.disable_processing = False
self.caption = None
def get_suffix(self, used_suffixes=None):
used_suffixes = {} if used_suffixes is None else used_suffixes
suffix = "-".join(self.nametags)
if suffix:
suffix = "-" + suffix
if suffix not in used_suffixes:
used_suffixes[suffix] = 1
return suffix
for i in range(1, 100):
proposed_suffix = suffix + "-" + str(i)
if proposed_suffix not in used_suffixes:
used_suffixes[proposed_suffix] = 1
return proposed_suffix
return suffix
def create_copy(self, new_image, *, nametags=None, disable_processing=False):
pp = PostprocessedImage(new_image)
pp.shared = self.shared
pp.nametags = self.nametags.copy()
pp.info = self.info.copy()
pp.disable_processing = disable_processing
if nametags is not None:
pp.nametags += nametags
return pp
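For illustration (img and upscaled_img stand in for PIL images), how nametags drive the filename suffixes:

pp = PostprocessedImage(img)
copy = pp.create_copy(upscaled_img, nametags=["upscaled"])

used = {}
print(pp.get_suffix(used))    # "" - no nametags
print(copy.get_suffix(used))  # "-upscaled"; another copy with the same tag would get "-upscaled-1"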
class ScriptPostprocessing:
@@ -42,10 +85,17 @@ class ScriptPostprocessing:
pass

def process_firstpass(self, pp: PostprocessedImage, **args):
"""
Called for all scripts before calling process(). Scripts can examine the image here and set fields
of the pp object to communicate things to other scripts.
args contains a dictionary with all values returned by components from ui()
"""
pass
def image_changed(self):
pass
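For illustration, a minimal sketch of a postprocessing script using the new hook (the class name, the name/order attributes, and the upscale_by control are hypothetical):

class ExampleUpscaleScript(ScriptPostprocessing):
    name = "Example upscale"
    order = 1000

    def process_firstpass(self, pp: PostprocessedImage, upscale_by=2.0, **args):
        # announce the intended output size so later scripts can read it from pp.shared
        pp.shared.target_width = int(pp.image.width * upscale_by)
        pp.shared.target_height = int(pp.image.height * upscale_by)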
def wrap_call(func, filename, funcname, *args, default=None, **kwargs): def wrap_call(func, filename, funcname, *args, default=None, **kwargs):
...@@ -118,16 +168,42 @@ class ScriptPostprocessingRunner: ...@@ -118,16 +168,42 @@ class ScriptPostprocessingRunner:
return inputs return inputs
def run(self, pp: PostprocessedImage, args):
scripts = []

for script in self.scripts_in_preferred_order():
script_args = args[script.args_from:script.args_to]

process_args = {}
for (name, _component), value in zip(script.controls.items(), script_args):
process_args[name] = value

scripts.append((script, process_args))
for script, process_args in scripts:
script.process_firstpass(pp, **process_args)
all_images = [pp]
for script, process_args in scripts:
if shared.state.skipped:
break
shared.state.job = script.name
for single_image in all_images.copy():
if not single_image.disable_processing:
script.process(single_image, **process_args)
for extra_image in single_image.extra_images:
if not isinstance(extra_image, PostprocessedImage):
extra_image = single_image.create_copy(extra_image)
all_images.append(extra_image)
single_image.extra_images.clear()
pp.extra_images = all_images[1:]
def create_args_for_run(self, scripts_args):
if not self.ui_created:
...
@@ -215,7 +215,7 @@ class LoadStateDictOnMeta(ReplaceHelper):
would be on the meta device.
"""

if state_dict is sd:
    state_dict = {k: v.to(device="meta", dtype=v.dtype) for k, v in state_dict.items()}

original(module, state_dict, strict=strict)
...
@@ -2,14 +2,15 @@ import torch
from torch.nn.functional import silu
from types import MethodType

from modules import devices, sd_hijack_optimizations, shared, script_callbacks, errors, sd_unet, patches
from modules.hypernetworks import hypernetwork
from modules.shared import cmd_opts
from modules import sd_hijack_clip, sd_hijack_open_clip, sd_hijack_unet, sd_hijack_xlmr, xlmr, xlmr_m18

import ldm.modules.attention
import ldm.modules.diffusionmodules.model
import ldm.modules.diffusionmodules.openaimodel
import ldm.models.diffusion.ddpm
import ldm.models.diffusion.ddim
import ldm.models.diffusion.plms
import ldm.modules.encoders.modules
@@ -37,6 +38,12 @@ ldm.models.diffusion.ddpm.print = shared.ldm_print
optimizers = []
current_optimizer: sd_hijack_optimizations.SdOptimization = None

ldm_patched_forward = sd_unet.create_unet_forward(ldm.modules.diffusionmodules.openaimodel.UNetModel.forward)
ldm_original_forward = patches.patch(__file__, ldm.modules.diffusionmodules.openaimodel.UNetModel, "forward", ldm_patched_forward)

sgm_patched_forward = sd_unet.create_unet_forward(sgm.modules.diffusionmodules.openaimodel.UNetModel.forward)
sgm_original_forward = patches.patch(__file__, sgm.modules.diffusionmodules.openaimodel.UNetModel, "forward", sgm_patched_forward)

def list_optimizers():
new_optimizers = script_callbacks.list_optimizers_callback()
@@ -181,6 +188,20 @@ class StableDiffusionModelHijack:
errors.display(e, "applying cross attention optimization")
undo_optimizations()
def convert_sdxl_to_ssd(self, m):
"""Converts an SDXL model to a Segmind Stable Diffusion model (see https://huggingface.co/segmind/SSD-1B)"""
delattr(m.model.diffusion_model.middle_block, '1')
delattr(m.model.diffusion_model.middle_block, '2')
for i in ['9', '8', '7', '6', '5', '4']:
delattr(m.model.diffusion_model.input_blocks[7][1].transformer_blocks, i)
delattr(m.model.diffusion_model.input_blocks[8][1].transformer_blocks, i)
delattr(m.model.diffusion_model.output_blocks[0][1].transformer_blocks, i)
delattr(m.model.diffusion_model.output_blocks[1][1].transformer_blocks, i)
delattr(m.model.diffusion_model.output_blocks[4][1].transformer_blocks, '1')
delattr(m.model.diffusion_model.output_blocks[5][1].transformer_blocks, '1')
devices.torch_gc()
def hijack(self, m):
conditioner = getattr(m, 'conditioner', None)
if conditioner:
@@ -208,7 +229,7 @@ class StableDiffusionModelHijack:
else:
m.cond_stage_model = conditioner

if type(m.cond_stage_model) == xlmr.BertSeriesModelWithTransformation or type(m.cond_stage_model) == xlmr_m18.BertSeriesModelWithTransformation:
model_embeddings = m.cond_stage_model.roberta.embeddings
model_embeddings.token_embedding = EmbeddingsWithFixes(model_embeddings.word_embeddings, self)
m.cond_stage_model = sd_hijack_xlmr.FrozenXLMREmbedderWithCustomWords(m.cond_stage_model, self)
@@ -239,10 +260,17 @@ class StableDiffusionModelHijack:
self.layers = flatten(m)

import modules.models.diffusion.ddpm_edit
if isinstance(m, ldm.models.diffusion.ddpm.LatentDiffusion):
sd_unet.original_forward = ldm_original_forward
elif isinstance(m, modules.models.diffusion.ddpm_edit.LatentDiffusion):
sd_unet.original_forward = ldm_original_forward
elif isinstance(m, sgm.models.diffusion.DiffusionEngine):
sd_unet.original_forward = sgm_original_forward
else:
sd_unet.original_forward = None
def undo_hijack(self, m):
conditioner = getattr(m, 'conditioner', None)
@@ -279,7 +307,6 @@ class StableDiffusionModelHijack:
self.layers = None
self.clip = None

def apply_circular(self, enable):
if self.circular_enabled == enable:
...
import collections
import os.path
import sys
import threading

import torch
import re
import safetensors.torch
from omegaconf import OmegaConf, ListConfig
from os import mkdir
from urllib import request

import ldm.modules.midas as midas

from ldm.util import instantiate_from_config

from modules import paths, shared, modelloader, devices, script_callbacks, sd_vae, sd_disable_initialization, errors, hashes, sd_models_config, sd_unet, sd_models_xl, cache, extra_networks, processing, lowvram, sd_hijack, patches
from modules.timer import Timer
import tomesd
import numpy as np

model_dir = "Stable-diffusion"
model_path = os.path.abspath(os.path.join(paths.models_path, model_dir))
@@ -49,11 +49,12 @@ class CheckpointInfo:
def __init__(self, filename):
self.filename = filename
abspath = os.path.abspath(filename)
abs_ckpt_dir = os.path.abspath(shared.cmd_opts.ckpt_dir) if shared.cmd_opts.ckpt_dir is not None else None

self.is_safetensors = os.path.splitext(filename)[1].lower() == ".safetensors"

if abs_ckpt_dir and abspath.startswith(abs_ckpt_dir):
name = abspath.replace(abs_ckpt_dir, '')
elif abspath.startswith(model_path):
name = abspath.replace(model_path, '')
else:
@@ -129,9 +130,12 @@ except Exception:
def setup_model():
"""called once at startup to do various one-time tasks related to SD models"""

os.makedirs(model_path, exist_ok=True)

enable_midas_autodownload()
patch_given_betas()

def checkpoint_tiles(use_short=False):
@@ -226,15 +230,19 @@ def select_checkpoint():
return checkpoint_info

checkpoint_dict_replacements_sd1 = {
'cond_stage_model.transformer.embeddings.': 'cond_stage_model.transformer.text_model.embeddings.',
'cond_stage_model.transformer.encoder.': 'cond_stage_model.transformer.text_model.encoder.',
'cond_stage_model.transformer.final_layer_norm.': 'cond_stage_model.transformer.text_model.final_layer_norm.',
}
checkpoint_dict_replacements_sd2_turbo = { # Converts SD 2.1 Turbo from SGM to LDM format.
'conditioner.embedders.0.': 'cond_stage_model.',
}
def transform_checkpoint_dict_key(k, replacements):
for text, replacement in replacements.items():
if k.startswith(text):
k = replacement + k[len(text):]
@@ -245,9 +253,14 @@ def get_state_dict_from_checkpoint(pl_sd):
pl_sd = pl_sd.pop("state_dict", pl_sd)
pl_sd.pop("state_dict", None)

is_sd2_turbo = 'conditioner.embedders.0.model.ln_final.weight' in pl_sd and pl_sd['conditioner.embedders.0.model.ln_final.weight'].size()[0] == 1024

sd = {}
for k, v in pl_sd.items():
if is_sd2_turbo:
new_key = transform_checkpoint_dict_key(k, checkpoint_dict_replacements_sd2_turbo)
else:
new_key = transform_checkpoint_dict_key(k, checkpoint_dict_replacements_sd1)

if new_key is not None:
sd[new_key] = v
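For illustration (the function's return statement sits outside the diff context shown above), the sd1 table renames keys like this:

k = 'cond_stage_model.transformer.encoder.layers.0.self_attn.q_proj.weight'
print(transform_checkpoint_dict_key(k, checkpoint_dict_replacements_sd1))
# cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.q_proj.weight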
@@ -309,6 +322,8 @@ def get_checkpoint_state_dict(checkpoint_info: CheckpointInfo, timer):
if checkpoint_info in checkpoints_loaded:
# use checkpoint cache
print(f"Loading weights [{sd_model_hash}] from cache")
# move to end as latest
checkpoints_loaded.move_to_end(checkpoint_info)
return checkpoints_loaded[checkpoint_info]

print(f"Loading weights [{sd_model_hash}] from {checkpoint_info.filename}")
@@ -346,16 +361,19 @@ def load_model_weights(model, checkpoint_info: CheckpointInfo, state_dict, timer
model.is_sdxl = hasattr(model, 'conditioner')
model.is_sd2 = not model.is_sdxl and hasattr(model.cond_stage_model, 'model')
model.is_sd1 = not model.is_sdxl and not model.is_sd2
model.is_ssd = model.is_sdxl and 'model.diffusion_model.middle_block.1.transformer_blocks.0.attn1.to_q.weight' not in state_dict.keys()

if model.is_sdxl:
sd_models_xl.extend_sdxl(model)

if model.is_ssd:
sd_hijack.model_hijack.convert_sdxl_to_ssd(model)

if shared.opts.sd_checkpoint_cache > 0:
# cache newly loaded model
checkpoints_loaded[checkpoint_info] = state_dict.copy()

model.load_state_dict(state_dict, strict=False)
timer.record("apply weights to model")

del state_dict
@@ -453,6 +471,20 @@ def enable_midas_autodownload():
midas.api.load_model = load_model_wrapper
def patch_given_betas():
import ldm.models.diffusion.ddpm
def patched_register_schedule(*args, **kwargs):
"""a modified version of register_schedule function that converts plain list from Omegaconf into numpy"""
if isinstance(args[1], ListConfig):
args = (args[0], np.array(args[1]), *args[2:])
original_register_schedule(*args, **kwargs)
original_register_schedule = patches.patch(__name__, ldm.models.diffusion.ddpm.DDPM, 'register_schedule', patched_register_schedule)
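For illustration, the situation this patch guards against: a betas list read from a model's .yaml config arrives as an OmegaConf ListConfig, not the numpy array DDPM expects:

from omegaconf import OmegaConf, ListConfig
import numpy as np

cfg = OmegaConf.create({"given_betas": [0.1, 0.2, 0.3]})
print(isinstance(cfg.given_betas, ListConfig))  # True
print(np.array(cfg.given_betas))                # [0.1 0.2 0.3] - what the patch passes on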
def repair_config(sd_config):
if not hasattr(sd_config.model.params, "use_ema"):
@@ -777,17 +809,7 @@ def reload_model_weights(sd_model=None, info=None):
def unload_model_weights(sd_model=None, info=None):
send_model_to_cpu(sd_model or shared.sd_model)

return sd_model
...
@@ -21,7 +21,7 @@ config_unopenclip = os.path.join(sd_repo_configs_path, "v2-1-stable-unclip-h-inf
config_inpainting = os.path.join(sd_configs_path, "v1-inpainting-inference.yaml")
config_instruct_pix2pix = os.path.join(sd_configs_path, "instruct-pix2pix.yaml")
config_alt_diffusion = os.path.join(sd_configs_path, "alt-diffusion-inference.yaml")
config_alt_diffusion_m18 = os.path.join(sd_configs_path, "alt-diffusion-m18-inference.yaml")
def is_using_v_parameterization_for_sd2(state_dict):
"""
@@ -95,7 +95,10 @@ def guess_model_config_from_state_dict(sd, filename):
if diffusion_model_input.shape[1] == 8:
    return config_instruct_pix2pix

if sd.get('cond_stage_model.roberta.embeddings.word_embeddings.weight', None) is not None:
    if sd.get('cond_stage_model.transformation.weight').size()[0] == 1024:
        return config_alt_diffusion_m18
    return config_alt_diffusion

return config_default
...
@@ -22,7 +22,10 @@ class WebuiSdModel(LatentDiffusion):
"""structure with additional information about the file with model's weights"""

is_sdxl: bool
"""True if the model's architecture is SDXL or SSD"""

is_ssd: bool
"""True if the model is SSD"""

is_sd2: bool
"""True if the model's architecture is SD 2.x"""
...
@@ -60,7 +60,7 @@ def restart_sampler(model, x, sigmas, extra_args=None, callback=None, disable=No
sigma_restart = get_sigmas_karras(restart_steps, sigmas[min_idx].item(), sigmas[max_idx].item(), device=sigmas.device)[:-1]
while restart_times > 0:
    restart_times -= 1
    step_list.extend(zip(sigma_restart[:-1], sigma_restart[1:]))

last_sigma = None
for old_sigma, new_sigma in tqdm.tqdm(step_list, disable=disable):
...
@@ -11,7 +11,7 @@ from modules.models.diffusion.uni_pc import uni_pc
def ddim(model, x, timesteps, extra_args=None, callback=None, disable=None, eta=0.0):
    alphas_cumprod = model.inner_model.inner_model.alphas_cumprod
    alphas = alphas_cumprod[timesteps]
    alphas_prev = alphas_cumprod[torch.nn.functional.pad(timesteps[:-1], pad=(1, 0))].to(torch.float64 if x.device.type != 'mps' and x.device.type != 'xpu' else torch.float32)
    sqrt_one_minus_alphas = torch.sqrt(1 - alphas)
    sigmas = eta * np.sqrt((1 - alphas_prev.cpu().numpy()) / (1 - alphas.cpu()) * (1 - alphas.cpu() / alphas_prev.cpu().numpy()))
@@ -43,7 +43,7 @@ def ddim(model, x, timesteps, extra_args=None, callback=None, disable=None, eta=
def plms(model, x, timesteps, extra_args=None, callback=None, disable=None):
    alphas_cumprod = model.inner_model.inner_model.alphas_cumprod
    alphas = alphas_cumprod[timesteps]
    alphas_prev = alphas_cumprod[torch.nn.functional.pad(timesteps[:-1], pad=(1, 0))].to(torch.float64 if x.device.type != 'mps' and x.device.type != 'xpu' else torch.float32)
    sqrt_one_minus_alphas = torch.sqrt(1 - alphas)

    extra_args = {} if extra_args is None else extra_args
...
import torch.nn

from modules import script_callbacks, shared, devices

unet_options = []
current_unet_option = None
current_unet = None
original_forward = None # not used, only left temporarily for compatibility
def list_unets():
new_unets = script_callbacks.list_unets_callback()
@@ -84,9 +83,12 @@ class SdUnet(torch.nn.Module):
pass

def create_unet_forward(original_forward):
    def UNetModel_forward(self, x, timesteps=None, context=None, *args, **kwargs):
        if current_unet is not None:
            return current_unet.forward(x, timesteps, context, *args, **kwargs)

        return original_forward(self, x, timesteps, context, *args, **kwargs)

    return UNetModel_forward
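For illustration, a minimal sketch of a replacement unet that the closure above dispatches to while it is the active current_unet (the class name and wrapped backend are hypothetical):

class ExampleUnet(sd_unet.SdUnet):
    def __init__(self, backend):
        super().__init__()
        self.backend = backend  # e.g. a TensorRT engine or other stand-in implementation

    def forward(self, x, timesteps, context, *args, **kwargs):
        # while this object is current_unet, the patched UNetModel.forward lands here
        return self.backend(x, timesteps, context)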
@@ -14,5 +14,5 @@ if os.environ.get('IGNORE_CMD_ARGS_ERRORS', None) is None:
else:
    cmd_opts, _ = parser.parse_known_args()

cmd_opts.webui_is_non_local = any([cmd_opts.share, cmd_opts.listen, cmd_opts.ngrok, cmd_opts.server_name])
cmd_opts.disable_extension_access = cmd_opts.webui_is_non_local and not cmd_opts.enable_insecure_extension_access
@@ -44,9 +44,9 @@ def refresh_unet_list():
modules.sd_unet.list_unets()

def list_checkpoint_tiles(use_short=False):
import modules.sd_models
return modules.sd_models.checkpoint_tiles(use_short)

def refresh_checkpoints():
@@ -66,7 +66,25 @@ def reload_hypernetworks():
shared.hypernetworks = hypernetwork.list_hypernetworks(cmd_opts.hypernetwork_dir)
def get_infotext_names():
from modules import generation_parameters_copypaste, shared
res = {}
for info in shared.opts.data_labels.values():
if info.infotext:
res[info.infotext] = 1
for tab_data in generation_parameters_copypaste.paste_fields.values():
for _, name in tab_data.get("fields") or []:
if isinstance(name, str):
res[name] = 1
return list(res)
ui_reorder_categories_builtin_items = [
"prompt",
"image",
"inpaint",
"sampler",
"accordions",
...
@@ -103,6 +103,7 @@ class State:
def begin(self, job: str = "(unknown)"):
self.sampling_step = 0
self.time_start = time.time()
self.job_count = -1
self.processing_has_refined_job_count = False
self.job_no = 0
@@ -114,7 +115,6 @@ class State:
self.skipped = False
self.interrupted = False
self.textinfo = None
self.job = job
devices.torch_gc()
log.info("Starting job %s", job)
...
@@ -15,7 +15,7 @@ import torch
from torch import Tensor
from torch.utils.checkpoint import checkpoint
import math
from typing import Optional, NamedTuple

def narrow_trunc(
@@ -97,7 +97,7 @@ def _query_chunk_attention(
)
return summarize_chunk(query, key_chunk, value_chunk)

chunks: list[AttnChunk] = [
    chunk_scanner(chunk) for chunk in torch.arange(0, k_tokens, kv_chunk_size)
]
acc_chunk = AttnChunk(*map(torch.stack, zip(*chunks)))
...
import json
import os
import sys
import platform
import hashlib
@@ -84,7 +83,7 @@ def get_dict():
"Checksum": checksum_token,
"Commandline": get_argv(),
"Torch env info": get_torch_sysinfo(),
"Exceptions": errors.get_exceptions(),
"CPU": {
"model": platform.processor(),
"count logical": psutil.cpu_count(logical=True),
@@ -104,21 +103,6 @@ def get_dict():
return res
def get_environment():
return {k: os.environ[k] for k in sorted(os.environ) if k in environment_whitelist}
...
@@ -181,40 +181,7 @@ class EmbeddingDatabase:
else:
    return
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
if self.expected_shape == -1 or self.expected_shape == embedding.shape:
    self.register_embedding(embedding, shared.sd_model)
@@ -313,6 +280,45 @@ def create_embedding(name, num_vectors_per_token, overwrite_old, init_text='*'):
return fn
def create_embedding_from_data(data, name, filename='unknown embedding file', filepath=None):
if 'string_to_param' in data: # textual inversion embeddings
param_dict = data['string_to_param']
param_dict = getattr(param_dict, '_parameters', param_dict) # fix for torch 1.12.1 loading saved file from torch 1.11
assert len(param_dict) == 1, 'embedding file has multiple terms in it'
emb = next(iter(param_dict.items()))[1]
vec = emb.detach().to(devices.device, dtype=torch.float32)
shape = vec.shape[-1]
vectors = vec.shape[0]
elif type(data) == dict and 'clip_g' in data and 'clip_l' in data: # SDXL embedding
vec = {k: v.detach().to(devices.device, dtype=torch.float32) for k, v in data.items()}
shape = data['clip_g'].shape[-1] + data['clip_l'].shape[-1]
vectors = data['clip_g'].shape[0]
elif type(data) == dict and type(next(iter(data.values()))) == torch.Tensor: # diffuser concepts
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
emb = next(iter(data.values()))
if len(emb.shape) == 1:
emb = emb.unsqueeze(0)
vec = emb.detach().to(devices.device, dtype=torch.float32)
shape = vec.shape[-1]
vectors = vec.shape[0]
else:
raise Exception(f"Couldn't identify {filename} as neither textual inversion embedding nor diffuser concept.")
embedding = Embedding(vec, name)
embedding.step = data.get('step', None)
embedding.sd_checkpoint = data.get('sd_checkpoint', None)
embedding.sd_checkpoint_name = data.get('sd_checkpoint_name', None)
embedding.vectors = vectors
embedding.shape = shape
if filepath:
embedding.filename = filepath
embedding.set_hash(hashes.sha256(filepath, "textual_inversion/" + name) or '')
return embedding
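For illustration, a typical call site mirroring load_from_file above (the path is hypothetical):

import torch

data = torch.load("embeddings/my-style.pt", map_location="cpu")
embedding = create_embedding_from_data(data, "my-style", filename="my-style.pt", filepath="embeddings/my-style.pt")
print(embedding.vectors, embedding.shape)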
def write_loss(log_directory, filename, step, epoch_len, values):
if shared.opts.training_write_csv_every == 0:
    return
@@ -386,7 +392,7 @@ def validate_train_inputs(model_name, learn_rate, batch_size, gradient_step, dat
assert log_directory, "Log directory is empty"

def train_embedding(id_task, embedding_name, learn_rate, batch_size, gradient_step, data_root, log_directory, training_width, training_height, varsize, steps, clip_grad_mode, clip_grad_value, shuffle_tags, tag_drop_out, latent_sampling_method, use_weight, create_image_every, save_embedding_every, template_filename, save_image_with_stored_embedding, preview_from_txt2img, preview_prompt, preview_negative_prompt, preview_steps, preview_sampler_name, preview_cfg_scale, preview_seed, preview_width, preview_height):
from modules import processing

save_embedding_every = save_embedding_every or 0
@@ -590,7 +596,7 @@ def train_embedding(id_task, embedding_name, learn_rate, batch_size, gradient_st
p.prompt = preview_prompt
p.negative_prompt = preview_negative_prompt
p.steps = preview_steps
p.sampler_name = sd_samplers.samplers_map[preview_sampler_name.lower()]
p.cfg_scale = preview_cfg_scale
p.seed = preview_seed
p.width = preview_width
...
@@ -3,7 +3,6 @@ import html
import gradio as gr

import modules.textual_inversion.textual_inversion
from modules import sd_hijack, shared
@@ -15,12 +14,6 @@ def create_embedding(name, initialization_text, nvpt, overwrite_old):
return gr.Dropdown.update(choices=sorted(sd_hijack.model_hijack.embedding_db.word_embeddings.keys())), f"Created: {filename}", ""

def train_embedding(*args):
assert not shared.cmd_opts.lowvram, 'Training models with lowvram not possible'
...
@@ -134,7 +134,7 @@ class UserMetadataEditor:
basename, ext = os.path.splitext(filename)

with open(basename + '.json', "w", encoding="utf8") as file:
    json.dump(metadata, file, indent=4, ensure_ascii=False)
def save_user_metadata(self, name, desc, notes):
user_metadata = self.get_user_metadata(name)
...