- 11 Jul, 2023 4 commits
  - TangJicheng authored
  - TangJicheng authored
  - AUTOMATIC1111 authored
  - AUTOMATIC1111 authored: fix: add queue lock for refresh-checkpoints (see the sketch after this section)
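The queue-lock fix above serializes the checkpoint refresh so that concurrent UI requests cannot rescan the model list at the same time. A minimal sketch of that pattern, assuming a hypothetical `list_checkpoints()` helper rather than the repository's actual refresh function:

```python
import threading

# Hypothetical stand-ins for the repository's checkpoint refresh routine.
refresh_lock = threading.Lock()
checkpoint_titles = []

def list_checkpoints():
    # Placeholder: in the real code this would rescan the models directory.
    return ["model-a.safetensors", "model-b.safetensors"]

def refresh_checkpoints():
    # Serialize refreshes: if two requests arrive at once, the second one
    # waits instead of rescanning the model directory concurrently.
    with refresh_lock:
        global checkpoint_titles
        checkpoint_titles = list_checkpoints()
        return checkpoint_titles
```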
- 10 Jul, 2023 1 commit
  - tangjicheng authored
- 27 Jun, 2023 7 commits
  - AUTOMATIC authored
  - AUTOMATIC1111 authored: Zoom and pan: More options in the settings and improved error output
  - AUTOMATIC1111 authored: fixed typos
  - AUTOMATIC1111 authored: Use os.makedirs(..., exist_ok=True) (see the sketch after this section)
  - AUTOMATIC1111 authored: Fix Typo of hints.js
  - AUTOMATIC1111 authored: Strip whitespaces from URL and dirname prior to extension installation
  - AUTOMATIC authored
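The `os.makedirs(..., exist_ok=True)` change replaces the check-then-create idiom with a single call that does not raise if the directory already exists. A minimal illustration (the path is only an example, not tied to the repository's configuration):

```python
import os

output_dir = "outputs/txt2img-images"  # example path

# Instead of:
#     if not os.path.exists(output_dir):
#         os.makedirs(output_dir)
# a single call creates all missing parent directories and is a no-op when
# the directory already exists, avoiding a race between check and create.
os.makedirs(output_dir, exist_ok=True)
```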
- 25 Jun, 2023 1 commit
  - Jabasukuriputo Wang authored: This avoids some cryptic errors caused by accidental spaces around URLs (see the sketch after this section).
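The whitespace-stripping fix referenced above normalizes the git URL and the target directory name before the extension is cloned, since a stray leading or trailing space produces confusing errors. A minimal sketch with hypothetical names (`install_extension` is not the repository's actual function):

```python
def install_extension(url: str, dirname: str = "") -> tuple[str, str]:
    # Accidental spaces around a pasted URL or directory name lead to
    # cryptic clone failures, so normalize both before using them.
    url = url.strip()
    dirname = dirname.strip()
    return url, dirname

# " https://github.com/user/extension.git " -> "https://github.com/user/extension.git"
print(install_extension(" https://github.com/user/extension.git ", " my-extension "))
```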
- 18 Jun, 2023 1 commit
  - zhtttylz authored
- 14 Jun, 2023 2 commits
  - Danil Boldyrev authored
  - Danil Boldyrev authored
- 13 Jun, 2023 3 commits
  - Aarni Koskela authored
  - Danil Boldyrev authored
  - Danil Boldyrev authored
- 10 Jun, 2023 9 commits
  - arch-fan authored
  - AUTOMATIC authored
  - AUTOMATIC authored
  - AUTOMATIC authored
  - AUTOMATIC1111 authored: Allow activation of Generate Forever during generation
  - AUTOMATIC1111 authored: persistent conds cache
  - AUTOMATIC1111 authored: Don't die when a LoRA is a broken symlink (see the first sketch after this section)
  - AUTOMATIC1111 authored: Split mask blur into X and Y components, patch Outpainting MK2 accordingly (see the second sketch after this section)
  - AUTOMATIC1111 authored: Forcing Torch Version to 1.13.1 for RX 5000 series GPUs
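For the broken-symlink LoRA fix, the idea is to skip dangling symlinks while scanning the networks directory instead of raising and aborting the whole listing. A hedged sketch of that check, not the repository's actual loader:

```python
import os

def list_lora_files(lora_dir: str) -> list[str]:
    found = []
    for name in sorted(os.listdir(lora_dir)):
        path = os.path.join(lora_dir, name)
        # os.path.exists() follows symlinks, so a dangling link reports False;
        # skip it with a warning instead of crashing the whole scan.
        if os.path.islink(path) and not os.path.exists(path):
            print(f"Skipping broken symlink: {path}")
            continue
        if name.endswith((".safetensors", ".pt", ".ckpt")):
            found.append(path)
    return found
```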
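Splitting mask blur into X and Y components means the Gaussian blur applied to the inpainting mask can use different strengths horizontally and vertically. A minimal sketch using OpenCV; the function and parameter names are illustrative, not the repository's:

```python
import numpy as np
import cv2

def blur_mask(mask: np.ndarray, blur_x: float, blur_y: float) -> np.ndarray:
    """Blur a single-channel uint8 mask with independent X/Y sigmas."""
    if blur_x <= 0 and blur_y <= 0:
        return mask
    # ksize=(0, 0) lets OpenCV derive the kernel sizes from the sigmas,
    # so the horizontal and vertical blur strengths can differ.
    return cv2.GaussianBlur(mask, ksize=(0, 0),
                            sigmaX=max(blur_x, 0.01), sigmaY=max(blur_y, 0.01))

mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 16:48] = 255
softened = blur_mask(mask, blur_x=4.0, blur_y=1.0)
```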
- 09 Jun, 2023 3 commits
  - Aarni Koskela authored: Fixes #11098
  - Splendide Imaginarius authored: Fixes unexpected noise in non-outpainted borders when using the MK2 script.
  - Splendide Imaginarius authored: Prerequisite to fixing the Outpainting MK2 mask blur bug.
- 08 Jun, 2023 2 commits
- 07 Jun, 2023 3 commits
  - AUTOMATIC1111 authored: link footer API to Wiki when API is not active
  - AUTOMATIC1111 authored: Fix upcast attention dtype error.
  - Alexander Ljungberg authored: Without this fix, enabling the "Upcast cross attention layer to float32" option while also using `--opt-sdp-attention` breaks generation with an error:

    ```
    File "/ext3/automatic1111/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 612, in sdp_attnblock_forward
        out = torch.nn.functional.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False)
    RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::Half instead.
    ```

    The fix is to make sure to upcast the value tensor too (see the sketch after this section).
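The dtype error above occurs because the query and key were upcast to float32 while the value tensor stayed in float16; `scaled_dot_product_attention` requires all three to share a dtype. A hedged sketch of the fix described in the commit message, simplified rather than the actual `sdp_attnblock_forward` body:

```python
import torch
import torch.nn.functional as F

def sdp_attention_upcast(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # When "Upcast cross attention layer to float32" is enabled, q and k are
    # cast to float32; the value tensor must be upcast as well, otherwise SDP
    # raises "Expected query, key, and value to have the same dtype".
    dtype = torch.float32
    q, k, v = q.to(dtype), k.to(dtype), v.to(dtype)
    return F.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False)
```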
- 06 Jun, 2023 4 commits