novelai-storage / Stable Diffusion Webui

Commit 8d7fa2f6 authored Mar 10, 2023 by Pam

sdp_attnblock_forward hijack

parent 0981dea9
Showing 2 changed files with 26 additions and 0 deletions.
modules/sd_hijack.py  (+2, -0)
modules/sd_hijack_optimizations.py  (+24, -0)
modules/sd_hijack.py

@@ -47,10 +47,12 @@ def apply_optimizations():
     elif cmd_opts.opt_sdp_no_mem_attention and can_use_sdp:
         print("Applying scaled dot product cross attention optimization (without memory efficient attention).")
         ldm.modules.attention.CrossAttention.forward = sd_hijack_optimizations.scaled_dot_product_no_mem_attention_forward
+        ldm.modules.diffusionmodules.model.AttnBlock.forward = sd_hijack_optimizations.sdp_no_mem_attnblock_forward
         optimization_method = 'sdp-no-mem'
     elif cmd_opts.opt_sdp_attention and can_use_sdp:
         print("Applying scaled dot product cross attention optimization.")
         ldm.modules.attention.CrossAttention.forward = sd_hijack_optimizations.scaled_dot_product_attention_forward
+        ldm.modules.diffusionmodules.model.AttnBlock.forward = sd_hijack_optimizations.sdp_attnblock_forward
         optimization_method = 'sdp'
     elif cmd_opts.opt_sub_quad_attention:
         print("Applying sub-quadratic cross attention optimization.")
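The sd_hijack.py change follows the webui's usual "hijack" approach: module-level functions from sd_hijack_optimizations are assigned over the forward methods of the ldm classes, so the AttnBlock class in ldm.modules.diffusionmodules.model now takes the SDP path alongside CrossAttention. Below is a minimal, self-contained sketch of that monkey-patching pattern; the class and replacement function are illustrative stand-ins, not webui code.

# Minimal sketch of the monkey-patching ("hijack") pattern used above.
# AttnBlock and optimized_forward here are illustrative stand-ins.
class AttnBlock:
    def forward(self, x):
        return x  # original, unoptimized behaviour

def optimized_forward(self, x):
    # Drop-in replacement: same signature, different implementation.
    return x * 2

# Reassigning the attribute on the class changes behaviour for every
# existing and future instance, without touching the class definition.
AttnBlock.forward = optimized_forward

block = AttnBlock()
print(block.forward(3))  # -> 6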
modules/sd_hijack_optimizations.py

@@ -473,6 +473,30 @@ def xformers_attnblock_forward(self, x):
     except NotImplementedError:
         return cross_attention_attnblock_forward(self, x)
 
+def sdp_attnblock_forward(self, x):
+    h_ = x
+    h_ = self.norm(h_)
+    q = self.q(h_)
+    k = self.k(h_)
+    v = self.v(h_)
+    b, c, h, w = q.shape
+    q, k, v = map(lambda t: rearrange(t, 'b c h w -> b (h w) c'), (q, k, v))
+    dtype = q.dtype
+    if shared.opts.upcast_attn:
+        q, k = q.float(), k.float()
+    q = q.contiguous()
+    k = k.contiguous()
+    v = v.contiguous()
+    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False)
+    out = out.to(dtype)
+    out = rearrange(out, 'b (h w) c -> b c h w', h=h)
+    out = self.proj_out(out)
+    return x + out
+
+def sdp_no_mem_attnblock_forward(self, x):
+    with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=True, enable_mem_efficient=False):
+        return sdp_attnblock_forward(self, x)
+
 def sub_quad_attnblock_forward(self, x):
     h_ = x
     h_ = self.norm(h_)
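For reference, here is a standalone sketch of the computation sdp_attnblock_forward performs, outside the webui. It assumes PyTorch 2.0 or newer (which provides torch.nn.functional.scaled_dot_product_attention) and einops; the tensor shapes are made up for illustration.

# Standalone sketch of the sdp_attnblock_forward computation (shapes are illustrative).
import torch
import torch.nn.functional as F
from einops import rearrange

b, c, h, w = 1, 8, 16, 16
q = torch.randn(b, c, h, w)
k = torch.randn(b, c, h, w)
v = torch.randn(b, c, h, w)

# Flatten the spatial grid so each of the h*w positions becomes a token of dim c,
# mirroring the 'b c h w -> b (h w) c' rearrange in the patched forward.
q, k, v = (rearrange(t, 'b c h w -> b (h w) c').contiguous() for t in (q, k, v))

# PyTorch dispatches to a fused attention kernel when one is available.
out = F.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False)

# Fold the tokens back into a feature map, as the hijacked AttnBlock does
# before proj_out and the residual connection.
out = rearrange(out, 'b (h w) c -> b c h w', h=h)
print(out.shape)  # torch.Size([1, 8, 16, 16])

sdp_no_mem_attnblock_forward runs the same code under torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=True, enable_mem_efficient=False), which restricts PyTorch's backend selection so the memory-efficient kernel is never chosen.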