Commit 851c3d51, authored Mar 09, 2024 by Kohaku-Blueleaf
Parent: 12bcacf4

Fix bugs for torch.nn.MultiheadAttention
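In short: network.py previously left NetworkModule.shape as None for torch.nn.MultiheadAttention modules (which have no .weight attribute, only a packed in_proj_weight plus an out_proj), and networks.py passed the packed (3 * embed_dim, embed_dim) in_proj_weight as orig_weight to each of the q/k/v LoRA modules. The commit adds an MHA-specific shape fallback, chunks in_proj_weight into per-projection weights before calling calc_updown, and simplifies the dora_scale lookup to a dict .get.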
Showing 2 changed files with 13 additions and 4 deletions:

- extensions-builtin/Lora/network.py (+7, -1)
- extensions-builtin/Lora/networks.py (+6, -3)
extensions-builtin/Lora/network.py

```diff
@@ -117,6 +117,12 @@ class NetworkModule:
         if hasattr(self.sd_module, 'weight'):
             self.shape = self.sd_module.weight.shape
+        elif isinstance(self.sd_module, nn.MultiheadAttention):
+            # For now, only self-attn use Pytorch's MHA
+            # So assume all qkvo proj have same shape
+            self.shape = self.sd_module.out_proj.weight.shape
         else:
             self.shape = None
+
         self.ops = None
         self.extra_kwargs = {}
```
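The in-code comment spells out the assumption behind this fallback: when the wrapped module is PyTorch's MultiheadAttention used for self-attention, the q/k/v and output projections all share one (embed_dim, embed_dim) shape, so out_proj.weight.shape can stand in for all of them. A minimal sketch (the embed_dim=320, num_heads=8 sizes are illustrative, not from the commit):

```python
import torch.nn as nn

# Self-attention MHA packs q/k/v into one in_proj_weight of shape
# (3 * embed_dim, embed_dim); each chunk matches out_proj.weight.
mha = nn.MultiheadAttention(embed_dim=320, num_heads=8)

print(mha.in_proj_weight.shape)   # torch.Size([960, 320])
print(mha.out_proj.weight.shape)  # torch.Size([320, 320])

q, k, v = mha.in_proj_weight.chunk(3, 0)
assert q.shape == k.shape == v.shape == mha.out_proj.weight.shape
```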
```diff
@@ -146,7 +152,7 @@ class NetworkModule:
         self.alpha = weights.w["alpha"].item() if "alpha" in weights.w else None
         self.scale = weights.w["scale"].item() if "scale" in weights.w else None
-        self.dora_scale = weights.w["dora_scale"] if "dora_scale" in weights.w else None
+        self.dora_scale = weights.w.get("dora_scale", None)
         self.dora_mean_dim = tuple(i for i in range(len(self.shape)) if i != 1)

     def multiplier(self):
```
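The dora_scale rewrite is behavior-preserving: dict.get returns the default when the key is absent, matching the old conditional, and unlike the alpha/scale lines above it keeps the raw tensor rather than calling .item(). An illustrative check:

```python
w = {}  # stands in for a weights.w with no "dora_scale" entry

old = w["dora_scale"] if "dora_scale" in w else None
new = w.get("dora_scale", None)
assert old is None and new is None
```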
extensions-builtin/Lora/networks.py

```diff
@@ -429,9 +429,12 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn
     if isinstance(self, torch.nn.MultiheadAttention) and module_q and module_k and module_v and module_out:
         try:
             with torch.no_grad():
-                updown_q, _ = module_q.calc_updown(self.in_proj_weight)
-                updown_k, _ = module_k.calc_updown(self.in_proj_weight)
-                updown_v, _ = module_v.calc_updown(self.in_proj_weight)
+                # Send "real" orig_weight into MHA's lora module
+                qw, kw, vw = self.in_proj_weight.chunk(3, 0)
+                updown_q, _ = module_q.calc_updown(qw)
+                updown_k, _ = module_k.calc_updown(kw)
+                updown_v, _ = module_v.calc_updown(vw)
+                del qw, kw, vw
                 updown_qkv = torch.vstack([updown_q, updown_k, updown_v])
                 updown_out, ex_bias = module_out.calc_updown(self.out_proj.weight)
```
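This hunk is the core fix: self.in_proj_weight stacks the q, k, and v projection weights into a single (3 * embed_dim, embed_dim) tensor, so passing it whole handed each per-projection LoRA module an orig_weight three times taller than the module's own shape. Chunking along dim 0 recovers the per-projection weights, and torch.vstack then re-assembles the three updowns to match in_proj_weight. A runnable sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

embed_dim = 320
mha = nn.MultiheadAttention(embed_dim, num_heads=8)

# The packed weight is 3x taller than any single projection...
assert mha.in_proj_weight.shape == (3 * embed_dim, embed_dim)

# ...so each q/k/v module must see its own (embed_dim, embed_dim)
# slice as orig_weight, not the whole packed tensor:
qw, kw, vw = mha.in_proj_weight.chunk(3, 0)
assert qw.shape == (embed_dim, embed_dim)

# Stacking the three per-projection updates restores the packed
# layout, mirroring the torch.vstack call in the commit:
updown_qkv = torch.vstack(
    [torch.zeros_like(qw), torch.zeros_like(kw), torch.zeros_like(vw)]
)
assert updown_qkv.shape == mha.in_proj_weight.shape
```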