novelai-storage / Stable Diffusion Webui / Commits / 265bc26c

Commit 265bc26c, authored Dec 14, 2023 by Kohaku-Blueleaf

    Use self.scale instead of custom finalize

Parent: 735c9e80
Showing 1 changed file with 2 additions and 18 deletions

extensions-builtin/Lora/network_oft.py (+2, -18)
@@ -21,6 +21,8 @@ class NetworkModuleOFT(network.NetworkModule):
         self.lin_module = None
         self.org_module: list[torch.Module] = [self.sd_module]
 
+        self.scale = 1.0
+
         # kohya-ss
         if "oft_blocks" in weights.w.keys():
             self.is_kohya = True
@@ -78,21 +80,3 @@ class NetworkModuleOFT(network.NetworkModule):
         print(torch.norm(updown))
         output_shape = orig_weight.shape
         return self.finalize_updown(updown, orig_weight, output_shape)
-
-    def finalize_updown(self, updown, orig_weight, output_shape, ex_bias=None):
-        if self.bias is not None:
-            updown = updown.reshape(self.bias.shape)
-            updown += self.bias.to(orig_weight.device, dtype=orig_weight.dtype)
-            updown = updown.reshape(output_shape)
-
-        if len(output_shape) == 4:
-            updown = updown.reshape(output_shape)
-
-        if orig_weight.size().numel() == updown.size().numel():
-            updown = updown.reshape(orig_weight.shape)
-
-        if ex_bias is not None:
-            ex_bias = ex_bias * self.multiplier()
-
-        # Ignore calc_scale, which is not used in OFT.
-        return updown * self.multiplier(), ex_bias
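The commit's point is that OFT no longer needs its own finalize_updown: by setting self.scale = 1.0 in __init__, the module can fall back to the shared finalize_updown inherited from network.NetworkModule, which applies the scale uniformly. A minimal self-contained sketch of that pattern (the base-class body and calc_scale behavior here are assumptions for illustration, not the webui's actual implementation; plain lists stand in for torch tensors):

```python
# Hypothetical sketch: a base class whose finalize_updown honors self.scale,
# so a subclass only has to set self.scale instead of overriding the method.

class NetworkModule:
    def __init__(self):
        self.scale = 1.0

    def multiplier(self):
        # Stand-in for the user-configured network strength.
        return 1.0

    def calc_scale(self):
        # Generic scale hook; the removed OFT override skipped this entirely,
        # while the shared implementation applies it for every module type.
        return self.scale

    def finalize_updown(self, updown, orig_weight, output_shape):
        # Scale and multiplier are applied in one shared place.
        return [v * self.calc_scale() * self.multiplier() for v in updown]


class NetworkModuleOFT(NetworkModule):
    def __init__(self):
        super().__init__()
        self.scale = 1.0  # the line this commit adds; inherited finalize_updown does the rest


oft = NetworkModuleOFT()
print(oft.finalize_updown([2.0, 4.0], None, (2,)))  # scale 1.0 leaves values unchanged: [2.0, 4.0]
```

With scale fixed at 1.0 the numerical result is unchanged; the win is deleting 18 lines of duplicated reshape/bias/multiplier logic in favor of the base class.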