novelai-storage / Stable Diffusion Webui · Commits

Commit 1fd69655
Authored Jan 09, 2024 by Kohaku-Blueleaf
Parent: e0036596

Revert "Apply correct inference precision implementation"

This reverts commit e0036596.
Showing 1 changed file with 9 additions and 33 deletions.
--- a/modules/devices.py
+++ b/modules/devices.py
@@ -132,21 +132,6 @@ patch_module_list = [
 ]
 
 
-def cast_output(result):
-    if isinstance(result, tuple):
-        result = tuple(i.to(dtype_inference) if isinstance(i, torch.Tensor) else i for i in result)
-    elif isinstance(result, torch.Tensor):
-        result = result.to(dtype_inference)
-    return result
-
-
-def autocast_with_cast_output(self, *args, **kwargs):
-    result = self.org_forward(*args, **kwargs)
-    if dtype_inference != dtype:
-        result = cast_output(result)
-    return result
-
-
 def manual_cast_forward(target_dtype):
     def forward_wrapper(self, *args, **kwargs):
         if any(
@@ -164,7 +149,15 @@ def manual_cast_forward(target_dtype):
             self.to(org_dtype)
 
         if target_dtype != dtype_inference:
-            result = cast_output(result)
+            if isinstance(result, tuple):
+                result = tuple(
+                    i.to(dtype_inference)
+                    if isinstance(i, torch.Tensor)
+                    else i
+                    for i in result
+                )
+            elif isinstance(result, torch.Tensor):
+                result = result.to(dtype_inference)
         return result
 
     return forward_wrapper
@@ -185,20 +178,6 @@ def manual_cast(target_dtype):
             module_type.forward = module_type.org_forward
 
 
-@contextlib.contextmanager
-def precision_full_with_autocast(autocast_ctx):
-    for module_type in patch_module_list:
-        org_forward = module_type.forward
-        module_type.forward = autocast_with_cast_output
-        module_type.org_forward = org_forward
-
-    try:
-        with autocast_ctx:
-            yield None
-    finally:
-        for module_type in patch_module_list:
-            module_type.forward = module_type.org_forward
-
-
 def autocast(disable=False):
     if disable:
         return contextlib.nullcontext()
@@ -212,9 +191,6 @@ def autocast(disable=False):
     if has_xpu() or has_mps() or cuda_no_autocast():
         return manual_cast(dtype_inference)
 
-    if dtype_inference == torch.float32 and dtype != torch.float32:
-        return precision_full_with_autocast(torch.autocast("cuda"))
-
     return torch.autocast("cuda")
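For context, a minimal self-contained sketch of the pattern this revert inlines back into manual_cast_forward: patch a module's forward so tensor inputs are cast to a working dtype, compute runs there, and tensor outputs are cast back to the inference dtype. This is not the webui's actual code; the globals dtype and dtype_inference mirror the ones in modules/devices.py, while patching only nn.Linear and using bfloat16 as the working dtype are illustrative choices so the demo runs on CPU.

    # Illustrative sketch only: nn.Linear and torch.bfloat16 are stand-in
    # choices; the webui patches a whole list of module types.
    import torch
    import torch.nn as nn

    dtype = torch.bfloat16           # working dtype used for weights and compute
    dtype_inference = torch.float32  # dtype the rest of the pipeline expects

    def manual_cast_forward(target_dtype):
        org_forward = nn.Linear.forward  # captured before patching

        def forward_wrapper(self, *args, **kwargs):
            # Cast tensor inputs down to the working dtype before running forward.
            args = [a.to(target_dtype) if isinstance(a, torch.Tensor) else a for a in args]
            kwargs = {k: v.to(target_dtype) if isinstance(v, torch.Tensor) else v for k, v in kwargs.items()}
            result = org_forward(self, *args, **kwargs)
            # Cast tensor outputs back up, mirroring the block restored by this revert.
            if target_dtype != dtype_inference:
                if isinstance(result, tuple):
                    result = tuple(i.to(dtype_inference) if isinstance(i, torch.Tensor) else i for i in result)
                elif isinstance(result, torch.Tensor):
                    result = result.to(dtype_inference)
            return result

        return forward_wrapper

    layer = nn.Linear(4, 4).to(dtype)
    org = nn.Linear.forward
    nn.Linear.forward = manual_cast_forward(dtype)
    try:
        out = layer(torch.randn(2, 4, dtype=dtype_inference))
        print(out.dtype)  # torch.float32: compute ran in bfloat16, output cast back
    finally:
        nn.Linear.forward = org  # restore the original forward

Restoring the original forward in a finally block matches how the file's manual_cast context manager un-patches each module type via org_forward, as seen in the third hunk above.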