novelai-storage / Stable Diffusion Webui

Commit bdaa36c8 authored Sep 30, 2022 by brkirch
When device is MPS, use CPU for GFPGAN instead
GFPGAN will not work if the device is MPS, so default to CPU instead.
parent 84e97a98
Showing 2 changed files with 4 additions and 4 deletions:

modules/devices.py (+1, -1)
modules/gfpgan_model.py (+3, -3)
modules/devices.py

@@ -34,7 +34,7 @@ errors.run(enable_tf32, "Enabling TF32")
 
 
 device = get_optimal_device()
-device_codeformer = cpu if has_mps else device
+device_gfpgan = device_codeformer = cpu if device.type == 'mps' else device
 
 
 def randn(seed, shape):
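The devices.py change is the core of the fix: compute the preferred device once, then route GFPGAN (and CodeFormer, which is given the same fallback) to the CPU whenever that device is MPS. Below is a minimal self-contained sketch of the same pattern; get_optimal_device is simplified here for illustration and stands in for the real function in modules/devices.py.

import torch

cpu = torch.device("cpu")

def get_optimal_device() -> torch.device:
    # Simplified stand-in for the webui's device picker: prefer CUDA, then MPS, then CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
        return torch.device("mps")
    return cpu

device = get_optimal_device()

# GFPGAN does not work on MPS, so send it (and CodeFormer) to the CPU in that case.
device_gfpgan = device_codeformer = cpu if device.type == "mps" else device

On Apple Silicon this means face restoration runs on the CPU while the rest of the pipeline can stay on MPS.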
modules/gfpgan_model.py

@@ -21,7 +21,7 @@ def gfpgann():
     global loaded_gfpgan_model
     global model_path
     if loaded_gfpgan_model is not None:
-        loaded_gfpgan_model.gfpgan.to(shared.device)
+        loaded_gfpgan_model.gfpgan.to(devices.device_gfpgan)
         return loaded_gfpgan_model
 
     if gfpgan_constructor is None:
@@ -36,8 +36,8 @@ def gfpgann():
     else:
         print("Unable to load gfpgan model!")
         return None
-    model = gfpgan_constructor(model_path=model_file, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None)
-    model.gfpgan.to(shared.device)
+    model = gfpgan_constructor(model_path=model_file, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=devices.device_gfpgan)
+    model.gfpgan.to(devices.device_gfpgan)
     loaded_gfpgan_model = model
 
     return model
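For context, here is a hedged sketch of how the model cached by gfpgann() is typically consumed downstream. The gfpgan_fix_faces function below is illustrative rather than copied from the repository; the enhance() call follows the GFPGANer API from the gfpgan package (returning cropped faces, restored faces, and the restored image), and the RGB/BGR flips are an assumption about OpenCV-style input.

import numpy as np

def gfpgan_fix_faces(np_image: np.ndarray) -> np.ndarray:
    # Illustrative sketch: gfpgann() returns the cached GFPGANer, already moved
    # to devices.device_gfpgan by the code changed in this commit.
    model = gfpgann()
    if model is None:
        return np_image
    np_image_bgr = np_image[:, :, ::-1]  # RGB -> BGR (GFPGAN expects OpenCV-style images)
    _cropped, _restored_faces, restored_bgr = model.enhance(
        np_image_bgr, has_aligned=False, only_center_face=False, paste_back=True
    )
    return restored_bgr[:, :, ::-1]  # BGR -> RGB

Because both the cached-model path and the freshly constructed model are sent to devices.device_gfpgan, callers like this never see an MPS-resident GFPGAN model.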