Stable Diffusion Webui (novelai-storage)

Commit 06d0a5ab, authored Jun 16, 2024 by AUTOMATIC1111
Parent: 80f618ea

    fix NaN issue when running without --precision half
Showing 2 changed files, with 3 additions and 3 deletions:

modules/models/sd3/other_impls.py   +1 -2
modules/models/sd3/sd3_model.py     +2 -1
modules/models/sd3/other_impls.py

@@ -262,8 +262,7 @@ class SDClipModel(torch.nn.Module, ClipTokenWeightEncoder):
     def forward(self, tokens):
         backup_embeds = self.transformer.get_input_embeddings()
-        device = backup_embeds.weight.device
-        tokens = torch.LongTensor(tokens).to(device)
+        tokens = torch.asarray(tokens, dtype=torch.int64, device=backup_embeds.weight.device)
         outputs = self.transformer(tokens, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state)
         self.transformer.set_input_embeddings(backup_embeds)
         if self.layer == "last":
...
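
The other_impls.py change replaces the two-step torch.LongTensor(tokens).to(device) construction with a single torch.asarray call that creates the int64 token tensor directly on the embedding weights' device. A minimal standalone sketch of the two forms, using illustrative token ids and device rather than values from the commit:

import torch

# Illustrative CLIP-style token ids; in other_impls.py these come from the tokenizer.
tokens = [[49406, 1125, 539, 320, 2368, 49407]]
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Old form: build a CPU LongTensor first, then copy it to the target device.
old_style = torch.LongTensor(tokens).to(device)

# New form: build the int64 tensor in one call, already on the target device.
new_style = torch.asarray(tokens, dtype=torch.int64, device=device)

assert old_style.dtype == new_style.dtype == torch.int64
assert torch.equal(old_style, new_style)

Both forms yield the same tensor; the new one simply skips the intermediate CPU tensor and removes the need for the separate device = ... line.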
modules/models/sd3/sd3_model.py

@@ -149,7 +149,8 @@ class SD3Inferencer(torch.nn.Module):
         return contextlib.nullcontext()

     def get_learned_conditioning(self, batch: list[str]):
-        return self.cond_stage_model(batch)
+        with devices.without_autocast():
+            return self.cond_stage_model(batch)

     def apply_model(self, x, t, cond):
         return self.model.apply_model(x, t, c_crossattn=cond['crossattn'], y=cond['vector'])
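
The sd3_model.py change carries the actual NaN fix: get_learned_conditioning now runs the SD3 text encoders inside devices.without_autocast(), so when the webui is launched without --precision half the conditioning pass is not downcast by an active autocast region. A rough sketch of what such a helper can look like follows; this is an illustrative assumption, not the webui's modules.devices implementation:

import contextlib
import torch

def without_autocast(disable=False):
    # Hypothetical sketch: if an autocast region is currently active, open a
    # nested region with enabled=False so the wrapped ops run at the tensors'
    # real dtype (float32 here); otherwise return a no-op context.
    if torch.is_autocast_enabled() and not disable:
        return torch.autocast("cuda", enabled=False)
    return contextlib.nullcontext()

Under such a context the CLIP/T5 encoders compute at their stored float32 precision, avoiding the float16 overflow that presumably produced the NaNs.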