Commit Graph

140 Commits

Author SHA1 Message Date
papuSpartan 5c8e53d5e9 Allow different merge ratios to be used for each pass. Make toggle cmd flag work again. Remove ratio flag. Remove warning about controlnet being incompatible 2023-04-04 02:26:44 -05:00
papuSpartan a609bd56b4 Transition to using settings through UI instead of cmd line args. Added feature to only apply to hr-fix. Install package using requirements_versions.txt 2023-04-01 22:18:35 -05:00
papuSpartan 26ab018253 delay import 2023-04-01 03:31:22 -05:00
papuSpartan 56680cd84a first 2023-04-01 02:07:08 -05:00
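The papuSpartan commits above add per-pass token merging settings. A minimal sketch of the idea, assuming the merging is done with the tomesd package (an assumption; the commits only mention merge ratios and a package installed via requirements_versions.txt), with illustrative ratio values:

```python
# Sketch only: re-patch the model with a different token-merging ratio before
# each pass. `sd_model` stands in for the loaded Stable Diffusion model.
import tomesd

def apply_token_merging(sd_model, ratio: float) -> None:
    """Apply the merge ratio for the upcoming sampling pass."""
    tomesd.remove_patch(sd_model)              # clear any earlier patch
    if ratio > 0:
        tomesd.apply_patch(sd_model, ratio=ratio)

# Illustrative usage, matching "different merge ratios ... for each pass":
#   apply_token_merging(sd_model, ratio=0.3)   # first (normal-resolution) pass
#   apply_token_merging(sd_model, ratio=0.5)   # hires-fix pass only
```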
AUTOMATIC 1b63afbedc sort hypernetworks and checkpoints by name 2023-03-28 20:03:57 +03:00
AUTOMATIC1111 f1db987e6a
Merge pull request #8958 from MrCheeze/variations-model
Add support for the unclip (Variations) models, unclip-h and unclip-l
2023-03-28 19:39:20 +03:00
MrCheeze 1f08600345 overwrite xformers in the unclip model config if not available 2023-03-26 16:55:29 -04:00
MrCheeze 8a34671fe9 Add support for the Variations models (unclip-h and unclip-l) 2023-03-25 21:03:07 -04:00
AUTOMATIC1111 956ed9a737
Merge pull request #8780 from Brawlence/master
Unload and re-load checkpoint to VRAM on request (API & Manual)
2023-03-25 12:03:26 +03:00
carat-johyun 92e173d414 fix variable typo 2023-03-23 14:28:08 +09:00
Φφ 4cbbb881ee Unload checkpoints on Request
…to free VRAM.

New Action buttons in the settings to manually free and reload checkpoints, essentially
juggling models between RAM and VRAM.
2023-03-21 09:28:50 +03:00
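The unload/reload entry above describes juggling the checkpoint between RAM and VRAM on request. A minimal sketch using plain PyTorch; the function names are illustrative, not the webui's actual API:

```python
# Sketch: free VRAM by parking the model in system RAM, then bring it back
# when generation is requested again.
import gc
import torch

def unload_to_ram(model: torch.nn.Module) -> None:
    """Move the checkpoint's weights to system RAM and release VRAM."""
    model.to("cpu")
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()          # hand freed blocks back to the driver

def reload_to_vram(model: torch.nn.Module, device: str = "cuda") -> None:
    """Move the weights back onto the GPU."""
    model.to(device)
```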
AUTOMATIC 6a04a7f20f fix an error loading Lora with empty values in metadata 2023-03-14 11:22:29 +03:00
AUTOMATIC c19530f1a5 Add view metadata button for Lora cards. 2023-03-14 09:10:26 +03:00
w-e-w 014e7323f6 when exists 2023-02-19 20:49:07 +09:00
w-e-w c77f01ff31 fix auto sd download issue 2023-02-19 20:37:40 +09:00
missionfloyd c4ea16a03f Add ".vae.ckpt" to ext_blacklist 2023-02-15 19:47:30 -07:00
missionfloyd 1615f786ee Download model if none are found 2023-02-14 20:54:02 -07:00
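A sketch of the "download model if none are found" behaviour; the URL is a placeholder, not the checkpoint the commit actually fetches:

```python
# Sketch only: if the models directory contains no checkpoints, download a
# default one. MODEL_URL is a placeholder.
import os
import urllib.request

MODEL_DIR = "models/Stable-diffusion"
MODEL_URL = "https://example.com/default-checkpoint.safetensors"  # placeholder

def ensure_model_present() -> None:
    os.makedirs(MODEL_DIR, exist_ok=True)
    checkpoints = [f for f in os.listdir(MODEL_DIR)
                   if f.endswith((".ckpt", ".safetensors"))]
    if not checkpoints:
        target = os.path.join(MODEL_DIR, os.path.basename(MODEL_URL))
        print(f"No checkpoints found; downloading default model to {target}")
        urllib.request.urlretrieve(MODEL_URL, target)
```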
AUTOMATIC 668d7e9b9a make it possible to load SD1 checkpoints without CLIP 2023-02-05 11:21:00 +03:00
AUTOMATIC 3e0f9a7543 fix issue with switching back to checkpoint that had its checksum calculated during runtime mentioned in #7506 2023-02-04 15:23:16 +03:00
AUTOMATIC1111 c0e0b5844d
Merge pull request #7470 from cbrownstein-lambda/update-error-message-no-checkpoint
Update error message WRT missing checkpoint file
2023-02-04 12:07:12 +03:00
AUTOMATIC 81823407d9 add --no-hashing 2023-02-04 11:38:56 +03:00
Cody Brownstein fb97acef63 Update error message WRT missing checkpoint file
The Safetensors format is also supported.
2023-02-01 14:51:06 -08:00
AUTOMATIC f6b7768f84 support for searching subdirectory names for extra networks 2023-01-29 10:20:19 +03:00
AUTOMATIC 5d14f282c2 fixed a bug where after switching to a checkpoint with unknown hash, you'd get empty space instead of checkpoint name in UI
fixed a bug where if you update the selected checkpoint on disk and then restart the program, a different checkpoint loads but the name shown is that of the old one.
2023-01-28 16:23:49 +03:00
Max Audron 5eee2ac398 add data-dir flag and set all user data directories based on it 2023-01-27 14:44:30 +01:00
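A minimal sketch of the --data-dir flag: one base path from which the user data directories are derived. The sub-directory names are illustrative:

```python
# Sketch: derive all user data paths from a single --data-dir flag.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--data-dir", type=str, default=os.getcwd(),
                    help="base path where models, embeddings, etc. are stored")
args = parser.parse_args()

models_path     = os.path.join(args.data_dir, "models")
embeddings_path = os.path.join(args.data_dir, "embeddings")
extensions_path = os.path.join(args.data_dir, "extensions")
```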
AUTOMATIC 6f31d2210c support detecting midas model
fix broken api for checkpoint list
2023-01-27 11:54:19 +03:00
AUTOMATIC d2ac95fa7b remove the need to place configs near models 2023-01-27 11:28:12 +03:00
AUTOMATIC1111 1574e96729
Merge pull request #6510 from brkirch/unet16-upcast-precision
Add upcast options, full precision sampling from float16 UNet and upcasting attention for inference using SD 2.1 models without --no-half
2023-01-25 19:12:29 +03:00
Kyle ee0a0da324 Add instruct-pix2pix hijack
Allows loading instruct-pix2pix models via same method as inpainting models in sd_models.py and sd_hijack_ip2p.py

Adds ddpm_edit.py necessary for instruct-pix2pix
2023-01-25 08:53:23 -05:00
brkirch 84d9ce30cb Add option for float32 sampling with float16 UNet
This also handles type casting so that ROCm and MPS torch devices work correctly without --no-half. One cast is required for deepbooru in deepbooru_model.py, and some explicit casting is required for img2img and inpainting. depth_model can't be converted to float16 or it won't work correctly on some systems (it's known to have issues on MPS), so in sd_models.py model.depth_model is removed before model.half() is applied.
2023-01-25 01:13:02 -05:00
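A sketch of the depth_model trick described in the message above: detach it before model.half() so it stays in float32 while the rest of the weights become float16. The helper name is illustrative, and reattaching afterwards is an assumption about the surrounding code:

```python
# Sketch: keep depth_model out of the float16 conversion, then reattach it.
def half_model_keep_depth(model):
    depth_model = getattr(model, "depth_model", None)
    if depth_model is not None:
        model.depth_model = None          # exclude it from the half() sweep

    model.half()                          # remaining weights become float16

    if depth_model is not None:
        model.depth_model = depth_model   # reattach, still float32 (assumed)
    return model
```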
AUTOMATIC c1928cdd61 bring back short hashes to sd checkpoint selection 2023-01-19 18:58:08 +03:00
AUTOMATIC a5bbcd2153 fix bug with "Ignore selected VAE for..." option completely disabling VAE selection
rework VAE resolving code to be more simple
2023-01-14 19:56:09 +03:00
AUTOMATIC 08c6f009a5 load hashes from cache for checkpoints that have them
add checkpoint hash to footer
2023-01-14 15:55:40 +03:00
AUTOMATIC febd2b722e update key to use with checkpoints' sha256 in cache 2023-01-14 13:37:55 +03:00
AUTOMATIC f9ac3352cb change hypernets to use sha256 hashes 2023-01-14 10:25:37 +03:00
AUTOMATIC a95f135308 change hash to sha256 2023-01-14 09:56:59 +03:00
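The hashing commits above switch checkpoints and hypernetworks to SHA-256, with hashes cached and short prefixes shown in the UI. A sketch of the hashing itself; the 10-character short-hash length is an assumption for illustration:

```python
# Sketch: hash a checkpoint with SHA-256, reading in chunks so multi-gigabyte
# files never have to fit in memory.
import hashlib

def calculate_sha256(filename: str, blksize: int = 1024 * 1024) -> str:
    h = hashlib.sha256()
    with open(filename, "rb") as f:
        for block in iter(lambda: f.read(blksize), b""):
            h.update(block)
    return h.hexdigest()

# full_hash = calculate_sha256("models/Stable-diffusion/model.safetensors")
# short_hash = full_hash[:10]    # e.g. shown next to the checkpoint name
```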
AUTOMATIC 4bd490727e fix for an error caused by skipping initialization, for realsies this time: TypeError: expected str, bytes or os.PathLike object, not NoneType 2023-01-11 18:54:13 +03:00
AUTOMATIC 1a23dc32ac possible fix for fallback for fast model creation from config, attempt 2 2023-01-11 10:34:36 +03:00
AUTOMATIC 4fdacd31e4 possible fix for fallback for fast model creation from config 2023-01-11 10:24:56 +03:00
AUTOMATIC 0f8603a559 add support for transformers==4.25.1
add fallback for when quick model creation fails
2023-01-10 17:46:59 +03:00
AUTOMATIC ce3f639ec8 add more stuff to ignore when creating model from config
prevent .vae.safetensors files from being listed as stable diffusion models
2023-01-10 16:51:04 +03:00
AUTOMATIC 0c3feb202c disable torch weight initialization and CLIP downloading/reading checkpoint to speedup creating sd model from config 2023-01-10 14:08:29 +03:00
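A simplified sketch of skipping weight initialization while the model is built from its config: the random init is wasted work because the checkpoint's weights overwrite it immediately afterwards. The real change also covers CLIP downloads and more init functions; the context-manager name and the `instantiate_from_config` call are illustrative:

```python
# Sketch: temporarily replace a few torch.nn.init functions with no-ops so
# constructing the model from config skips random weight initialization.
import contextlib
import torch

@contextlib.contextmanager
def disable_weight_init():
    def do_nothing(tensor, *args, **kwargs):
        return tensor

    saved = (torch.nn.init.kaiming_uniform_,
             torch.nn.init.normal_,
             torch.nn.init.uniform_)
    torch.nn.init.kaiming_uniform_ = do_nothing
    torch.nn.init.normal_ = do_nothing
    torch.nn.init.uniform_ = do_nothing
    try:
        yield
    finally:
        (torch.nn.init.kaiming_uniform_,
         torch.nn.init.normal_,
         torch.nn.init.uniform_) = saved

# with disable_weight_init():
#     sd_model = instantiate_from_config(config.model)   # illustrative call
```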
Vladimir Mandic 552d7b90bf allow model load if previous model failed 2023-01-09 18:34:26 -05:00
AUTOMATIC 642142556d use commandline-supplied cuda device name instead of cuda:0 for safetensors PR that doesn't fix anything 2023-01-04 15:09:53 +03:00
AUTOMATIC 68fbf4558f Merge remote-tracking branch 'Narsil/fix_safetensors_load_speed' 2023-01-04 14:53:03 +03:00
AUTOMATIC 0cd6399b8b fix broken inpainting model 2023-01-04 14:29:13 +03:00
AUTOMATIC 8d8a05a3bb find configs for models at runtime rather than when starting 2023-01-04 12:47:42 +03:00
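A sketch of resolving a model's config at load time rather than at startup: prefer a .yaml sitting next to the checkpoint, otherwise fall back to a default. The fallback path is illustrative:

```python
# Sketch: look for "<checkpoint>.yaml" beside the checkpoint when it is loaded.
import os

DEFAULT_CONFIG = "configs/v1-inference.yaml"   # illustrative fallback

def find_checkpoint_config(checkpoint_path: str) -> str:
    config = os.path.splitext(checkpoint_path)[0] + ".yaml"
    if os.path.exists(config):
        return config
    return DEFAULT_CONFIG
```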
AUTOMATIC 02d7abf514 helpful error message when trying to load 2.0 without config
failing to load model weights from settings won't break generation for currently loaded model anymore
2023-01-04 12:35:07 +03:00
AUTOMATIC 8f96f92899 call script callbacks for reloaded model after loading embeddings 2023-01-03 18:39:14 +03:00
AUTOMATIC 311354c0bb fix the issue with training on SD2.0 2023-01-02 00:38:09 +03:00