MorphMoe

Artist: **Nenchi** | [pixiv](https://www.pixiv.net/member_illust.php?mode=medium&illust_id=54580388) | [twitter](https://twitter.com/i/web/status/684769579982176256) | [danbooru](https://danbooru.donmai.us/post/show/2237380) Full quality: [.jpg 1 MB](https://files.catbox.moe/xg9ysl.jpg) (1505 × 1620)

Artist: **Yolanda** | [twitter](https://twitter.com/yolanda315732/status/1510997475745697794) | [danbooru](https://danbooru.donmai.us/post/show/5247919)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*Buran*](https://tapas.io/episode/1039944) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/ngcybg.png) *Unlike photos, upscaling digital art with a well-trained algorithm will likely have little to no undesirable effect. Why? Well, the drawing originated as a series of brush strokes, fill areas, gradients, etc., which could be represented in a vector format but are instead rendered on a pixel canvas. As long as no feature is smaller than 2 pixels, the Nyquist-Shannon sampling theorem effectively says that the original vector image can be reconstructed losslessly. (This is not a fully accurate explanation; in practice, algorithms need more pixels to make a good guess, especially if compression artifacts are present.) Suppose I gave you a low-res image of the flag of South Korea* 🇰🇷 *and asked you to manually upscale it for printing. Since the flag has no small features, there is no need to guess at detail (this assumption does not hold for photos): you could redraw it with vector shapes that use the same colors, recreate every stroke and arc in the image, and then render them at an arbitrarily high resolution. AI upscalers trained on drawings somewhat imitate this process - not adding detail, just trying to represent the original with more pixels so that it looks sharp on an HD screen. However, the original images are so low-res that artifacts are basically inevitable, which is why a link to the original is provided.*
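The round-trip claim in the note above can be demonstrated with a toy sketch (my own illustration, not waifu2x's actual algorithm, which is a trained convolutional network): an image whose features are no smaller than the sampling grid survives upscaling and downscaling without loss. Plain nearest-neighbour resampling on a small flag-like grid of flat colour regions is enough to show it:

```python
def upscale(img, k):
    """Nearest-neighbour upscale of a 2-D grid by an integer factor k."""
    out = []
    for row in img:
        wide = [px for px in row for _ in range(k)]  # repeat each pixel k times
        out.extend([list(wide) for _ in range(k)])   # repeat each row k times
    return out

def downscale(img, k):
    """Resample back by keeping the top-left pixel of every k-by-k block."""
    return [row[::k] for row in img[::k]]

# A tiny "flag" made of flat colour regions, each at least 2 pixels wide:
flag = [
    ["W", "W", "R", "R"],
    ["W", "W", "B", "B"],
]

big = upscale(flag, 3)            # 6 rows x 12 columns
assert downscale(big, 3) == flag  # the round trip is lossless
```

A photo fails this round trip because it carries detail at every scale; a flat-colour drawing does not, which is exactly the intuition the note appeals to.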

Artist: **Rinotuna** | [pixiv](https://www.pixiv.net/en/users/26547499) | [twitter](https://twitter.com/rinotuna) | [artstation](https://rinotuna.artstation.com/projects/rAYdbO) | [linktree](https://linktr.ee/rinotuna) | [patreon](https://www.patreon.com/rinotuna) | [danbooru](https://danbooru.donmai.us/post/show/4597176) Full quality: [.jpg 1 MB](https://cdn.donmai.us/original/89/61/896162d970b9cfbc9ddb2bf30caa6bd4.jpg) (2204 × 2547)

Artist: **Hashi** | [pixiv](https://www.pixiv.net/member_illust.php?mode=medium&illust_id=108481286) | [twitter](https://twitter.com/hashi_bb84) | [danbooru](https://danbooru.donmai.us/post/show/6344711)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*Seamoth*](https://tapas.io/episode/1018654) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/jjmoc5.png)

Artist: **Tom-Neko** | [fediverse](https://pawoo.net/@tomu_neko) | [pixiv](https://www.pixiv.net/member_illust.php?mode=medium&illust_id=73099789) | [twitter](https://twitter.com/i/web/status/1094429028272861186) | [danbooru](https://danbooru.donmai.us/post/show/3422841) Full quality: [.png 1 MB](https://cdn.donmai.us/original/3a/cb/3acb27feff680e664d085f2a02fff880.png) (2000 × 800)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*Longleg*](https://tapas.io/episode/1024920) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/8v60he.png)

Artist: **Astg** | [pixiv](https://www.pixiv.net/member_illust.php?mode=medium&illust_id=50381112) | [twitter](https://twitter.com/ARQISAT) | [artstation](https://www.artstation.com/dangerdrop) | [tumblr](https://dangerdrop.tumblr.com) | [deviantart](https://deviantart.com/view/570193393) | [danbooru](https://danbooru.donmai.us/post/show/2015073)

Artist: **Zhvo** | [pixiv](https://www.pixiv.net/member_illust.php?mode=medium&illust_id=111632294) | [twitter](https://twitter.com/Zhvowa) | [danbooru](https://danbooru.donmai.us/post/show/7276500) Full quality: [.png 5 MB](https://cdn.donmai.us/original/ef/57/ef578e53e6c7cd874b6222abcac0842f.png) (1440 × 2560)

Artist: **Lam** | [pixiv](https://www.pixiv.net/member_illust.php?mode=medium&illust_id=110238346) | [twitter](https://twitter.com/ramdayo1122/status/1683766828877893632) | [danbooru](https://danbooru.donmai.us/post/show/6525828)

Artist: **Shycocoa** | [pixiv](https://www.pixiv.net/en/users/88909247) | [twitter](https://twitter.com/shycocoa/status/1168142205875150850) | [artstation](https://www.artstation.com/soojoop) | [danbooru](https://danbooru.donmai.us/post/show/4313912)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*Pod 153*](https://tapas.io/episode/1011507) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/ozpldw.png)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*The Boring Girl*](https://tapas.io/episode/1003152) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/en7eh9.png)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 30*](https://tapas.io/episode/801672) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/2llg1i.png)

Artist: **Rinotuna** | [pixiv](https://www.pixiv.net/en/users/26547499) | [twitter](https://twitter.com/rinotuna/status/1524785111295619072) | [artstation](https://rinotuna.artstation.com/projects/X1YP4a) | [linktree](https://linktr.ee/rinotuna) | [patreon](https://www.patreon.com/rinotuna) | [danbooru](https://danbooru.donmai.us/post/show/5412385)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 30*](https://tapas.io/episode/801672) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/c24efn.png)

Artist: **Rinotuna** | [pixiv](https://www.pixiv.net/en/users/26547499) | [twitter](https://twitter.com/rinotuna/status/1391803334403575808) | [artstation](https://rinotuna.artstation.com/projects/WKDYND) | [linktree](https://linktr.ee/rinotuna) | [patreon](https://www.patreon.com/rinotuna) | [danbooru](https://danbooru.donmai.us/post/show/4514814) Full quality: [.png 2 MB](https://files.catbox.moe/i30byg.png) (1920 × 2400)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 29*](https://tapas.io/episode/794529) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/doq12n.png)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 29*](https://tapas.io/episode/794529) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/aff9do.png)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 28*](https://tapas.io/episode/786547) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/nx71at.png)

Artist: **Fami** | [fediverse](https://pawoo.net/@famy_siraso) | [pixiv](https://www.pixiv.net/en/users/5860132) | [twitter](https://twitter.com/i/web/status/888349056656826368) | [tumblr](https://famysiraso.tumblr.com) | [danbooru](https://danbooru.donmai.us/post/show/2794689)

Artist: **Rinotuna** | [pixiv](https://www.pixiv.net/en/users/26547499) | [twitter](https://twitter.com/rinotuna) | [artstation](https://rinotuna.artstation.com/projects/rJVZWa) | [linktree](https://linktr.ee/rinotuna) | [patreon](https://www.patreon.com/rinotuna) | [danbooru](https://danbooru.donmai.us/post/show/7171231) Full quality: [.jpg 1 MB](https://cdn.donmai.us/original/9c/1c/9c1cfd713606b6eb10be30657b56cdaf.jpg) (2465 × 2973)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 28*](https://tapas.io/episode/786547) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/svad0n.png)

Artist: **Rinotuna** | [pixiv](https://www.pixiv.net/en/users/26547499) | [twitter](https://twitter.com/rinotuna) | [artstation](https://www.artstation.com/artwork/J9A85v) | [deviantart](https://deviantart.com/view/945623069) | [linktree](https://linktr.ee/rinotuna) | [patreon](https://www.patreon.com/rinotuna) | [danbooru](https://danbooru.donmai.us/post/show/4784391) Full quality: [.jpg 1 MB](https://cdn.donmai.us/original/4d/6f/4d6f4f200ab0e17d4109ff6d9a234a0d.jpg) (2291 × 2709)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 27*](https://tapas.io/episode/780859) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/ujad6r.png)

Artist: **Fami** | [fediverse](https://pawoo.net/@famy_siraso) | [pixiv](https://www.pixiv.net/en/users/5860132) | [twitter](https://twitter.com/i/web/status/920598849340633093) | [tumblr](https://famysiraso.tumblr.com) | [danbooru](https://danbooru.donmai.us/post/show/5590763)

cross-posted from: https://sh.itjust.works/post/26382751 > Sorry if the weaponry is incorrect, I just don't want to spend another 5 hours trying to draw missiles part by part. > brrrrrttttt > [r*ddit](https://libreddit.northboot.xyz/r/NonCredibleDefense/comments/1fz4u32/a10_warthog_81st_fs/)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 27*](https://tapas.io/episode/780859) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/tsix87.png)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 26*](https://tapas.io/episode/769533) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/poegfv.png)

Artist: **Rinotuna** | [pixiv](https://www.pixiv.net/en/users/26547499) | [twitter](https://twitter.com/rinotuna/status/1481276616261369863) | [artstation](https://www.artstation.com/artwork/G81Wed) | [linktree](https://linktr.ee/rinotuna) | [patreon](https://www.patreon.com/rinotuna) | [danbooru](https://danbooru.donmai.us/post/show/5072502) Full quality: [.jpg 1 MB](https://cdn.donmai.us/original/4a/f0/4af08dd106260da6c489afdab6c17acc.jpg) (2521 × 3130)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 26*](https://tapas.io/episode/769533) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/aoeq7u.png)

Artist: **Eisuto** | [pixiv](https://www.pixiv.net/member_illust.php?mode=medium&illust_id=69514137) | [twitter](https://twitter.com/i/web/status/1013409266743443457) | [danbooru](https://danbooru.donmai.us/post/show/3181255)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 25*](https://tapas.io/episode/761804) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/sbipi8.png)

Artist: **Shycocoa** | [pixiv](https://www.pixiv.net/en/users/88909247) | [twitter](https://twitter.com/shycocoa/status/1168142205875150850) | [artstation](https://www.artstation.com/soojoop) | [danbooru](https://danbooru.donmai.us/post/show/4312603)

Artist: **Ermao Wu** | [twitter](https://twitter.com/ErMao_Wu/status/1459597349551230976) | [danbooru](https://danbooru.donmai.us/post/show/4912181)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 25*](https://tapas.io/episode/761804) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/a3i5rq.png) Edit: catbox.moe appears to be down only for me; it works over a VPN.

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 24*](https://tapas.io/episode/725534) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/0uvohn.png) Edit: catbox.moe appears to be down only for me; it works over a VPN.

Source: [Instagram](https://www.instagram.com/p/DAnvoMIzKDE/)

Artist: [Onion-Oni](https://m-10ka.deviantart.com) aka TenTh from Random-tan Studio Original post: [*#Humanization 24*](https://tapas.io/episode/725534) on Tapas (warning: JS-heavy site) Upscaled by waifu2x (model: upconv_7_anime_style_art_rgb). [Original](https://files.catbox.moe/20yeub.png)
