Yes, all right. So my name is Jasper. I'm working as a data scientist and playing a lot with AI. Two weeks ago I was on a trip to China, and I took a lot of very bad pictures there, which is a shame. I can't use them on Instagram or anything. So I had the idea to use Stable Diffusion to try to improve them. This is really a short demo that I started last weekend.
So we're going to see what's possible. This is the Olympic Tower in Beijing, from my iPhone. And on the right is a Stable Diffusion improved version, as if it had been shot on a 35 millimeter analog camera. A little bit sharper, different colors. And actually, I found it quite difficult to change these colors and styles. So, I don't know if you've seen the Magnific upscaler.
That was only started in December and already acquired, I think last month. Really cool stuff they do. It works by taking a low resolution image and upscaling it, and you can tune the creativity, the consistency, and the aesthetics. For my holiday pictures I'm really interested in very good consistency. I don't want any fantasy things.
I want the pictures to actually show where I've been, right? They should just look better. Which is really difficult, I found. So there are these new ControlNet Tile models, which conceptually work by tiling your image into very small tiles, having that as a constraint in your workflow, and generating an image which has similar tiles.
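(For reference, a minimal sketch of that tile idea, written with the diffusers library rather than the ComfyUI workflow shown in the talk. The model IDs, file names, and parameters are assumptions; the SD 1.5 tile ControlNet is used here, while the talk itself uses an SDXL checkpoint.)

```python
# Rough sketch of ControlNet Tile with diffusers (not the talk's ComfyUI graph).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

source = load_image("holiday_photo.jpg")  # hypothetical input path

# The tile ControlNet is conditioned on the source image itself, so every
# local patch of the output is pushed to stay close to the corresponding
# patch of the input -- that's the "similar tiles" constraint.
result = pipe(
    prompt="photo shot on a 35mm analog film camera, natural colors, sharp",
    image=source,           # img2img starting point
    control_image=source,   # tile conditioning
    strength=0.4,           # lower = more consistent with the original
    num_inference_steps=30,
).images[0]
result.save("improved.jpg")
```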
Then I'm also using basically all the ControlNets I can find that seem relevant. I want my nice looking image to have the same Canny edges and the same depth map, and then you can just combine them all, right? So I've been learning this ComfyUI image generation tool, which is a lot of fun. I don't know if you've been using it or trying it; it's really cool. I used to do everything in Diffusers, but this is so much faster for quick prototyping.
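(And a similar hedged sketch of stacking several ControlNets at once, Canny edges plus depth, again in diffusers terms. The depth map is simply read from a file here, whereas in practice it would come from a depth estimator; model IDs and weights are illustrative.)

```python
# Rough sketch of combining multiple ControlNets (Canny + depth) in one pass.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

canny_cn = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
depth_cn = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=[canny_cn, depth_cn],   # a list means both constraints apply
    torch_dtype=torch.float16,
).to("cuda")

source = Image.open("holiday_photo.jpg").convert("RGB")

# Canny edge map computed from the source photo
edges = cv2.Canny(np.array(source), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Depth map stubbed from disk; normally produced by a depth estimator
depth_image = Image.open("holiday_photo_depth.png").convert("RGB")

result = pipe(
    prompt="same scene, 35mm analog film look, better light",
    image=source,
    control_image=[canny_image, depth_image],
    controlnet_conditioning_scale=[0.8, 0.6],  # per-ControlNet weights
    strength=0.45,
).images[0]
```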
Let's see. I think it's live on my PC; it needs to reconnect. Well, it looks like this. You have all these nodes loading models and doing all the steps, and then in the end you've got your picture, right? So I just want to show some examples. On the left is my original iPhone picture, and then this is how far I've got now.
So I can get some images with nicer, better colors, more sunny, a bit nicer looking. But it does go a bit too creative, like on the right, where you get this tree which isn't there, right? So I need a bit more control for sure. And especially in this scene, which was in Hong Kong; I think it's a better picture, it's more dramatic for sure.
But there's a different building here, right? So yeah, that's difficult. I think the Chinese characters are also wrong, but then I wouldn't be able to tell, so let's not worry about that. What's really cool, actually, is that once you've made this ComfyUI workflow, there are now very nice ways to quickly turn it into an API. It's maybe half an hour of work.
So I did it with a different model than this one. There are a few GitHub repos where you just replace a JSON that contains the workflow, push it to Replicate, and have a live API ready. So for instance, this is the GitHub repo: you replace one JSON in the predict.py, and then you've got your API which you can use in Python. You can set this up.
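(As a rough idea of what calling such an API from Python looks like with the replicate client; the model name and the input keys are placeholders, not the actual repo or workflow from the talk.)

```python
# Hypothetical call to a ComfyUI workflow deployed on Replicate.
# "your-username/photo-improver" and the input keys are made up for illustration.
import replicate

output = replicate.run(
    "your-username/photo-improver",
    input={
        "image": open("holiday_photo.jpg", "rb"),
        "prompt": "shot on a 35mm analog film camera",
    },
)
print(output)  # typically a URL (or list of URLs) to the generated image(s)
```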
You choose the GPUs you want, choose to keep it active or not, and pay per second. So if any of you are working with ComfyUI, it's really something I would investigate for sure. All right, so I think my five minutes are up. And yeah, thanks for the opportunity. [APPLAUSE] Could you scroll back to the last slide, please? This one? No.
This one. I mean the one with the UI, with the two photos in it. I think it was from the browser. The browser? Yeah. Yeah, yeah, yeah. This one.
Yeah. So this was just in another web browser. Oh, thank you. Yeah. So how does it deal with people? How does it deal with people? Yeah, faces are definitely changing.
I see. So I tried adding a ControlNet pose as well. At least you get all the people there. But faces, I don't know why, but faces really change more than other things, I feel, with these styling approaches. And what model are you using, because there are many Stable Diffusion versions? Yeah, this is-- this is SDXL with a realistic vision fine-tune-- Mm-hmm.
And a few LoRAs as well, like an add-detail one. I'm not really interested in having a fast API here, right? Because it should just run over my pictures and be fine. So I can definitely add on LoRAs and big models and have as many ControlNets as I want with this. All right. Is the creativity part of the model, or a separately trained model? Can you use a negative prompt to-- I mean, if you do a few iterations and reduce the creativity, you can still get the wider range of colors.
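(For the model question above, a hedged sketch of what an SDXL checkpoint plus a detail LoRA looks like in diffusers; the checkpoint and LoRA file names are placeholders, not the speaker's exact files.)

```python
# Hedged sketch: SDXL base + an "add detail" style LoRA in diffusers.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load a detail-enhancing LoRA on top of the base model and bake it in
pipe.load_lora_weights("add-detail-xl.safetensors")  # placeholder file name
pipe.fuse_lora(lora_scale=0.7)
```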
Yeah, sure. I think it's definitely a matter of just trying, right? If you don't care that it's right every time, then it's fine. And on top of that, you can still use something like YOLO object recognition and face recognition and replace those regions with the source photos, right (a rough sketch follows below)? It's more work. Yeah.
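(A naive sketch of that paste-back idea. It uses OpenCV's bundled Haar cascade face detector instead of YOLO to stay self-contained, assumes the source and generated images have the same dimensions, and deliberately skips the colour matching and edge blending you would need in practice.)

```python
# Detect faces in the source photo and copy those pixel regions back
# into the stylised output.
import cv2

source = cv2.imread("holiday_photo.jpg")
stylised = cv2.imread("improved.jpg")

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(cv2.cvtColor(source, cv2.COLOR_BGR2GRAY), 1.1, 5)

for (x, y, w, h) in faces:
    # Naive copy of the original face region over the generated one.
    # Without colour matching and feathered edges this will clash with a
    # style transfer that changes the overall colours, which is exactly
    # the concern raised in the answer below.
    stylised[y:y + h, x:x + w] = source[y:y + h, x:x + w]

cv2.imwrite("improved_with_original_faces.jpg", stylised)
```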
Well, that's-- so I'm also really interested in the color replacement. So that's sort of the-- I think pasting things back would be more difficult. Basically, if you really do, for instance, the 35 millimeter look, that changes a lot of the colors, right? And pasting back the original faces, yeah, it would look bad, I feel. Yeah.
Maybe it's just that these workflows are not enough for all the pictures. Yeah. What are you doing in each of the nodes of the graph? So, again, I can definitely show them. Like a prompt or-- let's see. Yeah, so it's a lot, but actually most of them don't do much. So it does the Stable Diffusion image generation here, it does some prompting here.
So basically choosing some custom prompts, and then some more models. Let's see. Prompt text. And so you kind of tweak it. I have this preview of the-- of the graph, and then you kind of preview and tweak it like this. So how do you-- yeah, they have inputs and outputs.
So it could be, for instance, this one. Let's see. Yeah, it doesn't show it all very well because I'm disconnected from my PC. It could be, like, your image as an input, and then the output is this step, and that gets cabled to the next one. The same happens with the text, and with the models, and you're meant to combine them, right? So as long as what you need exists as a node, or you can even use some custom nodes here, you can really quickly build your workflow.
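(To give an impression of what that graph looks like underneath: in ComfyUI's API JSON format, each node is an entry with a class_type and inputs, and an input can reference another node's output as [node_id, output_index]. The node ids and values below are illustrative, not taken from the actual workflow.)

```python
# Illustrative fragment of a ComfyUI workflow in its API JSON format.
workflow = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "holiday_photo.jpg"}},
    "2": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sdxl_realistic.safetensors"}},  # placeholder name
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "35mm analog film look",
                     "clip": ["2", 1]}},  # wired to node 2's CLIP output
}
```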
Yeah. All right, thank you. [APPLAUSE]