
Poly Keyboard makes it possible to switch the printed legends on a physical keyboard in XR.

If you use two or more languages on a daily basis, this must have happened to you: you are used to where certain keys are in one language but cannot find them on the keyboard in the other.


Take switching between English and German as an example: because the "Y" and "Z" keys are swapped, I either mistype because the physical keyboard has German legends while the input language is set to English, or both are German but my fingers still remember the English key positions. On a normal day, this typo can happen over 20 times.


Part Two

Environment AI Model Training

1.

Experiments with untrained AI

An untrained AI model doesn't understand what an HDRI image is or how it should look.

 

An HDRI is a 360-degree image that captures all environmental data, including lighting. However, Midjourney confuses an HDRI with a rendering of a textured sphere, making the generated images unusable.
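
To make the difference concrete, a usable HDRI has two measurable properties: an equirectangular 2:1 layout and high-dynamic-range pixel values above 1.0 that encode lighting. Below is a minimal Python sketch for checking both on a downloaded file; the filename is a placeholder, not one of the files used in this project.

import cv2

# Read a Radiance .hdr file; OpenCV returns float32 pixels that can exceed 1.0.
# The path is a placeholder for any HDRI downloaded from HDRI Haven.
hdri = cv2.imread("urban_street_2k.hdr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)

height, width = hdri.shape[:2]
print("equirectangular 2:1 layout:", width == 2 * height)
print("contains HDR lighting data (values > 1.0):", float(hdri.max()) > 1.0)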


Even using a real HDRI image as an embedding did not improve the result.

an_hdri_texture_from_hdri_haven_urban_street.png

Generated with Midjourney text prompt:

an hdri texture from hdri haven, urban street --ar 2:1 --v 5.1

an_hdri_texture_from_hdri_haven_urban_street_img.png

Generated with Midjourney image embedding and text prompt:

an hdri texture from hdri haven, urban street --ar 2:1 --v 5.1

Using ControlNet can produce usable HDRI images, but the results are always too similar to the ControlNet embedding, even in reference-only mode.

00003-1739178872.png

Generated with Stable Diffusion txt to img + ControlNet:

an hdri texture from hdri haven, urban street

modern_buildings_2_1k.png

ControlNet embedding
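
For reference, the sketch below shows roughly how such a Stable Diffusion + ControlNet setup can be wired up with the diffusers library. The Canny-edge conditioning, model IDs, resolution and file paths are assumptions for illustration, not the exact configuration used for the image above.

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Turn the real HDRI preview into a Canny edge map used as the ControlNet condition.
source = cv2.imread("modern_buildings_2_1k.png")
edges = cv2.Canny(source, 100, 200)
condition = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The output tends to follow the conditioning image very closely,
# which is the "too similar" problem described above.
result = pipe(
    "an hdri texture from hdri haven, urban street",
    image=condition,
    width=1024,
    height=512,
    num_inference_steps=30,
).images[0]
result.save("controlnet_hdri_attempt.png")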

2.

Train Stable Diffusion Model for generating HDRIs

After the initial tryouts with Midjourney and Stable Diffusion, it became clear that training a dedicated HDRI model is necessary for generating high-quality environments in XR.


The first model was trained with LoRA on 70 HDRIs downloaded from HDRI Haven.
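
The dataset layout for such a LoRA fine-tune can be as simple as a folder of tone-mapped images plus a metadata.jsonl caption file, the Hugging Face image-folder convention read by the standard diffusers LoRA training script. The sketch below is a hedged illustration of that preparation step; the folder name and caption text are placeholders, not the exact captions used here.

import json
from pathlib import Path

# Hypothetical layout: the downloaded HDRIs live as tone-mapped PNGs in one folder.
dataset_dir = Path("hdri_dataset")

# metadata.jsonl holds one caption per image file.
with open(dataset_dir / "metadata.jsonl", "w") as f:
    for image_path in sorted(dataset_dir.glob("*.png")):
        entry = {
            "file_name": image_path.name,
            "text": "an hdri texture, outdoor environment, " + image_path.stem.replace("_", " "),
        }
        f.write(json.dumps(entry) + "\n")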


Using the first model, more HDRIs were generated and put into the dataset for the second model.


The resulting second model worked quite well for generating outdoor HDRIs.

urban street sd screenshot.png

The trained model OutdoorHDRI
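
Once trained, the LoRA weights can be applied on top of the base Stable Diffusion model at inference time. The sketch below uses diffusers' load_lora_weights with placeholder paths and a 2:1 output resolution; it illustrates the workflow rather than the exact settings behind the screenshot above.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Apply the trained OutdoorHDRI LoRA (path is a placeholder).
pipe.load_lora_weights("./outdoor_hdri_lora")

panorama = pipe(
    "an hdri texture, urban street, outdoor, sunny",
    width=1024,
    height=512,
    num_inference_steps=30,
).images[0]
panorama.save("generated_outdoor_hdri.png")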

Part Three

Image to Music

1.

Generate music from AI-generated HDRIs

Turn on audio and mind the volume

Environment and music mapping test in Unreal Engine 5, Meta Quest Pro

For testing, music was generated from two AI-generated HDRIs using the "Image to Music" API by fffiloni on Hugging Face.


It uses CLIP Interrogator to caption the image and then runs the caption text through Mubert to generate music.
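
A rough sketch of that two-stage pipeline is shown below. The clip-interrogator captioning call is the package's real API; the music step is only a stand-in, since the exact Mubert endpoint and parameters used by the "Image to Music" app are not covered here.

from clip_interrogator import Config, Interrogator
from PIL import Image

# Stage 1: caption the AI-generated HDRI with CLIP Interrogator.
image = Image.open("generated_outdoor_hdri.png").convert("RGB")
interrogator = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
caption = interrogator.interrogate(image)
print(caption)

# Stage 2: send the caption to a text-to-music service.
# This function is a placeholder for the Mubert text-to-music request;
# it is not the real endpoint.
def request_music_from_caption(text: str) -> str:
    raise NotImplementedError("call Mubert's text-to-music API here")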

 

The resulting music loop was satisfying, but generation took quite long, which has to be improved in the future.
