We spoke with the three presenters who gave the session on the high-end character graphics used in the production of “BackStage.”
First, please tell us which part of the presentation each of you was in charge of.
Kurosaka：I was the director on this project. I was also the lead for character assets, which meant working on everything needed for the character’s outward appearance: the skin, eyes, hair, and the details of her outfit.
Kishi：I acted as the character rigger and handled the character setup – in other words, adding joints and adjusting skinning so that we could move the model. I was also in charge of setting up the cloth and hair simulation; for that, I shaped the intended look while working with programmers to decide on the specifications and what functionality should be added. On top of that, I maintained the DCC pipeline tools needed to work on the assets. At Luminous Productions, rigging and pipeline management are handled by the technical artist team.
※Character Rigger: a role responsible for setting up the rigs (mechanisms for moving objects) needed to animate a CG character.
Hasuo：I handled the shaders and related systems for the overall character expression. For this project in particular, with large-scale asset production in mind, I programmed an efficient asset-creation workflow that draws on a variety of technologies, including shape and facial blending.
Please tell us what brought you three together to take on this initiative.
Kurosaka：Since our theme was “how to create a high-end character” and we needed expertise covering every element of a character, I reached out to Kishi and Hasuo. My belief was that without the movement and simulation Kishi could offer, combined with Hasuo’s graphics-related shaders, the character would never ultimately reach a high level of quality. So, after deciding to place emphasis on “appearance,” “movement,” and “runtime execution,” the three of us agreed to give this presentation together.
What initially made you go with “how to create a high-end character” as the theme?
Kurosaka：Creating high-quality characters with the future in mind has always been one of the initiatives at Luminous Productions, so we followed that for the theme of our presentation as well. However, our purpose went beyond the theme of “how to create a high-end character,” so the data we produced was structured with future implementations in mind.
Backstage with the three developers behind the “BackStage” demo featuring ray tracing
What was the concept behind the scene we see in the “BackStage” demo?
Kurosaka：From a technical point of view, while thinking of an environment that could really bring out the ray tracing technique – a setting with lots of reflections that wouldn’t look odd for having so many – the word “mirror” came to mind, followed by the image of a backstage dressing room. From there, we settled on the concept of an actress backstage, and built our demo “BackStage” around vignettes of her putting on makeup and changing outfits in preparation for going on stage.
When discussing ray tracing, I assume the theme naturally revolves around things like reflection, transmission, and refraction?
Kurosaka：That’s right. Generally, in a scene where someone is putting on makeup with their utensils laid out, there’s a massive amount of translucency and reflection… After seeing all the glass bottles – perfume, nail polish – lined up in the scene, I asked myself, “how did I end up creating such a challenging scene?” (laugh)
Seeing everything from translucency to refraction demonstrated in a scene like this one – I really felt it was an excellent showcase of how realistically the depicted objects were reproduced.
Hasuo：When you place an image rendered with rasterization next to one rendered with ray tracing enabled and compare the translucent elements, the ray-traced one does indeed look overwhelmingly lifelike.
Kurosaka：There’s a transparent box filled with makeup brushes, with a bunch of small bottles like nail polish lined up in front of it - that was definitely the hardest scene… (laugh). Even now, the framerate drops when the character reaches her hand towards that area in the scene.
Hasuo：That definitely is the heaviest scene.
Kurosaka：And yet, it is a scene that highlights the effect of ray tracing the best. But admittedly, it is heavy.
Kishi：So, when the two transparent objects overlap, do you have to make the system calculate everything?
Kurosaka：Yes, that is correct.
Kishi：It’s hard even with pre-rendering, isn’t it? For refraction, you have to decide in advance how many bounces to calculate before rendering. The more bounces you calculate, the nicer the image turns out, but when you really go for it… it takes a very long time.
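The tradeoff Kishi describes can be sketched with a toy cost model: if every hit on a transparent surface spawns both a reflected and a refracted child ray, the number of rays traced grows exponentially with the bounce budget. This is a generic illustration of the technique, not the actual “BackStage” renderer; the function name and spawn count are hypothetical.

```python
# Toy model of why the refraction bounce budget dominates cost when
# transparent objects overlap (e.g. bottles seen through a clear box).
# Hypothetical illustration -- not the actual "BackStage" renderer.

def rays_traced(bounce_budget, spawns_per_hit=2):
    """Count rays traced when every hit on glass spawns both a reflected
    and a refracted child ray, until the bounce budget runs out."""
    if bounce_budget == 0:
        return 1                 # this ray is traced, but spawns no children
    # 1 for this ray, plus the children it spawns at the next hit
    return 1 + spawns_per_hit * rays_traced(bounce_budget - 1, spawns_per_hit)

for bounces in (1, 2, 4, 8):
    print(bounces, rays_traced(bounces))   # 3, 7, 31, 511 -- doubles per bounce
```

Capping the budget low keeps the frame time bounded but loses detail deep inside stacked glass, which is exactly the quality-versus-time choice Kishi mentions for pre-rendering.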
After the presentation were there any specific points you wish you had dug deeper on from a technical perspective?
Kurosaka：Actually, yes – in terms of workflow. We had been working on how to efficiently bring pre-rendered quality into real-time, but we weren’t able to delve into that this time. Originally, we were hoping to show what it looks like when a shader created for pre-rendering is replaced with our own proprietary real-time shader, and so on.
Can you show us now what you were planning to show then?
Kurosaka：Although this is still a work in progress…
Kishi：For my part, I wanted to include an easy-to-understand comparison of hand deformation, but I couldn’t get it into the demo in time and couldn’t introduce it properly at CEDEC either. So I’ll get it ready, and hopefully I’ll have an opportunity to showcase it somewhere in the future.
* This comparison will be released in the coming days.
Hasuo：I have one as well, from the program side. There is a function for swapping the character’s appearance in real-time, but as implemented now it only allows the entire set to be switched at once, whereas we had originally built a basic system that lets the player switch individual elements – just the hair, outfit, or innerwear, for example. We ended up not implementing that because we couldn’t prepare enough asset and cloth data in time.
So, originally you could have any combination of the individual elements like hair and outfits?
Kurosaka：Yes, originally, we were supposed to be able to generate them randomly, even during the cutscene.
Hasuo：We couldn’t implement it in time from the technical point of view.
Kurosaka：Since we had to prioritize solving issues caused by the cloth simulation, such as frame lag, the priority of things like appearance swapping kept dropping until it had to be dealt with at the very end of development. Ultimately, we made swapping between appearance sets possible in the free-control mode at the end, which we were able to show during our presentation.
It was astonishing enough to see that the appearance could be swapped during the free-control mode. I was particularly impressed with it being possible with ray tracing enabled, but you were aiming for something even more remarkable!
Kurosaka：Yes, we aimed much higher... The swapping was supposed to be possible freely, even during cutscenes...
In terms of difficulty, what would you say the difference is in performing appearance swaps during a cutscene versus in free-control mode?
Kurosaka：Let’s see… perhaps there isn’t that much difference in the basic difficulty per se, but, for example, parts of the clothing can very noticeably clip through the character model when she is acting and striking a complicated pose…
Kishi： In the free-control mode, the character is basically in a standing pose, which makes things easier on the simulation side, but cutscenes can have rather complicated poses…
Kurosaka：We initially scripted the sequence with the intention to perform appearance swaps during cutscenes as well, but ultimately, it was just made possible during the free-control mode.
Kishi：You can, however, change the appearance set and then play the cutscene after with the alternative look.
So the calculations required for a stable simulation were too heavy to maintain a stable framerate?
All three：Yes, that’s right.
Kurosaka：As we didn’t have enough time to polish, Kishi and I kept checking and discussing the quality of all the appearance patterns, and we selected only the outfits we deemed of high enough quality to show. It’s a bit of a sad story…
Based on the fact that the demo has helped clarify exactly what should still be improved upon, I assume it has also given you clear goals for the future?
Kurosaka：Everything in the “BackStage” demo contributed to that – it showed us how we should be creating everything, be it facials, characters, cinematics, lighting, or anything else. This project really shed light on what we need to do going forward, which made planning for the future easier.
Have you already started your planning and set things in motion?
Kurosaka：Yes. With the experience of working on “BackStage” under our belts, I think everyone who was involved can now figure out and design with more ease like “oh, we can tackle it this way,” whenever we start planning for new titles in the future.
You rendered everything with path tracing enabled in this demo, and you now have a clear vision of your future direction – I feel this has put you ahead of the curve in the industry, don’t you think?
Kurosaka： I do have a bit of a lingering sense of discontent, since as an artist you always tend to have the feeling of “I should have done that one thing this way” afterwards, but on the bright side, I do believe this experience will serve as a valuable guideline that will help us improve further from now on. Some points of improvement became apparent because we incorporated ray tracing and some for other reasons.
On a side note – while I’m sure there are pros and cons to both – once you’ve become comfortable using either technique, which is easier to work with overall, rasterization or ray tracing?
Kurosaka：I actually prefer ray tracing…
Kishi：I prefer ray tracing as well…
Is ray tracing easier to work with?
Hasuo：Well, from a programmer’s or the engine’s point of view, ray tracing is simpler. It’s simple in the sense that, when a ray is cast, it refracts where it’s supposed to refract and reflects where it’s supposed to reflect. However, while I personally prefer the concept of ray tracing, there are still a lot of headache-inducing issues to solve if you really commit to it. (laugh) We achieved extremely high-quality imagery with the help of many artists this time, but if we were to apply ray tracing at the vast scale an actual open world requires, there would be a lot of issues to solve on the programming side.
Kurosaka：It would take a tremendous amount of time.
So, it would be extremely challenging in, let’s say, an environment with a river, an ocean… and if some objects made of glass were thrown in as well?
Hasuo：Let’s imagine there’s an ocean beyond a hallway of a school with its classroom and hallway windows made of glass. In that sort of setting, there would be a parade of refractions and reflections.
Kurosaka：There is a good side to it as well, though. It traces translucency properly, so when one translucent object is placed behind another, the scene is still rendered correctly. With rasterization, on the other hand, the translucent object behind another translucent object sometimes disappears unless you carefully determine the draw order beforehand. So with ray tracing we can achieve proper results without putting much thought into it, so to speak.
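The draw-order problem Kurosaka describes comes from alpha blending being order-dependent: compositing the same two translucent surfaces in the wrong order produces a different colour. A minimal sketch of standard “over” blending, with hypothetical colour values, makes this concrete:

```python
# Sketch of why rasterization must sort translucent objects back-to-front.
# Alpha "over" blending is order-dependent: drawing the same two glass
# surfaces in the wrong order gives a different (wrong) result.
# Colour values are hypothetical single-channel intensities.

def blend_over(dst, src, alpha):
    """Standard alpha 'over' blend of a source colour onto a destination."""
    return src * alpha + dst * (1.0 - alpha)

background = 0.0
near_glass = 1.0   # translucent surface closest to the camera
far_glass  = 0.5   # translucent surface behind it
a = 0.5            # both surfaces are 50% opaque

# Correct back-to-front order: far surface first, then the near one.
correct = blend_over(blend_over(background, far_glass, a), near_glass, a)

# Wrong order: the far surface is drawn last and wrongly covers the near one.
wrong = blend_over(blend_over(background, near_glass, a), far_glass, a)

print(correct, wrong)   # 0.625 vs 0.5 -- the order changes the result
```

Ray tracing sidesteps this because each ray naturally encounters surfaces in depth order along its path, which is why the translucent-behind-translucent case “just works” there.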
Would it be difficult to apply ray tracing in a scene with natural terrain, where conditions change in real-time depending on weather and climate?
Hasuo：I wasn’t in charge of that part of the “BackStage” demo, but based on discussions with the artist who was, I don’t think it’s feasible to use it to that extent yet. At this point it’s more realistic to use it partially, or in a contained space.
So, I’m sure this depends on the environment, but will only the cutscenes fully incorporate ray tracing while the rest of the game sticks with rasterization?
Kurosaka：Yes, I’d say the way to do it would be to incorporate ray tracing into part of rasterization.
Hasuo：Or, handle reflections with ray tracing, perhaps.
Kurosaka：I don’t think ray tracing works well with things requiring real-time performance.
Looking a step ahead of the industry and the challenges it brings
Lastly, please share your thoughts after working on this project and what you expect from the future of Luminous Productions?
Kishi：Thanks to this project, I found facial expression to be very challenging… The demo shed light on where we are still lacking, so we’re working hard to solve those issues for an actual product. We’ll continue to pursue photorealistic expression, so we hope to work with people who have a passion for creating realistic objects rather than stylized, deformed characters.
Do many of the artists at Luminous Productions prefer the photorealistic approach?
Kishi：Yes, I’d say so. I can feel the passion many of us here have for pushing towards photorealism, so we are definitely heading towards the same goal. My impression is that while many studios in Japan excel at stylized, deformed expression, we are one of the only ones that specialize in photorealism. Game development focused on photorealism requires highly advanced technological capabilities and is therefore difficult to do in Japan. But we are able to take on that sort of development at Luminous Productions, which I believe is our unique strength.
Kurosaka：I felt our expression improved, particularly in the scene in front of the makeup table. At the same time, though, what will be expected of us from now on – and of games in general – is expression of even higher density and definition. We have access to an environment where that high-density, high-definition information can be reproduced in computer graphics, so I hope to take on such initiatives as well. Additionally, I want to pursue how we can link the real world and the game world. To that end, many of our team members have the ability to capture the real world accurately – seeing things exactly as they are, then analyzing and outputting those details.
How will you balance producing high-density, high-definition realism with gameplay?
Kurosaka：Well… the foundation of a game is ultimately usability – in other words, whether the user can enjoy the act of playing – so any realistic elements that get in the way of that will most likely be eliminated. However, when a world is constructed following the usual gaming “grammar,” the sense of reality has more than likely already been lost in many ways: for example, an NPC stuck in an idle motion, or a model that looks real on the outside but can only stand in place. I believe one of our major tasks is to eliminate that sort of awkwardness.
Kishi：I’m sure it’ll be a challenge, but it also gives us an opportunity to further expand the gaming experience if we can establish an efficient production method.
So the main challenge is avoiding the uncanny valley by eliminating that unnaturalness through graphics and AI?
Kurosaka：Yes, and how we can provide a fun and interesting game to our players on top of that. It’s a constant battle with that sort of unnaturalness as long as we create games.
Hasuo：That’s our never-ending challenge.
Hasuo：After the development of “BackStage,” I noticed that the more realistic the image becomes, the more the unnaturalness of the motion stands out. What we need in order to step up to the next level is animation engineers. AI has become trendy and more and more new methods of graphical expression are available these days, but I personally feel that the growth of next-generation animation has come to a standstill. Take a model’s idle motion, for example: the majority of idle motions are authored by an artist and simply looped. Typically the artist selects from the prepared idle motions and plays whichever fits the desired timing, but what if we could create an engine that hides that intent to “select a certain motion at a certain timing”? We could eliminate the sense of a mechanical loop and potentially achieve higher-level animation.
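One simple way to hide the “select a certain motion at a certain timing” pattern Hasuo describes is to let the engine pick weighted idle variations with randomized hold times, so the loop seam never repeats on a fixed beat. The sketch below is a hypothetical illustration of that idea; the clip names, weights, and timing range are invented for the example.

```python
# Sketch of hiding a mechanical idle loop: instead of replaying one clip
# on a fixed timer, pick weighted variations with randomized intervals.
# Clip names, weights, and timing range are hypothetical.

import random

IDLE_CLIPS = {           # clip name -> selection weight
    "breathe":      0.7, # subtle base motion, played most of the time
    "shift_weight": 0.2,
    "glance_aside": 0.1,
}

def next_idle(rng):
    """Choose the next idle clip and how long to hold it before re-selecting."""
    names = list(IDLE_CLIPS)
    weights = [IDLE_CLIPS[n] for n in names]
    clip = rng.choices(names, weights=weights)[0]
    hold = rng.uniform(2.0, 6.0)   # vary timing so the loop seam never repeats
    return clip, hold

rng = random.Random(42)            # seeded here only for reproducibility
for clip, hold in (next_idle(rng) for _ in range(5)):
    print(f"{clip:13s} for {hold:.1f}s")
```

A real system would also need to blend between clips and react to context (a nearby character, the AI’s emotional state), which is the harder engineering problem the speakers are pointing at; this sketch only covers the scheduling side.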
Kishi：Ideally, the AI would become more intelligent and be able to do things like return to its assigned station when a character approaches, or show changes in its emotion nicely through animation. Right now, even if the AI can make various judgements and think on its own, it’s difficult to output that in a way the audience can understand – as animation, as expression.
Hasuo：It would surely take a tremendous amount of time and effort, but at the same time, I think that’s exactly what it takes to attain next-gen animation.
For instance, it’d look odd if the model wasn’t moving at all in its idle motion in the “BackStage” demo, wouldn’t it?
All three：Yes, it would look very strange.
Kurosaka：I expect we’ll continue to wrestle with that sort of strangeness.