After a longer absence I wanted to give a short update on the status of No Place For Traitors - the lack of posts is due to us not 'producing' anymore. That is, our main job at the moment is communication. The prototype is done, and I sent it to four different publishers and waited for their feedback. The feedback was generally good concerning the overall quality of the graphics and the technical aspects of the game. Constructive criticism came in concerning the gameplay and the puzzle design, although I have to emphasize that the prototype does not depict the actual puzzle design of the game. And this is one of the first conclusions I drew from this first round of feedback:
I was surprised to note that most of the publishers treated the prototype as a finished product. I draw that conclusion from remarks some of them made ("The dialogues are written too casually - monks wouldn't talk like that") that I can hardly accept as a reason to reject a product. Texts and puzzle design are changed easily; we even paid attention during the development of our engine to being able to change puzzles and text without much effort. When I say 'prototype' I mean 'prototype' - this thing needs work in a lot of aspects, but it shows what we are capable of and what a finished product could look like, and it gives an insight into the storyline and the special gameplay elements of the product. So I was a little confused and taken aback by feedback that rejected our product because of features that are obviously easy to change.
In the end I think (and in some cases, after talking again with the publisher, I know) that the reasons for not accepting our game are different ones: For one, some publishers were afraid of another game, released in 2008 and called "The Abbey", which also has the setting of a medieval monastery with a murder at the center of the story - too close to our game, they said, "and we don't want to give the press any reason to compare". The Abbey was an adventure with high production values but with problems concerning the puzzle design and (at first) technical issues.
It is really sad that one unsuccessful game can (it seems) shut down a whole setting for a genre. I still think (although more than two people think differently) that a medieval ghost adventure like No Place For Traitors can be successful - if there are no obvious issues, of course. And I see quite a few flaws in The Abbey, in both game development and publisher work, that No Place For Traitors would not have.
The second reason I perceive as a major problem in finding a publisher is cost - maybe I have to give up the idea of finding a GERMAN publisher (all the ones I contacted were German), as the majority of them don't have enough money to invest. BUT, I have to say, it is quite weird sometimes with some of these publishers. On the one hand they don't want a costly production of more than 150K €; on the other hand they want a full-grown title to be sold at 35 € apiece with 10-12 hours of gameplay, ideally from an experienced studio with lots of published games (which Serious Monk isn't). I will write about the finances in detail in another post, but this much is obvious: With a team of two and three months of development we managed to realise a prototype that (thanks to middleware and modern tools) would have taken a bigger team a lot more time to finish some years ago. What I want to say is this: We are highly effective as a team, we have a high quality standard, and we are not expensive in production. I thought this, in combination with the prototype, would have balanced out our lack of published titles.
Well, I don't want to sound too disappointed, actually. I have to admit that nearly all the studios showed, in one way or another, that they were impressed by the prototype overall. And one of the most promising publishers still seems interested - and seems to really accept the prototype as a working basis for a product yet to be developed.
Also, the engine is done! This means prototyping or even self-publishing an adventure game with a shorter playing time could be done easily (well, relatively).
Now, concerning the actual prototype: I would like to publish it, but cannot - not even the trailer video, as I am currently using copyrighted music. So if anyone knows of some Creative Commons medieval music (one a cappella piece, the others ambient, instrumental music suited for the background), I would be thankful if you could comment or drop me a line.
So anyway, we are still heading forward at full steam - stay tuned, and thanks for all the comments, emails and support so far. Take care!
Yeehaw! What you see above is the start screen of the finished prototype and the first frame of our intro animation. The last few days we have been really busy polishing the demo, and we are now proud to announce the final version of the prototype.
This means that we are now more than ever on the lookout for publishers, because we can now present the game with a playable demo. So at the moment the programming and the design work have come to an end and we are focusing on contacting publishers.
I know that some of you (at least I hope so) would like to play the prototype - BUT (besides not wanting to reveal all the secrets of "No Place For Traitors"), as we are using layout music that is copyrighted, we are not able to release the game to the public. Furthermore, the prototype only exists in German at the moment, although our engine already supports multiple languages.
So this might be good news and bad news: The good news is that the prototype is up and running and we managed to achieve our goal on time and on budget. The bad news is that this prototype can only be released to a few people and not yet to the public.
What sounds quite impressive in the header is just a small statement: the first few people playing the prototype were able to finish the puzzles in about 45 minutes without major trouble. BUT we are nonetheless proud and excited about it: It was the first time we could see players' reactions, and they were good.
We have already changed some gameplay elements, improved some graphics and edited texts. There are still two or three smaller bugs to be taken care of, and I will work some more on the character sheet of our main character. And, of course, there is still the production of the intro movie, a 1:30-minute animated, rendered sequence introducing the story...
In other news: Florian Ebrecht has joined the team and will be doing the sound design for the prototype. All in all, I hope we will be sending the finished prototype to publishers in about two or three weeks.
Although it might not seem so, we have been really busy working on our prototype, and I am quite happy to announce that we have now scripted all of the puzzles intended for the demo - so far only in German, but along with the scripting we have also completed the animations, most of the graphics and, of course, all the in-game realtime models. (The image above is the loading screen.)
So we are now adding some effects, programming some necessary shaders, and will be doing some first tests concerning the puzzles (which means having friends play the prototype and looking over their shoulders to see where they get stuck). Also, our sound designer, Daniel Migge, will soon start collecting and creating the needed sounds.
So we are well on our way, and I hope that before long we will be able to send the prototype to some publishers. Stay tuned!
It has been some time since I posted anything. This was due to work I had to do for clients, because - as some of you may have noticed - Serious Monk is not a full-time job... yet. It is rather the symbol of my wanting to do this full-time, and hopefully it will one day result in exactly that. Until then, I will have to work for clients every once in a while to earn the money to keep this thing up and running.
Nevertheless, we were quite productive over the last weeks, and I wanted to share a solution to a small but significant problem we encountered: transparency in Unity. As you already know (if you have read the other articles), we 'fake' our 3-dimensional room: the characters are real 3D models, but the rest is planes with high-res textures on them. For furniture like the table in the Refektorium or the oven in the kitchen, we use planes with a .tga file UV-mapped onto them. The .tga file has an alpha channel, which lets us see the background where the table or oven ends.
We handle changes in the scenes the same way - for example, when a character picks something up, we just fade in or fade out a plane with a corresponding texture. Of course, simple objects are not full resolution; the planes are modelled as small as possible. We had a situation where a big plane (green border), simulating a change to a door, was behind a smaller plane (red border), simulating the change of a detail on the door.
In Unity, a shader with transparency is sorted by distance to the camera: objects further away from the camera get drawn first, then the objects closer to the camera. In Blender, the smaller plane is in front of the bigger plane, but in Unity it was quite the opposite. That has to do with the way Unity calculates the distance, which is maybe not too obvious: Unity takes the center of the object - the center of the bounding box, NOT the origin - and draws a line to the camera. The length of this line gives the distance, and thus the drawing order.
In our case, the center of the smaller plane was further away from the camera because it sits lower, resulting in a greater distance along the top-bottom axis.
The solution is simple (if you know it): You just have to re-position the center of the bounding box (right arrow) by adding one extra vertex (left arrow) and placing it behind the plane. Like this, you can influence the distance to the camera while the plane and textures stay in the exact same spot. Now Unity draws our transparent planes nicely in the order we want... YEEHAW!
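To make the sorting rule concrete, here is a minimal sketch - in Python rather than in Unity, with made-up coordinates and object names - of sorting transparent objects back-to-front by the distance from the camera to the bounding-box center, and of how one extra vertex changes that order:

```python
import math

def bbox_center(vertices):
    """Center of the axis-aligned bounding box of a vertex list."""
    xs, ys, zs = zip(*vertices)
    return ((min(xs) + max(xs)) / 2,
            (min(ys) + max(ys)) / 2,
            (min(zs) + max(zs)) / 2)

def draw_order(objects, camera):
    """Sort transparent objects back-to-front (farthest bbox center first)."""
    def dist(obj):
        return math.dist(bbox_center(obj["verts"]), camera)
    return sorted(objects, key=dist, reverse=True)

# Illustrative scene: a big door plane and a smaller detail plane just in
# front of it, but positioned lower.
big    = {"name": "door",   "verts": [(-2, 0, 5), (2, 4, 5)]}
small  = {"name": "detail", "verts": [(-0.5, -1, 4.9), (0.5, 0, 4.9)]}
camera = (0, 2, 0)

# Bug: the small plane's bbox center sits lower, hence farther from the
# camera, so it is drawn first and the big plane covers it.
print([o["name"] for o in draw_order([big, small], camera)])        # ['detail', 'door']

# Fix: one extra vertex behind the big plane pushes its bbox center back,
# so it is reliably drawn first and the detail plane lands on top.
big_fixed = dict(big, verts=big["verts"] + [(0, 2, 8)])
print([o["name"] for o in draw_order([big_fixed, small], camera)])  # ['door', 'detail']
```

The visible geometry never moves; only the invisible extra vertex shifts the bounding-box center, which is all Unity's sorting looks at.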
The last few days I have been busy with other jobs, and it is only today that I have been able to do some work on 'No Place For Traitors' again. I was scripting some of the dialogues when I noticed that the five 'emoticons' of my main character don't seem to cover all the emotions I need. So I did what I should have done before starting the 'emoticons': research! BUT no harm done - the frames I have already made will still be used.
So I started to read about the basic emotions - and there sure are a lot of different theories concerning these 'basic' emotions. In addition, I realized that it is not necessary to cover all basic emotions (for example LOVE), but it would be necessary to create some additional 'states' that reflect an adequate reaction in a dialogue (for example being curious).
So I defined eight different 'states' and 'emotions', based on the basic emotions love, joy, surprise, anger, sadness and fear. In the still you can see, from top left to bottom right: afraid, angry, curious, disgusted, normal, smile, sad, surprised.
In the prototype, only the main character will have all these emotions; Rafael's and Oswald's 'emoticons' will be created when needed.
Hi there folks,
this blog is getting more attention, which, of course, is a good thing. Some of the mails I have been getting lately have been inquiries concerning our Blender -> Unity workflow, especially concerning the interactive objects and the walkable areas. Well, here it goes - a more detailed description of what we do:
1. The Images
The first image shows the empty scene; in the case of the kitchen these are three planes, UV-unwrapped and textured with the background, the oven in the middle (tga with alpha channel) and the stairs to the left and the right (also tga with alpha channel). Another view of the scene can be seen in this article. The planes are Tracked-To the camera so that they face the camera directly. As Unity and Blender have different unit systems, it can be quite difficult to get the camera distances right, including the focal length. So we use an Empty to transfer the position and facing direction of the camera to Unity - the focal length is adjusted manually, using the upper and lower borders of the background planes for guidance.
2. The walkable Area
The second thumbnail shows the walkable area (blue), a simple mesh which will be hidden in Unity. The character can only walk on this mesh, no matter where the player clicks. A simple pathfinding is done to get to the point on the mesh nearest to the clicked spot.
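The nearest-point idea can be sketched like this - a simplified 2D version in plain Python with illustrative coordinates (the real walkable area is a 3D mesh, and the real pathfinding also routes around obstacles): when a click lands outside the walkable polygon, clamp it to the closest point on the polygon's boundary.

```python
import math

def closest_on_segment(p, a, b):
    """Closest point to p on the segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return a
    # Project p onto the segment and clamp the parameter to [0, 1].
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return (ax + t * dx, ay + t * dy)

def clamp_to_walkable(click, polygon):
    """Boundary point of `polygon` closest to `click` (for clicks outside)."""
    candidates = [closest_on_segment(click, polygon[i], polygon[(i + 1) % len(polygon)])
                  for i in range(len(polygon))]
    return min(candidates, key=lambda q: math.dist(click, q))

# A square walkable area; a click far to the right is clamped to its right edge.
area = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(clamp_to_walkable((10, 2), area))   # -> (4.0, 2.0)
```

A full implementation would first test whether the click is already inside the area and walk there directly; this sketch only shows the clamping step.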
3. Interactive objects
By interactive objects I mean clickable, static objects (in contrast to other characters). These include objects that can be looked at, objects that can be taken, and also doors - in short, every time you interact with the scene, you click on a hidden mesh. These meshes can be seen in yellow in the third thumbnail.
4. Sprites

Sometimes, when you take an item, you need to display the change in the scene. We do this by placing sprites in front of the textured planes from (1). These can be hidden or displayed and thereby simulate change in the scene. You can see them in the fourth screenshot with an orange outline.
5. Naming Conventions and Import into Unity
As we don't have many possibilities to transfer information from Blender to Unity, we use the names of the meshes to identify the objects and connect them to our xml files (see this article). So, for example, an interactive object is called HG_23_Knife_K. These strings can be parsed by Unity and result in information: HiddenGeometry - ID 23 - name: Knife - room: Kitchen. Like that, we can not only separate the different kinds of objects (walkable area, interactive objects, sprites, background images), but also automate tasks like importing and setting certain attributes with a simple script in Unity.
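As an illustration, parsing such a name could look like the following sketch (Python instead of our Unity-side script; only the HG prefix and the K room code appear above - the other abbreviations are invented for the example):

```python
# Invented lookup tables; only "HG" and "K" come from the post itself.
KINDS = {"HG": "HiddenGeometry", "WA": "WalkableArea",
         "SP": "Sprite", "BG": "BackgroundImage"}
ROOMS = {"K": "Kitchen", "R": "Refektorium"}

def parse_mesh_name(name):
    """Split a mesh name like 'HG_23_Knife_K' into its components."""
    kind, obj_id, obj_name, room = name.split("_")
    return {"kind": KINDS.get(kind, kind),
            "id": int(obj_id),
            "name": obj_name,
            "room": ROOMS.get(room, room)}

print(parse_mesh_name("HG_23_Knife_K"))
# -> {'kind': 'HiddenGeometry', 'id': 23, 'name': 'Knife', 'room': 'Kitchen'}
```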
6. Scripting with the Online-Tool
The IDs of the objects are unique for each object and are handled and generated by our online tool, a PHP script communicating with a MySQL database. Each interactive object gets an xml file, generated by the tool, describing for example certain actions to be taken when the object is clicked. For the xml files we also use a naming convention, so Unity is able to connect an xml file to a certain object in the scene by comparing the IDs.
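A tiny sketch of the ID matching (again in Python; the "obj_<ID>.xml" file-name pattern and the object names are invented - the post only says a naming convention is used):

```python
def match_xml_to_objects(xml_files, scene_objects):
    """Pair each scene object with its xml file by comparing the IDs."""
    pairs = {}
    for f in xml_files:
        obj_id = int(f.split("_")[1].split(".")[0])   # "obj_23.xml" -> 23
        if obj_id in scene_objects:
            pairs[scene_objects[obj_id]] = f
    return pairs

scene_objects = {23: "HG_23_Knife_K", 42: "HG_42_Door_K"}
print(match_xml_to_objects(["obj_23.xml", "obj_42.xml", "obj_99.xml"],
                           scene_objects))
# -> {'HG_23_Knife_K': 'obj_23.xml', 'HG_42_Door_K': 'obj_42.xml'}
```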
Well, I hope this helps someone understand a little bit more about how we do things. If not, use the comments or the contact form :-) Happy blending and uniting...
As stated in my last post, here is a small tutorial on eyeballs, especially on getting the nice reflections in the iris that make eyes so much more vivid. So here it goes:
The first image shows the different parts of an eyeball. From left to right: the outer lens, the pupil, the iris, the eyeball. Of course, the material for the lens should be transparent yet glossy (with a high specularity, above 100). The pupil gets a shadeless black material, the iris needs an iris texture (Google is your friend) and the eyeball a simple white but also glossy material. Ok, so far so good - these are the basics most of you already know, as shown on many webpages.
I once read those tutorials too, but nonetheless had difficulty getting the nice reflections in the right spot. Either they weren't visible at all, or they were on the eyeball and not the iris. The key is a small fact you can observe in the second picture: the lens mesh has a different diameter than the eyeball. Many people simply cut the eyeball and use the separated piece for the lens. This does not work very well and, by the way, is not how our eyes really are. The lens has to have a much smaller diameter - with that little change, the reflection will be visible from many more angles (relative to the light source).
Of course you should test which diameter suits you best - but my experience is that you need to exaggerate a bit, that is, make the lens even more convex than in nature, and you'll get some nice reflections.
Of course, this technique is for render models only. For realtime models it's a different story; in most cases you don't need an eyeball with that much detail. But for cutscenes and animations I found this approach very satisfying.
As you could already see in some of the screenshots, the dialogue is displayed on a scroll which has the typical medallion at the side, showing the person that is speaking. I want to feature moods for these medallions, showing a different facial expression depending on the text. For the time being, I decided to do five facial expressions (from top left to bottom right): afraid, angry, curious, neutral, smile.
The facial expressions have been created with shape keys - so they will not be featured on the realtime model, because Unity cannot (afaik) import and use shape keys; you'd have to do a complete facial rig (instead of just the jaw, which is what we have done...).
I think the most important thing concerning facial animations is the eyes, as we human beings connect to one another mostly through the eyes - small changes can result in an entirely new expression. The crucial part of bringing a character alive is the small light-dot reflection on the eyeball. As I myself have had some trouble with these reflections, I will post a small how-to on getting this light dot right. Stay tuned...
We had a hard time with the turning animation of the main character. In the end, it was a kind of silly mistake (the left/right animations were interchanged), but it had us wondering for some time. Nevertheless, I wanted to share our approach to this challenge. Even high-priced productions sometimes have difficulty getting the walking and turning animations of the characters right. The problem is that you have to do a small number of animations for a big number of possible actions. The turning animation is a good example. Every time the player clicks on the ground, our character first turns in the right direction, then starts to move. Of course, the angle our character turns is not fixed; it can be anything between 1 and 180 degrees. So how do you do ONE animation for an angle between 1 and 180?
The approach we currently use (and which is finally working) is this: we give our character a linear turning speed as a constant. This speed matches our animation, which, when fully played, does a turn of 90°. The animation is looped until the character reaches the end of its turn, at which point we do a 0.2-frame-long blend into the walk animation.
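In numbers (illustrative values - only the 90° clip is from the text above): with a constant angular speed matched to the clip, the turn duration, and hence how often the clip loops, follows directly from the target angle.

```python
# Assumed constants: a turn clip covering 90 degrees, and an angular speed
# chosen so that one second of turning equals one full clip playback.
TURN_SPEED = 90.0     # degrees per second (assumed)
CLIP_LENGTH = 1.0     # seconds per 90-degree turn clip (assumed)

def turn_plan(angle_deg):
    """Seconds the turn takes, and how often the clip loops (possibly partial)."""
    duration = abs(angle_deg) / TURN_SPEED
    loops = duration / CLIP_LENGTH
    return duration, loops

print(turn_plan(45))    # -> (0.5, 0.5): half the clip
print(turn_plan(180))   # -> (2.0, 2.0): the clip looped twice
```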
This might still be quite obvious to most of you. Now comes the tricky part, which might be interesting for some animators: The turn animation in Blender is not one animation but two - one for the left and one for the right turn. The final animation has to be done without the character really turning, staying at its origin, which is nearly impossible to animate without tricks. Simply rotating the main bone won't help either, because it is quite probable that some of your bones won't copy the rotation, so by rotating the main bone, the character might just get twisted into some weird angles. So this is what I did:
I placed a small (temporary) plane under the character and made it turn linearly (important!) by 90°. Then I parented a camera to the plane, so that in my preview window to the right it looks like it is the character, not the plane, that is turning. Then I animated the turning animation, which (simply put) means re-positioning the feet so that the foot on the ground always moves exactly with the plane. It is crucial to set linear interpolation for the frames (and only for those) in which either the left or the right foot is on the ground. The movement in Unity will be linear, because it is quite difficult to match movements with Bézier curves, so the animation in Blender (or any 3D program) has to work with a linear movement.
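Why linear interpolation matters can be checked with a little math: if the plane turns at a constant rate, a grounded foot that stays fixed relative to the plane traces a rotation at that same constant rate in world space - equal angle steps per frame, equal arcs per frame. A small Python sketch with illustrative numbers:

```python
import math

def foot_on_plane(start, angle_deg):
    """World position of a point riding on a plane rotated by angle_deg."""
    a = math.radians(angle_deg)
    x, y = start
    # Standard 2D rotation about the plane's center.
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Sample a 90-degree turn at a linear rate: the angle grows by the same
# amount each frame, so the grounded foot sweeps the same arc each frame.
foot = (1.0, 0.0)
for frame in range(5):
    angle = 90 * frame / 4
    x, y = foot_on_plane(foot, angle)
    print(f"{angle:5.1f} deg -> ({x:.2f}, {y:.2f})")
```

With Bézier easing instead, the per-frame arcs would shrink at the start and end of the turn and the foot would visibly slide against the ground movement computed in Unity.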
When you have finished your animation, you simply delete the plane and import the character into Unity. Oh, and be sure to call the right animation when you are implementing the animations :-)