Studio Shenanigans

Studio Shenanigans #5 – Finding The Power Of Iteration

Realization...

Recently, I went through the process of updating this here website. It was a good two weeks of work, in between client priorities. Once I had finished, I scrolled through each page to make sure it was all in order and the buttons worked, etc… and then I sat still in my chair for a second and thought, “Man, this is way better than the last website.” And I thought the last version was great when I published it…

My next immediate thought was “Huh… iteration is pretty powerful.”

And so here I am writing a blog about it.

I’ve always understood the general idea of building upon something to make it better, but the idea only took on real specificity in my brain when I started the Game Development degree with Falmouth Uni. Some of the first theories and principles they were desperately trying to bash into our heads (I now understand why) were Agile methodology and, within that, iteration.

Iteration is most widely described as: “the repetition of a process in order to generate a sequence of outcomes. Each repetition of the process is a single iteration, and the outcome of each iteration is then the starting point of the next iteration.”

During smaller university projects (pre Studio 316), it was certainly thrown around a lot in team conversations after lectures on the topic. I’d hear team members say “yeah, let’s iterate on that” or “we can iterate on it later”, but for the most part it felt like it was said with somewhat empty meaning.

In part that was because I didn’t yet fully understand how to use the principles, or even the concept itself, but also because we probably just weren’t doing it right.

It was only going into the final year with the current Studio 316 team that I started to get a solid grasp on the iterative process within games, and now I feel I am starting to apply those principles outside of games too, like with the studio website.

Where to start...

Before I started the process of updating the website, I spent a good week or so looking through the site’s analytics and user behaviour to find out exactly where I needed to make those changes. In the same way, if it were a game, I would have it in front of testers and analyse the most-used routes, weapons, side quests, etc., and of course the opposite: what isn’t used at all when it’s there to be used.

It didn’t take me long to spot a glaring issue: the bounce rates and drop-offs from just the home page were relatively high.

Bounce rates (in Google Analytics, anyway) are determined by the proportion of users who opened a site page and stayed only on that page without browsing further. A bounce rate below 50% is the ideal, and with the older website version, our site average across all pages was around 75%… not ideal at all.
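For anyone curious about the arithmetic behind the metric, it boils down to a simple ratio (the numbers below are hypothetical for illustration, not our real analytics):

```python
def bounce_rate(single_page_sessions, total_sessions):
    """Percentage of sessions that viewed only one page and left."""
    if total_sessions == 0:
        return 0.0
    return 100.0 * single_page_sessions / total_sessions

# e.g. 75 of 100 sessions never left the landing page
print(bounce_rate(75, 100))
```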

It is important to note here that if your home page is a single-page site, where you don’t need to drive users to further interactions, then a high bounce rate is fine.

Session time is also something I’ve been keeping a keen eye on. The important thing here is that a higher session duration means someone spent more time looking at your site, which is what you want.

The website is not the only scenario in which I’ve noticed my iterative process improving; the studio’s pitches for things like games and investment were also areas where I saw the iterative process working its magic.

Making Changes…

We are fortunate enough to have a good network of industry heads through things like the Falmouth Launchpad and our previous stint with Tranzfuser, which enabled us to regularly practise our pitch, get feedback and advice, and then use that to update the pitch before setting up another practice session with other industry heads.

This consistent, continued approach allowed us to evolve our pitches over the course of about four months to a point we were happy with. But after about a two-month period of no pitches or work on them, I came back and instantly saw areas to improve and ways to make the content better.

The length of each iteration cycle doesn’t have to be short, as it was with the website and pitch. Without a time constraint, you have the opportunity to gain insight and experience in design, user experience, or whatever you are working towards, through other areas of work, and then revisit the project with a refreshed outlook to re-ignite the iterative process once more.

Looking at the website’s changes, I focused on readability and easy engagement: making the information on the page much easier to see and read, decluttering, and giving the sections on the home page easy-to-find, meaningful “More info” buttons, making it easier for users to go to a page and find out more about what they were reading.

It was my intention with the changes to target the analytics I outlined previously, so that even if a session “bounced”, the user could still gather plenty of information without needing to interact a second time. As mentioned before, making the buttons for each section more obvious and clear drives those second interactions and helps stop the high drop-off rate from the home page.

These before and afters present the changes relatively well and there is clear improvement on the home page. This kind of improvement is reflected throughout the site.

The Theory Behind It...

Jesse Schell has a whole chapter on iteration in his book “The Art of Game Design: A Book of Lenses”, and the chapter starts with “The Game Improves Through Iteration”.

The first version of your “product”, in this case a website or pitch, will almost never be the last and final version. It’s pretty likely, as in my case, that you will throw one away. But that is not to say the first version was useless, because a large part of the iterative process is having a product in front of users so you can receive feedback, learn lessons, and iterate on them.

In the same book, Jesse Schell also outlines the Agile Manifesto, an evolution in the understanding and principles of software development. If you know of Agile and are in game dev, you most certainly know by now that it has shaped the modern process of design and iteration and heavily influenced the workflows of game creation.

Two of the main points of the Agile Manifesto state that the practice prefers “working software” over “comprehensive documentation”, and “responding to change” over “following a plan”. In applying these principles to the website, I got a first version up and available for users to see, gathered analytics on the site’s performance, and made sure I had working software with which I could then respond to change.

These kinds of principles, applied throughout the game development process, are a massively powerful tool. They obviously prove useful outside of game dev too, in the design and creation of anything made with an end user or recipient in mind. Reading through these resources over the last few months has been hugely insightful.

It made me think that perfectionism is almost the killer of the iterative process, because in a true iterative process the product is always evolving, changing and improving based on current and past user experience and feedback. Being too perfectionist about the product, and reluctant to get a version out there and tested before it’s “perfect”, is what would ultimately be the product’s demise.

Fin

Anyway, those are my thoughts. I am most certainly not an expert on iteration, a mere novice in Skyrim skill-tree terms, with plenty of learning yet to do, but this is just the start and I’m starting to see where the most powerful tools to enhance the studio’s and my creation processes are. Hopefully soon I’ll be an apprentice.

I will be doing a review of the new website after three or so months of uptime, looking at the analytics and the differences between the two sites. I also plan to run an ad campaign to mirror one we ran previously and review the difference in statistics and behaviour. I will write another blog post on the results.

I hope you enjoyed this read, if you saw it to the end, thanks for sticking around and I hope you got something out of it!

Studio Shenanigans #4 – Imposters Among Us…

There is an Imposter among us…

And no, I don’t mean I’m playing “Among Us”, I’m talking about that incessant, almost nagging voice in the back of all of our heads that tells us we aren’t good enough to be doing what we’re doing.

Before graduating university and starting on the Tranzfuser pathway programme, I can honestly say I had never heard a mention of “Imposter Syndrome”, and now I wish I was still blissfully unaware of it. Now that I know about it, it feels more prevalent than it did before.

Even when I first heard it, I was unsure whether people were actually just referencing the game or if it was a real thing. It was probably around the time of starting the Tranzfuser programme, having just graduated, that I started to understand this annoying feeling that was always grabbing at my heels.

According to the most trusted source on the internet, Wikipedia, imposter syndrome is: a psychological pattern in which an individual doubts their skills, talents, or accomplishments and has a persistent internalized fear of being exposed as a “fraud”.

I’m basically here to tell you that we are all frauds 🙂 

It comes in waves…

Since first hearing about imposter syndrome, it seems to have been everywhere. I’ve been hearing about it constantly within the industry; Develop:Brighton was an example, where I couldn’t count on my fingers how many times I heard the words being slung around.

Not only have I heard much more mention of it, but it has also started to creep into my mentality, slowly and sneakily, like one of those fungi that take over ants’ brains in the rainforest. Especially as the studio and I have started to expand our network as we grow, you start seeing more and more incredible people and projects that really kick in those thoughts.

My experience with it is that it very much comes in waves, intertwined with the work you are currently doing, down to the level of the specific task and the overall experience of your day-to-day work. Some days you are getting tasks done in rapid time and you have a nice, validating chat with a mentor or someone you respect in the industry; no impostor to be found on those days.

But when you can’t quite work out why that system doesn’t work or that error keeps popping up, or you read about someone doing something incredible and kickstarting their career, or even when a big opportunity comes in for the studio, one that feels too big for us even to be considered – those are the days when I feel it. Put simply, I think: ‘Who am I to be here doing what I’m doing compared to all these insanely talented people?’

Relatability and speaking to other people…

As mentioned before, a lot of these thoughts for myself and Jamie can at least partly be put down to talking to people who are highly regarded, work for big studios, or have created an awesome debut title that sold stonks.

It feels like the equivalent of being a little kid pretending to be Superman in his homemade costume, standing at the feet of the real Superman and looking up, wondering if you’ll ever get such big muscles. Although that is just one of two ways things can go when we have been connecting with new people and speaking more with our current network.

The second of those two ways is when you come away from a conversation or meeting with a feeling of validation and newly energised motivation, based on the belief that you are, in fact, doing the right stuff.

Jamie pointed out to me, and I agreed, that it’s about finding points of relatability within those conversations and connections: talking about a topic you both understand or are involved with, and finding out that they think similarly or can confirm that what you are working on is along the right tracks.

Dealing with it…

For a little while, I had been spiralling through a vicious cycle of “I’m sus, I should eject myself” to “Things are going great! Let’s keep going!”. Eventually, though, and funnily enough through the process of speaking to more and more people, my mindset on the whole “Imposter Syndrome” thing has changed somewhat.

It’s most certainly a thing, and a powerful, scary one at that, but I found that it doesn’t matter whether the person mentioning imposter syndrome had made millions and been super successful, or was in a similar situation to us, a new start-up studio looking to make a mark: everyone gets it, and in their own way.

So as I said before, we are all frauds in some sense really.

But that’s not to say Imposter Syndrome has been entirely bad for me, because the one thing it has most certainly done is wholeheartedly confirm that I am on a path of growth. I am putting myself in positions where I can do things I’ve not done before and learn something new.

So bring it on…

Thank you for coming to my TED talk, that is all.

Joe

Studio Shenanigans #3 – A Beginner’s Guide to Machine Learning using Python and OpenAI Gym

This week I thought it might be quite nice to have a look at one of the machine learning (ML) frameworks I am using for an AI project I’m currently working on as part of my university master’s degree.

I am by no means an expert in the field, but I’d like to take a look at how you can quickly get set up with ML in Python to train your own agents.

Prerequisites
Before we can get started there are a number of prerequisites that we need to take care of.

First off, you’re going to have to get yourself a copy of Python. I recommend getting the version below the latest 64-bit version of Python (which is Python 3.9 64-bit at the time of writing).

This is because it can take a little time for changes to filter through to some of the libraries we require, and staying one version behind can save time trying to figure out what’s going on. Furthermore, when installing Python, it generally makes life easier in the long run to ensure “Add to PATH” is checked during installation; this will allow us to run Python directly from the command line (cmd or PowerShell).

Now you have Python, you’re going to want an integrated development environment (IDE). Don’t worry, it’s basically just a fancy name for a development text editor (a bit like a very clever Notepad). I recommend JetBrains’ PyCharm Community Edition (CE), which is what we will be using throughout this post, although you can use Visual Studio Code, Atom or IDLE (IDLE is installed with Python by default, but is very basic).

Once PyCharm (or the IDE of your choice) is installed, we can start getting our environment configured for ML using OpenAI Gym and Stable-Baselines3 (we’ll discuss these in more detail later).

Upon opening PyCharm for the first time, you will be prompted to create a new project. Select the location for the project and, under Python Interpreter, make sure “New environment” is selected using Virtualenv, ensure you select the 64-bit version of Python as the Base Interpreter (it will be the only version if that’s all you installed), and go ahead and hit Create.

Now we can install the required packages for OpenAI Gym. We’ll start with PyTorch, as this needs a little configuration itself. First off, head to the PyTorch website and scroll down to “Install PyTorch”. Select the latest stable build (1.10.2 at time of writing), select your Operating System (OS), choose the pip package (though you can use conda if installed), language Python, and finally select the Compute Platform.

Unfortunately, PyTorch only supports Nvidia graphics cards (to the best of my knowledge, at the time of writing). Therefore, if you don’t have an Nvidia graphics card, you’ll need to select CPU for the compute platform; otherwise, select the CUDA version that best matches the version installed on your machine. (To find out which version of CUDA you have installed, open cmd or PowerShell and enter nvidia-smi; it will be printed in the top-right of the output.) Now copy the Run command and paste it into the terminal in PyCharm (at the bottom).

(This will be pip3 install torch torchvision torchaudio for non-Nvidia users, or pip3 install torch==1.10.2+cu113 torchvision==0.11.3+cu113 torchaudio==0.10.2+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html for users with the latest version of CUDA installed.)

It may take a few minutes for PyTorch to install, but once it is complete we can install the other two packages. To install OpenAI Gym, simply enter pip install gym into the terminal of PyCharm, followed by pip install stable-baselines3 to install Stable-Baselines3. Once that is complete, we are done configuring our Python environment for OpenAI Gym.
What are OpenAI Gym and Stable-Baselines3?
Well, I’m glad you asked. OpenAI Gym is “A toolkit for developing and comparing reinforcement learning algorithms” – OpenAI Gym. Basically, it is a framework that implements the basic methods required to train different ML models (algorithms), with further methods to tick and render the ML environment. Furthermore, OpenAI Gym contains a wealth of example environments for learning, including many classic Atari games, which can be used to train agents. Stable-Baselines3, on the other hand, is a set of reliable reinforcement learning algorithms which can be trained and, once training is complete, make predictions based on the current state of the environment.

We don’t really need to care about PyTorch; just know that it is a dependency of the Stable-Baselines3 library.
Different types of learning
While I don’t want to go into too much detail regarding different learning types, I’ll quickly cover the basics. You may be asking: what the hell is reinforcement learning (mentioned above)? Basically, it’s a bit like training a pet: if it does something good, you give it a reward (or treat), and if it does something undesired, you punish it (don’t worry, this is AI, it doesn’t have feelings :DDD ). This process of reward/punishment happens every time the environment updates (or ticks), which in turn tunes a bunch of parameters based on the current observations of the environment and the actions that can be performed.

On the other hand, there are supervised and unsupervised learning. These methods are generally used to classify data into groups, for example, classifying images of cats and dogs. The main difference between the two is that during the supervised learning training process you tell the model (or tag the data) “this is a dog, this is a cat…”, while unsupervised learning will attempt to learn patterns from untagged datasets, which is useful for more ambiguous data. If you would like to explore supervised/unsupervised classification more, I recommend taking a look at the scikit-learn Python library. There are other types of ML, however these are the three that I see most commonly.
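To make the reward/punishment idea concrete, here is a minimal, self-contained sketch of a tabular Q-learning update. This is not how Stable-Baselines3 works internally (DQN replaces the table with a neural network), but the underlying update rule is the same idea: nudge the value of an action up when it was rewarded and down when it was punished.

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """Nudge the value of (state, action) towards the observed reward
    plus the discounted value of the best action in the next state."""
    best_next = max(q[next_state])       # value of the best follow-up action
    target = reward + gamma * best_next  # what the value "should" have been
    q[state][action] += alpha * (target - q[state][action])

# Toy world: two states, two actions, all values start at zero.
q = [[0.0, 0.0], [0.0, 0.0]]
q_update(q, state=0, action=1, reward=1.0, next_state=1)   # rewarded: value rises
q_update(q, state=0, action=0, reward=-1.0, next_state=1)  # punished: value falls
print(q[0])  # action 1 now looks better than action 0 in state 0
```

Run this over thousands of ticks with real observations and the table (or network) gradually learns which actions to prefer.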
Hello World in OpenAI Gym
OpenAI Gym comes with many example environments for learning how to use reinforcement learning algorithms, including MountainCar, Acrobot and some Atari games such as Space Invaders. However, for our Hello World project, we’ll have a look at the classic Cart Pole problem. The aim of Cart Pole is to balance the pole above the cart for as long as possible by moving the cart left and right.

So let’s walk through the implementation step by step, and I’ll explain what we are doing as we go along. First of all, in PyCharm create a new Python script (right click in the project panel (left) -> New -> Python File) or delete the contents of the default Python script created by PyCharm. Now we want to import the packages required for OpenAI Gym and the ML algorithms:

import gym
from stable_baselines3 import DQN

Adding the above code to the top of the Python script will import the OpenAI Gym module (import gym) and the Deep Q Network (DQN) ML algorithm (from stable_baselines3 import DQN) into the project. Now we can go ahead and create our Gym environment by adding the following line:

environment = gym.make("CartPole-v0")

Now we are able to run the script for the first time. In PyCharm you can press the play button in the top right of the main window (assuming you used the default script that PyCharm creates; otherwise click on the dropdown labelled “main“ -> “Edit configuration“ and change the “script path“ to match the script you are working on). You should notice that nothing happens, but also that the application does not exit, so something is happening. The reason we don’t see anything is that we have not told the environment to render its output to screen yet. Before we can do that, we could do with training our agent, so let’s have a look at how we can do that next.

model = DQN("MlpPolicy", environment, verbose=1)
model.learn( total_timesteps=100_000 )
model.save("models/dqn_cartpole")
So, what does this do?
  • model = DQN("MlpPolicy", environment, verbose=1)
  • This creates a new Deep Q Network ML model, which can be trained to play the CartPole environment. As we can see, it takes three parameters, of which the first two are required. MlpPolicy is the learning policy, and environment is the OpenAI Gym environment that we want to train our agent in. The third parameter, verbose=1, is optional; when it is set to 1 it prints the learning statistics to the development console, while setting it to 0 prints nothing. We’ll set it to 1 so we know something is happening.
  • model.learn( total_timesteps=100_000 )
  • This tells our ML model to start learning our environment. It achieves this by exploring the environment through a series of trial-and-error actions based on the current observations of the environment. You’ll notice that we have included one optional parameter, total_timesteps: this is the number of timesteps (or updates/ticks) that the ML model will train for. We’ll set this to 100,000 for now (the _ is just a way to space out numbers in Python, which I think is a nice feature of the language), and you can play around with this parameter in your own time. What happens if you increase or decrease total_timesteps? Lastly we have,
  • model.save("models/dqn_cartpole")
  • This simply saves the ML model so we can load it back in at a later time. It takes a single parameter: the file that the data should be saved in, relative to the running script, e.g. C:/user/username/python Projects/My First ML/models/dqn_cartpole
Now if we run the script again, we should notice that it starts printing output to the development console of PyCharm (at the bottom). The output is just the statistics of the ML model; it lets us know that it is trying to learn something. If we leave this running for a few minutes, the application will exit (and create a new save file). If you don’t want to wait, you can either press the stop button or click in the development console and press Ctrl+C. We still don’t see our environment (or game) yet, though; this is because rendering is usually the slow part of running a game, and we want to train our agents as fast as possible. The only time you would need to render the game during training is if you wanted to use machine vision to play it, but that’s another post for another day.

Once the agent has finished training, we are able to let it play the game by itself (at least to some degree). This is where we can actually render the environment and see how well the agent has done. First of all we need to define a couple of variables:

episodes = 100
update_steps = 1000

The first, episodes, is the number of times the agent can reset the environment upon death or completion of the scene. In this case we have set it to 100. The second, update_steps, is the maximum number of updates (or ticks) before the scene is automatically reset. Now it’s time to render the environment. This is probably the trickiest part, but don’t worry, we’ll go through it line by line.
observation = environment.reset()  # get our first observation before predicting

for episode in range(episodes):

  for step in range(update_steps):
    action, _state = model.predict( observation, deterministic=True )
    observation, _reward, done, info = environment.step( action )
    environment.render()

    if done:
      print( f"Ended environment after {step} steps" )
      print( f"last info: {info}" )
      observation = environment.reset()
      break

Starting from the top,
  • for episode in range(episodes):
  • is a loop, and we are basically saying run the following indented code “episodes“ amount of times. (Note that the indentation is very important in Python.)
  • for step in range(update_steps):
  • we are doing the same as above but for update_steps amount of times. This basically means that we are going to tick the environment up to episodes * update_steps (or 100,000 with our configuration) times (assuming it does not reset early).
  • action, _state = model.predict( observation, deterministic=True )
  • Here we are asking the ML model to predict the next action and state of the environment. However, we only need the action, so we put an underscore (_) in front of the state to show that we are discarding this value. Alternatively, we could do action = model.predict( observation, deterministic=True )[0], which means we only want the first value.
  • observation, _reward, done, info = environment.step( action )
  • Next we update/tick the environment, which returns four values:
    • observation is the state of the environment at the end of the tick.
    • _reward is the reward that the agent would have received if we were training.
    • done is whether the agent has finished (i.e. died or completed the scene).
    • info is any debug info generated during the tick.
  • environment.render()
  • this renders the frame to the output window (see, I told you we’d get there soon :D)
  • if done:
  • here we are asking if done == True. If it is, run the following indented code.
  • the next two lines just print a message to the output console: the first prints the number of ticks that the agent survived, while the second prints the last frame’s debug info.
  • observation = environment.reset()
  • simply resets the scene back to its initial state, ready for the agent to make another attempt.
And that’s pretty much it! Now if you hit the play button in PyCharm, it will start the learning process, and once it completes the environment window should appear, with the agent attempting to balance the pole for as long as it can!
The only thing I haven’t told you yet is how to load in your saved ML model.
To do that, replace

model = DQN("MlpPolicy", environment, verbose=1 )
model.learn( total_timesteps=100_000 )
model.save("models/dqn_cartpole")

with
model = DQN.load("models/dqn_cartpole")

Using Different ML Models
In this example we have used the Deep Q Network as our ML model; however, there are several others available in stable_baselines3. Another model you might want to give a go is Proximal Policy Optimization (PPO). You can include it in the project by adding PPO to the from stable_baselines3 import DQN line, so from stable_baselines3 import DQN, PPO, and then replacing the references to DQN with PPO. It’s really that simple.
Further Reading and Resources
If you would like to read more about Stable-Baselines3 and the models it implements, I highly recommend reading some of their documentation. Furthermore, there are a whole bunch of parameters that can be modified to affect how the agent learns, which we have not had time to cover in this post.

stable baselines3 DQN
stable baselines3 PPO
OpenAI Gym
Conclusion

In this post we have had a look at how you can configure your system and run a simple example ML environment using OpenAI Gym. We have gone through the process of creating a simple hello-world application in OpenAI Gym step by step, going from zero to hero in under 30 lines of code! However, there is a hell of a lot more that can be done with the ML models and OpenAI Gym. This post was designed only to whet your appetite in the ML space; I hope you found it useful and that you experiment more with ML in the future! Maybe next time we’ll look at how you can implement your own game into OpenAI Gym using PyGame.

Studio Shenanigans #2 – What is Dev-Ops, why is it so important?

What the hell is Dev-Ops?

Yeah, so this was pretty much how we responded when we were first introduced to Dev-Ops. How could this big, vital component of game dev that we so desperately needed have gone unnoticed by us?

Unfortunately, Dev-Ops (at least for us) went under the radar; we had never been properly introduced to automation, but we are good friends now! Automation is the use or introduction of automatic processes in development or manufacturing. As the name suggests, automation takes the boring tasks from the developer and makes the computer do them!

There are relatively large components of game development that tend to go unnoticed, certainly by a lot of end users and most likely even by a lot of people in the games industry. These are components that, if not implemented correctly and with a solid workflow, can massively impact development cycles. Before our introduction to Dev-Ops, it was like that section of a company building where people walk past the door with a curious look on their face, thinking “Huh, I wonder what it is they do in there?”

Dev-Ops includes a relatively large array of different practices that enable more efficient and solidified workflows for the development of software and games. The most notable practice is the use of computers to automatically package your project for the end user. We, the developers, do not want to spend time compiling, packaging and uploading our 20 GB game to Steam when a computer can do it for us while we continue our day building cool new features!

You may already see the appeal here…

Okay, so what does it do?

We’re glad you asked! Simply put, Dev-Ops makes the computer do all the work when packaging our projects, instead of us. When we say packaging, we mean taking the project from inside Unreal Engine and turning it into a playable application / game, so we can play or test it. It’s super useful when trying to quickly find bugs or enhance user experience.

Let’s talk about something called ✨Continuous Integration & Deployment✨. Continuous Integration (CI) and Continuous Deployment (CD) are the two major parts of the Dev-Ops pipeline. One big advantage of Dev-Ops is being able to package our games without any user input, so we can sit back and relax. However, in reality, when a project is busy packaging at the end of each day, the last thing we are doing is relaxing, sadly.

As developers, we want to ensure that each day we update the product, so that when packaging time arrives, it rolls out a new build with new features! This could be a new game mechanic, or something for one of our clients. This whole process (updating the project as we go while it packages builds to test) is called Continuous Integration, as we are continually integrating new features as the product rolls out into staging.
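As a concrete (and very simplified) illustration of what the automation actually runs, here is a sketch of a nightly packaging step for an Unreal project. The project name and paths below are hypothetical, and a real pipeline would live inside a CI system rather than a loose script, but at its core it is just the engine’s RunUAT BuildCookRun command being assembled and executed for us while we get on with other work:

```python
import subprocess

def build_cook_run_cmd(uat_path, uproject, platform, output_dir):
    """Assemble the Unreal Automation Tool command that packages a build."""
    return [
        uat_path, "BuildCookRun",
        f"-project={uproject}",
        f"-platform={platform}",
        "-build", "-cook", "-stage", "-pak", "-archive",
        f"-archivedirectory={output_dir}",
    ]

# Hypothetical paths -- in practice a CI runner fills these in per target.
cmd = build_cook_run_cmd(
    "C:/UE/Engine/Build/BatchFiles/RunUAT.bat",
    "C:/Projects/TimeRivals/TimeRivals.uproject",
    "Win64",
    "C:/Builds/Nightly",
)
# subprocess.run(cmd, check=True)  # the build machine runs this overnight, not us
print(" ".join(cmd))
```

The nice part is that swapping the platform argument (Win64, Linux, etc.) gives you a build per target from the same script, which is exactly the repetitive work we wanted off our desks.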

You mentioned.. Versioning?

Ah yes, versioning! I’m sure you know how buggy software and games can be. To mitigate issues with bugs popping up in builds we release or show to clients, we want to be able to hold onto any stable builds. To do this, we can test a deployed build, check that it is stable, and mark it down as a stable build. This means that if our next build fails, we can easily and quickly roll back to our most recent stable build, saving us a lot of stress and time trying to fix anything on the fly!

Ahh, ok, we definitely need Dev-Ops…

Yes you do. Back in our university days, we were blissfully unaware of what Dev-Ops was. As we progressed towards the end of our studies developing our Time Rivals IP, we quickly became aware of the sheer amount of time it takes not only to package our projects to play, but also to package them for each target and platform.

For Time Rivals, we needed both a server and the game itself to be packaged so we could quickly play and test it. Amazingly, we managed to get by without Dev-Ops in place, but it wasn’t easy! Manually packaging the game by pressing File > Package > Windows, then waiting an hour while your PC was out of service, was a tedious task, only to then find an array of bugs in the build, forcing you to go back, change things, and build again.

When we began diversifying into client-based work, we had delays and struggles handing over a packaged version of the project within the very first milestone and deadline. Clearly this was troublesome and unprofessional. We knew we needed to do better; we just needed to figure out how. We also had bugs that only showed themselves in meetings, due to a lack of internal testing, which was very embarrassing! We wanted to avoid this.

After all this pain, we were left asking, “Is there an easier way to do this?”. There was, in fact, an easier and better way.

Why is it important (feat. Stuart Muckley)

About 6 months ago, we had a quick introductory chat with Stuart Muckley, managing director at CodeWizards. At that point we were still developing Time Rivals and had just stepped out into the industry without the umbrella of university to shield us. One of the key points he made, and one we took away with us, was: get Dev-Ops set up as soon as you can.
Recently we have been connecting with Stuart more, and we asked him to give us and all you lovely readers some insight from his mastermind on IT practice. Stuart asked initially if we wanted his COVID-sick hardcore response to the topic; we obviously said yes:

     “If you don’t do CI for your game you’re a f****** i**** 😊”
     Stuart Muckley 2022

Words to live by if you ask me, but of course he had more insight to give. 

One of his main points was that ultimately, it’s all about the players’ experience: the end users, the gamers (even the sweaty ones 😉 ). It’s about ensuring that your player base has a good experience playing your game.

     “It may seem obvious but we want to ensure that players have the best experience, [CI/CD means] that testers can get hold of builds quickly and that we know when developers introduce errors. Players are essential, testers cost money when they’re waiting around and developers lose efficacy when they can’t trust the code to work upon.”

Stuart goes on to say that CI/CD “cures” these issues, as it centralises your build chain and means you are “documenting the process to build, you’re enabling test/QA to see the status of the work”. All of this contributes to a process that saves your developers heaps of time and stress.

Without Dev-Ops, you’ll see a lot of wasted time in the run-up to deadlines, right when you need builds and deployments. Stuart also mentioned that it’s “quicker to build centrally than locally”, meaning you can hand over builds to QA or clients much faster, and “More importantly a developer can carry on working effectively instead of waiting for the CPU core % to drop below 80% 😊”.

The challenges of setting up Dev-Ops and how to do it right!

Unfortunately, setting up Dev-Ops is no simple task. It is one of those situations where all the effort pays off later down the line, and you will thank your past self. The most notable issue with heavy-workload Dev-Ops, such as Unreal Engine automation, is that we need a fairly powerful computer and a build environment. There are a few ways to achieve this setup.

Self hosted – It is not uncommon to see companies use in-house, self-hosted hardware for CI/CD operations, as cloud-based solutions are very expensive in comparison to on-site hardware. For the price of an always-on cloud solution, you could buy a much more powerful on-site machine to handle Dev-Ops. However, this is where scalable Dev-Ops comes in.

Scalable Dev-Ops is a cloud solution to the problems we just established above! What if it were possible to boot up a virtual machine only while a build is happening, and shut it down immediately after? Thankfully, there is a way: a few providers offer this infrastructure, and AWS’ CodeBuild system is up there at the top.
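The on-demand idea can be sketched in a few lines of Python (pure illustration: the real thing would be API calls to your cloud provider, such as starting a CodeBuild job):

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_builder(image: str):
    """A build machine that exists (and costs money) only while a build runs."""
    # Stand-in for 'boot a cloud VM / container from this image'.
    builder = f"builder[{image}]"
    print(f"booting {builder}")
    try:
        yield builder
    finally:
        # Always torn down, even if the build fails: no idle cost.
        print(f"shutting down {builder}")

def run_build(image: str) -> str:
    with ephemeral_builder(image) as builder:
        return f"{builder}: build complete"
```

The machine only exists for the duration of the `with` block, which is exactly the billing model that makes cloud builds affordable for a small studio.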

What is really great about AWS CodeBuild is that you can use custom Docker images for your build pipeline. Docker is a container service, which means we can create an Unreal Engine Windows Docker container with all the tools we need for game packaging built in. It is also fully scalable, and builds can happen at the same time.

All of this put together saves us a massive amount of time, as we can build our projects to our own specification, and in parallel, meaning we can work on multiple projects all at once!
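A toy sketch of that parallelism (the `package` step here is a stand-in for a real containerised build job, not an actual pipeline):

```python
from concurrent.futures import ThreadPoolExecutor

def package(project: str) -> str:
    # Stand-in for a full containerised BuildCookRun job.
    return f"{project}: packaged"

def package_all(projects: list[str]) -> list[str]:
    """Each project packages in its own worker, so nothing queues behind anything else."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(package, projects))
```

Swap the threads for cloud build containers and you get the real picture: every project gets its own machine, and no build waits for another to finish.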

So… What’s the verdict?

Like Stuart said… if you don’t do it, you’re an idiot. 

As daunting a task as it is to get any form of Dev-Ops integrated into your project, it is 100% a necessity, as it will save you an enormous amount of time in the long term. A studio could easily lose a week of development every month to building and deployment without Dev-Ops, and obviously time is money.

Big thanks to Stuart for getting involved with us and taking the time to give us some great info. Make sure you go check out CodeWizards and everything they do!

Hopefully you got something out of this blog, and see you in the next one. 

Jamie. J
