
Game engine GUI & main loop: game vs editor context

17 comments, last by yah-nosh 1 year, 8 months ago

I implemented the GUI in our engine, with the drawing abstracted so it can be rendered either in Vulkan or with the system 2D drawing commands on Windows, Linux, or Mac. The latter provides higher-quality text rendering and gives tools a much snappier, more responsive feel. So the editor just calls engine commands to create the entire GUI, using a system window as the base rather than a Vulkan framebuffer.
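The abstraction is basically this shape (a rough sketch with made-up names, not the actual interface):

#include <string>

struct Rect { int x, y, width, height; };
struct Color { float r, g, b, a; };

// Abstract 2D drawing backend the GUI talks to. The game path uses the Vulkan
// implementation, the editor path uses the native system drawing one.
class GuiDrawBackend
{
	public:
	virtual ~GuiDrawBackend() = default;
	virtual void drawRect( const Rect& r, const Color& c ) = 0;
	virtual void drawText( const Rect& r, const std::string& text, const Color& c ) = 0;
};

class VulkanGuiBackend : public GuiDrawBackend
{
	public:
	void drawRect( const Rect& r, const Color& c ) override { /* record into a command buffer */ }
	void drawText( const Rect& r, const std::string& text, const Color& c ) override { /* render glyph quads */ }
};

class NativeGuiBackend : public GuiDrawBackend // GDI / Cairo / CoreGraphics behind the scenes
{
	public:
	void drawRect( const Rect& r, const Color& c ) override { /* system 2D drawing call */ }
	void drawText( const Rect& r, const std::string& text, const Color& c ) override { /* system text rendering */ }
};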



@Kylotan Perhaps a little more context is in order: all the major features I plan to have in my engine will be “wrapped” so that I could technically replace the components later, e.g. to make the engine platform-independent, but initially I will always stick with a particular OS and particular libraries.

When it comes to window management, input, etc. I decided to go with SDL. I don't actually plan to use it for anything more than creating a window and handling the various events and inputs. For a game context, this is pretty much all I would need.
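For the game context, the OS-facing part really is just a plain SDL2 loop, something like this (trimmed down; the engine call is a placeholder):

#include <SDL.h>

int main( int, char** )
{
	SDL_Init( SDL_INIT_VIDEO );
	SDL_Window* window = SDL_CreateWindow( "Game", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
	                                       1280, 720, SDL_WINDOW_SHOWN );
	bool running = true;
	while ( running )
	{
		// Pump events and hand the relevant ones to the engine.
		SDL_Event event;
		while ( SDL_PollEvent( &event ) )
		{
			if ( event.type == SDL_QUIT )
				running = false;
			// forward keyboard/mouse/resize events to the engine here
		}
		// engine.update( dt );  // placeholder for the engine tick + render
	}
	SDL_DestroyWindow( window );
	SDL_Quit();
	return 0;
}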

On the other hand, having plenty of experience with implementing dev tools, I know that the requirements for window and widget management in an editor are far greater than for most games. Competent libraries built for this purpose (wxWidgets, Qt, etc.) take care of hiding the platform-specific nastiness, and they come with plenty of utilities to make life easier when creating complex GUI applications (whereas, to my knowledge, SDL is more low-level). The tradeoff is that they also require control over the main loop and event dispatching.

Since I want to develop the engine not just for myself but for potential future collaborators, I want to provide some degree of “freedom”, while also providing utilities so people don't have to reimplement the same things for each project. That's why I'm considering having the engine support two separate “contexts”: create the window for games, defer window management to the application for editors.

As for the event loop, I think you're potentially in for a lot of pain there if you wrap that entirely and don't forward events to the places that need them. I'd advise working right now on trying to integrate your game into a minimal Windows application and making sure it does what you expect. For example, just have a rendering window and a button both created by the application, and verify that you can render into that window AND that the button is receiving events and can act on them.
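A bare-bones Win32 skeleton is enough for that test: one top-level window to render into, plus a native button created by the application (the engine call below is just a placeholder):

#include <windows.h>

LRESULT CALLBACK WndProc( HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam )
{
	switch ( msg )
	{
		case WM_COMMAND:           // the button sends WM_COMMAND to its parent when clicked
			MessageBeep( MB_OK );  // proof that the button received the event and acted on it
			return 0;
		case WM_DESTROY:
			PostQuitMessage( 0 );
			return 0;
	}
	return DefWindowProc( hwnd, msg, wParam, lParam );
}

int WINAPI WinMain( HINSTANCE hInstance, HINSTANCE, LPSTR, int nCmdShow )
{
	WNDCLASS wc = {};
	wc.lpfnWndProc = WndProc;
	wc.hInstance = hInstance;
	wc.lpszClassName = TEXT( "TestWindow" );
	RegisterClass( &wc );

	HWND window = CreateWindow( TEXT( "TestWindow" ), TEXT( "Render target + button" ),
	                            WS_OVERLAPPEDWINDOW | WS_VISIBLE, CW_USEDEFAULT, CW_USEDEFAULT,
	                            800, 600, NULL, NULL, hInstance, NULL );
	// The button is created by the application, not by the engine.
	CreateWindow( TEXT( "BUTTON" ), TEXT( "Click me" ), WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON,
	              10, 10, 100, 30, window, NULL, hInstance, NULL );
	ShowWindow( window, nCmdShow );

	bool running = true;
	while ( running )
	{
		MSG msg;
		while ( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
		{
			if ( msg.message == WM_QUIT )
				running = false;
			TranslateMessage( &msg );
			DispatchMessage( &msg );
		}
		// engine.renderIntoWindow( window );  // placeholder: verify the engine can draw here too
	}
	return 0;
}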

Admittedly, this is another thing I have yet to figure out. Technically I could define a specific list of events supported by the engine, thus allowing clients to receive them, but it may just be easier to do it the other way around and expect the client application to feed the engine the events that it requires (e.g. resizing render targets).
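In other words, the engine would expose something like this (hypothetical names; nothing is implemented yet):

// Hypothetical engine-facing event interface: the client application owns the
// window and the event loop, and only pushes the events the engine cares about.
class Engine
{
	public:
	void notifyResize( int width, int height );  // resize swap chain / render targets
	void notifyFocusChanged( bool focused );     // e.g. throttle rendering, pause audio
	void notifyCloseRequested();                 // let the engine shut down cleanly
};

// In the client's own event loop (Qt, wxWidgets, raw Win32, whatever):
//   case WM_SIZE: engine.notifyResize( newWidth, newHeight ); break;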

@Josh Klint I've had multiple people propose this approach to me, and I do see its benefits. It would remove the ambiguity about “who has control over what”: the engine still does all the work, and clients simply request the widgets and behavior that they need.

My only concern is that this puts extra load on the engine, which has to rely on callbacks and/or inheritance to let clients define the required behavior. It also means I'm restricted to whatever GUI the engine is capable of rendering, which means a lot of extra work implementing things. For dev tools, it's tempting to instead use a dedicated library (e.g. Qt) and only pull in engine features where they are needed. For example, I use an external library to generate all my windows, widgets, etc., and at some point create small viewports where I can preview how the engine would render a scene; at that point all I need is to tell the engine to render to that viewport.
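For the viewport case, I imagine it would look roughly like this with Qt (just a sketch; EngineRenderer and renderSceneToNativeWindow() are made-up names for whatever the engine ends up exposing, while winId() and the widget attributes are real Qt):

#include <QWidget>

// Placeholder for whatever the engine exposes for rendering into a native window.
class EngineRenderer
{
	public:
	void renderSceneToNativeWindow( void* nativeHandle, int width, int height );
};

class SceneViewport : public QWidget
{
	public:
	explicit SceneViewport( EngineRenderer* renderer, QWidget* parent = nullptr )
		: QWidget( parent ), renderer( renderer )
	{
		setAttribute( Qt::WA_PaintOnScreen );  // let the engine draw directly, don't let Qt paint over it
		setAttribute( Qt::WA_NativeWindow );   // make sure this widget has its own native handle
	}

	protected:
	void paintEvent( QPaintEvent* ) override
	{
		// Hand the native window handle to the engine and let it render the scene preview.
		renderer->renderSceneToNativeWindow( reinterpret_cast<void*>( winId() ), width(), height() );
	}

	private:
	EngineRenderer* renderer;
};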

With that in mind, I'm trying to decide between organizing games to work this way as well (i.e. the application provides its own OS/window management and the engine just does rendering and other backend features), having the engine “do everything” even for editors, or finding some middle ground where the engine behaves differently depending on the context.

I think you're on the right track with your original post. My engine/editor, which is relatively mature at this point, uses a similar approach to separate concerns and it's working very well for my purposes. The “engine” is just a library that the editor depends on, which has an update(dt) function and add/remove resource functions (among other things). This allows me to have multiple engine instances running concurrently if I want. For example, my editor is also a multitrack audio editor with export functionality. I can create a new engine instance to do the export on a background thread while the main engine thread stays interactive.
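In outline, that part is not much more than this (heavily simplified, and not my actual class names):

#include <thread>

// The engine is just an object the application owns and ticks.
class Engine
{
	public:
	void update( double dt ) { /* tick all systems by dt */ }
	bool exportFinished() const { return true; /* placeholder */ }
	// plus add/remove resource functions, etc.
};

void exportAudioMix()
{
	Engine exportEngine;                         // second, independent engine instance
	while ( !exportEngine.exportFinished() )
		exportEngine.update( 1.0 / 60.0 );       // stepped offline, as fast as possible
}

int main()
{
	Engine engine;                               // main interactive instance
	std::thread exportThread( exportAudioMix );  // export runs concurrently on its own thread
	for ( int i = 0; i < 1000; ++i )
		engine.update( 1.0 / 60.0 );             // main engine loop stays interactive meanwhile
	exportThread.join();
	return 0;
}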

For GUI, I have my own lightweight widget and rendering library. The rendering library is built on top of the graphics subsystem (which wraps OpenGL, etc.) and is not tied to any platform-specific functionality. The GUI just gets rendered into one or more windows (though it is not even aware of this). I use the Visitor design pattern so that each widget can render itself using an abstract GUI renderer. Events (mouse/keyboard/copy/paste, etc.) get passed to the GUI; it doesn't care where they come from. Those events get passed down the GUI hierarchy by the widgets themselves, from parent to child, until the event is handled by some low-level widget.
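The visitor part is just plain double dispatch, roughly like this (simplified):

class Button;
class Slider;

// Abstract renderer, implemented on top of the graphics subsystem.
class GUIRenderer
{
	public:
	virtual ~GUIRenderer() = default;
	virtual void render( const Button& button ) = 0;
	virtual void render( const Slider& slider ) = 0;
};

class Widget
{
	public:
	virtual ~Widget() = default;
	virtual void render( GUIRenderer& renderer ) const = 0; // each widget renders itself via the abstract renderer
};

class Button : public Widget
{
	public:
	void render( GUIRenderer& renderer ) const override { renderer.render( *this ); }
};

class Slider : public Widget
{
	public:
	void render( GUIRenderer& renderer ) const override { renderer.render( *this ); }
};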

The OS-specific event handling (message pump) is wrapped in an “Application” class, which handles dispatching events to various places (e.g. to the Window/Menu classes). The Application and Window classes both have a delegate containing functions (similar to std::function) which are called with the various events. This abstracts the OS-specific stuff from the rest of the engine/editor. Those OS events are generated on the program's main thread and are forwarded in the delegate callbacks to the engine thread via lock-free queues. The engine executes on a thread that is created by the Application. The advantage of this approach is that OS-specific widgets (e.g. Window) stay interactive even if the engine thread is stalled.
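The forwarding itself is simple: the delegate callback on the main thread pushes, and the engine thread drains the queue at the start of its frame. Roughly like this, with a plain mutex-based queue standing in for the lock-free one:

#include <mutex>
#include <queue>

struct Event { /* type, window, key, mouse data, ... */ };

class EventQueue
{
	public:
	void push( const Event& e )  // called from the main (OS) thread in a delegate callback
	{
		std::lock_guard<std::mutex> lock( mutex );
		events.push( e );
	}
	bool pop( Event& e )         // called from the engine thread at the start of each frame
	{
		std::lock_guard<std::mutex> lock( mutex );
		if ( events.empty() )
			return false;
		e = events.front();
		events.pop();
		return true;
	}
	private:
	std::mutex mutex;
	std::queue<Event> events;
};

// Main thread (delegate callback):  queue.push( event );
// Engine thread (start of frame):   Event e; while ( queue.pop( e ) ) gui.handleEvent( e );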

Below is a rough idea of how it's all put together for the editor, with indentation representing composition.

EditorApplication - created in main(), which forwards control to Application
	Main update loop - runs on engine thread, created by Application as a replacement for main()
		tick engine
		draw all windows
		swap buffers
	Engine - contains various systems, update(dt) function
		Graphics - renders into viewport in editor window
		Physics
		Sound
	Graphics Device - gets passed into engine and GUI renderer
	GUI Renderer - renders all editor GUIs
	Application - callbacks to EditorApplication with events, main loop
	Windows - one or more window for editor, with delegate callbacks
		Editor GUI - drawn into window, gets passed pointer to editor services (engine, etc)

The engine “player” would just remove all of the editor stuff and create a single window. This makes it possible to create various kinds of programs that all use the same engine and GUI in different configurations, with very minimal boilerplate. Basically, there is one relatively simple class (EditorApplication or EnginePlayer) that is just a container for the various subsystems, one of which (Application) handles wrapping the OS event loop in an abstraction. The EditorApplication just handles forwarding events from Application/Window callbacks to the appropriate subsystems.
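The player variant then boils down to roughly this much code (the shape only; the names and signatures here are approximate):

// Application, Window, WindowDelegate and Engine are the classes described above;
// the exact signatures are approximate.
class EnginePlayer
{
	public:
	void run()
	{
		window = application.createWindow( "Game", 1280, 720 );  // single game window
		window->setDelegate( makeWindowDelegate() );              // forward input/resize to the engine
		application.run( [this]{ mainLoop(); } );                 // Application owns the OS event loop and
		                                                          // calls mainLoop() on the engine thread
	}
	private:
	void mainLoop()
	{
		engine.update( 1.0 / 60.0 );
		// draw the window, swap buffers
	}
	WindowDelegate makeWindowDelegate();
	Application application;   // wraps the OS message pump
	Engine engine;             // same engine library the editor uses
	Window* window = nullptr;
};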

The concept of delegates is extremely useful here for doing the event handling in a decoupled way. A typical delegate for the Window class might look like this:

struct WindowDelegate
{
	function<bool (Window&)> open;
	function<bool (Window&)> close;
	function<void (Window&,bool)> focus;
	function<void (Window&,const Vector2i&)> resized;
	function<void (Window&,const Vector2i&)> moved;
	function<bool (Window&,const GUIEvent&)> guiEvent;
	function<bool (Window&,const string&)> textInputEvent;
	function<bool (Window&,const KeyEvent&)> keyEvent;
	function<bool (Window&,const MouseButtonEvent&)> mouseButtonEvent;
	function<bool (Window&,const MouseMotionEvent&)> mouseMotionEvent;
	function<bool (Window&,const MouseWheelEvent&)> mouseWheelEvent;
};

// Somewhere in EditorApplication
WindowDelegate delegate;
delegate.resized = .....;
window.setDelegate( delegate );

I should also define more clearly what I mean by “engine”. In my design, the engine can be thought of as a container of resources of any user-defined type (similar to components in ECS), and at the highest level it effectively works as a time-stepped “operator”/modifier on the state of all resources in the engine. The engine contains an array of systems, which implement the logic for the engine. Systems are ticked by the engine in a predefined order and are notified when resources are added/removed from the engine. Systems apply their logic to the resources to advance the state of the whole simulation on each time step. Resources themselves are generally dumb data containers that are used by the systems to implement the simulation logic. “Entities” and Scenes are just another kind of resource that contains a collection of child resources. The engine keeps track of parent/child relationships between resources, so that when adding/removing an Entity/Scene to the engine, the children of the entity/scene are also added/removed.
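Stripped down to its core, the container looks something like this (very reduced; the real resource storage is type-indexed and tracks parent/child relationships):

#include <vector>

class Resource
{
	public:
	virtual ~Resource() = default;  // dumb data container base
};

class Engine;

class EngineSystem
{
	public:
	virtual ~EngineSystem() = default;
	virtual void resourceAdded( Resource* ) {}              // notified when resources change
	virtual void resourceRemoved( Resource* ) {}
	virtual void update( Engine& engine, double dt ) = 0;   // advance this system's slice of the simulation
};

class Engine
{
	public:
	void addSystem( EngineSystem* system ) { systems.push_back( system ); }
	void addResource( Resource* resource )
	{
		resources.push_back( resource );
		for ( EngineSystem* system : systems )
			system->resourceAdded( resource );              // systems react to new resources
	}
	void update( double dt )
	{
		for ( EngineSystem* system : systems )              // ticked in a predefined order
			system->update( *this, dt );
	}
	private:
	std::vector<EngineSystem*> systems;
	std::vector<Resource*> resources;
};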

@Aressera Thank you for the extensive post, this offers really good ideas!

My design has quite a few similarities with yours on the engine side. At the moment, I achieve game-engine communication via a GameInterface object, akin to a MonoBehaviour as described in the original post. For now, let's just assume all it has are the functions init() and update(), and the engine calls the latter on each frame.
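Trimmed down to just those two functions, it's essentially this (assuming update() receives the frame delta):

class GameInterface
{
	public:
	virtual ~GameInterface() = default;
	virtual void init() = 0;               // called once after the engine is set up
	virtual void update( double dt ) = 0;  // called by the engine every frame
};

class MyGame : public GameInterface
{
	public:
	void init() override { /* load scene, create entities, ... */ }
	void update( double dt ) override { /* per-frame game logic */ }
};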

I'll try to make a sketch of what I have so far, based on your own description method (game only, haven't figured out Editor yet):

GameApplication - starts in main(), creates an Application object with a GameInterface reference, calls Application.run()
	- Application.run() --> main loop, checks I/O and events, then updates engine subsystems (e.g. rendering)
		- final call is into GameInterface.update()
		- Application also checks if we need to shut down for any reason (console input, main window closed, etc.)
	- Engine
		- UI subsystem --> if built with GUI support, and also not disabled at initialization, it instantiates some GUI implementation module.
			- GUI impl is behind a wrapper; at compile time this can be selected to be a library, e.g. SDL
			- Module generates a window, has an interface facing the engine (which becomes agnostic to the GUI implementation) and another toward the client (some features are only exposed to the engine internally)
			- Window is basically just a surface for rendering, also captures inputs that the engine exposes to the game

The renderer is not yet implemented, but it's intended to work the same way: the engine itself is agnostic to the specific implementation chosen at compile time, all working with the same wrapper API. This does add the need for a bit of magic to pass an appropriate reference to the window to allow rendering to it, but I figured that's tolerable.

From what I've researched, it's best to keep rendering and OS event handling on the main thread, both for the sake of simplicity and so rendering stays in sync with when the window is repainted. I won't go into multithreading details; the tl;dr is that the game would push tasks into queues in the engine, and rendering, for example, would keep a double (potentially triple) buffer of these tasks, only issuing calls to the actual API (OpenGL, DirectX, Vulkan, etc.) when the main application loop runs its update.
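The buffering idea is basically: game code records tasks into one list while the renderer consumes the other, and the two swap once per frame. A bare-bones version (synchronization omitted):

#include <vector>
#include <functional>

// Bare-bones double buffer of render tasks. The game records into the "write"
// list; the main loop swaps, then executes the other list, which is the only
// place real API calls (OpenGL/DirectX/Vulkan) happen.
class RenderTaskBuffer
{
	public:
	void submit( std::function<void()> task )  // called from game code
	{
		buffers[writeIndex].push_back( std::move( task ) );
	}
	void swapAndExecute()                      // called once per frame by the main application loop
	{
		writeIndex = 1 - writeIndex;           // game now records into the other buffer
		std::vector<std::function<void()>>& toRun = buffers[1 - writeIndex];
		for ( std::function<void()>& task : toRun )
			task();                            // actual graphics API calls happen here
		toRun.clear();
	}
	private:
	std::vector<std::function<void()>> buffers[2];
	int writeIndex = 0;
};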

With all that said, this works fine for just the game context. With editors, I'm currently less certain. Should the engine still have authority over windows and such? Or should apps simply be able to pass window references to the renderer, while all control is deferred to the editor application?

yah-nosh said:
Should the engine still have authority over windows and such? Or should apps simply be able to pass window references to the renderer, while all control is deferred to the editor application?

I'd go with the second option because it allows for more flexibility. It's an example of “dependency injection.” Window handling and high-level logic are very application-specific. You may want to do things in completely different ways in the editor vs. the engine player. The editor potentially has to manage multiple open documents/windows and handle saving/opening/menus etc., while the engine player only needs to create one simple window, load the game, and tick the engine. If you do window management in the engine, then you might be somewhat limited in the kinds of things you can do. For example, the thing I do where I export audio mixes on a background thread wouldn't be possible without the ability to create multiple engine instances, because it literally needs a separate engine while the main one keeps running. If you tied windows to the engine, that could cause problems for situations like these.

This is more or less how I render a scene viewport, which is pretty similar to your second proposal:

class GraphicsSystem : public EngineSystem
{
	public:
	bool renderScene( Scene*, Camera*, Viewport, Framebuffer* );
	bool renderScene( Scene*, Camera*, Viewport, Window* );
	void setDevice( GraphicsDevice* ); // device created by higher level code
	private:
	SceneRenderer* renderer; // e.g. ForwardRenderer, DeferredRenderer
	GraphicsDevice* device;
};

The renderScene() function can render to either a Framebuffer (hardware render target), or directly to an OS window. It can get called by higher-level code (above the engine level), or by some part of the engine itself.
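So from the editor's point of view, a scene preview is a single call per frame, e.g. (the Viewport constructor arguments and variable names here are just illustrative):

// One call per frame for each preview viewport in the editor window.
Viewport viewport( 0, 0, viewportWidth, viewportHeight );
graphics->renderScene( scene, editorCamera, viewport, viewportWindow );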

yah-nosh said:
From what I've researched, it's best to keep rendering and OS event handling on the main thread, both for the sake of simplicity, and obviously so rendering is in sync with when the window is repainted.

That's not strictly necessary. There's no requirement that rendering be done on the main thread, as long as the render target window is created and its device context is bound to e.g. OpenGL on the main thread (at least on Windows; macOS has no such requirement). There are some advantages to doing rendering on a background thread, such as better interactivity (neither thread stalls the other). In some swap chain setups, you can have the render thread wait until exactly the right time to render, similar to how low-latency audio callbacks work (which also run on a secondary thread), to get the lowest possible frame presentation latency. If you did rendering on the main thread, the timing would be less precise, and waiting for V-Sync would prevent you from processing other events while waiting (leading to higher overall latency).

It took a lot of trial and error to get this right in my editor (and many crashes and bugs as a result), but it does indeed work to render on another thread as long as the setup of the window and device context/pixel format is done on the main thread (again, Win32-specific). Window resizing can be tricky, though, because to get it to look right you need to wait on the main thread in the resized() callback until the next frame has been rendered at the new window size on the render thread.
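The resize handshake ends up looking something like this (a condition-variable sketch; the real thing has more edge cases):

#include <mutex>
#include <condition_variable>

// The main thread blocks in the resized() callback until the render thread
// has produced one frame at the new window size.
struct ResizeSync
{
	std::mutex mutex;
	std::condition_variable frameRendered;
	bool pendingResize = false;

	// Main thread, inside the Window resized() delegate callback:
	void waitForResizedFrame()
	{
		std::unique_lock<std::mutex> lock( mutex );
		pendingResize = true;
		frameRendered.wait( lock, [this]{ return !pendingResize; } );
	}

	// Render thread, after presenting a frame at the new window size:
	void notifyFrameRendered()
	{
		{
			std::lock_guard<std::mutex> lock( mutex );
			pendingResize = false;
		}
		frameRendered.notify_one();
	}
};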

@Aressera Good points all around! This feels like a direction I can get started in.

