My side project, Panic Spiral, has been progressing quite well. There have been some teething problems, which are mostly caused by the fact that I’m using NextJS and PixiJS together. In spite of this, I have built a title screen for the game, complete with UI sound effects and background music. I also have a system for changing scenes, and have made a start on controlling the player character using keyboard inputs.
In this post, I’m going to cover three topics:
- How I’ve structured the game – the way I’ve laid things out in code and why
- Adding asset management to the game – as this is a web game, I want to try to minimise up-front loading so that the page loads faster
- Implementing sound effects and background music – a slight spoiler: PixiJS and NextJS are not the best combination here
Structuring the code
It can be difficult to sit back and think about this when starting a new project. Often, you’re excited about making something awesome and this feels like a boring chore getting in the way of doing that. However, taking a minute to think about how you want to build the project and where everything should fit will pay dividends later, even if you’re a solo developer. I find that my code is more readable, easier to maintain, and easier to add to if I have a well-structured project. It’s easier to build something if you know where everything goes and where to find all the parts you need, rather than just having everything strewn around the place.
So how have I structured this project?
Firstly, as I detailed in my previous post, the game is on a website which has a backend API to serve up game difficulty settings. If this were a project for work, I would probably have these as separate repos. However, for this project I just have them in separate folders named “website” and “api.” For now, I’m going to focus on the website folder, since this is where the game is.
Website root folder
Within the website root folder are all the web config files – package.json, the NextJS config, the ESLint config – plus the environment files and a Dockerfile. These files were mostly created when I initialised the project, and are necessary to run the website. I could move them elsewhere, but the convention is to leave them where they are, and I would have to edit a lot of settings to make everything run properly if I did move them. It’s not worth it.
app folder
I’ve opted to use app routing with NextJS. This means that, for NextJS to automatically deal with URLs, I have to use a specific folder and file structure.
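As a rough sketch, reconstructed from the folders I describe in this post (so treat the exact names as approximate), that structure looks something like this:

```
src/app/
├── layout.tsx      – shared layout for the site
├── page.tsx        – the home page
├── leaderboard/
│   └── page.tsx    – the "/leaderboard" page
└── game/           – the PixiJS game code
```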

Here, you can see that there is an “app” folder, which contains a layout.tsx and page.tsx file. These two files are used to make the home page of the site.
There is also a “leaderboard” folder which also contains a page.tsx file. This page file is used to make a “/leaderboard” page.
game folder
The “game” folder, however, is where I’ve been doing most of my dev work so far – it’s the folder where I’ve been adding the PixiJS code for Panic Spiral. Those of you who are experienced with web development will spot the .tsx file extensions – yes, I am building this game using Pixi-React. I have quite a few thoughts about Pixi-React that I’m going to save for a post looking back at this project.

The Components folder is for components which are common across multiple scenes. Currently it contains a Button and a custom implementation of an animated sprite.
The “constants” folder contains files with various different game constants that aren’t going to change with difficulty (e.g. player movement speed). This could, theoretically, be a single file with multiple exports, but I like keeping things in small files with a single purpose.
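For example, a hypothetical constants file might be nothing more than this (the file name and value are my own illustration, not the project’s actual numbers):

```typescript
// constants/playerMovementSpeed.ts (hypothetical file name)
// a single-purpose constants file: one value, one export
export const PLAYER_MOVEMENT_SPEED = 200; // pixels per second (illustrative value)
```

The upside of one-export files like this is that an import tells you exactly which value a file depends on, and renaming or retuning a constant touches only one small file.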
Next we have “raw-assets.” This folder is for all the fonts, images, sounds, and music that the game uses. One of the packages I’ve installed turns the assets in here into spritesheets and packs them up so that they can be more efficiently stored on a server and downloaded by a web client when someone visits the site to play Panic Spiral.
“Screens” contains a folder for each different screen. E.g. the Title screen has one folder, and the Main (game) screen has another.
Within the folder for each screen, there is the main file for the screen, and then a folder for each component that comprises the screen. E.g. the Title screen has a Background and a Ship so those have separate sub-folders so I can find them easily. It also has a button, but buttons are common to many scenes, so live in Components.

Finally, there’s the Utils folder. This is for code that’s common across multiple scenes, but isn’t a component. Things that live here are my localisation code, the asset loader, and the SFX and background music players.
Game structure summary
I’ve tried to keep my project tidy by setting myself these rules:
- Each file is responsible for one thing and one thing only
- Files should be sorted into folders such that each folder provides a single feature
These principles are good practice across pretty much all types of software projects.
Assets and asset packing
PixiJS has a recommended tool for optimising assets – AssetPack. And it is very simple to install and use. I followed the installation tutorial without any issues. The only changes I made were to the .assetpack.js file, which tells AssetPack how to pack the textures.
I took a look at how it’s set up in the PixiJS example game, Bubbo Bubbo, and copied the texturePacker property, ending up with this variant:
import { pixiPipes } from '@assetpack/core/pixi';

export default {
  entry: './src/app/game/raw-assets', // location of assets in my code
  output: './public/assets', // folder I want the assets packed in
  pipes: [
    ...pixiPipes({
      texturePacker: {
        texturePacker: {
          removeFileExtension: true
        }
      },
      manifest: {
        output: './src/manifest.json'
      }
    })
  ]
};
It was also straightforward to add a command (three commands, actually) to my package.json file which packs the assets when run.
"scripts": {
  "prebuild": "assetpack",
  "watch": "npm run watch:assetpack & npm run dev",
  "watch:assetpack": "assetpack -w"
},
I’ve added:
- prebuild – this just runs AssetPack once, and will be used during deployment to build the asset packs before the site is deployed
- watch – runs the watch:assetpack command and runs the website in development mode simultaneously
- watch:assetpack – runs AssetPack in watch mode, so that any changes to the raw-assets folder are detected and integrated immediately
Asset tagging and plugins
The next step is tagging the assets to be packed in a way that optimises them for my game. Unfortunately, this is not well documented in my opinion. The AssetPack site has documentation that sort of explains it, but there’s not a definitive list of tags and what they mean.
- {m} is from the Manifest plugin. It tells the manifest plugin to bundle any files within that folder together. I think of it as “m for module” – everything with a {m} tag can be loaded as its own separate bundle later.
- {wf} is for Webfonts. Appending this tag causes AssetPack to generate a woff2 font from ttf, otf, woff and svg files.
- {tps} should be added onto folders of images. It tells the TexturePacker plugin to merge every image in the folder into a single spritesheet and generate a json manifest for that spritesheet that PixiJS can use.

As you can see in this image, I’ve split my assets in a way that mimics the structure of my game code in general.
There’s a common folder with a {m} tag – this is for fonts and UI elements.
Alongside that is a screens folder. This doesn’t have its own {m} tag, but each subfolder does. This is because I don’t want to load the resources for every screen at once – I want to be able to do so separately for each individual screen.
Any folder with images in it, e.g. the background folder in title, is tagged with {tps} so that it can be turned into a spritesheet. Currently I have a spritesheet for each element in a screen, but I am considering just merging them into a single spritesheet for each screen.
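For illustration, here is a hypothetical version of that layout with the tags applied (folder names approximated from the description above, not copied from the real project):

```
raw-assets/
├── common{m}/
│   ├── fonts{wf}/
│   └── ui{tps}/
└── screens/
    ├── title{m}/
    │   ├── background{tps}/
    │   └── ship{tps}/
    └── main{m}/
```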
I am certain that there are better ways of doing this. This is the first time I’ve needed to specifically think about packaging up game assets for the web. So if anyone has any tips and tricks for this, I’m happy to hear them!
Loading assets
AssetPack generates a manifest for the assets so that they can be loaded using their filename without the extension. E.g. button-background.png can be loaded using just button-background in the code. However, the assets can’t be used until they’ve been loaded in. In order to facilitate that, I once again took inspiration from the PixiJS open games, and adapted their solution for my game.
I have three main functions. The first is initAssets.
import { Assets, UnresolvedAsset } from "pixi.js";
import manifest from "../../../../manifest.json";

const initAssets = async () => {
  // init the PixiJS assets engine using the manifest and telling it
  // where to find the assets on the site
  await Assets.init({ manifest, basePath: "assets" });

  // load in the bundles we need for the game to start up - these are:
  // - the common bundle for the UI
  // - the title and title/sprites bundles for the Title screen
  await Assets.loadBundle(["common", "title", "title/sprites"]);

  // once those bundles are loaded, we get a list of all the other bundles
  const allBundles = manifest.bundles.map(
    (bundle: { name: string }) => bundle.name
  );

  // the remaining bundles are loaded in the background
  Assets.backgroundLoadBundle(allBundles);
};
This function gets called when the game starts. It’s async, which means the rest of the site doesn’t freeze up while the assets are loading. In it, we load the bundles which are needed for the Title screen and then load the rest of the assets in the background.
The next two functions work in tandem. They are isBundleLoaded and areBundlesLoaded.
const isBundleLoaded = (bundleId: string): boolean => {
  // find the bundle using its id
  const bundleManifest = manifest.bundles.find(
    (b: { name: string }) => b.name === bundleId
  );

  // if the bundle can't be found, return false - we can't load a bundle
  // that doesn't exist.
  if (!bundleManifest) {
    return false;
  }

  // check if every asset in the bundle is loaded in
  for (const asset of bundleManifest.assets as UnresolvedAsset[]) {
    if (!Assets.cache.has(asset.alias as string)) {
      // if one isn't loaded, return false. The bundle isn't loaded
      // if an asset is missing
      return false;
    }
  }

  // if all the assets are loaded, the bundle is loaded
  return true;
};

const areBundlesLoaded = (bundles: string[]) => {
  // for every bundle listed, check if it's loaded
  for (const name of bundles) {
    if (!isBundleLoaded(name)) {
      // if any bundle isn't loaded, return false
      return false;
    }
  }
  // if all the bundles listed are loaded, we can return true
  return true;
};
These functions are going to see more use when I start swapping between screens. The idea is that when we swap to a new screen, we first check whether its assets are loaded. If they are, we can swap straight away. If they’re not, the player is put on a loading screen until they finish loading.
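The decision logic itself is simple. Here’s a standalone sketch of it with the loaded-bundle state injected as a plain Set, so it can run without PixiJS or the manifest (the function name and shape are my own, not the project’s):

```typescript
type SwapAction = "swap" | "show-loading-screen";

// a pure restatement of the screen-swap check: mirror areBundlesLoaded
// by requiring every bundle the target screen needs to be loaded
const decideSwap = (
  requiredBundles: string[],
  loadedBundles: Set<string>
): SwapAction => {
  const ready = requiredBundles.every((name) => loadedBundles.has(name));
  // if everything is ready we can swap straight away; otherwise the
  // player waits on a loading screen while the background load catches up
  return ready ? "swap" : "show-loading-screen";
};
```

In the real game the Set would be replaced by the areBundlesLoaded check against the PixiJS asset cache.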
Audio – sound effects and background music
Unfortunately, audio has not been as straightforward as images and fonts. This is mostly due to NextJS wanting to compile and render as much as possible on the server rather than in the browser. This approach is excellent for reducing the work that the user’s browser needs to do, but can cause problems when the JavaScript it’s trying to compile wants access to the web document object model (DOM).
The short version of this problem is that, at the point when NextJS compiles the JavaScript on the server, there is no DOM. So any code that wants to access the DOM throws an error – there’s no DOM to access at that point. It’s like a toddler saying they want a biscuit, but you have no biscuits left. The toddler doesn’t understand why you’re not giving them a biscuit and throws a tantrum about it.
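You can see the root of the problem for yourself in one line: under Node (the environment NextJS compiles server-side code in), browser globals simply don’t exist. This is a tiny illustration of my own, not code from the project:

```typescript
// run this under Node and in a browser console: the result differs.
// any module that touches `document` at import time will therefore
// throw during NextJS's server-side pass
const hasDom = typeof document !== "undefined";
console.log(hasDom ? "browser: DOM available" : "server: no DOM");
```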
As a developer, I want a single part of the code to be responsible for playing sounds, and that code is built on PixiJS Sound. The PixiJS sound object wants access to the DOM. Since my code for playing sounds is its own module, NextJS tries to compile it on the server, and the sound object throws an error because it can’t access the DOM.
One of the suggested approaches to resolve this is to only ever call the code for sound in something called a ReactJS hook – useEffect. The code in a useEffect hook will only ever be run on the browser, so there will never be a case where it can’t access the DOM. However, this approach wasn’t quite right for my use case. I want a single instance of my sound effect (SFX) player, and a single instance of my background music (BGM) player.
So I turned to the second method – wrapping a React component in a NextJS dynamic import.
React contexts and NextJS dynamic imports
A good way to ensure you only have one instance of an object that can be used in multiple places through React code is with a React Context. To set this up, you define the data your context is going to supply, and then you set up a component which provides that context for any child components to use. This is how I’ve set up my SFX and BGM players – as components that provide the SFX and BGM contexts to their child components.
NextJS allows the concept of a dynamic import. This type of import tells NextJS to just pass the code within to the browser and let the browser deal with it instead of trying to compile it server side. This means that the code within can access the DOM without issue.
So, putting the two together, I have my SFX and BGM context provider components, which are wrapped within a NextJS dynamic import. The PixiJS sound code doesn’t throw an error because it is only run on the browser, which has a DOM the sound code can access.
In practice, the code for wrapping my SFX and BGM players looks like this:
const SFXPlayer = dynamic(() => import("./SFXPlayer"), { ssr: false });
const BGMPlayer = dynamic(() => import("./BGMPlayer"), { ssr: false });
This is deceptively simple for an incredibly annoying problem. The options object with { ssr: false } is the essential part that tells NextJS not to do server side rendering on these modules.
But how do the sound players work?
With that problem solved, let’s move on to implementation.
The SFX player is the simpler of the two.
import { sound } from "@pixi/sound";
import { PropsWithChildren, useCallback, useEffect, useMemo } from "react";
import { SFXPlayerContext, AudioPlayer } from "./AudioPlayerContext";
import { initSfx } from "./init";

// this should be defined in my constants folder. I have left it here
// to show the default volume level I've set
const DEFAULT_GLOBAL_VOLUME = 0.5;

const SFXPlayer = ({ children }: PropsWithChildren) => {
  const globalVolume = DEFAULT_GLOBAL_VOLUME;

  useEffect(() => {
    // initSfx is a function that loads in sound files and maps them
    // to an alias
    initSfx();
  }, []);

  // this function plays a sound, referenced by its alias, at a given
  // volume relative to other SFX. It's wrapped in useCallback so it
  // stays stable between renders - otherwise the useMemo below would
  // recompute on every render
  const playSound = useCallback(
    (alias: string, volume?: number) => {
      sound.play(alias, { volume: (volume || 1) * globalVolume });
    },
    [globalVolume]
  );

  // this is the object that other components can access to play sounds
  const sfxPlayer: AudioPlayer = useMemo(
    () => ({ play: playSound }),
    [playSound]
  );

  // here we return the context provider component
  return (
    <SFXPlayerContext.Provider value={sfxPlayer}>
      {children}
    </SFXPlayerContext.Provider>
  );
};

export default SFXPlayer;
This component can be accessed from any child component and used to play a sound effect at any volume. The volume is relative to other sound effects, so when I implement a global volume slider, the player will be able to decide how loud or quiet they want sound effects overall.
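The relative-volume calculation is just one multiplication. Restated on its own (this is my own restatement of the arithmetic, not extra project code):

```typescript
// each sound's per-call volume (defaulting to 1) is scaled by the
// global SFX volume, so a future volume slider only needs to change
// the global value to affect every sound effect at once
const effectiveVolume = (globalVolume: number, volume?: number): number =>
  (volume ?? 1) * globalVolume;
```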
The BGM player is a bit more complex, using a library called GSAP to transition smoothly between background music tracks.
import { sound, Sound } from "@pixi/sound";
import { PropsWithChildren, useCallback, useEffect, useMemo, useRef } from "react";
import { initBgm } from "./init";
import { BGMPlayerContext, AudioPlayer } from "./AudioPlayerContext";
import gsap from "gsap";

// this should be defined in my constants folder. I have left it here
// to show the default volume level I've set
const DEFAULT_GLOBAL_VOLUME = 0.05;

const BGMPlayer = ({ children }: PropsWithChildren) => {
  // the BGM player has to keep a record of which track it is currently
  // playing, and also of the sound instance that's playing the track.
  // These are refs rather than state: the cleanup function below runs
  // from an effect with an empty dependency array, so state would be
  // captured at its initial (empty) value, while a ref always gives
  // the latest track
  const currentRef = useRef<Sound | null>(null);
  const currentAliasRef = useRef("");
  const globalVolume = DEFAULT_GLOBAL_VOLUME;

  useEffect(() => {
    // initBgm is a function that loads in sound files and maps them
    // to an alias
    initBgm();
    return () => {
      // this cleanup ensures that if the BGM player is unmounted, the
      // current music track is stopped
      if (currentRef.current) {
        gsap.killTweensOf(currentRef.current);
        currentRef.current.stop();
      }
    };
  }, []);

  // this function allows us to play a track at a given volume, relative
  // to other background music tracks
  const playTrack = useCallback(
    async (alias: string, volume?: number) => {
      // if the requested track is the current track, we just leave it
      if (alias === currentAliasRef.current) {
        return;
      }
      // if there's already a track playing
      if (currentRef.current) {
        const previous = currentRef.current;
        gsap.killTweensOf(previous);
        // we gracefully reduce the volume of the current track to 0
        // over a second
        await gsap.to(previous, { volume: 0, duration: 1, ease: "linear" });
        // then we stop the current track
        previous.stop();
      }
      // now we find the new track
      const newTrack = sound.find(alias);
      if (!newTrack) {
        // if we can't find a track with the given name, we print out a
        // warning and exit gracefully
        console.warn(`Track with alias ${alias} not found.`);
        currentRef.current = null;
        currentAliasRef.current = "";
        return;
      }
      // record the new track before the fade-in, so that a request made
      // during the transition sees the correct current track
      currentRef.current = newTrack;
      currentAliasRef.current = alias;
      // set the volume of the new track to 0 (to start with) and loop it
      newTrack.volume = 0;
      newTrack.play({ loop: true });
      // then we increase the volume to the desired level with a
      // transition duration of 1s
      await gsap.to(newTrack, {
        volume: (volume || 1) * globalVolume,
        duration: 1,
        ease: "linear",
      });
    },
    [globalVolume]
  );

  // this is the object that other components can access to play music
  const bgmPlayer: AudioPlayer = useMemo(
    () => ({ play: playTrack }),
    [playTrack]
  );

  // now we return the context provider
  return (
    <BGMPlayerContext.Provider value={bgmPlayer}>
      {children}
    </BGMPlayerContext.Provider>
  );
};

export default BGMPlayer;
As with the SFX player, any child component of this component can access the BGM player. Child components can then tell the BGM player to play a specific track at a volume relative to other background music tracks. The biggest difference between the BGM player and the SFX player is that the BGM player has to keep a record of what track it’s playing. Then, when a new track is requested, the BGM player checks if the requested track is the same as the current one. If it is the same, the request is ignored – we’re already playing that music! If it’s different, the BGM player transitions smoothly from the old track to the new one over the course of a second.
A strong start
With just these systems, I’ve been able to produce an animated title screen for Panic Spiral, using several assets from itch.io and a font from dafont.com. There’s background music and UI sounds when the player hovers their mouse over the Start Game button. You can see a video of it on my BlueSky post.
I've made my initial title screen for Panic Spiral. There are improvements I want to make, but for now I'm quite pleased. Pixijs with react is proving an interesting challenge, both from a web dev and game dev perspective. I'll go into that more in my devblog on Monday. #gamedev #indiedev #pixijs
— Khemi (@khemitron-industries.net) June 20, 2025 at 8:16 PM
I’ve now started work on the actual game screen and I’m hoping that by this time next week I’ll have a player character that moves smoothly around the screen and a spaceship for them to move around in. Ideally, I’ll also have started work on allowing the player character to interact with the environment too – which is going to be very important for actually fixing the ship when it breaks down!