Unreal Engine Automation Testing Guide
Overview
Unreal Engine provides a comprehensive automation system for testing games, including:
- Automation Framework - Low-level test infrastructure
- Functional Tests - In-game scenario testing
- Gauntlet - Extended testing and automation
Automation Framework
Test Types
| Type | Flag | Use Case |
|---|---|---|
| Unit Tests | SmokeFilter | Fast, isolated logic tests |
| Feature Tests | ProductFilter | Feature validation |
| Stress Tests | StressFilter | Performance under load |
| Perf Tests | PerfFilter | Benchmark comparisons |
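As a concrete example, a fast unit-style test combines SmokeFilter with the application context mask in its flags. The test name and logic below are illustrative, not engine API:
// Requires #include "Misc/AutomationTest.h" (see Basic Test Structure below)
IMPLEMENT_SIMPLE_AUTOMATION_TEST(
    FHealthClampSmokeTest,
    "MyGame.Unit.HealthClamp",
    EAutomationTestFlags::ApplicationContextMask |
    EAutomationTestFlags::SmokeFilter
)
bool FHealthClampSmokeTest::RunTest(const FString& Parameters)
{
    // Pure-logic check, fast enough to run on every CI commit.
    TestEqual("Health clamps to zero", FMath::Clamp(-25, 0, 100), 0);
    return true;
}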
Basic Test Structure
// MyGameTests.cpp
#include "Misc/AutomationTest.h"
IMPLEMENT_SIMPLE_AUTOMATION_TEST(
FDamageCalculationTest,
"MyGame.Combat.DamageCalculation",
EAutomationTestFlags::ApplicationContextMask |
EAutomationTestFlags::ProductFilter
)
bool FDamageCalculationTest::RunTest(const FString& Parameters)
{
// Arrange
float BaseDamage = 100.f;
float CritMultiplier = 2.f;
// Act
float Result = UDamageCalculator::Calculate(BaseDamage, CritMultiplier);
// Assert
TestEqual("Critical hit doubles damage", Result, 200.f);
return true;
}
Complex Test with Setup/Teardown
IMPLEMENT_COMPLEX_AUTOMATION_TEST(
FInventorySystemTest,
"MyGame.Systems.Inventory",
EAutomationTestFlags::ApplicationContextMask |
EAutomationTestFlags::ProductFilter
)
void FInventorySystemTest::GetTests(
TArray<FString>& OutBeautifiedNames,
TArray<FString>& OutTestCommands) const
{
OutBeautifiedNames.Add("AddItem");
OutTestCommands.Add("AddItem");
OutBeautifiedNames.Add("RemoveItem");
OutTestCommands.Add("RemoveItem");
OutBeautifiedNames.Add("StackItems");
OutTestCommands.Add("StackItems");
}
bool FInventorySystemTest::RunTest(const FString& Parameters)
{
// Setup
UInventoryComponent* Inventory = NewObject<UInventoryComponent>();
if (Parameters == "AddItem")
{
UItemData* Sword = NewObject<UItemData>();
Sword->ItemID = "sword_01";
bool bAdded = Inventory->AddItem(Sword);
TestTrue("Item added successfully", bAdded);
TestEqual("Inventory count", Inventory->GetItemCount(), 1);
}
else if (Parameters == "RemoveItem")
{
// ... test logic
}
else if (Parameters == "StackItems")
{
// ... test logic
}
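    // Teardown: the transient UInventoryComponent above is reclaimed by garbage
    // collection; destroy any actors spawned during the test explicitly here.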
return true;
}
Latent Actions (Async Tests)
DEFINE_LATENT_AUTOMATION_COMMAND_ONE_PARAMETER(
FWaitForActorSpawn,
FString, ActorName
);
bool FWaitForActorSpawn::Update()
{
UWorld* World = GEngine->GetWorldContexts()[0].World();
AActor* Actor = nullptr;
for (TActorIterator<AActor> It(World); It; ++It)
{
if (It->GetName() == ActorName)
{
Actor = *It;
break;
}
}
return Actor != nullptr; // Return true when complete
}
bool FSpawnTest::RunTest(const FString& Parameters)
{
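    // FSpawnEnemy and FVerifyEnemyState are project-defined latent commands,
    // declared with the same DEFINE_LATENT_AUTOMATION_COMMAND macro family as FWaitForActorSpawn.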
// Spawn enemy
ADD_LATENT_AUTOMATION_COMMAND(FSpawnEnemy("Goblin"));
// Wait for spawn
ADD_LATENT_AUTOMATION_COMMAND(FWaitForActorSpawn("Goblin"));
// Verify
ADD_LATENT_AUTOMATION_COMMAND(FVerifyEnemyState("Goblin", "Idle"));
return true;
}
Functional Tests
Functional tests run inside the game world and can test gameplay scenarios.
Setup
- Create a test map (TestMap_Combat.umap)
- Add AFunctionalTest actors to the map
- Configure test parameters in the Details panel
Blueprint Functional Test
// In Blueprint:
// 1. Create child of AFunctionalTest
// 2. Override "Start Test" event
// 3. Call "Finish Test" when complete
C++ Functional Test
UCLASS()
class APlayerCombatTest : public AFunctionalTest
{
GENERATED_BODY()
public:
virtual void StartTest() override;
protected:
UPROPERTY(EditAnywhere)
TSubclassOf<AEnemy> EnemyClass;
UPROPERTY(EditAnywhere)
float ExpectedDamage = 50.f;
private:
void OnEnemyDamaged(float Damage);
};
void APlayerCombatTest::StartTest()
{
Super::StartTest();
// Spawn test enemy
AEnemy* Enemy = GetWorld()->SpawnActor<AEnemy>(EnemyClass);
Enemy->OnDamaged.AddDynamic(this, &APlayerCombatTest::OnEnemyDamaged);
// Get player and attack
APlayerCharacter* Player = Cast<APlayerCharacter>(
UGameplayStatics::GetPlayerCharacter(this, 0));
Player->Attack(Enemy);
}
void APlayerCombatTest::OnEnemyDamaged(float Damage)
{
if (FMath::IsNearlyEqual(Damage, ExpectedDamage, 0.1f))
{
FinishTest(EFunctionalTestResult::Succeeded, "Damage correct");
}
else
{
FinishTest(EFunctionalTestResult::Failed,
FString::Printf(TEXT("Expected %f, got %f"),
ExpectedDamage, Damage));
}
}
Gauntlet Framework
Gauntlet extends automation for large-scale testing, performance benchmarking, and multi-client scenarios.
Gauntlet Test Configuration
// MyGameTest.cs (Gauntlet config)
namespace MyGame.Automation
{
public class PerformanceTestConfig : UnrealTestConfig
{
[AutoParam]
public string MapName = "TestMap_Performance";
[AutoParam]
public int Duration = 300; // 5 minutes
public override void ApplyToConfig(UnrealAppConfig Config)
{
base.ApplyToConfig(Config);
Config.AddCmdLineArg("-game");
Config.AddCmdLineArg($"-ExecCmds=open {MapName}");
}
}
}
Running Gauntlet
# Run performance test
RunUAT.bat RunUnreal -project=MyGame -platform=Win64 \
-configuration=Development -build=local \
-test=MyGame.PerformanceTest -Duration=300
Blueprint Testing
Test Helpers in Blueprint
Create a Blueprint Function Library with test utilities:
UCLASS()
class UTestHelpers : public UBlueprintFunctionLibrary
{
GENERATED_BODY()
public:
UFUNCTION(BlueprintCallable, Category = "Testing")
static void AssertTrue(bool Condition, const FString& Message);
UFUNCTION(BlueprintCallable, Category = "Testing")
static void AssertEqual(int32 A, int32 B, const FString& Message);
UFUNCTION(BlueprintCallable, Category = "Testing")
static AActor* SpawnTestActor(
UObject* WorldContext,
TSubclassOf<AActor> ActorClass,
FVector Location);
};
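The header above declares the helpers only; a minimal sketch of one possible implementation follows (the logging choices are assumptions, not a prescribed API):
// TestHelpers.cpp -- sketch, assuming the declarations above
#include "TestHelpers.h"
#include "Engine/Engine.h"
#include "Engine/World.h"
void UTestHelpers::AssertTrue(bool Condition, const FString& Message)
{
    if (!Condition)
    {
        // Log as an error so failures surface in CI logs; hook into your reporter as needed.
        UE_LOG(LogTemp, Error, TEXT("AssertTrue failed: %s"), *Message);
    }
}
void UTestHelpers::AssertEqual(int32 A, int32 B, const FString& Message)
{
    if (A != B)
    {
        UE_LOG(LogTemp, Error, TEXT("AssertEqual failed: %s (%d != %d)"), *Message, A, B);
    }
}
AActor* UTestHelpers::SpawnTestActor(
    UObject* WorldContext,
    TSubclassOf<AActor> ActorClass,
    FVector Location)
{
    UWorld* World = GEngine->GetWorldFromContextObject(
        WorldContext, EGetWorldErrorMode::LogAndReturnNull);
    if (!World || !ActorClass)
    {
        return nullptr;
    }
    return World->SpawnActor<AActor>(ActorClass, Location, FRotator::ZeroRotator);
}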
Performance Testing
Frame Time Measurement
bool FFrameTimeTest::RunTest(const FString& Parameters)
{
TArray<float> FrameTimes;
float TotalTime = 0.f;
// Collect frame times
ADD_LATENT_AUTOMATION_COMMAND(FCollectFrameTimes(
FrameTimes, 1000 // frames
));
// Analyze
ADD_LATENT_AUTOMATION_COMMAND(FAnalyzeFrameTimes(
FrameTimes,
16.67f, // Target: 60fps
0.99f // 99th percentile threshold
));
return true;
}
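FCollectFrameTimes and FAnalyzeFrameTimes above are project-defined latent commands, not engine API. A minimal sketch of a collector is shown below; note that state shared with latent commands should outlive RunTest's stack frame (for example via a TSharedRef), rather than the local TArray shown above:
// Hypothetical sketch of a frame-time collector latent command.
// Requires #include "Misc/App.h" for FApp.
struct FFrameTimeCapture
{
    TArray<float> FrameTimesMs;
    int32 FramesRemaining = 1000;
};
DEFINE_LATENT_AUTOMATION_COMMAND_ONE_PARAMETER(
    FCollectFrameTimesCmd,
    TSharedRef<FFrameTimeCapture>, Capture
);
bool FCollectFrameTimesCmd::Update()
{
    // FApp::GetDeltaTime() is the last frame's delta in seconds; store it in milliseconds.
    Capture->FrameTimesMs.Add(static_cast<float>(FApp::GetDeltaTime() * 1000.0));
    return --Capture->FramesRemaining <= 0; // finished once enough frames are captured
}
RunTest would then create the capture with MakeShared<FFrameTimeCapture>() and pass it to both the collect and analyze commands.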
Memory Tracking
bool FMemoryLeakTest::RunTest(const FString& Parameters)
{
SIZE_T BaselineMemory = FPlatformMemory::GetStats().UsedPhysical;
// Perform operations
for (int i = 0; i < 100; i++)
{
UObject* Obj = NewObject<UMyObject>();
// ... use object
Obj->MarkAsGarbage(); // UE5 API (was MarkPendingKill in UE4)
}
CollectGarbage(GARBAGE_COLLECTION_KEEPFLAGS);
SIZE_T FinalMemory = FPlatformMemory::GetStats().UsedPhysical;
// Use a signed difference so the check does not underflow if usage decreased
const int64 Leaked = (int64)FinalMemory - (int64)BaselineMemory;
TestTrue("No significant leak", Leaked < 1024 * 1024); // 1 MB tolerance
return true;
}
CI Integration
Command Line
# Run all tests (UE5)
UnrealEditor.exe MyGame -ExecCmds="Automation RunTests Now" -unattended -nopause
# Run specific test
UnrealEditor.exe MyGame -ExecCmds="Automation RunTests MyGame.Combat" -unattended
# Run with report
UnrealEditor.exe MyGame \
-ExecCmds="Automation RunTests Now; Automation ReportResults" \
-ReportOutputPath=TestResults.xml
# Note: For UE4, use UE4Editor.exe instead of UnrealEditor.exe
GitHub Actions
test:
runs-on: [self-hosted, windows, unreal]
steps:
- name: Run Tests
run: |
# UE5: UnrealEditor-Cmd.exe, UE4: UE4Editor-Cmd.exe
& "$env:UE_ROOT/Engine/Binaries/Win64/UnrealEditor-Cmd.exe" `
"${{ github.workspace }}/MyGame.uproject" `
-ExecCmds="Automation RunTests Now" `
-unattended -nopause -nullrhi
Best Practices
DO
- Use SmokeFilter for fast CI tests
- Create dedicated test maps for functional tests
- Clean up spawned actors after tests (see the sketch after this list)
- Use latent commands for async operations
- Profile tests to keep CI fast
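A sketch of the cleanup point (test and actor names are hypothetical): anything a test spawns is destroyed before the test returns, so it does not leak into later tests in the same editor session.
bool FSpawnCleanupExampleTest::RunTest(const FString& Parameters)
{
    UWorld* World = GEngine->GetWorldContexts()[0].World();
    AActor* TestActor = World->SpawnActor<AActor>(AActor::StaticClass());
    // ... exercise TestActor ...
    // Clean up: destroy anything this test spawned before returning.
    if (IsValid(TestActor))
    {
        TestActor->Destroy();
    }
    return true;
}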
DON'T
- Don't test engine functionality
- Don't rely on specific tick order
- Don't leave test actors in production maps
- Don't ignore test warnings
- Don't skip garbage collection in tests
Troubleshooting
| Issue | Cause | Fix |
|---|---|---|
| Test not found | Wrong flags | Check EAutomationTestFlags |
| Crash in test | Missing world | Use proper test context |
| Flaky results | Timing issues | Use latent commands |
| Slow tests | Too many actors | Optimize test setup |