A UAV’s initial flight tests are meant to validate all the predictions about its design and performance (and to shake out all the lurking issues). But to validate anything, you need something to check against.
We covered this last week: the structure of a simple performance model, the data sources it needs, and how it all fits together. We combine our aerodynamics model with our propulsion system model, and between the two we can find our thrust, rate of climb, and resulting performance limits. You can find the full explanation here.
Now that we understand what we’re comparing to, we can do the actual data comparison. So grab your sun hat and some snacks—we’re headed out to the field for a flight test.
Just to be clear: by no means do I cover everything you can or should check from flight test data. This is simply how I like to validate an aircraft, based on my own experiences. Take all of this as suggestions, and if you have other checks you like to do, I would love to hear about them!
Validating the aerodynamics
Checking the aerodynamics of your vehicle via flight test is tricky. Because a real aircraft needs a source of thrust to stay in straight and level flight, the propulsion contribution muddies up any direct aero measurements.
Validating pure aerodynamics is where wind tunnel testing shines. You can run your geometry in a controlled environment, at specific measured conditions, and collect near-pristine aerodynamic data.
But flight data is still useful for validating a few critical behaviors.
One of the first checks I always do relates to the aircraft’s stability and control authority. For a given airspeed, aircraft weight, and CG location, what elevator angle does the autopilot command? How does this compare to what the aerodynamic model predicts for those same conditions?
A couple degrees of difference is fine, especially when propulsion effects are present—i.e., the moment created by your propeller’s thrust. But if the difference is five degrees or more, it’s time to dig deeper. If the elevator angle is larger than predicted, this could point to an elevator with less control power than expected. If the elevator angle is smaller than predicted, your elevator may have more control power than your model gives it credit for.
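To make that comparison concrete, here is a minimal sketch of predicting the trim elevator angle from a linear pitching-moment model. All of the coefficients and the measured value are hypothetical placeholders—substitute the numbers from your own aero model and telemetry.

```python
import math

# Hypothetical pitching-moment coefficients -- replace with your aero model's values.
CM0 = 0.05        # pitching moment coefficient at zero alpha (assumed)
CM_ALPHA = -0.8   # pitch stiffness, per radian (assumed)
CM_DE = -1.1      # elevator control power, per radian (assumed)

def trim_elevator_deg(alpha_deg):
    """Elevator angle that zeroes the pitching moment:
    Cm0 + Cm_alpha*alpha + Cm_de*de = 0, solved for de."""
    alpha = math.radians(alpha_deg)
    de = -(CM0 + CM_ALPHA * alpha) / CM_DE
    return math.degrees(de)

predicted = trim_elevator_deg(3.0)   # trim alpha from the model (example)
measured = 1.5                       # autopilot elevator command (example value)
print(f"predicted {predicted:.2f} deg, measured {measured:.2f} deg, "
      f"difference {abs(predicted - measured):.2f} deg")
```

If the difference lands in the five-degrees-or-more range for your real numbers, that’s the cue to start digging.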
Another potential culprit is a physical flyable CG range that’s different from what you calculated. Maybe your estimated range is shifted forward or aft of the real location, or is smaller than expected. Evaluate your model against your flight data and see if changes need to be made.
I also do a check of the aircraft’s lift curve. This varies based on your autopilot and aircraft, but for many the “auto” control setting will fly the aircraft at a pre-set lift coefficient. Compare your aircraft’s speed for its weight against what you would expect based on your aero model’s lift curve. Is the aircraft flying notably faster or slower? What does that say about your model?
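The lift-curve check boils down to backing out the lift coefficient the aircraft is actually flying at, via CL = 2W / (ρ V² S), and comparing it to the autopilot’s target. A quick sketch, with made-up numbers for the wing area and flight condition:

```python
RHO = 1.225   # sea-level air density, kg/m^3
S = 0.5       # wing reference area, m^2 (assumed)

def flown_cl(weight_n, airspeed_ms, rho=RHO, area=S):
    """Lift coefficient implied by steady level flight: CL = 2W / (rho * V^2 * S)."""
    return 2.0 * weight_n / (rho * airspeed_ms**2 * area)

# Example: 60 N aircraft holding 18 m/s true airspeed
cl = flown_cl(weight_n=60.0, airspeed_ms=18.0)
print(f"flown CL = {cl:.3f}")  # compare against the autopilot's target CL
```

If the flown CL is consistently above or below the commanded one, either the airspeed calibration or the lift curve in your model deserves a second look.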
Because aerodynamics and stability impact everything else, it’s important to know how accurate my aerodynamic model is. If the aero is off, any conclusions I draw from the propulsion telemetry may be off too.
Validating the propulsion
Evaluating the propulsion system is where flight test data proves its real value. Since so many performance predictions depend on propulsion numbers, this is really where the rubber meets the road.
There are a handful of checks that are quick and easy to do in the field. For a given speed/weight/altitude, what is the RPM of the aircraft’s propeller(s)? Run your performance model at the matching conditions. Does it predict the same RPM as what you’re seeing on the real vehicle?
The flight operators should be recording the aircraft’s pre-flight fuel load, and then how much they take out after landing. What fuel burn rate does this average out to, given the flight’s duration? Is it higher or lower than you predicted?
If it’s off by a few tenths of a pound, you’re right on the money. If it’s a difference of a pound or more, your performance code likely needs some work—and the aircraft could use an inspection for possible issues.
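The fuel-burn check is simple arithmetic, but it’s worth writing down so the field crew and the performance engineer compute it the same way. The example values here are invented:

```python
def avg_burn_rate(fuel_before_lb, fuel_after_lb, flight_hours):
    """Average fuel burn in lb/hr: (pre-flight load minus fuel drained after landing),
    divided by flight duration."""
    return (fuel_before_lb - fuel_after_lb) / flight_hours

# Example: loaded 8.0 lb, drained 5.5 lb after a 1.5 hr flight
rate = avg_burn_rate(fuel_before_lb=8.0, fuel_after_lb=5.5, flight_hours=1.5)
print(f"average burn rate: {rate:.2f} lb/hr")
```

Compare that number against your performance model’s predicted burn rate at the flight’s average weight and speed.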
After the test event, flight data is the best validation of your propeller model. Most prop manufacturers only provide static test data, or simulated (i.e., predicted) data for forward flight. Very few will actually run a propeller in a wind tunnel to fill out the full thrust vs advance ratio curve.
Thankfully, most autopilots record everything you need to validate your model. You can calculate advance ratio from RPM and airspeed telemetry, and estimate density based on altitude. Doing this for every data point will create a nice scatter of data, revealing the shape of the thrust coefficient and power coefficient curves.
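A sketch of that data reduction is below: advance ratio from RPM and airspeed, density from an ISA atmosphere model, and a power coefficient from shaft power. The propeller diameter, telemetry field names, and sample values are all assumptions, not a specific autopilot’s log format.

```python
import math

D = 0.45  # propeller diameter, m (assumed)

def isa_density(alt_m):
    """ISA troposphere density, kg/m^3 (valid below 11 km)."""
    t = 288.15 - 0.0065 * alt_m               # temperature lapse
    p = 101325.0 * (t / 288.15) ** 5.2561     # pressure from the barometric relation
    return p / (287.05 * t)                   # ideal gas law

def advance_ratio(airspeed_ms, rpm, diameter=D):
    """J = V / (n * D), with n in rev/s."""
    n = rpm / 60.0
    return airspeed_ms / (n * diameter)

def power_coeff(shaft_power_w, rpm, alt_m, diameter=D):
    """CP = P / (rho * n^3 * D^5)."""
    n = rpm / 60.0
    return shaft_power_w / (isa_density(alt_m) * n**3 * diameter**5)

# One telemetry sample (example values):
print(f"J  = {advance_ratio(18.0, 7000):.3f}")
print(f"CP = {power_coeff(350.0, 7000, 500.0):.4f}")
```

Run this over every logged sample and scatter-plot the coefficients against J—the cloud of points traces out the curves you want to compare against the manufacturer’s data.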
Updating the curves in your performance code to match these flight data-refined ones produces more accurate estimates of aircraft performance. This will also capture effects from the interaction of the propeller with the airframe, like a rear-mounted prop creating a region of lower pressure directly in front of it.
One of the most fun validations is calculating an approximate max altitude from climbing flight data. Calculate the aircraft’s rate of climb for all full-throttle climb sections, and plot that data against altitude. Fitting a curved trendline to the data lets you easily estimate where your max rate of climb would hit 100 ft/min, for service ceiling, and 0 ft/min, for absolute ceiling. Are these ceilings close to what your performance code predicts?
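Since max rate of climb decays roughly linearly with altitude for many propeller aircraft, even a simple linear fit gets you usable ceiling estimates. A sketch with invented climb data:

```python
import numpy as np

# Full-throttle climb samples (invented for illustration)
alt_ft = np.array([1000, 3000, 5000, 7000, 9000])
roc_fpm = np.array([950, 810, 660, 530, 380])

# Linear fit: ROC = slope * altitude + intercept
slope, intercept = np.polyfit(alt_ft, roc_fpm, 1)

# Solve the fit for the altitudes where ROC hits 100 ft/min and 0 ft/min
service_ceiling = (100.0 - intercept) / slope
absolute_ceiling = (0.0 - intercept) / slope
print(f"service ceiling ~{service_ceiling:,.0f} ft, "
      f"absolute ceiling ~{absolute_ceiling:,.0f} ft")
```

A quadratic fit works just as well if your data visibly curves; either way, the question is the same—do these ceilings land near what your performance code predicts?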
Planning a test for data collection
All these checks require collecting data at a variety of conditions. This doesn’t have to be onerous; you can accomplish it by making some simple changes to your flight test plans.
At the bare minimum, fly at different airspeeds and altitudes. This can be difficult if your available airspace is limited to 400 ft above ground level, but even then you can vary your speed.
It’s tough to validate performance trends using flight data at a single speed and altitude. Having data at a variety of conditions better fills out propeller performance curves and gives you more cases to compare.
Flying a rectangular pattern, where two legs have opposite headings, is great for collecting ambient wind data. An example of this would be a rectangle where one leg has a heading of 45 degrees, and the opposite leg has a heading of 225 degrees. Knowing the ambient winds helps to refine the airspeed used in all other calculations.
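The reciprocal-leg trick works because, at a constant airspeed, ground speed on the out and back legs differs by twice the wind component along that track. A minimal sketch (example ground speeds are invented; the crosswind component needs the rectangle’s other leg pair):

```python
def wind_from_reciprocal_legs(gs_out_ms, gs_back_ms):
    """Estimate airspeed and along-track wind from two legs on opposite headings.
    Assumes the aircraft held the same airspeed on both legs; only resolves the
    wind component along this track, not the crosswind."""
    airspeed = (gs_out_ms + gs_back_ms) / 2.0
    wind = (gs_out_ms - gs_back_ms) / 2.0   # positive = tailwind on the out leg
    return airspeed, wind

# Example: 21 m/s ground speed heading 045, 15 m/s on the 225 reciprocal
tas, wind = wind_from_reciprocal_legs(gs_out_ms=21.0, gs_back_ms=15.0)
print(f"airspeed ~{tas:.1f} m/s, along-track wind ~{wind:.1f} m/s")
```

Repeating this on the perpendicular leg pair gives the other wind component, and together they pin down the ambient wind vector.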
Collecting data at different aircraft weights can be time-consuming, but is very useful. You can accomplish this by either doing a handful of shorter flights with different fuel loads at takeoff, or doing one longer flight of 2+ hours to get a decent change in fuel.
And finally, record everything you can—you should be doing this anyway, but it’s worth emphasizing. Sometimes things are written down on the physical flight record sheets but never make it to the digital database. Having information like the temperature at the launch location, or the measured CG location, may improve performance code accuracy by only a small amount. But add up enough minor factors and the impact can be substantial.
It makes sense to check actual flight performance against predictions—after all, you want to see if your design work made the product you expected. But some may question if it’s worth going back and refining your prediction codes based on that flight data, especially if it requires particular flight patterns.
I feel the extra few hours of flight time and data reduction are worth it.
Being able to provide accurate performance predictions helps when bidding on proposals or getting investors to buy in. Saying your numbers are backed by flight testing goes a long way toward building trust.
Your end users also benefit: you can provide more accurate data in operator manuals. Good charts showing the aircraft’s capabilities and limitations, like service ceilings and endurance numbers, help operators confidently plan better missions.
The whole point of designing, building, and flying these aircraft is so they can go out and do cool things. Wouldn’t it be great to have a tool that shows just how capable your aircraft is, for any mission that comes your way?