Calculating the Error Budget in Precision Digital-to-Analog Converter (DAC) Applications

Abstract: This application note analyses the parameters that affect the errors in precision digital-to-analog converter (DAC) applications. The analysis focuses on the factors introduced by both the data converter and the voltage reference. It describes the calculations required to select the data converter and the reference to meet the system's target specifications. The calculations are available in a spreadsheet.
Overview

When designing a digital-to-analog converter (DAC) system, the DAC specifications and its voltage reference work in tandem to produce the overall system performance. Consequently, selection of both DAC and reference should be made together. The components' specifications can be traded off against each other to ensure that system specifications are met at the lowest cost.
This application note focuses on Maxim's 3-terminal voltage references and precision DACs. To design a system, one must first understand how the parts are specified and then how their performance characteristics interact. Voltage references and DACs have many specifications; only those factors relevant to the error budget will be discussed here.
Voltage Reference Specifications

Initial Accuracy

This is the output voltage tolerance, ignoring any effects of temperature, input voltage, and load. Temperature is normally +25°C.
Output-Voltage Temperature Coefficient

This is the change in reference output voltage, measured for a given change in temperature and specified in ppm/°C. Maxim uses the box method: the shape of the change-vs.-temperature characteristic is not specified; only the limits of this function are specified. The limits of the output voltage do not necessarily coincide with the limits of temperature. To calculate the maximum change, multiply the temperature coefficient by the temperature range for the part. To illustrate, if a part has a temperature coefficient of 5ppm/°C, specified from -40°C to +85°C, the maximum deviation over temperature would be:

ΔV = (TMAX - TMIN) × TC = (85 + 40) × (±5) = ±625ppm
It is generally best to select a device that is specified over the required temperature range, rather than a broader range. For instance, the MAX6025A is specified as a 15ppm/°C reference over 0°C to +70°C, which works out to 1050ppm over the range. If, however, one chose a reference specified over the -40°C to +85°C range, a reference of 1050/125 = 8.4ppm/°C or better would be required. Note that some devices are specified over several temperature ranges.
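As a sanity check on the box-method arithmetic above, here is a minimal Python sketch of the same calculation (the function name is mine, not from the spreadsheet; the numbers are the examples from this section):

```python
def drift_ppm(tc_ppm_per_degC, t_min_degC, t_max_degC):
    """Worst-case reference drift over the specified temperature range (box method)."""
    return tc_ppm_per_degC * (t_max_degC - t_min_degC)

# 5 ppm/degC part specified from -40 degC to +85 degC -> +/-625 ppm
print(drift_ppm(5, -40, 85))                    # 625
# MAX6025A: 15 ppm/degC over 0 degC to +70 degC -> 1050 ppm, so an equivalent
# -40 degC to +85 degC part would need 1050/125 = 8.4 ppm/degC or better
print(drift_ppm(15, 0, 70))                     # 1050
print(drift_ppm(15, 0, 70) / (85 - (-40)))      # 8.4
```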
A graphical example of the box method is shown in Figure 1. Two different example curves are shown, both of which satisfy the 5ppm/°C specification over -40°C to +85°C.

Figure 1. Example temperature characteristics.
Therefore, with series references it is generally not possible to relate voltage drift and temperature in a way that allows the drift to be calculated over a specific range other than the one for which the part is specified.
Line Regulation

This term defines the incremental change in output voltage for a change in input voltage. It is normally defined in terms of µV/V.
Load Regulation

This term defines the incremental change in output voltage for a change in load current. Some DACs may not buffer the reference input. Therefore, as the code changes, the reference input impedance will also change, causing a change in reference voltage. This change is generally small, but should be considered in high-accuracy applications. Note that this is more important with some DAC topologies such as R-2R ladders, while resistive-string topologies are less susceptible.
Temperature Hysteresis

This is the change in reference voltage at +25°C after the temperature is cycled from TMIN to TMAX. It is specified as a ratio of the two voltages and expressed in ppm:

TEMPHYST = 10⁶ × (ΔVREF/VREF)

where ΔVREF is the change in reference voltage caused by the temperature cycle.
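A minimal sketch of the hysteresis calculation, using hypothetical before/after readings (the values below are placeholders, not a data-sheet specification):

```python
def temp_hysteresis_ppm(vref_before_v, vref_after_v):
    """Temperature hysteresis: shift in VREF at +25 degC after a TMIN-to-TMAX cycle."""
    return 1e6 * abs(vref_after_v - vref_before_v) / vref_before_v

# Hypothetical example: a 2.500000 V reference that reads 2.500325 V after the cycle
print(round(temp_hysteresis_ppm(2.500000, 2.500325)))   # 130 ppm
```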
Long-Term Stability

This is the change in reference output voltage vs. time, specified in ppm/1000 hours. Cumulative drift beyond a 1000-hour interval is not generally specified, but is usually much lower than the initial drift. An application's long-term stability can be improved by PCB-level burn-in. A typical output-voltage long-term stability characteristic is shown in Figure 2.

Figure 2. Typical output-voltage long-term stability.
Output Noise Voltage

This defines the voltage noise at the reference output. The 1/f component is specified in µVP-P over a 0.1Hz to 10Hz bandwidth, and the wideband noise is usually specified in µVRMS over a 10Hz to 10kHz bandwidth.
DAC Specifications

Only buffered-voltage-output DACs are discussed here, as the key points about error calculations are easier to illustrate with this architecture. Current-output DACs are typically used in a multiplying configuration (MDAC) to provide variable gain; they usually require external op amps to buffer the voltage generated across a fixed resistor.
Focusing discussion on the reference voltage, the main characteristic of this DAC architecture is the varying DAC reference input resistance vs. DAC code. Many DACs are implemented using an R-2R ladder. The resistance of the ladder will change with DAC code. If the reference drives the ladder directly, the reference must have sufficient load regulation to avoid introducing errors. Care must be taken to ensure that the voltage reference can source enough current at the DAC's minimum reference input resistance. Note that some DAC configurations will draw virtually zero current from the reference at DAC code 0. Hence, switching from code 0 to code 1 can create a large current transient in the reference.
Two other DAC specifications are important to voltage-reference selection: reference-input-voltage range and DAC output gain. These specifications will define the reference voltage for the particular application.
Output Error and Accuracy Specifications

Output error is defined as the deviation from the ideal output voltage that would be produced by a perfect match of voltage reference and DAC. It is important to note that this article addresses absolute accuracy, meaning that everything is referenced to an ideal DAC output-voltage range. For example, a 12-bit DAC at code 4095 should produce an output of 4.096V with a reference voltage of 4.096V; any deviation from this is an error. This contrasts with relative accuracy, where the full-scale output is defined more by the application than by an absolute voltage. Consider another example: a ratiometric system where an ADC and a DAC with equal resolution share a reference. It may not matter (within reason) what the actual reference voltage is, as long as the DAC-output and ADC-input voltages are nearly equivalent for a given digital code.
Output error is often specified as a one-sided value (in LSBs at the DAC resolution), but it actually implies a double-sided error (Figure 3). For example, a 12-bit DAC with a 4.096V output range has an ideal LSB step size of 4.096V/4095 ≈ 1mV. If the specified output error in this case is 4 LSBs at 12-bit resolution, this means that the DAC output at any code could be ±4 LSBs (or ±4mV) from the ideal value. Consequently, accuracy is defined by how many actual bits are available to reach a desired output voltage with at most 1 LSB of error:

Accuracy = DAC resolution - log2(error)

So in this example:

Accuracy = 12 - log2(4) = 10 bits

Therefore, one can only get to within 1 LSB at 10-bit resolution (±4mV = ±4/4096 = ±1/1024) of any ideal DAC output value.
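The resolution-to-accuracy relationship above reduces to one line of arithmetic; a minimal sketch:

```python
import math

def accuracy_bits(resolution_bits, error_lsb):
    """Effective accuracy in bits, given a +/- error expressed in LSBs at the DAC resolution."""
    return resolution_bits - math.log2(error_lsb)

# 12-bit DAC with +/-4 LSB output error -> accurate to within 1 LSB at 10 bits
print(accuracy_bits(12, 4))   # 10.0
```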
Sources of system gain error include:

- Reference initial error
- Reference-output temperature coefficient
- Reference temperature hysteresis
- Reference long-term stability
- Reference load regulation
- Reference line regulation
- Reference output noise
- DAC gain error
- DAC offset error
- DAC gain-error temperature coefficient

Other sources of system error include:

- DAC integral nonlinearity (INL)
- DAC output noise
Figure 3. Data show how errors compound to define the system DAC transfer function.
Although the target error applies over the entire DAC code range, most of the error sources mentioned above cause an effective gain-error variation that is largest near the full scale (highest DAC codes) of the transfer function (Figure 3). Gain errors reduce with decreasing DAC code value; these errors are halved at midscale and virtually disappear near code zero, where offset error dominates. Error sources that do not exclusively affect the gain error and apply equally over most of the DAC code range include DAC integral nonlinearity (INL) and output noise.
INL is typically defined using one of two methods: absolute linearity or end-point linearity (Figure 4). The offset error is removed and the gain error is normalized before the INL is measured. Absolute linearity compares the DAC linearity to the ideal transfer-function linearity. End-point linearity uses the two measured end points to define the linearity (a straight line is drawn between these points); all other points are compared to this line. In either case, INL should be included in the error analysis. In the latter case, the DAC INL error is zero at the end points, but can be present at DAC code words just inside these values. As an example, for a 12-bit DAC with INL defined between the end points of 0V and 4.095V (full scale), the INL specification applies to DAC codes near 0 and 4095. For maximum error calculations, it is reasonable to add the DAC's INL and noise-induced output errors to the previously mentioned gain errors that are most severe near code 4095. Some DACs are specified with differing INL values over the range of codes. DACs are often used in applications where the whole code range is not used, and devices specified in this way can provide better performance over a smaller code range.
Figure 4. DAC INL measurement.
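The end-point method described above is straightforward to compute from a table of measured outputs. A minimal sketch, with a synthetic data set and a helper name that are my own (not a data-sheet procedure):

```python
def endpoint_inl_lsb(measured_v, vref_v, bits):
    """Worst-case end-point INL in LSBs: draw a straight line between the measured
    end points (removing offset and gain error), then find the largest deviation."""
    lsb = vref_v / 2 ** bits
    n = len(measured_v)                           # one reading per code, 0 .. n-1
    v0, vfs = measured_v[0], measured_v[-1]
    worst = 0.0
    for code, v in enumerate(measured_v):
        ideal = v0 + (vfs - v0) * code / (n - 1)  # point on the end-point line
        dev = (v - ideal) / lsb
        if abs(dev) > abs(worst):
            worst = dev
    return worst

# Tiny synthetic 3-bit example with a bow that peaks near +0.4 LSB at midscale
codes_v = [0.000, 0.582, 1.154, 1.716, 2.248, 2.710, 3.152, 3.584]
print(round(endpoint_inl_lsb(codes_v, 4.096, 3), 2))   # ~0.39 LSB
```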
DAC and Reference Design Examples

To illustrate the steps involved with voltage-reference selection for DACs, a few design examples cover a range of applications (Table 1). The design steps are broken into individual sections by design example (i.e., Design A through Design D). A spreadsheet was developed to calculate the various steps and produce the results. In the spreadsheet, cells with blue text should be entered by the designer; cells with red text show calculated results.
Table 1. Requirements for DAC design examples

Parameter | Design A | Design B | Design C | Design D
Main design objectives | Low cost, loose accuracy | High absolute accuracy and precision | One-time calibrated, low drift | Low voltage, battery powered, moderate accuracy
Example application | Consumer audio device | Lab instrument | Digital offset and gain adjustment | Portable instrument
DAC | MAX5304, 10-bit single | MAX5170, 14-bit single | MAX5154, 12-bit dual | MAX5176, 12-bit single
Minimum reference input resistance | 18kΩ | 18kΩ | 7kΩ (two shared 14kΩ inputs) | 18kΩ
Output voltage | 0 to 2.5V | 0 to 4.096V | 0 to 4.000V | 0 to 2.048V
DAC output | Force/sense | Fixed gain = 1.638 | Fixed gain = 2 | Fixed gain = 1.638
Power supply | 5V (varying), 4.5V (min), 5.5V (max) | 5V (constant), 4.95V (min), 5.05V (max), 12V available | 5V (constant), 4.75V (min), 5.25V (max) | 3V (varying VBATT), 2.7V (min), 3.6V (max)
Temperature range | 0°C to +70°C (commercial) | 0°C to +70°C (commercial) | -40°C to +85°C (extended) | +15°C to +45°C
(> 69µA design requirement).
Step 3. Final Specification Review and Error-Budget Analysis

With the preliminary selection of references complete, it is now time to verify the remaining specifications, which include reference load regulation, input line regulation, output-voltage temperature hysteresis, output-voltage long-term stability, and output noise voltage. The analysis is shown in the spreadsheet segment below (Figure 7).
Figure 7. This portion of the spreadsheet helps calculate the remaining specifications and, ultimately, the error budget.
Each example is analyzed, focusing on the specifications that apply to that particular design. It is most convenient to do the error-budget accounting in parts per million (ppm), although this could be done equivalently in other units such as %, mV, or LSBs. It is also important to apply the proper scaling and to use the proper normalization factor to obtain the correct error values. Reference-error terms can be calculated relative to the reference voltage or the DAC output voltage. For example, assuming a reference error of 2.5mV (e.g., noise, drift, etc.) and a reference voltage of 2.5V, then:

Reference output error = 10⁶ × 2.5mV/2.5V = 1000ppm

Assuming that the DAC output amplifier has a gain of 2.0, then both the error and the reference voltage are scaled. This produces the same result at the DAC output (5V full-scale range):

DAC output error = 10⁶ × (2.5mV × 2)/(2.5V × 2) = 1000ppm

Into this section of the spreadsheet, enter the reference specifications for temperature hysteresis, long-term stability, load regulation, line regulation, and output noise. Also enter the DAC specifications for INL, gain error, gain tempco, and noise.
The spreadsheet will calculate values for worst-case error, root-sum-square (RSS) error, worst-case error margin, and RSS error margin. It is important to think about how the errors can stack up. Some very accurate applications may be very difficult to meet if a worst-case analysis is used. If one can assume that the errors are uncorrelated, the RSS method can often be used; statistically, however, a few of the resulting boards may not be as accurate as the RSS number suggests.
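The two stack-up methods reduce to a few lines of arithmetic. A minimal sketch with placeholder ppm values (not taken from any of the designs below):

```python
import math

def worst_case_ppm(terms_ppm):
    """Straight sum: assumes every error term sits at its limit with the same sign."""
    return sum(abs(t) for t in terms_ppm)

def rss_ppm(terms_ppm):
    """Root-sum-square: a reasonable estimate when the error terms are uncorrelated."""
    return math.sqrt(sum(t * t for t in terms_ppm))

terms = [400, 70, 20, 30, 1, 61, 2]          # placeholder error terms, in ppm
print(round(worst_case_ppm(terms)))          # 584
print(round(rss_ppm(terms)))                 # 412
```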
Design A. Low Cost, Loose Accuracy

No calibration or trimming is planned for Design A, so the MAX6102 initial error of 4000ppm (or 0.4%) directly becomes part of the budget, as does the 4550ppm due to the voltage-reference tempco (70°C × 65ppm/°C). The typical MAX6102 output-voltage temperature-hysteresis specification is also used directly in the error budget. (Remember that this is only a typical value if the design ultimately has marginal accuracy.) For output-voltage long-term stability, assume twice the MAX6102 1000-hour specification (2 × 50ppm = 100ppm). This is fairly conservative, as the drift is usually much lower after the first 1000 hours. A conservative estimate here at least partially offsets the typical specification used for temperature hysteresis.
To calculate the variation in reference voltage caused by load regulation, one must know the worst-case range of currents that the voltage reference supplies to the DAC's reference input. Recall from Step 2 above that the maximum DAC reference current the MAX6102 would have to drive is 140µA. The minimum current is close to 0 because the MAX5304 uses an R-2R ladder: the reference input is effectively an open circuit (several GΩ input impedance) when the DAC code value is 0. This means that the total output-current variation that the MAX6102 sees is 140µA. This value should be used for the load-regulation calculation:

Load-regulation error = 140µA × 0.9mV/mA = 126µV (max)
= 10⁶ × 126µV/2.5V = 50ppm (max)
In general, it is best to be conservative and use the maximum output current directly for the load-regulation calculation. A possible exception arises when you are trying to extract the last bit of accuracy from a design and both the maximum and minimum DAC reference input resistance values are well specified; the resulting smaller ΔIREF gives a smaller load-regulation error.
Because the power supply is specified as varying for this example, one must consider the effects of input line regulation on the MAX6102 reference. The power-supply range is specified as 4.5V to 5.5V. From this, a conservative reference-voltage line-regulation calculation is possible:

Line-regulation error = (5.5V - 4.5V) × 300µV/V = 300µV (max)
= 10⁶ × 300µV/2.5V = 120ppm (max)
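The load- and line-regulation arithmetic above is the part of the spreadsheet that is easiest to get wrong because of unit scaling; a minimal sketch using Design A's numbers (helper names are my own):

```python
def load_reg_error_ppm(delta_i_ma, load_reg_mv_per_ma, vref_v):
    """Reference error from load regulation, as ppm of VREF."""
    dv_mv = delta_i_ma * load_reg_mv_per_ma
    return 1e6 * (dv_mv / 1000.0) / vref_v

def line_reg_error_ppm(delta_vin_v, line_reg_uv_per_v, vref_v):
    """Reference error from line regulation, as ppm of VREF."""
    dv_uv = delta_vin_v * line_reg_uv_per_v
    return 1e6 * (dv_uv / 1e6) / vref_v

# Design A: 140 uA code-dependent load step, 0.9 mV/mA load regulation,
# 4.5 V to 5.5 V supply, 300 uV/V line regulation, 2.5 V reference.
print(round(load_reg_error_ppm(0.140, 0.9, 2.5)))      # ~50 ppm
print(round(line_reg_error_ppm(5.5 - 4.5, 300, 2.5)))  # 120 ppm
```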
The final voltage-reference-related error term to consider is the effect of reference output-noise voltage. Conveniently, Design A has a signal bandwidth (10Hz to 10kHz) that corresponds exactly to the MAX6102 noise-voltage bandwidth, so the wideband noise-voltage specification of 30µVRMS is used directly (that is, bandwidth scaling is not required). Comparing this with the load- and line-regulation values (126µV and 300µV, respectively), it is apparent that noise is not a major contributor in this design. Using crude approximations to get numbers for the error analysis, one can assume an effective peak noise value of ~42µV (30µV × √2), which corresponds to 17ppm (10⁶ × 42µV/2.5V) with the DAC gain of 1. This analysis purposely keeps the noise calculations simple; a more detailed analysis can be performed if the relative error of the noise is larger or if the design is marginal. Remember that noise is specified as a typical value when judging design margin.
Consider now the relevant MAX5304 DAC specifications that impact accuracy at, or near, the upper end of the code range. The DAC INL value of ±4 LSB (at 10 bits) is used directly. Treating it as a single-sided quantity, as with the other error terms in our analysis, one arrives at a value of 3910ppm (10⁶ × 4/1023). Similarly, the DAC gain error is specified as ±2 LSB and results in an error of 1955ppm (10⁶ × 2/1023). The final MAX5304 DAC specification to be considered is the gain-error tempco, which gives a typical error of 70ppm (70°C × 1ppm/°C). The DAC output noise is not specified for the MAX5304, so it is ignored, probably without adverse consequences in this 6-bit-accurate system.
When all of the error sources are summed, the result is a worst-case error of 14902ppm which, although fairly close, meets the target-error specification of 15640ppm. When confronted with this marginal situation, one can rationalize that the design will probably never have an error of this magnitude, because the error specification assumes worst-case conditions for most parameters. The RSS approach gives an error of 7474ppm, which is valid if the errors are uncorrelated. Some error sources may be correlated, so the truth probably lies somewhere between these two numbers. But regardless of the approach, the Design A requirements have been met.
Design B. High Accuracy and Precision

The initial error of the A-grade MAX6225 is 0.04%, or 400ppm, which exceeds Design B's entire 122ppm error budget. Because this application has gain calibration, virtually all of this reference initial error can be removed. This assumes that the calibration equipment has sufficient (~1µV) accuracy and the trim circuit has enough precision. The tempco contribution is calculated as 70ppm (70°C × 1ppm/°C), and the typical temperature-hysteresis value of 20ppm is used directly. The long-term stability specification of 30ppm is also used, rather than a more conservative number, because the instrument in this application has an initial burn-in as well as an annual calibration.
Applying the same assumptions used in Design A, Design B's reference output-current variation is 140µA (coincidentally, the same number as in Design A). In this case, the MAX6225 data sheet specifies the load regulation in ppm/mA. To use the spreadsheet, convert this value to mV/mA, which leads to the following load-regulation error calculation:

Load regulation = 6ppm/mA × 2.5V/1000 = 0.015mV/mA
Load-regulation error = 10⁶ × (140µA × 0.015mV/mA)/2.5V = 0.8ppm (max)
The power supply is specified as constant in this application, so the line regulation is assumed to be 0ppm. The precise bounds are not defined, but this does not matter because calibration will remove any errors. Note that the error would be < 1ppm even if the power supply were not constant, as long as it remains within the specified 4.95V to 5.05V range, because the MAX6225 line-regulation specification is 7ppm/V max. Hence, zero is entered into the spreadsheet.
Because the bandwidth for Design B is specified as DC to 1kHz, one must consider both the 1.5µVP-P low-frequency (1/f) noise and the 2.8µVRMS broadband noise, specified from 0.1Hz to 10Hz and 10Hz to 1kHz, respectively. Using the same crude RMS-to-peak approximation as Design A, and adding the two peak-noise terms together, the total noise estimate is 2ppm at the reference output ([0.75µV + 2.8µVRMS × √2]/2.5V × 10⁶). Again, to put the values into the spreadsheet, convert to ppm. Notice that this is the same value that one would obtain if it were calculated at the DAC output, because the equation would be multiplied by 1.638/1.638 to rescale everything to 4.096V. It is worth mentioning that the peak-noise-sum method used here is fairly conservative, yet the total error contribution is still relatively small. An RSS approach is probably more accurate, because the two noise sources are most likely uncorrelated; still, this smaller value would be even more in the noise compared to the peak-value approach.
All that remains for the Design B analysis is to include the DAC error terms. The INL for the A-grade MAX5170 DAC is specified as ±1 LSB, which is 61ppm and exactly half of the 122ppm error budget of ±2 LSB at 14 bits. The DAC gain error is specified as ±8 LSB worst case, but this error is removed completely by the gain calibration mentioned earlier. As with the reference, one can set the gain error to zero in the spreadsheet. The calibration works as follows: the DAC is set to a digital code where the ideal output voltage is known (for example, decimal DAC code 16380 should produce precisely 4.095V at the output). The reference voltage is then trimmed until the DAC output voltage is at this exact value, even if the reference voltage itself is not 2.500V. The MAX5170 DAC does not list a gain tempco, although the gain error is specified over the operating-temperature range. Because the gain error is calibrated out at only one temperature, Design B should be tested to ensure that the gain does not drift excessively over temperature. The final consideration is the MAX5170 DAC output noise, whose typical peak value is roughly estimated as 1ppm ([10⁶ × √(1000Hz × π/2) × 80nV/√Hz × √2]/4.096V).
Ultimately, the final worst-case accuracy is 184ppm (~±3 LSB at 14 bits), which does not quite meet the accuracy target of 122ppm. In contrast, the RSS accuracy is acceptable at 100ppm. Based on these numbers, the design can be considered a success: it has illustrated the important points and is close to the target accuracy with several conservative assumptions. In a real-world application, this design could be accepted as is, or the accuracy requirements could be relaxed slightly. Alternatively, a more expensive reference could be used if this design were not acceptable.
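The noise bookkeeping used for Design B (half the peak-to-peak 1/f term plus √2 times the rms broadband term, plus the spectral-density estimate for the DAC output noise) can be sketched as follows; the formulas mirror the ones quoted above, while the function names are my own:

```python
import math

def reference_peak_noise_ppm(lf_uv_pp, broadband_uvrms, vref_v):
    """Crude peak estimate: half the 1/f peak-to-peak noise plus sqrt(2) x rms
    broadband noise, referred to the reference voltage in ppm."""
    peak_uv = lf_uv_pp / 2 + broadband_uvrms * math.sqrt(2)
    return 1e6 * peak_uv * 1e-6 / vref_v

def density_peak_noise_ppm(nv_per_rthz, bandwidth_hz, full_scale_v):
    """Peak estimate from a flat noise density with a brick-wall-corrected bandwidth."""
    rms_v = nv_per_rthz * 1e-9 * math.sqrt(bandwidth_hz * math.pi / 2)
    return 1e6 * rms_v * math.sqrt(2) / full_scale_v

# Design B reference (MAX6225): 1.5 uVp-p (0.1 Hz to 10 Hz), 2.8 uVrms (10 Hz to 1 kHz)
print(round(reference_peak_noise_ppm(1.5, 2.8, 2.5)))        # ~2 ppm
# Design B DAC output noise: ~80 nV/sqrt(Hz) over a 1 kHz bandwidth, 4.096 V full scale
print(round(density_peak_noise_ppm(80, 1000, 4.096), 1))     # ~1.1 ppm
```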
Design C. One-Time Calibrated, Low Drift

The initial error of the A-grade MAX6162 is 0.1%, which consumes the entire Design C error budget of 977ppm. However, as in Design B, this error is at least partially calibrated out. Note that the uncalibrated +4.096V MAX5154 DAC full-scale output voltage exceeds the required +4.000V output range, and that the DAC has 1mV resolution even though only ±4mV of accuracy is required. Therefore, it is possible to perform a digital calibration on the DAC input codes to remove some of the reference's initial error and the DAC's gain error.
The digital gain calibration is best demonstrated with an example. Assume that the DAC output voltage needs to be at the full-scale value of 4.000V, but the ideal decimal DAC code of 4000 results in a measured output of only 3.997V due to various errors in the system. Using digital calibration, a correction value is added to the DAC code to produce the desired result. In this example, when a DAC output voltage of 4.000V is required, a corrected DAC code of 4003 is used instead of 4000. This gain calibration is scaled linearly across the DAC codes, so it has little effect at the lower codes and more impact on the upper codes.
The digital gain-calibration accuracy is limited by the 12-bit resolution of the DAC, so the best one can expect is ~±1mV, or 244ppm (10⁶ × 1mV/4.096V), of error after the calibration has been applied. Note that the accuracy is calculated on a 4.096V scale in this example to maintain consistency. It could be calculated relative to the +4.000V output range if required by the application; the error would be slightly higher.
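A minimal sketch of the digital gain correction described above (the linear rescaling follows the example in the text; the function name and rounding choice are assumptions):

```python
def corrected_code(requested_code, measured_fs_v, ideal_fs_v, max_code=4095):
    """Linearly rescale the requested code so the measured full-scale output lands
    on the ideal value; the correction shrinks toward zero at low codes."""
    code = round(requested_code * ideal_fs_v / measured_fs_v)
    return min(max_code, code)

# Example from the text: code 4000 measures 3.997 V instead of 4.000 V.
print(corrected_code(4000, measured_fs_v=3.997, ideal_fs_v=4.000))  # 4003
print(corrected_code(100, measured_fs_v=3.997, ideal_fs_v=4.000))   # 100 (little effect at low codes)
```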
If the required output range in this example were 4.096V, there are other options that could be used to always bias the uncalibrated DAC output voltage above 4.096V. In this manner, the digital gain-calibration scheme described in this example could be employed. Such options include the following:

- Use an adjustable reference whose output is always above 4.096V when all circuit tolerances are considered.
- Use a force/sense DAC with the gain set slightly higher than necessary.
- Add an output buffer with gain.

The MAX6162 reference tempco error is calculated as 625ppm (125°C × 5ppm/°C), and the typical temperature-hysteresis value of 125ppm is used directly. The long-term-stability specification is doubled to a more conservative 160ppm, because no burn-in is specified for the application and the reference is never calibrated once it leaves the factory.
Design C's worst-case reference output-current variation is found to be 293µA (2.048V/[14kΩ||14kΩ]; remember that there are two DAC reference inputs driven by the reference), which is used directly in the load-regulation calculation:

Load-regulation error = 293µA × 0.9mV/mA = 264µV (max)
= 10⁶ × 264µV/2.048V = 129ppm (max)
Because the reference-load-regulation error is proportional to the reference output voltage, it can be calculated at either the voltage reference (264µV/2.048V) or the DAC output ((2 × 264µV)/(2 × 2.048V)).
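The 293µA figure comes from the two reference inputs appearing in parallel; a quick check of that arithmetic, assuming the 0.9mV/mA load regulation used above:

```python
# Design C: one reference drives both MAX5154 reference inputs (~14 kOhm each).
r_parallel_ohm = 1 / (1 / 14e3 + 1 / 14e3)   # 7 kOhm effective load
delta_i_a = 2.048 / r_parallel_ohm           # ~293 uA worst-case load swing
dv_v = delta_i_a * 0.9                       # 0.9 mV/mA == 0.9 V/A -> ~264 uV
print(round(delta_i_a * 1e6))                # 293 (uA)
print(round(1e6 * dv_v / 2.048))             # ~129 ppm
```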
The power supply is constant in this application, so the line regulation is assumed to be 0ppm. With the bandwidth for Design C specified as 0.1Hz to 10Hz, half of the 22µVP-P low-frequency (1/f) noise specification (the peak value) is used to arrive at a noise contribution of 5ppm at the reference output (10⁶ × (22µV/2)/2.048V). As mentioned previously, the same 5ppm answer is obtained if the calculation is referred to the DAC output, because the equation is just multiplied by 2.0/2.0.
Moving to the MAX5154 DAC error terms, the A-grade INL is ±0.5 LSB, which is 122ppm on the 12-bit scale. The DAC gain error is ±3 LSB (244ppm), but it is ignored because it was already accommodated in the digital reference/DAC gain calibration mentioned earlier in this step; it should not be counted twice. The MAX5154 gain-error tempco has a typical value of 4ppm/°C, which gives a total of 500ppm (125°C × 4ppm/°C). The DAC output noise is not specified for the MAX5154, so it is ignored. Recognize that this could present a problem, but experience with Design B indicates that DAC noise is usually a relatively small contributor to the total error. Measurements can be performed to confirm this assumption.
The worst-case error for Design C is calculated as 1980ppm, and the RSS error is 861ppm. With a target-error specification of 977ppm, the current design is marginally acceptable at best, especially given that some typical values were used and the DAC output noise was not considered. Some options for improvement follow:

- Use the MAX6191 instead of the MAX6162. The MAX6191 has better load regulation (0.55µV/µA vs. 0.9mV/mA), temperature hysteresis (75ppm vs. 125ppm), and long-term stability (50ppm vs. 115ppm). The end result would be a 1750ppm worst-case error and an 823ppm RSS error, a net change of 230ppm and 38ppm, respectively. This is a slight improvement, but may not be enough.
- Reexamine the overall system-accuracy specifications to determine if any parameters can be relaxed. The existing design could be the best choice in terms of accuracy vs. cost.
- Reduce the temperature range if the entire extended range is not needed. For example, if the range can be reduced from -40°C to +85°C down to -10°C to +75°C, the worst-case error drops to 1505ppm and the RSS error becomes 648ppm. This happens because much of the error budget is consumed by the reference tempco (625ppm) and the DAC's gain-error tempco (500ppm). Although only one of these error totals (the RSS value) falls below the 977ppm target, the comfort level increases considerably compared to the original MAX5154/MAX6162 design.
- If an 8V or greater supply is available, consider the MAX6241 4.096V reference and the MAX5156 DAC (the force/sense version of the MAX5154) set to unity gain. This combination is slightly more expensive, but produces an approximate worst-case error of 956ppm and an RSS error of 576ppm, both of which are under the 977ppm total-error target.
- Consider other DACs that have typical gain tempcos as low as 1ppm/°C.

Design D. Low Voltage, Battery Powered, Moderate Accuracy

No calibration or trimming is planned for Design D, so the A-grade MAX6190 initial error of 1600ppm (10⁶ × 2mV/1.25V) is used directly in the error budget, along with 150ppm (30°C × 5ppm/°C) for the tempco error. The 75ppm temperature hysteresis is also used directly; the risk of using this typical specification is at least partially offset by the reduced operating-temperature range (+15°C to +45°C). Once again, the 1000-hour long-term stability is doubled to 100ppm as a conservative estimate of the drift, as there is no burn-in in this application.
The load-regulation error is again calculated from the assumed worst-case MAX5176 DAC reference-input current of 69µA:

Load-regulation error = 69µA × 0.5µV/µA = 34.5µV (max)
= 10⁶ × 34.5µV/1.25V = 28ppm (max)
The power supply varies between 2.7V and 3.6V in this design, so the MAX6190 line-regulation specification of 80µV/V (max) must be included in the analysis:

Line-regulation error = (3.6V - 2.7V) × 80µV/V = 72µV (max)
= 10⁶ × 72µV/1.25V = 58ppm (max)
As with Design C, the bandwidth for Design D is specified as 0.1Hz to 10Hz, so half of the 25µVP-P low-frequency (1/f) noise specification is used to arrive at a peak noise contribution of 10ppm at the reference output (10⁶ × 12.5µV/1.25V). The same 10ppm reference-induced noise term is expected at the DAC output, because the reference voltage and noise see the same DAC gain.
Focusing now on the MAX5176 DAC error terms, the A-grade INL is ±2 LSB, which is 488ppm on the 12-bit scale. The DAC worst-case gain error of ±8 LSB with a 5kΩ load translates to 1953ppm at 12 bits. Like the MAX5170 in Design B, the MAX5176 does not specify a gain-error tempco. This is not a concern in Design D for two important reasons: it is not a low-drift design calibrated at one temperature, and the maximum DAC gain error is specified over the entire operating-temperature range. The final consideration is the MAX5176 DAC output noise, whose estimated typical peak value is negligible ([10⁶ × √(10Hz × π/2) × 80nV/√Hz × √2]/2.048V ≈ 0.22ppm).
As with Designs B and C, the worst-case error of 4462ppm exceeds the 3906ppm target error, while the 2580ppm RSS error is well below the target. Based on these numbers, Design D is considered successful, because it comfortably meets the requirements from an RSS standpoint and has demonstrated the important design concepts. If further improvement is desired, alternative DACs should be considered first, because the MAX6190 is the best low-quiescent-current (35µA) voltage reference available with an output below 1.3V (the limit imposed by the DAC's VDD - 1.4V reference-input restriction).
DAC Voltage-Reference Design Summary

This article has demonstrated a design procedure for DAC voltage-reference selection involving the three steps:
Step 1. Voltage ranges and reference-voltage determination. The power-supply voltage and the DAC output-voltage range were used to determine viable reference-voltage and DAC gain options.
Step 2. Initial voltage-reference device-selection criteria. Candidate voltage references were considered. Design focus was on reference voltage (determined in Step 1), initial accuracy, tempco, and reference output current. From these candidates, an initial device was selected.
Step 3. Final specification review and error-budget analysis. The remaining specifications of the selected voltage reference and DAC were reviewed against the design requirements. To meet the design goals, iteration between Steps 2 and 3 may be required. When following the design procedure described above, it is convenient to do the error analysis in ppm and to understand how it relates to other system-accuracy and error measures (Table 2).
Table 2. Error analysis in ppm relative to other standard DAC system specifications

Accuracy (±1 LSB, bits) | ±1 LSB error (ppm) | ±1 LSB error (%) | Error in 16-bit LSBs | Error in 14-bit LSBs | Error in 12-bit LSBs | Error in 10-bit LSBs | Error in 8-bit LSBs | Error in 6-bit LSBs
16 | 15.25878906 | 0.001525879 | 1 | 0.25 | < 0.25 | < 0.25 | < 0.25 | < 0.25
15 | 30.51757813 | 0.003051758 | 2 | 0.5 | < 0.25 | < 0.25 | < 0.25 | < 0.25
14 | 61.03515625 | 0.006103516 | 4 | 1 | 0.25 | < 0.25 | < 0.25 | < 0.25
13 | 122.0703125 | 0.012207031 | 8 | 2 | 0.5 | < 0.25 | < 0.25 | < 0.25
12 | 244.140625 | 0.024414063 | 16 | 4 | 1 | 0.25 | < 0.25 | < 0.25
11 | 488.28125 | 0.048828125 | 32 | 8 | 2 | 0.5 | < 0.25 | < 0.25
10 | 976.5625 | 0.09765625 | 64 | 16 | 4 | 1 | 0.25 | < 0.25
9 | 1953.125 | 0.1953125 | 128 | 32 | 8 | 2 | 0.5 | < 0.25
8 | 3906.25 | 0.390625 | 256 | 64 | 16 | 4 | 1 | 0.25
7 | 7812.5 | 0.78125 | 512 | 128 | 32 | 8 | 2 | 0.5
6 | 15625 | 1.5625 | 1024 | 256 | 64 | 16 | 4 | 1
5 | 31250 | 3.125 | 2048 | 512 | 128 | 32 | 8 | 2
4 | 62500 | 6.25 | 4096 | 1024 | 256 | 64 | 16 | 4
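The substance of Table 2 can be regenerated with a few lines of arithmetic; a minimal sketch (the printed table's "< 0.25" entries simply appear here as small fractional values):

```python
# Regenerate Table 2: the +/-1 LSB error at N-bit accuracy, in ppm and percent,
# re-expressed in LSBs at several common DAC resolutions.
RESOLUTIONS = (16, 14, 12, 10, 8, 6)

for bits in range(16, 3, -1):
    err_ppm = 1e6 / 2 ** bits
    err_pct = 100 / 2 ** bits
    in_lsbs = " ".join(f"{2 ** m / 2 ** bits:>8g}" for m in RESOLUTIONS)
    print(f"{bits:2d}  {err_ppm:12.8f} ppm  {err_pct:11.9f} %  {in_lsbs}")
```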
