Is the Leveling Difficulty Multiplier an Exponential Function?
-
- Posts: 11
- Joined: Tue Sep 29, 2020 12:17 pm
Is the Leveling Difficulty Multiplier an Exponential Function?
When defining a character class in Daggerfall Unity, the current Leveling Difficulty Multiplier is shown by the position of a dagger icon on a gauge marked with three settings: 1x, 3x, and 0.3x.
Let us assume that the setting 0.3 actually represents the value 1/3 carried out to just one decimal place. What this then strongly suggests to me is that the Leveling Difficulty Multiplier or LDM is an exponential function of the dagger's position. More exactly, if we call 'd' the distance (or displacement) of the dagger from its starting position at 1x, we would conjecture that LDM is defined by the function:
LDM(d) = 3^d
The expression on the right side of the equation means: 3 raised to the power 'd'. (Superscripts seem to be unavailable on this forum.) We will define one unit of distance as the distance from the 1x to the 3x position of the dagger. Expressed in these units, 'd' will be dimensionless -- as it must be.
We now have:
LDM(1) = 3^1 = 3
LDM(-1) = 3^-1 = 1/3
LDM(0) = 3^0 = 1
each of which matches the game's stated value.
Of what use is this? In the various Let's Plays that I have seen and in my own experience in defining half a dozen or more characters, the value of 'd' ends up somewhere between 0 and 1. We might compare the values taken by the exponential function in this critical region with those taken by a simple linear interpolation, namely, LDM(d) = 2d + 1.
At d=1/2, we get
LDM(1/2) = 2.0 for the linear interpolation, versus
LDM(1/2) = 3^(1/2) = 1.73, that is, the square root of 3.
Also, at d=1/4:
LDM(1/4) = 1.5 for the linear function, versus
LDM(1/4) = 3^(1/4) = 1.32, which is the fourth root of 3 (the square root of the square root).
One can see that in the region from 0 to 1 the exponential function rises significantly less rapidly than does the straight line interpolation. I think this is worth knowing.
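For anyone who wants to check the arithmetic, the comparison can be tabulated in a few lines of Python (an illustration only, not game code):

```python
# Compare the conjectured exponential LDM(d) = 3**d with the
# linear interpolation LDM(d) = 2*d + 1 over the region of interest.
for d in [0.0, 0.25, 0.5, 1.0]:
    expo = 3 ** d
    lin = 2 * d + 1
    print(f"d = {d:5.2f}   3^d = {expo:.3f}   2d+1 = {lin:.3f}")
```

The two functions agree at d = 0 and d = 1, and the exponential stays strictly below the straight line everywhere in between.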
Incidentally, LDM cannot overall be modeled as a linear function of 'd' because, whatever parameters are chosen for it, the function will eventually take the impossible value of zero for sufficiently negative 'd'. The exponential function, on the other hand, is strictly positive everywhere.
"But I thought exponential increase was faster than anything!"
Well, that becomes true in just a bit. For example, compare LDM(2) = 3^2 = 9 with the linear extrapolated value of 5. (A nice graph would clearly show the relationships here.)
One more thing: it is possible to reach d = -2 in the game by taking only disadvantages for the character. If the exponential formula were actually being used, the LDM at this point would be LDM(-2) = 3^-2 = 1/9. It would be interesting to see how the game would play in this case.
Last edited by ragray on Thu Oct 22, 2020 11:30 am, edited 1 time in total.
- Interkarma
- Posts: 7247
- Joined: Sun Mar 22, 2015 1:51 am
Re: Is the Leveling Difficulty Multiplier an Exponential Function?
Hey welcome to the forums. Here's the short version of how the difficulty multiplier works.
Firstly, every advantage and disadvantage added in the custom class creator contributes some points (positive or negative) to the overall difficulty.
These values are summed as difficultyPoints and held in the custom class career data as AdvancementMultiplier by CreateCharCustomClass.UpdateDifficulty(). Stock classes have a predefined AdvancementMultiplier. Here's the code to calculate AdvancementMultiplier based on total difficultyPoints.
Code: Select all
createdClass.AdvancementMultiplier = 0.3f + (2.7f * (float)(difficultyPoints + 12) / 52f);
Then when checking for skill uses needed to increase a skill, your class AdvancementMultiplier is used by FormulaHelper.CalculateSkillUsesForAdvancement() (along with your character level and other things). The code is below.
Code: Select all
double levelMod = Math.Pow(1.04, level);
return (int)Math.Floor((skillValue * skillAdvancementMultiplier * careerAdvancementMultiplier * levelMod * 2 / 5) + 1);
The dagger Y position in the custom class creator is set using the below code. It's based on the fixed UI dimensions from classic.
Code: Select all
// Reposition the difficulty dagger
int daggerY = 0;
if (difficultyPoints >= 0)
    daggerY = Math.Max(minDaggerY, (int)(defaultDaggerY - (37 * (difficultyPoints / 40f))));
else
    daggerY = Math.Min(maxDaggerY, (int)(defaultDaggerY + (41 * (-difficultyPoints / 12f))));
The UI itself is just flavour. The dagger position is set by the number of pixels either side of the zero point relative to the maximum/minimum difficulty score. Basically -1 through 1 with 0 being right in the middle.
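To get a feel for the numbers, here's a rough Python transcription of the skill-use formula above (the function name and the sample input values are mine, purely for illustration):

```python
import math

def skill_uses_for_advancement(skill_value, skill_adv_mult, career_adv_mult, level):
    # Python version of the C# snippet quoted above from
    # FormulaHelper.CalculateSkillUsesForAdvancement().
    level_mod = 1.04 ** level
    return math.floor(skill_value * skill_adv_mult * career_adv_mult * level_mod * 2 / 5 + 1)

# Hypothetical example: a skill at 30, skill multiplier 1.0, character level 5,
# comparing career multipliers at the 0.3x, 1x and 3x marks on the gauge.
for career_mult in (0.3, 1.0, 3.0):
    print(career_mult, skill_uses_for_advancement(30, 1.0, career_mult, 5))
```

So a 3x-difficulty class needs roughly ten times as many skill uses per increase as a 0.3x class, all else being equal.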
-
- Posts: 11
- Joined: Tue Sep 29, 2020 12:17 pm
Re: Is the Leveling Difficulty Multiplier an Exponential Function?
Thank you for responding so quickly. It is good to know more about the internals of Daggerfall, as I have many questions about how things work exactly.
The point of my post was simply to try to find a functional connection between the dagger position and the difficulty points. This may seem backward, but it is suggested by the markings on the UI control. I will need a little more time to analyze the code you have provided, but it is not obvious to me that it wholly invalidates what I have said.
I am curious how much of the original Daggerfall code is available to you, as I had heard that much if not all of it was lost. It would be very exciting to reconstruct the program using modern programming practices. I have experience with that sort of thing.
- pango
- Posts: 3358
- Joined: Wed Jul 18, 2018 6:14 pm
- Location: France
- Contact:
Re: Is the Leveling Difficulty Multiplier an Exponential Function?
Hi ragray,
From a look at the code above, it seems the advancement multiplier, which is a factor in how many skill checks you need to improve skills (later used for leveling up), depends linearly on difficultyPoints, while the dagger position depends only piecewise linearly on difficultyPoints, with a different slope above and below the origin.
So if I'm not mistaken, it's not exponential but piecewise linear.
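If it's useful, here's a quick Python transcription of the two formulas quoted above that makes the distinction visible (the function names are mine, not DFU's):

```python
# Transcriptions of the two C# formulas quoted earlier (not the actual game code).
def advancement_multiplier(points):
    return 0.3 + 2.7 * (points + 12) / 52

def dagger_offset(points):
    # Pixel offset of the dagger from its default position, before clamping.
    if points >= 0:
        return 37 * (points / 40)
    return -41 * (-points / 12)

# The multiplier's increment per point is the same everywhere...
diffs = [advancement_multiplier(p + 1) - advancement_multiplier(p) for p in range(-12, 40)]
print(max(diffs) - min(diffs))  # ~0: constant slope, i.e. linear

# ...while the dagger's slope differs on each side of zero.
print(dagger_offset(40) / 40, dagger_offset(-12) / -12)  # 0.925 vs 3.4166...
```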
Source code has never been available. Daggerfall Unity mechanics are based on documentation (including extended documentation like The Daggerfall Chronicles), reverse-engineering (disassembly) and observation.
Well, Daggerfall Unity is a rewrite from scratch, so as it is getting closer to the end of its alpha stage, you may be a bit late to the party...
Mastodon: @pango@fosstodon.org
When a measure becomes a target, it ceases to be a good measure.
-- Charles Goodhart
-
- Posts: 11
- Joined: Tue Sep 29, 2020 12:17 pm
Re: Some Further Analysis
It is a mistake to give me code.
I have spent the day analyzing what was presented last night. Specifically, I have examined the code that calculates the position of the dagger cursor as a function of difficultyPoints, and also the code that sets the value of createdClass.AdvancementMultiplier. The reconstruction below was done by inference from that code, with the aim of keeping any new text functionally equivalent to the original.
I need a few things at the start.
Code: Select all
struct range {
    int min, max, initial;  // 'default' is a reserved word, hence 'initial'
};
const range daggerYRange = {
    minDaggerY,
    maxDaggerY,
    defaultDaggerY
};
The identifiers in this initializer should be replaced with their literal values, and any other instances of them in the program should be replaced with references to the elements of daggerYRange.
Code: Select all
// Just a standard utility function.
int Clamp( range r, int value ) {
    int v = value;
    if (v > r.max)
        v = r.max;
    else if (v < r.min)
        v = r.min;
    return v;
}
I had some trouble figuring out whether advantages are assigned positive or negative values by DFU. However, once I saw that createdClass.AdvancementMultiplier increases as difficultyPoints increases, I knew that advantages had been given positive values. Accordingly, I will rename difficultyPoints to net_advantage, hopefully clarifying its meaning a little. ("Points" only tells me it's an integer.)
Code: Select all
// Calculate the new dagger position.
int DaggerY( int advantage ) {
    float slope = (advantage >= 0) ? 37f/40f : 41f/12f;
    float y_displacement = slope*advantage;  // the dagger's y displacement from 1x
    int ypos = daggerYRange.initial - (int) y_displacement;  // screen coordinate
    // (It appears that screen coordinate y increases from top to bottom of the screen.)
    return Clamp( daggerYRange, ypos );
}

daggerY = DaggerY( net_advantage );
A graph of y_displacement as a function of net_advantage would show two linear segments, each with its own slope. One segment handles negative values of net_advantage and the other handles its positive values. The graph of the function is continuous, but bends at (0,0).
My experiments with Daggerfall Unity confirm that net_advantage values of -12 and 40 exactly position the dagger cursor at 0.3x and 3x, respectively. When we evaluate y_displacement, we find
for net_advantage = -12, y_displacement = (41/12f)*(-12) = -41;
for net_advantage = 40, y_displacement = (37/40f)*40 = 37;
for net_advantage = 0, y_displacement = 0.
Clearly, the labels at 0.3x and 3x have not been placed symmetrically around 1x. They should be positioned to have y_displacements of 40 and -40 from 1x. Naturally, doing this will change the definition of slope.
Code: Select all
float slope = (advantage >= 0) ? 1f : 40f/12f;
The distance from 1x to 3x is 40f. This is the distance unit we discussed in the original article. The dimensionless displacement variable also discussed there can now be defined.
Code: Select all
float d = y_displacement/40f;  // or, equivalently,
float d = (advantage >= 0) ? advantage/40f : advantage/12f;
We come now to createdClass.AdvancementMultiplier, which would be more aptly named createdClass.LevelingDifficultyMultiplier.
Code: Select all
// Compute the leveling difficulty multiplier.
float LDM( int advantage ) {
    const float slope = 2.7f/52f;
    return 0.3f + slope*(advantage + 12);
}

createdClass.LevelingDifficultyMultiplier = LDM( net_advantage );
There are two problems here. The first is this:
LDM(-12) = 0.3f;
LDM( 40) = 0.3f + (2.7f/52f)*52 = 3.0f;
LDM( 0 ) = 0.3f + (2.7f/52f)*12 = 0.923;
net_advantage = 0 is labelled as 1x on the LDM gauge, but the value calculated here is plainly not 1.0. The reason is that this formula describes the straight line connecting the points (-12, 0.3) and (40, 3.0), which fails to pass through (0, 1)!
The second problem is that LDM(net_advantage) turns negative when net_advantage < -3*52/27 - 12, which happens at about -17.78. The program would probably fail if anyone ever exited the class definition with net_advantage = -18.
Going back to the first problem, we could try returning to the two-segment approach by using
Code: Select all
float LDM( int advantage ) {
    float slope = (advantage >= 0) ? 2.0f/40f : 0.7f/12f;
    return 1.0f + slope*advantage;
}
But again, this works only until LDM turns negative, which now occurs at net_advantage <= -1*120/7, around -17.14. So once more the program will probably crash if net_advantage = -18 when play actually starts.
The best solution is still the one I originally proposed.
Code: Select all
float LDM( int advantage ) {
    float d = (advantage >= 0) ? advantage/40f : advantage/12f;
    return (float) Math.Pow( 3, d );  // note: '^' is XOR in C#, not exponentiation
}
This version of LDM will never ever reach zero.
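These claims are easy to sanity-check numerically. Here is a small Python sketch mirroring the two versions of LDM discussed in this post (function names are mine):

```python
def ldm_linear(advantage):
    # The single-line formula from the game code, as quoted earlier.
    return 0.3 + (2.7 / 52) * (advantage + 12)

def ldm_exponential(advantage):
    # The proposed replacement: 3**d, with d the dimensionless displacement.
    d = advantage / 40 if advantage >= 0 else advantage / 12
    return 3 ** d

print(round(ldm_linear(0), 3))   # 0.923, not the 1.0 shown on the gauge
print(ldm_linear(-18) < 0)       # True: the linear formula goes negative
print(ldm_exponential(-18) > 0)  # True: 3**d is positive for every d
```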
- pango
- Posts: 3358
- Joined: Wed Jul 18, 2018 6:14 pm
- Location: France
- Contact:
Re: Is the Leveling Difficulty Multiplier an Exponential Function?
You cannot advance to the next screen if the dagger is in one of the red zones; that's probably what already prevents the pathological case
(like this, before the dagger was constrained)
Mastodon: @pango@fosstodon.org
When a measure becomes a target, it ceases to be a good measure.
-- Charles Goodhart
-
- Posts: 11
- Joined: Tue Sep 29, 2020 12:17 pm
Re: The LDM function can be simplified.
I am going to wrap up this thread by pointing out one way that the LevelingDifficultyMultiplier (also known as createdClass.AdvancementMultiplier) function can be simplified.
Last time I left off here:
Code: Select all
float LDM( int advantage ) {
    float slope = (advantage >= 0) ? 2.0f/40f : 0.7f/12f;
    return 1.0f + slope*advantage;
}
What dissatisfied me was that this function is not "analytical". It is unlikely to have arisen as part of a mathematically reasonable model of any kind. Having a simple linear function for LDM that hits the three marks of 0.3x, 1x, and 3x would be easier to accept. It then occurred to me that it might be possible to "straighten out" this bent-line graph by rescaling some of the values associated with player advantages and disadvantages. This would alter the original LDM values, which to some might seem unacceptable, but it would do so in an unobtrusive way.
Since I would rather not work with literal values, I will introduce these symbolic constants.
const float ka = 6f/120f;  // = 2.0/40 = 0.05
const float kd = 7f/120f;  // = 0.7/12 = ka + 1/120
Notice that the difference between the slopes ka and kd is just 1/120 (about 0.00833).
The value of LDM can now be expressed in a single statement:
float ldm = 1f + ((net_advantage >= 0) ? ka*net_advantage : kd*net_advantage);
I now need two more variables.
int tot_advantage;    // sum of the values of all advantages taken by the player
int tot_disadvantage; // sum of the values of all disadvantages taken by the player
Since net_advantage is defined as the algebraic sum of all advantage and disadvantage values, it must be that
net_advantage = tot_advantage + tot_disadvantage;
The next part is crucial. I will rescale the values that the game currently uses for player disadvantages by a factor of 7/6, making disadvantages one-sixth again more powerful than they were before. Keep in mind that these values are really pretty arbitrary to begin with and that they appear only internally. Most players will never know what they are.
Use the rescaled values to calculate tot_disadvantage':
float tot_disadvantage' = (7f/6)*tot_disadvantage;
float net_advantage' = tot_advantage + tot_disadvantage';
I now propose to redefine LDM as a simple linear function.
float new_ldm = 1f + (ka*tot_advantage + kd*tot_disadvantage);
Factoring out ka and using kd/ka = 7f/6, we get
new_ldm = 1f + ka*(tot_advantage + tot_disadvantage');
new_ldm = 1f + ka*net_advantage';
How much difference is there between the two LDMs? Again,
new_ldm = 1f + ka*tot_advantage + kd*tot_disadvantage;
if (net_advantage >= 0) {
    old_ldm = 1f + ka*(tot_advantage + tot_disadvantage);
    new_ldm = old_ldm + (kd-ka)*tot_disadvantage;
}
else {
    old_ldm = 1f + kd*(tot_advantage + tot_disadvantage);
    new_ldm = old_ldm + (ka-kd)*tot_advantage;
}
Stated more compactly, using kd - ka = 1f/120:
new_ldm = old_ldm + (1f/120)*((net_advantage >= 0) ? tot_disadvantage : -tot_advantage);
new_ldm = old_ldm - (1f/120)*((tot_advantage >= -tot_disadvantage) ? -tot_disadvantage : tot_advantage);
new_ldm = old_ldm - (1f/120)*Min( tot_advantage, -tot_disadvantage );
Notice that
new_ldm < old_ldm
holds strictly unless one of the totals is 0, in which case the LDM does not change.
Example:
Taking advantages: 3x magery, spell absorption (gen), immunity to x,
and disadvantages: critical weakness to x, darkness-powered magic (lower), forbidden material: silver.
tot_advantage = 10+14+10 = 34;
tot_disadvantage = -14-7-6 = -27;
tot_disadvantage' = -27*7/6 = -63/2 = -31.5;
net_advantage = 34-27 = 7;
net_advantage' = 34-31.5 = 2.5;
old_ldm = 1f + ka*7 = 1.35;
new_ldm = 1f + ka*34 - kd*27 = 1f + 1.7 - 1.575 = 1.125;
or
new_ldm = 1f + ka*net_advantage' = 1f + ka*2.5 = 1f + 0.125 = 1.125;
We see that the formula checks:
new_ldm = old_ldm - Min(34,27)/120f = 1.35 - 27f/120 = 1.35 - 0.225 = 1.125;
Also, these solutions are as they should be:
new_ldm = 3.0 = 1f + ka*net_advantage'  gives  net_advantage' = 40;
new_ldm = 1.0 = 1f + ka*net_advantage'  gives  net_advantage' = 0;
new_ldm = 0.3 = 1f + ka*net_advantage'  gives  net_advantage' = -0.7*20f = -14;
and behold: -14 is the rescaled value of -12!
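As a last sanity check, the rescaling argument can be verified numerically with a short Python sketch (using the example's point values; the function names are mine):

```python
ka = 6 / 120   # = 2.0/40 = 0.05
kd = 7 / 120   # = 0.7/12

def old_ldm(tot_adv, tot_dis):
    # The two-segment version: slope depends on the sign of the net total.
    net = tot_adv + tot_dis
    return 1 + (ka if net >= 0 else kd) * net

def new_ldm(tot_adv, tot_dis):
    # The proposed single linear function of the separate totals.
    return 1 + ka * tot_adv + kd * tot_dis

# Worked example from the post: advantages worth 34, disadvantages worth -27.
print(round(old_ldm(34, -27), 3))  # 1.35
print(round(new_ldm(34, -27), 3))  # 1.125

# The compact identity: new_ldm = old_ldm - min(tot_adv, -tot_dis)/120.
print(round(old_ldm(34, -27) - min(34, 27) / 120, 3))  # 1.125
```

The three gauge marks also come out right: new_ldm(40, 0) = 3.0, new_ldm(0, 0) = 1.0, and new_ldm(0, -12) = 0.3.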