Quantitative vs Qualitative



Veritas Aequitas
Posts: 13016
Joined: Wed Jul 11, 2012 4:41 am

Quantitative vs Qualitative

Post by Veritas Aequitas »

I noted that when I emphasized the quantitative over the qualitative in rating human-based FSKs in terms of credibility and objectivity, I encountered the usual flock of birds of a feather condemning and mocking the quantitative approach;
e.g.
Post: viewtopic.php?p=674937#p674937

Various posts in
Criteria in Rating Credibility & Objectivity of a FSK

I believe quantification [avoiding any analysis paralysis] is critical for any progress.

Here are some notes from ChatGPT [with reservations] on why quantification is critical for progress;
Arguing for the importance of quantification in subjective matters can be challenging, as some people may prioritize qualitative aspects and argue that certain experiences or values are difficult to capture with numbers.
However, there are several ways you can make a case for the critical role of quantification in progress:

Clarity and Precision:
Argument: Quantification brings clarity and precision to discussions and decision-making processes.
Reference: "Quantitative Research Methods for Communication: A Hands-On Approach" by Jason S. Wrench, Candice Thomas-Maddox, and Virginia Peck Richmond.

Comparative Analysis:
Argument: Quantification enables comparative analysis and helps in identifying patterns and trends over time.
Reference: "The Signal and the Noise: Why So Many Predictions Fail – but Some Don't" by Nate Silver.

Resource Allocation:
Argument: Quantification facilitates efficient resource allocation by providing data-driven insights into where resources are most needed.
Reference: "Moneyball: The Art of Winning an Unfair Game" by Michael Lewis.

Objective Evaluation:
Argument: Quantification allows for more objective evaluation, reducing the impact of personal biases in decision-making.
Reference: "Thinking, Fast and Slow" by Daniel Kahneman.

Tracking Progress:
Argument: Quantification is essential for tracking progress and assessing the impact of interventions or changes.
Reference: "Measure What Matters: Online Tools for Understanding Customers, Social Media, Engagement, and Key Relationships" by Katie Delahaye Paine.

Communication:
Argument: Quantification provides a common language for communication, enabling effective collaboration and sharing of information.
Reference: "Naked Statistics: Stripping the Dread from the Data" by Charles Wheelan.

Policy Formulation:
Argument: Quantification is crucial in formulating evidence-based policies, leading to more effective and targeted interventions.
Reference: "The Science of Policy" by Roger D. Congleton.

Innovation and Problem Solving:
Argument: Quantification encourages innovation and facilitates problem-solving by identifying areas for improvement.
Reference: "How to Measure Anything: Finding the Value of 'Intangibles' in Business" by Douglas W. Hubbard.

When making your case, emphasize that quantification doesn't devalue subjective experiences; rather, it complements them by providing a structured and objective framework for analysis.
Balancing qualitative insights with quantitative data can lead to a more comprehensive understanding and, ultimately, more informed decision-making for progress.
Discuss?? Views??
Veritas Aequitas

Re: Quantitative vs Qualitative

Post by Veritas Aequitas »

Notes: KIV

Despite the challenges I am aware of, and those mentioned by ChatGPT, I am confident that what I proposed, i.e. comparing credibility and objectivity across FSKs, will work to some degree.

Btw, I am not trying to achieve absolute numbers but those that are merely relative to an agreed standard. As such, whatever the results, they must be qualified to the agreed standard and contexts.

ChatGPT did not state that it is impossible, only that such comparisons have their difficulties and challenges.
I asked ChatGPT how we can overcome those challenges; here is what it suggested;
ChatGpt wrote: The approach you describe is an attempt to quantify and compare the relative credibility and objectivity of different knowledge frameworks, specifically science and theology, based on various criteria.
While it's an interesting conceptual exercise, it's important to note that such an endeavor faces several challenges and potential limitations:

... your proposal of transparency and careful consideration of variables is a step in the right direction.
Here are some approaches that may enhance the feasibility and validity of such assessments:

Interdisciplinary Collaboration:
Engage experts from diverse disciplines, including philosophy, science, theology, and other relevant fields. Interdisciplinary collaboration can provide a more comprehensive understanding and reduce biases inherent in a single-discipline perspective.

Peer Review and Deliberation:
Subject the assessment process to peer review by experts with different perspectives. Open deliberation on the criteria and their weights can help refine the evaluation process and identify potential biases.

Structured Deliberative Processes:
Use structured deliberative processes, such as Delphi method or consensus conferences, to elicit and synthesize expert opinions. These methods involve iterative rounds of feedback and discussion to converge toward a more informed consensus.

Transparent Methodology:
Clearly articulate the methodology used, including the criteria, their weights, and the process of evaluation. Transparency helps in understanding how the assessment is conducted and allows for scrutiny by the wider community.

Contextual Sensitivity:
Recognize the contextual nature of knowledge frameworks. Different contexts may require different criteria for evaluation. Consider developing context-specific frameworks rather than a one-size-fits-all approach.

Incorporate Meta-Analysis:
If feasible, incorporate meta-analytic techniques to analyze and synthesize multiple assessments. This can provide a more robust and nuanced understanding by aggregating diverse perspectives.

Longitudinal Assessment:
Recognize that knowledge frameworks evolve over time. Consider conducting longitudinal assessments to capture changes and developments in the credibility and objectivity of different knowledge systems.

Public Engagement:
Involve the broader public in the assessment process, especially when evaluating knowledge frameworks that have societal implications. Public engagement can provide additional perspectives and enhance the legitimacy of the assessment.

Ethical Considerations:
Explicitly address ethical considerations in the assessment, including potential biases, conflicts of interest, and the ethical implications of evaluating certain knowledge frameworks.

Iterative Improvement:
Acknowledge that no assessment is perfect. Encourage an iterative approach, allowing for continuous improvement based on feedback and evolving understanding.

In summary, achieving an optimal approach involves a combination of interdisciplinary collaboration, transparent methodology, contextual sensitivity, and ongoing refinement.
While it may not be possible to eliminate all challenges, a carefully conducted and transparent assessment can enhance the credibility of comparisons between different knowledge frameworks.
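Mechanically, the "criteria, their weights" methodology ChatGPT describes above is a weighted average. A minimal sketch, with criteria names and scores invented purely for illustration (none of these numbers come from the thread):

```python
# Hypothetical sketch of a criteria-and-weights assessment.
# Criteria names, weights, and scores are all invented for illustration.

WEIGHTS = {"reproducibility": 0.4, "peer_review": 0.3, "predictive_power": 0.3}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-100) using the agreed weights."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Example framework rated against the three criteria:
science = {"reproducibility": 90, "peer_review": 85, "predictive_power": 95}
print(round(weighted_score(science), 1))  # 90.0
```

The substantive disputes in this thread (where the weights come from, who assigns the scores) are exactly what this arithmetic does not settle; it only makes the chosen criteria and weights explicit and inspectable.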
Last edited by Veritas Aequitas on Tue Oct 24, 2023 8:12 am, edited 1 time in total.
Iwannaplato
Posts: 6892
Joined: Tue Aug 11, 2009 10:55 pm

Re: Quantitative vs Qualitative

Post by Iwannaplato »

Veritas Aequitas wrote: Tue Oct 24, 2023 5:29 am I noted that when I emphasized the quantitative over the qualitative in rating human-based FSKs in terms of credibility and objectivity, I encountered the usual flock of birds of a feather condemning and mocking the quantitative approach;
I had problems with the way you presented YOUR quantitative approach. I was not mocking THE quantitative approach; I had trouble seeing how it could apply to whole fields of inquiry, how such precise quantities could be determined, what was being measured, and within what FSK the measuring was taking place. For example.
I believe quantification [avoiding any analysis paralysis] is critical for any progress.
And this implies that there is some kind of syndrome or illness involved when people question your methods. But this 'conclusion' isn't based on anything. It's the kind of implicit ad hominem that's better left out.
Here are some notes why quantification is critical for progress from ChatGpt [with reservations];
I have elided the specific responses which are still visible in your post.

Perhaps someone else said that one should never use quantified methods, but I doubt it. ChatGPT's response is at a very abstract, general level. It need not apply to YOUR way of coming up with numbers.

IOW there is no justification for your approach to quantification and the specific case of assigning numbers, out to decimal places, to whole fields of inquiry.
When making your case, emphasize that quantification doesn't devalue subjective experiences; rather, it complements them by providing a structured and objective framework for analysis.
And notice what Chatgpt 'thinks' the objections are based on: that we are concerned that subjective experiences will be devalued. But that is not at all the concern. The concern was that you were merely guessing and guessing very specifically with numbers, but presenting it as if you had a kind of methodology for assigning the numbers. Others could easily, using the method you showed, and precisely as Atla did, come up with an evaluation of your FSK which would be rather negative.

For a quantification process to have any meaning, it needs to be usable by different people who will come up with very similar results.
Your method did not and as far as I can tell cannot do this.

Perhaps someone was concerned that subjective experiences would be devalued, but I doubt it.
In fact I think most people's concern was that subjective experiences/evaluations were being dressed up as quantitative results, as if they were in the slightest objective.
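The criterion stated above, that a quantification process is only meaningful if different people using it come up with very similar results, is essentially inter-rater reliability. A minimal sketch of that check, with hypothetical ratings invented purely for illustration:

```python
# Sketch of the reliability criterion: a rating method is only meaningful
# if independent raters converge. The ratings below are hypothetical.
from statistics import stdev

def rating_spread(ratings: list[float]) -> float:
    """Sample standard deviation of independent raters' scores for one item."""
    return stdev(ratings)

convergent = [74.0, 76.0, 75.0, 73.0]   # raters using a shared, explicit rubric
divergent  = [95.0, 20.0, 60.0, 35.0]   # raters guessing independently

print(rating_spread(convergent) < rating_spread(divergent))  # True
```

A low spread does not prove the scores measure anything real, but a high spread is strong evidence that the "method" is intuition dressed up as measurement, which is the objection being made here.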
Iwannaplato

Re: Quantitative vs Qualitative

Post by Iwannaplato »

Now let's look at what Chatgpt thinks about assigning numbers to the objectivity of science....
Quantitatively estimating the objectivity of the entire field of science as a percentage related to accuracy is a complex and highly subjective task. Objectivity in science is a goal, but it is not an absolute state. It varies from one scientific discipline to another and even among researchers within the same field. The objectivity of science depends on various factors, including the research process, peer review, reproducibility, and the potential for bias.

It's important to note that no field of science can achieve 100% objectivity, as there is always some potential for bias, subjectivity, or error. However, the scientific community continually strives to improve objectivity through rigorous methodologies, transparency, and peer review.

Assigning a specific percentage to the objectivity of science would not be meaningful or accurate due to the inherent variability and complexity of scientific research. It's more appropriate to acknowledge that science is an ongoing, self-correcting process that aims for objectivity but remains subject to human limitations.

In summary, while science aims for objectivity and accuracy, it is not possible to provide a single quantitative estimate in percentage form for the overall objectivity of the field. Objectivity in science is a nuanced and evolving concept that varies by context and discipline.
Of course Chatgpt is not an authority, but if it is being used as an authority, then you might as well use the specific case.

What you have done in this thread VA parallels someone saying they are innocent of a crime because Chatgpt says that people can be found guilty on poor evidence. But in the specific case the criminal was caught on film, there were fingerprints and witnesses and so on.

Generalized defenses are often quite meaningless. No one is saying never use quantitative methods.
Veritas Aequitas

Re: Quantitative vs Qualitative

Post by Veritas Aequitas »

Iwannaplato wrote: Tue Oct 24, 2023 6:29 am Now let's look at what Chatgpt thinks about assigning numbers to the objectivity of science....
Quantitatively estimating the objectivity of the entire field of science as a percentage related to accuracy is a complex and highly subjective task. Objectivity in science is a goal, but it is not an absolute state. It varies from one scientific discipline to another and even among researchers within the same field. The objectivity of science depends on various factors, including the research process, peer review, reproducibility, and the potential for bias.

It's important to note that no field of science can achieve 100% objectivity, as there is always some potential for bias, subjectivity, or error. However, the scientific community continually strives to improve objectivity through rigorous methodologies, transparency, and peer review.

Assigning a specific percentage to the objectivity of science would not be meaningful or accurate due to the inherent variability and complexity of scientific research. It's more appropriate to acknowledge that science is an ongoing, self-correcting process that aims for objectivity but remains subject to human limitations.

In summary, while science aims for objectivity and accuracy, it is not possible to provide a single quantitative estimate in percentage form for the overall objectivity of the field. Objectivity in science is a nuanced and evolving concept that varies by context and discipline.
Of course Chatgpt is not an authority, but if it is being used as an authority, then you might as well use the specific case.

What you have done in this thread VA parallels someone saying they are innocent of a crime because Chatgpt says that people can be found guilty on poor evidence. But in the specific case the criminal was caught on film, there were fingerprints and witnesses and so on.

Generalized defenses are often quite meaningless. No one is saying never use quantitative methods.
I agree with ChatGPT's point re the above in its context.
However, the above is not relevant to my context and the manner in which I represent it.

I have never claimed science overall ["the entire field of science"] is 100% objective in the absolute sense, and I had implied that the degrees of objectivity are nuanced and vary within the scientific FSK itself and other non-FSKs.

One point:
ChatGpt: "However, the scientific community continually strives to improve objectivity through rigorous methodologies, transparency, and peer review."
This is not done at present, but to achieve the above more effectively, it is essential that the scientific community have some objective, quantified base to improve on.
Thus something like what I proposed would be appropriate.

What is your question to ChatGpt?
FlashDangerpants
Posts: 6520
Joined: Mon Jan 04, 2016 11:54 pm

Re: Quantitative vs Qualitative

Post by FlashDangerpants »

Veritas Aequitas wrote: Tue Oct 24, 2023 5:29 am I noted that when I emphasized the quantitative over the qualitative
Prior to that sentence you have been attempting to actualise quantitative values for the qualitative by making up numbers and calling them facts.

Now you are downgrading what you do to just choosing quantity over quality?
Iwannaplato

Re: Quantitative vs Qualitative

Post by Iwannaplato »

FlashDangerpants wrote: Tue Oct 24, 2023 9:39 am
Veritas Aequitas wrote: Tue Oct 24, 2023 5:29 am I noted that when I emphasized the quantitative over the qualitative
Prior to that sentence you have been attempting to actualise quantitative values for the qualitative by making up numbers and calling them facts.

Now you are downgrading what you do to just choosing quantity over quality?
And presenting it as if criticizing his way of coming up with numbers means we don't like the quantitative.
And so, he can then argue at the vaguest abstract level. It's not that his process could possibly be problematic in any way; no, he is defending reason from the anti-quantitatives.

Whereas we just don't think what he is doing is quantitative, in any worthwhile sense. It's numbers, sure.
Skepdick
Posts: 14602
Joined: Fri Jun 14, 2019 11:16 am

Re: Quantitative vs Qualitative

Post by Skepdick »

Iwannaplato

Re: Quantitative vs Qualitative

Post by Iwannaplato »

Veritas Aequitas wrote: Tue Oct 24, 2023 5:29 am Btw, I am not trying to achieve absolute numbers but those that are merely relative to an agreed standard. As such, whatever the results, they must be qualified to the agreed standard and contexts.
Sure, and given that you are deciding to set a standard, one could grant FOR COMPARISON'S SAKE a definite number for what is taken as the standard. But once you start giving other fields of inquiry numbers to a few decimal places, it is making stuff up. Further, you didn't just assign science as the standard; you at the same time evaluated its objectivity as nearly 100%. This goes beyond assigning it the role of standard to making a claim that you have somehow figured out the degree of objectivity of science. So, there are two processes of making numbers up going on.
Interdisciplinary Collaboration:
Engage experts from diverse disciplines, including philosophy, science, theology, and other relevant fields. Interdisciplinary collaboration can provide a more comprehensive understanding and reduce biases inherent in a single-discipline perspective.
You've got a long way to go on this suggestion by ChatGPT. You reject suggestions made by other people, then psychoanalyze them, lump them together, and call them names.
And there are plenty of ways to contact experts in various fields, but I assume if you had done this, you would have mentioned it to us. So either your contacting them failed to elicit any interest, which might tell you something, or you, for some reason, decided to keep conversing with people you consider philosophical gnats.
Transparent Methodology:
Clearly articulate the methodology used, including the criteria, their weights, and the process of evaluation. Transparency helps in understanding how the assessment is conducted and allows for scrutiny by the wider community.
Which is precisely what I asked you to do. And twice you used inappropriate examples where the parameters were simple to quantify, rather than actually showing the steps you take or using my example of history, insulting me along the way for asking for what ChatGPT is now actually suggesting. IOW you don't seem to actually read the suggestions, because you don't notice that what ChatGPT is suggesting was already suggested by at least one poster here. And what did he get? Insults, for asking for this. Even your appeals to authority lead to text that should make you question the way you relate to people here, or that goes against your own ideas elsewhere on a variety of issues.
Contextual Sensitivity:
Recognize the contextual nature of knowledge frameworks. Different contexts may require different criteria for evaluation. Consider developing context-specific frameworks rather than a one-size-fits-all approach.
And you have a one-size-fits-all approach. I'm not sure that's a bad idea on your part, but here ChatGPT is saying your approach, with that list of criteria, is wrongheaded. Again, I'm not sure I agree with the AI, but you don't seem to actually read what you link and quote to us, or take time to digest it.
Longitudinal Assessment:
Recognize that knowledge frameworks evolve over time. Consider conducting longitudinal assessments to capture changes and developments in the credibility and objectivity of different knowledge systems.
I raised this issue.
Ethical Considerations:
Explicitly address ethical considerations in the assessment, including potential biases, conflicts of interest, and the ethical implications of evaluating certain knowledge frameworks.
This is an issue that FDP raises in a number of different ways.


Those were the easy ones to respond to and point out similarities to the critiques you got. This could be done with the others, but it would be more complicated.

I don't think you read carefully what you link and quote. And I am not the only one who has noticed this; it has been going on for a long time.

The sad, ironic thing is that what ChatGPT suggests here could have been good guidelines for the way you took critique and questions. I do understand that you are often treated with disdain, but you don't treat people as resources unless you think they are agreeing with you completely.

So, perhaps you will listen to Chatgpt's suggestions or perhaps not.
Veritas Aequitas

Re: Quantitative vs Qualitative

Post by Veritas Aequitas »

Iwannaplato wrote: Tue Oct 24, 2023 7:39 pm
Veritas Aequitas wrote: Tue Oct 24, 2023 5:29 am Btw, I am not trying to achieve absolute numbers but those that are merely relative to an agreed standard. As such, whatever the results, they must be qualified to the agreed standard and contexts.
Sure, and given that you are deciding to set a standard, one could grant FOR COMPARISON'S SAKE a definite number for what is taken as the standard. But once you start giving other fields of inquiry numbers to a few decimal places, it is making stuff up.
Further, you didn't just assign science as the standard; you at the same time evaluated its objectivity as nearly 100%. This goes beyond assigning it the role of standard to making a claim that you have somehow figured out the degree of objectivity of science. So, there are two processes of making numbers up going on.
Strawman.
That is the problem with dogmatic, narrow and shallow thinking.

I have stated clearly that I am not establishing absolute numbers, i.e. merely relative numbers to an agreed standard.

Assigning science as the Standard and taking it at 100% objectivity does not in any way claim that science is 100% objective in the absolute sense.

For example, if I set the standard height of humans AT PRESENT at 1.7 meters, I can assign 100% or 100/100 to that height.
If a person is 1.4 meters, that would be 82.4% of the standard.
If a person is 1.9 meters, that would be 111.8*% of the standard.
* actually, 111.76470588235294117647058823529%
And so on.
In this case of setting the standard of 1.7 meters as 100%, I am not stating that 1.7 meters is the maximum height of humans in any absolute sense.
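The height example above is simple enough to sketch in code. A minimal sketch in Python, assuming only the poster's 1.7 m reference figure; the one-decimal rounding is an added convenience:

```python
# Sketch of the relative-standard calculation described above.
# The standard (1.7 m) is assigned 100%; other heights are expressed
# relative to it. It is a reference point, not a claim about maximum height.

STANDARD_HEIGHT_M = 1.7

def percent_of_standard(height_m: float) -> float:
    """Return a height as a percentage of the agreed standard."""
    return height_m / STANDARD_HEIGHT_M * 100

print(round(percent_of_standard(1.4), 1))  # 82.4
print(round(percent_of_standard(1.9), 1))  # 111.8
```

Note that the arithmetic works here precisely because height is directly measurable; whether the same move transfers to rating whole fields of inquiry is the point in dispute in this thread.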

Similarly, when I set Science as the Standard at 100% objectivity, it is merely a reference point and not to be taken as absolute.
The only judgment is that the science FSK is the most credible and objective [based on an agreed set of criteria] AT PRESENT. It is possible humanity might be able to come up with some FSK that is more credible and objective [not absolute] than science.

The rest of the post is all strawmanning. I will not bother to defend against it.
So, perhaps you will listen to Chatgpt's suggestions or perhaps not.
I have already adopted ChatGpt's suggestions re the above in my approach and views.
FlashDangerpants

Re: Quantitative vs Qualitative

Post by FlashDangerpants »

Veritas Aequitas wrote: Wed Oct 25, 2023 3:06 am
Iwannaplato wrote: Tue Oct 24, 2023 7:39 pm
Veritas Aequitas wrote: Tue Oct 24, 2023 5:29 am Btw, I am not trying to achieve absolute numbers but those that are merely relative to an agreed standard. As such, whatever the results, they must be qualified to the agreed standard and contexts.
Sure, and given that you are deciding to set a standard, one could grant FOR COMPARISON'S SAKE a definite number for what is taken as the standard. But once you start giving other fields of inquiry numbers to a few decimal places, it is making stuff up.
Further, you didn't just assign science as the standard; you at the same time evaluated its objectivity as nearly 100%. This goes beyond assigning it the role of standard to making a claim that you have somehow figured out the degree of objectivity of science. So, there are two processes of making numbers up going on.
Strawman.
That is the problem with dogmatic, narrow and shallow thinking.

I have stated clearly, I am not establishing absolute numbers i.e. merely relative numbers to an agreed standard.

Assigning science as the Standard and taking it at 100% objectivity does not in any way claim that science is 100% objective in the absolute sense.

For example, if I set the standard height of humans AT PRESENT at 1.7 meters, I can assign 100% or 100/100 to that height.
If a person is 1.4 meters, that would be 82.4% of the standard.
If a person is 1.9 meters, that would be 111.8*% of the standard.
* actually, 111.76470588235294117647058823529%
And so on.
In this case of setting the standard of 1.7 meters as 100%, I am not stating that 1.7 meters is the maximum height of humans in any absolute sense.

Similarly, when I set Science as the Standard at 100% objectivity, it is merely a reference point and not to be taken as absolute.
The only judgment is that the science FSK is the most credible and objective [based on an agreed set of criteria] AT PRESENT. It is possible humanity might be able to come up with some FSK that is more credible and objective [not absolute] than science.

The rest of the post is all strawmanning. I will not be bothered to defend.
So, perhaps you will listen to Chatgpt's suggestions or perhaps not.
I have already adopted ChatGpt's suggestions re the above in my approach and views.
If you used a ruler or something to measure the actual height of a person, that would be a measurement of an objective property.

How does that example help with you making up the number 56.7894% to represent the credibility score for Modern History, and 33.5554% for Ancient History, and 42.54231464676% for History of Philosophy, and 91.00000000000000000000000000000000005% for philosophy of history? All of those are nonsense numbers, and there is no ruler by which to arrive at them.
Iwannaplato

Re: Quantitative vs Qualitative

Post by Iwannaplato »

Veritas Aequitas wrote: Wed Oct 25, 2023 3:06 am Strawman.
That is the problem with dogmatic, narrow and shallow thinking.
Or potentially poor communication on your part, or even you changing the way you word things all the time when it is temporarily convenient. Or you just making stuff up now.
I have stated clearly, I am not establishing absolute numbers i.e. merely relative numbers to an agreed standard.
Yes, but you did more than that. You have also stated that it was based on actual percentages of qualities.

Look, I truly understand that the idea of setting a standard and using 100 has advantages in base ten. Peachy. But you have also said it was an accurate measurement of the objectivity of the FSK, its accuracy, its empiricalness. Not that it was an arbitrary but useful number to use for a standard, but a number that comes from your assessment of objective qualities of the field of inquiry. I quote some of the various confusions you've presented below, and they are not limited to these. You start so many damn threads on the same topic that it is a real headache tracking down where you contradict yourself or have said something. I know it works for you to flood us with threads on the same topic, and yes, I even remember your reason for considering that it works.
Assigning science as the Standard and taking it at 100% objectivity does not in any way claim 'science is 100%' objective in the absolute sense.
LOL.
For example, if I set the standard height of humans AT PRESENT at 1.7 meter, I can assign 100% or 100/100 to that height.
If a person is 1.4 meter, that would be 82% of the standard.
If a person is 1.9 meter that would be 111.8*% of the standard.
* actually, 111.76470588235294117647058823529%
And so on.
In this case of setting the standard of 1.7 meter as 100%, I am not stating that 1.7 meter is the maximum height of humans or in any absolute sense.
I mean, Jesus Christ!!!!????.

Your examples are always with utterly measurable aspects of things. Which has very little to do with giving ratings to a whole field of inquiry.

But let's say you are correct in talking about the standard in this way. The moment you jump to another field of inquiry or FSK and assign it a number you are pulling numbers out of your ass.

And this is clear because you never, ever show the steps to coming to the number. It is always your intuition.
Similarly, when I set Science as the Standard at 100% objectivity, it is merely a reference point and not to be taken as absolute.
The only judgment is that the science FSK is the most credible and objective [based on an agreed set of criteria] AT PRESENT. It is possible humanity might be able to come up with some FSK that is more credible and objective [not absolute] than science.

The rest of the post is all strawmanning; I will not bother to defend against it.
Of course, because you'd have to show steps. And you can't, because there aren't any. There are just assertions.

In your example with height we could show the measurements with height and show the math. Show exactly what the criterion is which instantly creates a number for comparison.
So, perhaps you will listen to Chatgpt's suggestions or perhaps not.
I have already adopted ChatGpt's suggestions re the above in my approach and views.
No, you haven't. You have not let ChatGPT's suggestions about how to behave toward others influence you in the least. You insult dissent. You mind-read dissent. You cannot collaborate. Not so far and not openly.

Here's some examples of where you are all over the place with your Standard:
Within a typical legal FSK, not all the evidence relies on verified empirical evidence; much rests instead on the memory of witnesses.
In this case, the empirical-evidence criterion can be assessed at, say, 50/100, in contrast to the scientific FSK [natural sciences] at, say, 90/100.
A legal FSK's final judgment is made by randomly selected citizens [thus rated approx. 50/100], but in science it rests on the general consensus of credible peers from the specific fields of knowledge [thus rated 80/100].
as the standard and compare everything else against that chosen standard.
I have claimed that the scientific FSK is the most credible, thus setting it as the standard at 99.99 [or 100.00], being empirically based.
That's not clear communication.
If we assign the scientific FSK as the standard at 99.99% [99.99% empirical]
See, it's not just a standard assigned 100 for the sake of comparison. You assigned it a number based on your assessment of the degree of something objective.

And there are more ways you have phrased this with different numbers and different criteria for the standard with sloppy sentences. Sometimes it is a mere assignment of 100 as a convenient number as a standard, at other times it is the measure of something like the degree of empiricalness, sometimes it is worded in other ways to make it seem like some other criterion is being used.

And yet the only fucking interpretation you can come up with is that I am strawmanning. I mean, I actually read your posts. Do you?

I mean, at least you could have the fucking humility to consider, and reading your own posts will show this, that you may not be communicating consistently or clearly. And that's a charitable interpretation on my part. The less charitable one is that you keep changing horses midstream when the inconvenience of what you've already said becomes apparent.

But, as I said above, even if we grant the new way you are wording how you assigned 100 to science, it does not explain the many decimal places for other fields of inquiry. You do not show the steps; you simply make claims. And your evaluations, for example for realism, miraculously fit your desires. Despite your position that objectivity is intersubjectivity, despite the majority of scientists being realists, and despite your assigning science the standard for FSKs, you somehow can rate realism a zero. This is disingenuousness at a temper-tantrum level, posing as logic.

Your accusations of strawmanning usually conflate pointing out what your position entails with attributing that as a conclusion you have stated. Here it just means 'I, VA, don't like what you said because I don't even read my own posts well and don't consider AT ALL that I might be confusing my readers or even contradicting myself in what is not my native language.' It's too much to hope you might have some self-awareness.

And in no way are you exhibiting the interpersonal and process suggestions ChatGPT made. Not a tiny bit.

Why has no antirealist in the world become the least bit interested in your work? They're easy to contact, and if they find something interesting, they respond.
Veritas Aequitas
Posts: 13016
Joined: Wed Jul 11, 2012 4:41 am

Re: Quantitative vs Qualitative

Post by Veritas Aequitas »

FlashDangerpants wrote: Wed Oct 25, 2023 3:24 pm
Veritas Aequitas wrote: Wed Oct 25, 2023 3:06 am
Iwannaplato wrote: Tue Oct 24, 2023 7:39 pm Sure, and given that you are deciding to set a standard one could grant FOR COMPARISON'S SAKE a definite number for what is taken as the standard. But once you start giving other fields of inquiry numbers to a few decimal places, it is making stuff up.
Further, you didn't just assign science as the standard; you at the same time evaluated its objectivity as nearly 100%. This goes beyond assigning it the role of standard to making a claim that you have somehow figured out the degree of objectivity of science. So, there are two processes of making numbers up going on.
Strawman.
That is the problem with dogmatic, narrow and shallow thinking.

I have stated clearly, I am not establishing absolute numbers i.e. merely relative numbers to an agreed standard.

Assigning science as the Standard and taking it at 100% objectivity does not in any way claim 'science is 100%' objective in the absolute sense.

For example, if I set the standard height of humans AT PRESENT at 1.7 meter, I can assign 100% or 100/100 to that height.
If a person is 1.4 meter, that would be 82% of the standard.
If a person is 1.9 meter that would be 111.8*% of the standard.
* actually, 111.76470588235294117647058823529%
And so on.
In this case of setting the standard of 1.7 meter as 100%, I am not stating that 1.7 meter is the maximum height of humans or in any absolute sense.
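The relative-scaling arithmetic in the height example above can be sketched as follows; the 1.7 m standard and the sample heights are the post's own illustrative figures, not data from any real source:

```python
def percent_of_standard(value, standard):
    """Express a value as a percentage of a chosen reference standard.

    The standard itself is arbitrary: it is assigned 100% purely
    for comparison, not as a claim about any absolute maximum.
    """
    return value / standard * 100

STANDARD_HEIGHT_M = 1.7  # reference height, taken as 100%

print(round(percent_of_standard(1.4, STANDARD_HEIGHT_M), 1))  # 82.4
print(round(percent_of_standard(1.9, STANDARD_HEIGHT_M), 1))  # 111.8
```

Note that values above the standard simply exceed 100%, which is consistent with the standard being a reference point rather than a ceiling.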

Similarly, when I set Science as the Standard at 100% objectivity, it is merely a reference point and not to be taken as absolute.
The only judgment is that the science FSK is the most credible and objective [based on an agreed set of criteria] AT PRESENT. It is possible humanity might be able to come up with some FSK that is more credible and objective [not absolute] than science.

The rest of the post is all strawmanning; I will not bother to defend against it.
So, perhaps you will listen to Chatgpt's suggestions or perhaps not.
I have already adopted ChatGpt's suggestions re the above in my approach and views.
If you used a ruler or something to measure the actual height of a person, that would be a measurement of an objective property.

How does that example help with you making up the number 56.7894% to represent the credibility score for Modern History and 33.5554% for Ancient History, and 42.54231464676% for History of Philosophy, and 91.00000000000000000000000000000000005% for philosophy of history? All of those are nonsense numbers and there is no ruler to arrive at them.
I have already explained, the "ruler" is the credibility and objectivity of the Scientific FSK as determined via
Criteria in Rating Credibility & Objectivity of a FSK
viewtopic.php?t=41040

Whatever rating the science FSK earns from the exercise based on those criteria, say the result is 85.01 points, we take that 85.01 points as the STANDARD at 100%.
If we assess [guess] the history FSK at 40.05 points, we divide 40.05 by 85.01 and multiply by 100, which gives
47.11210445829...%, which can be rounded to 47.1%.
As such, the reading is: the credibility and objectivity rating of the history FSK is 47.1% relative to the scientific FSK.
If we do not want to convert to a 100% base, we can work with the raw results [85.01 pts, 40.05 pts], which is cumbersome.
Converting the results and comparing them on a 100% base is easier to understand.
What is the problem with this?
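The conversion described above can be sketched as a short calculation; the 85.01 and 40.05 point figures are the post's own illustrative guesses, not measured values:

```python
def relative_rating(points, standard_points):
    """Convert a raw FSK score to a percentage of the chosen standard FSK.

    The standard FSK is rescaled to 100%; all other FSKs are
    expressed relative to it, not as absolute measurements.
    """
    return points / standard_points * 100

SCIENCE_FSK_PTS = 85.01  # illustrative score for the standard FSK
HISTORY_FSK_PTS = 40.05  # illustrative guess for the history FSK

print(round(relative_rating(HISTORY_FSK_PTS, SCIENCE_FSK_PTS), 1))  # 47.1
print(relative_rating(SCIENCE_FSK_PTS, SCIENCE_FSK_PTS))            # 100.0
```

This only performs the rescaling step; it says nothing about how the underlying point scores are arrived at, which is the point in dispute in this thread.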

Note it is only a relative reading to give some sense of credibility and objectivity of a FSK to facilitate whatever utility is necessary.

What is most useful for this discussion re morality are those readings which are highly contrasting [science (100%) vs theology or philosophical realism] or near to each other [science vs my-morality].

Intuitively, it is obvious that the credibility and objectivity of science and theology contrast widely.
If you have any alternative, show me how we can more effectively compare the credibility and objectivity of, say, theology against science.
Veritas Aequitas
Posts: 13016
Joined: Wed Jul 11, 2012 4:41 am

Re: Quantitative vs Qualitative

Post by Veritas Aequitas »

Iwannaplato wrote: Wed Oct 25, 2023 4:12 pm You start so many damn threads on the same topic that it is a real headache tracking down where you contradict yourself or have said something. I know, it works for you to flood us with threads on the same topic, and yes, I even remember your reason for considering that it works.
I am ignoring the rest of the points.
Your headache is your problem; if you are competent, you can resolve it yourself, like I did with mine.

You think it would be more efficient and effective to lump all the discussions and posts involving various fields of knowledge into a >600-page or >1000-page dumpster thread filled mostly with shit?

The general maxim of efficient and effective problem-solving is to break a complex problem down into smaller, manageable units.
You've never heard of this, I presume.

The different threads I have raised work like a bibliography to my projects; I have listed them in a column, with other columns for sorting into their relevant categories.
Note how easily I linked them in my posts.