Workshop on Computational Models in Social Epistemology
Bochum, Dec 6–8, 2023
https://github.com/debatelab/genai-epistemology
📄 Wang et al. (2023)
📄 Betz (2022)
📄 Du et al. (2023)
Prompt: “These are the solutions to the problem from other agents: <other agent responses> Using the reasoning from other agents as additional advice, can you give an updated answer? Examine your solution and that other agents. Put your answer in the form (X) at the end of your response.”
📄 Du et al. (2023)
class AbstractBCAgent:

    def update(self, community):
        opinions = [peer.opinion for peer in self.peers(community)]
        self.opinion = self.revise(opinions)

    def peers(self, community):
        epsilon = self._parameters.get("epsilon")
        peers = [
            agent for agent in community
            if self.distance(agent.opinion) <= epsilon
        ]
        return peers

    def distance(self, opinion):
        pass

    def revise(self, opinions):
        pass
import numpy as np

class NumericalBCAgent(AbstractBCAgent):

    def distance(self, opinion):
        """calculates distance between agent's and other opinion"""
        return abs(opinion - self.opinion)

    def revise(self, opinions):
        """revision through weighted opinion averaging"""
        alpha = self._parameters.get("alpha", 0.5)
        revision = alpha * self.opinion + (1 - alpha) * np.mean(opinions)
        return revision
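A minimal self-contained run of the numerical bounded-confidence model might look as follows (a sketch: the constructor, the parameter defaults, and the random initialization are assumptions added here; `epsilon` is read from the agent's parameters, matching how `alpha` is handled above):

```python
import random
import numpy as np

class NumericalBCAgent:
    """Self-contained sketch of the numerical bounded-confidence agent."""

    def __init__(self, opinion, epsilon=0.2, alpha=0.5):
        # epsilon: confidence interval; alpha: weight on own opinion
        self.opinion = opinion
        self._parameters = {"epsilon": epsilon, "alpha": alpha}

    def distance(self, opinion):
        return abs(opinion - self.opinion)

    def peers(self, community):
        epsilon = self._parameters["epsilon"]
        return [a for a in community if self.distance(a.opinion) <= epsilon]

    def revise(self, opinions):
        alpha = self._parameters["alpha"]
        return alpha * self.opinion + (1 - alpha) * np.mean(opinions)

    def update(self, community):
        opinions = [peer.opinion for peer in self.peers(community)]
        self.opinion = self.revise(opinions)

random.seed(0)
community = [NumericalBCAgent(random.random()) for _ in range(20)]
initial_spread = max(a.opinion for a in community) - min(a.opinion for a in community)

for _ in range(30):
    for agent in community:
        agent.update(community)

spread = max(a.opinion for a in community) - min(a.opinion for a in community)
```

Since every revision is a convex combination of current opinions, the opinion spread can only shrink over time; with a small `epsilon`, the population typically fragments into several stable clusters rather than reaching full consensus.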
[
#1
"Consuming a vegan diet directly contributes to reducing greenhouse "
"gas emissions, as animal agriculture is a significant source of "
"environmental pollution.",
#2
"The scientific evidence supports the health benefits of a vegan diet, "
"which can lead to a reduced risk of various diseases, such as diabetes, "
"high blood pressure, and some cancers.",
#3
"Veganism doesn't support a healthy and balanced diet.",
#4
"There is a negative impact on the environment and economy when people "
"follow a vegan diet.",
#5
"A vegan diet can prevent certain types of cancer.",
#6
"Reducing meat consumption is necessary to avoid a global food crisis.",
#7
"Contrary to popular belief, studies suggest that a well-planned "
"traditional omnivorous diet may reduce the risk of certain diseases "
"compared to a vegan diet.",
#8
"While plant-based diets have their benefits, they are not always easy "
"to stick to in the long run.",
#9
"As someone who has been vegan for over a year, my energy levels have "
"increased significantly while my risk of certain diseases has decreased.",
#10
"My personal experience as a vegan for two years has been plagued with "
"deficiencies and malnutrition, leading to low energy levels and "
"compromised health."
]
class NaturalLanguageBCAgent(AbstractBCAgent):

    def distance(self, other):
        """distance as expected agreement level"""
        lmql_result = agreement_lmq(
            self.opinion, other, **kwargs
        )
        probs = lmql_result.variables.get("P(LABEL)")
        return sum([i * v for i, (_, v) in enumerate(probs)]) / 4.0

    def revise(self, peer_opinions):
        """natural language opinion revision"""
        revision = revise_lmq(
            self.opinion, peer_opinions, **kwargs
        )
        return revision
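The `distance` method above turns the LLM's probability distribution over a five-point agreement scale into a scalar in [0, 1] by taking the expected label index and normalizing by 4. A standalone sketch of that computation (the label ordering, from strong agreement at index 0 to strong disagreement at index 4, is an assumption about the LMQL query's output format):

```python
def expected_agreement_distance(label_probs):
    """Map a 5-point agreement distribution to a distance in [0, 1].

    `label_probs`: (label, probability) pairs, assumed ordered from
    strong agreement (index 0) to strong disagreement (index 4),
    mirroring the P(LABEL) variable returned by the LMQL query.
    """
    return sum(i * p for i, (_, p) in enumerate(label_probs)) / 4.0
```

Full probability mass on the first label yields distance 0.0, full mass on the last yields 1.0, and a uniform distribution yields 0.5.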
alpha = "very high"; epsilon = 0.4 / 0.5; topic = "veganism"
🤔 Are LLMs suited for building epistemic agents?
📄 Pan et al. (2023)
📄 Morris et al. (2023)
📄 AI4Science and Quantum (2023)
📄 Betz and Richardson (2023)
But humans’ cognitive architecture is fundamentally different from LLMs’, or is it?
📄 Goldstein et al. (2020)
📄 The neural architecture of language: Integrative modeling converges on predictive processing. (Schrimpf et al. 2021)
TLDR It is found that the most powerful “transformer” models predict nearly 100% of explainable variance in neural responses to sentences and generalize across different datasets and imaging modalities […].
📄 Brains and algorithms partially converge in natural language processing. (Caucheteux and King 2022)
TLDR This study shows that modern language algorithms partially converge towards brain-like solutions, and thus delineates a promising path to unravel the foundations of natural language processing.
📄 Mapping Brains with Language Models: A Survey. (Karamolegkou, Abdou, and Søgaard 2023)
ABSTRACT […] We also find that the accumulated evidence, for now, remains ambiguous, but correlations with model size and quality provide grounds for cautious optimism.
📄 Artificial neural network language models predict human brain responses to language even after a developmentally realistic amount of training. (Hosseini et al. 2022)
TLDR [A] developmentally realistic amount of training may suffice and […] models that have received enough training to achieve sufficiently high next-word prediction performance also acquire representations of sentences that are predictive of human fMRI responses.
Are LLMs suited for building epistemic agents?
Come, join the party! 🎉
Vanishing distinctions (due to AGI):
Epistemic redundancy (due to AGI) brings profound philosophical challenges:
📄 Du et al. (2023)
📄 Curtò et al. (2023)