Reasoning architectures

  • contrary to common belief, transformers/autoregressive models can reason without assistance (such as CoT prompting) and beat all other architectures on specifically designed tasks/datasets, even though they cannot reason deeply (see the prompt sketch after this list)
  • different architectures reason differently depending on the relations among the variables involved
  • no single architecture can reason well across all tasks
  • even human intelligence does not perform best on all reasoning tasks/datasets, as shown by its defeats by AlphaGo and Deep Blue
  • new architectures are needed for new tasks/datasets, such as healthcare or self-driving, to achieve better reasoning
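
A minimal sketch of what "assistance such as CoT prompting" means in practice: the same question is posed once directly and once with a step-by-step instruction. The `query_model` helper and the example question are hypothetical placeholders, not part of the original notes.

```python
# Minimal sketch contrasting a direct prompt with a chain-of-thought (CoT) prompt.
# `query_model` is a hypothetical stand-in for any autoregressive LLM call.

def query_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to an autoregressive language model."""
    return "<model output>"

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

# Direct prompting: the model must answer without explicit reasoning assistance.
direct_prompt = f"{question}\nAnswer:"

# CoT prompting: the model is asked to reason step by step before answering.
cot_prompt = f"{question}\nLet's think step by step."

print(query_model(direct_prompt))
print(query_model(cot_prompt))
```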

Fundamental deficiencies of human intelligence

  • unable to deal effectively with issues that have never happened before in history, like COVID-19; a related failure is judging people by their titles and past accomplishments rather than their potential future achievements. possible solutions: analogy, creativity, inspiration, emulation
  • unable to accurately predict the future. possible solution: super-natural intelligence
  • unable to effectively explore the unknown, like interstellar travel, inventing a new medicine or making other innovations. possible solutions: creativity, innovation, trial and error
  • unable to verify counterfactuals, such as what would happen if one had the chance to do it again

Deficiencies of human intelligence that could possibly be complemented

  • lack of enormous computing capabilities. possible solution: AI
  • lack of highly effective collaborations. possible solution: collective intelligence
  • lack of super-assistive abilities (SAA). possible solution: SAA such as CT scanning and wireless communication

The role LLMs will play in future intelligence

Processing sequential data, such as automatic translation between different knowledge representations, provided that generalisation can be achieved via abstraction, inverse abstraction and analogy.

Language

The essence and purpose of language is to facilitate understanding and communication. Scientists have created too many terms, which complicates things unnecessarily, and this will hinder the rapid progress of future scientific research. Will future intelligence help solve this issue?

Deficiencies in the evolution of human intelligence and human civilisation

Evolution has avoided the worst, but it cannot guarantee the best.

Hypothesis

Any data can be transformed into a combination of hybrid probability distributions, such as Gaussian, Bernoulli, Uniform, Exponential, Gamma or hierarchical distributions for different sections of the data. While raw data itself is uninterpretable, probability distributions are interpretable, thereby achieving explainability. The open question is how to find that sophisticated combination of distributions under uncertainty, i.e. observational uncertainties/errors.
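
A minimal sketch of this hypothesis, restricted to Gaussian components only and assuming scikit-learn is available: raw samples are summarised by the weights, means and variances of a fitted mixture, which are interpretable where the samples themselves are not. A genuinely hybrid combination (Bernoulli, Gamma, hierarchical, ...) under observational uncertainty would need a custom procedure and remains the open question stated above.

```python
# Sketch: fit a mixture of distributions to raw data, then read off
# interpretable parameters. Gaussian components only, for simplicity.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic data drawn from two different "sections": two Gaussian clusters.
data = np.concatenate([
    rng.normal(loc=-2.0, scale=0.5, size=500),
    rng.normal(loc=3.0, scale=1.0, size=500),
]).reshape(-1, 1)

# Fit a two-component Gaussian mixture; the learned weights, means and
# variances are the interpretable summary of otherwise opaque raw samples.
gmm = GaussianMixture(n_components=2, random_state=0).fit(data)

for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_):
    print(f"weight={w:.2f}, mean={m[0]:.2f}, variance={c[0][0]:.2f}")
```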

How AI data analysis differs from human data analysis

AI relies on training and can only analyse what is in the existing dataset. Human data analysis, by contrast, brings in a large amount of extra knowledge, such as previously learned and accumulated knowledge, and uses higher-level intelligence such as induction, summarisation, analogy and association to draw more valuable conclusions. The two are therefore fundamentally different.

Levels of intelligence

Next-generation AI

The difference between experience and science (which one do medicine and AI belong to?)

  • 1. Individual anecdotal experience, to be treated with caution until it is reproduced and verified. e.g. someone recommending a particular medicine
  • 2. Expert experience, awaiting reproduction and verification, provided a standard exists. e.g. the personal experience of a renowned senior practitioner of traditional Chinese medicine
  • 3. Scientific theory, with standards, reproduction, verification and evaluation. e.g. Western clinical practice guidelines

Levels of evaluation (which level does AI belong to?)

  • 1. Everyone can voice an opinion and evaluate; the general-public level. e.g. house prices, jobs, income
  • 2. Evaluation by subgroups and interest communities; the hobbyist level. e.g. buying a car or a phone
  • 3. Only domain experts are qualified to evaluate; the professional/doctoral level. e.g. mathematics, medicine, AI
  • 4. Experts evaluating experts: only experts within a field can evaluate other experts, i.e. peer review. Ordinary people can only offer personal opinions and should not presume to pass judgement at this level

Knowledge representation in future intelligence

The essence of language is to explain, exchange, communicate, understand and persuade, not to recite, memorise mechanically, pronounce perfectly or read words aloud correctly. When it strays from this essence, language becomes a tool for showing off and a symbol of status. If one day some form of language (if it can still be called language) overcomes these biases, it will be an entirely new form of knowledge representation for future intelligence.
