
6.868 / MAS.731

Society of Mind

Spring 2013


CHAPTER 6: COMMON SENSE
QUESTIONS, CRITICISMS, & SOLUTIONS

  1. Baby machines: If we don't know how to make programs that invent novel representations, will programmers have to invent all the new representations and build an adult machine instead?
  2. How do we generate possible goals to pursue?
  3. What representations might we employ when analyzing our own thoughts?
  4. What representations do our supervising agents have of our different ways to think?
  5. How much structure or information does a child's mind have by the time it is born?
  6. Should we keep track of the common sense beliefs among different cultures? (Maybe we should try to catalogue the differences, or expunge them from the database so that our information is more neutral.)
  7. In childhood, we acquire a lot of novel common sense knowledge. Is there a difference between how we apply common sense knowledge when we first learn it, and then again (years later?) after we've "internalized" it?
  8. Is there an agent which holds on to deprecated common sense facts (e.g. theories like "some fish don't have legs" that are later replaced by more effective rules), so that we avoid generating them again? If we don't have such an agent, how do we weed out ineffective common sense rules?
  9. What different kinds of common sense knowledge are there? For example, the existence of autism suggests that some parts of the brain are specialized for handling some kinds of social common sense.
  10. Can we use common sense knowledge to help a robot learn or understand language the way that humans do? This might be a good alternative or supplement to grammar-based theories of language.
  11. Criticism: This book explains the problem with naive approaches to AI (like constructing long lists of If-Then rules), and convincingly describes more promising approaches. But even though we agree that a truly intelligent program must be able to (for example) change representations and generate abstractions, we don't know how to write a program that does so. How do we begin to know how to do that?
  12. When using difference engines to achieve our goals, how do we decide which differences to reduce?
  13. Since humans evolved to be able to acquire and master a large amount of common sense information, wouldn't it be a good idea to develop a computer program that /evolves/ the ability to handle common sense in the same way?
  14. Are our towers of difference engines unique to individuals? How much do they vary over time, and among individuals? Among cultures? Among the human race?
  15. I'm impressed by Evans's analogy program. Maybe instead of searching for new ways to think, our programs could search aggressively for new and better ways to generalize what they already know?
  16. If we have many ways of representing knowledge, does that mean that we also have different strategies for processing each kind of representation? How many of our ways to think are tailor-made for specific representations, and how many are general-purpose?
  17. I know from experience that choosing the right representations can make the difference between efficient, legible code and slow, kludgy code, and I see that humans can retrieve, deploy and combine common sense information incredibly effectively. So, I want to ask: what representations enable us to handle common sense knowledge so well?
  18. How do we interconnect representations that operate at different levels of abstraction? What representations enable us to switch between levels of abstraction so easily?
  19. How can we design a program that can carry out abstractions?
  20. Is art (e.g. playing music) more like a high level activity or a low level one? It seems to rely on planning and effective communication like many high level skills, but also on complicated systems that manage nuances, like the systems that control fine motor movement or those that enable us to read volumes of information from subtle facial expressions.
  21. Many planning systems construct the entirety of their plans before they execute them. However, in everyday life, our plans are often not fully developed before we start, and exigencies often intervene. How would your programming approach differ if you were to write a program that amends and elaborates on a plan as it executes it? (For example, design a robot that tries to solve a non-routine math problem, or one that tries to find its way home from the store based on things it noticed on the way to the store.)
  22. I'm interested in the idea of learning from mistakes, and I wonder how we process them to make it easier for us to learn from them and use what we learn from them. What are our most effective strategies for recognizing, cataloguing, representing, consolidating, and generalizing mistakes?
  23. What processes might cause us to "recollect" details that never happened? (confabulation)
  24. Suggestion: Humans have begun to pass down knowledge about mistakes through anecdotes and cautionary tales. I think it would be helpful if computers could do this, as well. I suspect that computers might be even more effective than we are at doing this, perhaps through the use of a central database of mistakes.
  25. Concern: We learn common sense information over time, which enables us to process that information and relate it to other things we know. But the way we generate common sense databases now, I worry that we produce facts that lack these rich interconnections. (I guess that's one of the appeals of building a "baby machine".) How can we resolve this problem?
  26. What are the different competences encapsulated by the suitcase word "Knowing"?
  27. Why don't we notice when we use suitcase words? In other words, what processes enable humans to correctly disambiguate a suitcase word, or otherwise to avoid noticing or being bothered by words that don't carry any real meaning?
  28. Perhaps we use our ability to negotiate multiple representations at once not only when we are trying to use what we already know, but also when we are trying to learn something new, or to transfer something we've learned in one domain to another.
  29. Experience seems like a critical component of common sense reasoning --- because our reasoning depends so much on analogy with things we've experienced before.
  30. What skills are required to give computers a three-dimensional view of the world?
  31. A lot of human behavior is determined by different types of Pain and Pleasure. Are these concepts helpful/necessary for intelligent computers? And are they evolutionary hacks for us, or are they fundamentally useful for some reason?
  32. Criticism: I doubt we can program computers to do the sorts of things that took millions of years to evolve in nature. Even if you could, why would you want to handicap a computer by making it as (in)capable as a newborn infant? Instead, we should program computers to do the things that computers surpass humans at, e.g. tabulating and grinding through large collections of data without getting tired or making a mistake or being argumentative or lazy.
  33. What will common sense systems enable us to do in the near future?
  34. How effectively can we induce particular emotional states in ourselves? What are good strategies for doing so? Relatedly, how do we manage to train children to feel certain ways in certain situations? (e.g. happy at a wedding, sad at a funeral.)
  35. How would you create a robot that appreciates music in the same way we do? What processes might be required to appreciate music? In particular, how is it that certain chords become associated with certain moods?
  36. How would you give a computer the ability to feel "gut feelings"? How much do gut feelings depend on environmental features (like how darkness conveys a sense of foreboding), and how much on physiological processes (like how making a certain facial expression or noticing a surge of adrenaline contributes to the feeling of fear), and how much on subconscious computations?
  37. What decision-making procedures do we use to choose appropriate representations/domains? Do we try everything in parallel, or do we try what worked last time? Is this search procedure for good representations a methodical process, or more arbitrary and random?
  38. How do we choose which representation to try next, when the current representation turns out to be unproductive? Perhaps we start a debugger to explain why the current representation failed, then try to find a better one, or else we rely on some educated guesswork.
  39. Why do young people tend to acquire information more quickly than older people? Is this a societal tendency, or the result of biological changes?
  40. Is the example of "the professor who couldn't remember which concepts were hard" related to the example of "the child who can walk, but who can't explain how it works"? That is, how do parts of our minds manage to develop complicated algorithms without "us" being conscious of how they work?
  41. How do we utilize our common sense knowledge to generate facts on-demand---for example, that you can sit on a diving board or that classrooms are unlikely to contain space shuttles? Is common sense information stored in our brains in a way that makes it easy to generate sentences like these? Is common sense information stored in a way that resembles sentences like these?
  42. When making an abstraction, it's important to decide which features are important. But how do we decide on a representation in the first place, even before we decide on the features of the representation that are meaningful?
  43. Why does society frown upon "stating the obvious" --- that is, making common sense knowledge explicit? Shouldn't it be enlightening to expose the assumptions and background knowledge we have in common?
  44. How is the 6-layer division of the mind related to the division of the mind into specialized domains of knowledge/representation? For example, are there some domains of knowledge that exist only at one level? Are there representations that span many levels? Or, are subdomains of knowledge just another kind of resource that control structures at any of the six levels might use?
  45. How do our minds internally represent the reliability of various common sense facts? For example, do we use probabilities, or qualitative descriptions (sometimes, rarely, usually, always)?
  46. Although there is no reliable evidence for eidetic memory (photographic recall), recent studies (> 2006) suggest that some individuals have /hyperthymesia/, an eidetic-like memory for "autobiographical" events, possibly the result of time-space synesthesia. What do you think about this, and what mechanisms might explain how hyperthymesia occurs?
  47. How might you design a program that can appropriately answer questions like "What does this remind you of?" or "Have you seen anything like this before?"
  48. Why have high-level/self-reflective difference engines not been studied further? Is there more to learn from difference engines---could they be an active area of research nowadays---or do we basically understand them and their limitations?
  49. Do you think the brain actually uses something very similar to difference engines?
  50. Although brain science may still be too primitive nowadays, how do you imagine it might be able to help AI research in the near future?
  51. What sorts of abilities are children born with? How can we experimentally determine what children can do, if some of their abilities are kept in an internal "prototyping stage" without any sort of behavioral manifestation?
  52. How is the quality of our decisions affected when we use "gut feelings" instead of our high-level, explicit planning, linguistic, and cognitive procedures?
  53. To what extent do we perform mental hygiene: to forget useless information, to clear out bad ideas, maladaptive habits, and unproductive ways to think? It seems like if all our knowledge is tightly interconnected, then overzealous cleaning would break too many things. Maybe we're only able to make a succession of superficial changes.
  54. Concern: Suppose we make a near-human-level intelligent computer. Doesn't it interfere with the computer's autonomy if we provide all of its common sense information, rather than letting it acquire its own opinions?
  55. When utilizing multiple realms of thought, does one realm usually dominate, or can we have several active at once? If we have several active at once, doesn't that severely constrain the resources that each realm can use?
  56. How would you program a computer whose goal is to find regularities in the environment (and how would you prevent it from making generalizations that are too large to be useful)?
  57. I don't understand how logic makes it hard to do reasoning with analogies. What does that mean?
  58. Although evolution has obscured the inner workings of our minds from us, should we design computers that are fully capable of seeing and modifying even the lowest levels of their minds? (Or would that be too dangerous for them? Maybe we should give them a switch to turn on direct introspection after they've learned enough.) Should we make programs that can indirectly modify their behavior the way that we do (e.g. through music or caffeine or imagining a peaceful/frustrating/melancholy/inspiring scenario)?
  59. How do the representations which children use differ from those which adults use?
  60. Do children have different realms of expertise than adults? Perhaps children are specialists in certain skills that adults don't have or don't need.
  61. How do Frames and Difference Engines interact?
  62. To what extent does culture play a role in how general or specific our metaphors are? Is understanding new metaphors a skill that we are taught, or does it mostly rely on skills that we already have in other areas? Are some cultures' metaphors more abstract than others --- or is there some universal consensus on how abstract they generally are?
  63. It seems like Panalogies might sometimes result in duplicated work, as multiple parts of the brain independently try to do the same job simultaneously. How do brains confront this problem?
  64. What are the functions that enable us to acquire and use common sense information?
  65. What do we use our common sense information for, and how could we design programs to perform those functions?
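
Question 12 asks how a difference engine decides which differences to reduce. One classic answer (in the spirit of Newell and Simon's GPS) is to rank differences by significance and reduce the most significant one first, recursing on operator preconditions. A minimal sketch, where states are sets of facts and every fact, operator, and ranking is an invented toy example, not a claim about any real system:

```python
# Means-ends analysis sketch: rank differences, reduce the most
# significant one first, and recurse on operator preconditions.
# All facts and operators below are hypothetical toy examples.

def differences(state, goal):
    """Facts the goal requires that the current state lacks."""
    return goal - state

def solve(state, goal, operators, ranking, plan=None, depth=0):
    """Return a list of operator names reaching goal, or None."""
    plan = plan or []
    if depth > 10:
        return None                       # give up on deep regressions
    diffs = differences(state, goal)
    if not diffs:
        return plan
    # Question 12's choice point: reduce the highest-ranked difference.
    target = min(diffs, key=lambda d: ranking.index(d)
                 if d in ranking else len(ranking))
    for op in operators:
        if target in op["adds"]:
            # First achieve the operator's preconditions (sub-goals).
            sub = solve(state, state | op["needs"], operators, ranking,
                        plan, depth + 1)
            if sub is not None:
                new_state = (state | op["needs"] | op["adds"]) - op["dels"]
                return solve(new_state, goal, operators, ranking,
                             sub + [op["name"]], depth + 1)
    return None

ranking = ["at_home", "have_keys"]        # most significant first
operators = [
    {"name": "walk_home", "needs": frozenset({"have_keys"}),
     "adds": frozenset({"at_home"}), "dels": frozenset()},
    {"name": "grab_keys", "needs": frozenset(),
     "adds": frozenset({"have_keys"}), "dels": frozenset()},
]
plan = solve(frozenset(), frozenset({"at_home"}), operators, ranking)
print(plan)                               # ['grab_keys', 'walk_home']
```

The fixed `ranking` list is the crudest possible answer to "which difference first?"; the open question is what learned or context-sensitive ordering should replace it.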
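
Question 21 contrasts plan-then-execute systems with plans that are amended during execution. One standard way to get the latter is a sense-act-replan loop: plan from what is currently known, execute step by step, and replan the moment the world contradicts an assumption. A sketch under invented assumptions (a 4x4 grid with walls the agent discovers only on contact):

```python
from collections import deque

# Interleaved planning and execution: plan with current knowledge,
# act, and replan when the world diverges from the plan's assumptions.
# The grid and hidden walls are hypothetical stand-ins for real
# exigencies; they are not part of any published algorithm.

GRID = 4
TRUE_WALLS = {(3, 1), (2, 1)}             # unknown to the agent at first

def neighbors(cell):
    x, y = cell
    for nx, ny in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
        if 0 <= nx < GRID and 0 <= ny < GRID:
            yield (nx, ny)

def plan(start, goal, known_walls):
    """Breadth-first path using only walls discovered so far."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell != start:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for nxt in neighbors(cell):
            if nxt not in parent and nxt not in known_walls:
                parent[nxt] = cell
                frontier.append(nxt)
    return None

def run(start, goal):
    """Execute while replanning; return how many times we replanned."""
    pos, known_walls, replans = start, set(), 0
    path = plan(pos, goal, known_walls)
    while pos != goal:
        step = path.pop(0)
        if step in TRUE_WALLS:            # the world contradicts the plan
            known_walls.add(step)
            path = plan(pos, goal, known_walls)   # amend mid-execution
            if path is None:
                raise RuntimeError("goal unreachable")
            replans += 1
        else:
            pos = step
    return replans

print(run((0, 0), (3, 3)))
```

The robot-finding-its-way-home example in the question fits this shape directly: the "things noticed on the way to the store" become entries in `known_walls`-style memory that the next replanning pass exploits.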
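
Question 41 asks whether facts like "you can sit on a diving board" are stored or generated on demand. A frame system with default inheritance generates them: no frame explicitly says diving boards are sittable; the answer falls out of properties inherited along is-a links. The tiny frame set below is a hypothetical illustration, not a model of how brains store frames:

```python
# On-demand commonsense answers from a toy frame hierarchy.
# Every frame, link, and property here is invented for illustration.

FRAMES = {
    "thing":         {"isa": None, "props": set()},
    "flat-surface":  {"isa": "thing", "props": {"supports-sitting"}},
    "diving-board":  {"isa": "flat-surface", "props": {"springy"}},
    "classroom":     {"isa": "thing", "props": {"indoor", "small-scale"}},
    "space-shuttle": {"isa": "thing", "props": {"huge"}},
}

def has_property(frame, prop):
    """Look up a property locally, then inherit along is-a links."""
    while frame is not None:
        if prop in FRAMES[frame]["props"]:
            return True
        frame = FRAMES[frame]["isa"]
    return False

def plausibly_contains(container, item):
    """A crude size check: huge things don't fit in small-scale places."""
    return not (has_property(container, "small-scale")
                and has_property(item, "huge"))

print(has_property("diving-board", "supports-sitting"))   # True
print(plausibly_contains("classroom", "space-shuttle"))   # False
```

On this picture the knowledge is stored nothing like the output sentences; the sentences are cheap to generate because the hierarchy makes the relevant defaults one pointer-chase away.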
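
Question 45 contrasts numeric probabilities with qualitative labels (sometimes, rarely, usually, always) for encoding a fact's reliability. The two views are interconvertible if each label is grounded in a probability interval; the particular labels and ranges below are arbitrary choices for illustration, not claims about the mind:

```python
# Qualitative reliability labels grounded in probability intervals.
# The label set and the interval boundaries are hypothetical choices.

QUALITATIVE = {                  # label -> (low, high) probability bounds
    "never":     (0.00, 0.05),
    "rarely":    (0.05, 0.25),
    "sometimes": (0.25, 0.75),
    "usually":   (0.75, 0.95),
    "always":    (0.95, 1.00),
}

def label_for(p):
    """Map a numeric reliability to the first fitting label."""
    for label, (lo, hi) in QUALITATIVE.items():
        if lo <= p <= hi:
            return label
    raise ValueError(p)

facts = {"birds fly": 0.85, "fish have legs": 0.02}
print({f: label_for(p) for f, p in facts.items()})
# {'birds fly': 'usually', 'fish have legs': 'never'}
```

The coarse labels lose information but are cheap to store and combine, which is one reason a mind might prefer them; the question is which trade-off our brains actually make.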
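
Question 47 asks how a program could answer "What does this remind you of?" The simplest possible baseline treats memories as feature sets and retrieves the episode with the most shared features; real analogy retrieval (as in Evans's program, mentioned in question 15) matches relational structure, not just features, but the feature-overlap version shows the shape of the problem. The memories below are made up:

```python
# A minimal "what does this remind you of?" answerer: reminding as
# retrieval of the stored episode sharing the most features with the
# cue. The memory contents are hypothetical examples.

memories = {
    "beach trip":   {"sand", "water", "sun", "crowd"},
    "ski vacation": {"snow", "cold", "mountain", "crowd"},
    "sandbox play": {"sand", "shovel", "children"},
}

def reminds_of(cue):
    """Return the memory with the largest feature overlap with the cue."""
    return max(memories, key=lambda m: len(memories[m] & cue))

print(reminds_of({"sand", "sun"}))        # beach trip
```
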
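
Question 56 asks how a program could find regularities without overgeneralizing. One minimal guard, borrowed from association-rule mining, is to propose "if A then B" rules from co-occurrence but keep only those with enough supporting examples and a high enough hit rate. The observations and thresholds below are invented for illustration:

```python
from collections import Counter

# Regularity induction with an overgeneralization guard: keep a rule
# "if A then B" only if it has enough support (examples) and enough
# confidence (hit rate). The data and thresholds are made up.

observations = [
    {"bird", "flies"}, {"bird", "flies"}, {"bird", "flies"},
    {"bird", "penguin"},            # a bird that doesn't fly
    {"plane", "flies"},
]

def induce(observations, min_support=2, min_confidence=0.7):
    antecedent, joint = Counter(), Counter()
    for obs in observations:
        for a in obs:
            antecedent[a] += 1
            for b in obs:
                if a != b:
                    joint[(a, b)] += 1
    rules = {}
    for (a, b), n in joint.items():
        conf = n / antecedent[a]
        # The thresholds are the guard: a rule seen only once, or one
        # that fails too often, is discarded as an overgeneralization.
        if n >= min_support and conf >= min_confidence:
            rules[(a, b)] = round(conf, 2)
    return rules

print(induce(observations))
```

On this data the penguin example keeps "bird implies flies" at 0.75 confidence rather than 1.0, and one-off pairings like (bird, penguin) never become rules; raising `min_support` makes the learner more conservative, which is exactly the dial the question asks about.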