Monday, April 20, 2009

What's in the circle of influence?

A long time ago, I read some advice from Stephen Covey about applying one's effort within one's Circle of Influence rather than fretting over the wider Circle of Concern, which tends to expand the Circle of Influence over time. It's useful advice to remind oneself of from time to time.

What are some obvious things within most people's circle of influence?

1. Knowledge acquisition (useful for careers and other things...with a normal human brain, if you study, you will learn...nowadays, an internet connection is sufficient for fairly good instruction as well: MIT OpenCourseWare classes, Wikipedia, Less Wrong, etc.)

2. Willingness to think/brainstorm/model/track one's success...the only limit is time, but in the long term one can free up some time by doing modeling and figuring out how to waste less of it.

Monday, April 13, 2009

a good physics book

I really enjoyed the physics popularization I read this weekend, Physics of the Impossible by Michio Kaku.

why induction

my thoughts on the justification for induction




Robert Zahra

Status Signaling and Optimization

Here, Robin Hanson argues that society is too oriented towards maximizing status. He also provides a simple mathematical example of signaling resulting in a net loss.

Observations:

1. Signaling gets its power from what we choose to value and admire. Should we therefore make a conscious effort to admire useful activities above less useful ones, and to publicize our doing so? I think so.

2. This observation seems to derive in part from a more general rule. If we are optimizing on utility function U2 (here, status), which is not identical to U1 (here, actual happiness of society), what can we say about the relationship between U2 and U1, where U2 is meant to approximate U1?

(a) Potentially dangerous if taken too literally: For example, if U2 assigns a huge positive value to a small part of the domain where U1 is small, then you could wind up choosing a vastly different option than you otherwise would. (For example, you want to be an altruistic person (U1), so you optimize for feeling a warm glow inside (U2), and then you discover that ingesting a certain drug would, with 10% probability, give you a massively strong warm feeling, which would not in fact help U1 much at all.)

(b) How to measure similarity of utility functions: point (a) suggests you should be careful how you measure the similarity of U2 and U1, before you agree to take U2 as a proxy for U1.

(I) Focus on implied actions: In general, you can only be hurt in cases where U2 would make you select a different action than U1 would.
(II) Invariances: Some transformations have little or no effect on what we care about, which is the expected value of optimizing the function (for example, adding a constant to a utility function or multiplying it by a positive scalar is known to have no impact on which action is chosen to maximize one's pay-off).
(III) More detailed proposals: see this paper here.
(IV) Why monotonicity won't work in general: We'd like to say that if U2 strictly dominates U2', i.e., if U2 is closer to U1 in its evaluation of every outcome than U2' is, then we know we'll do better on U1 by using U2 than by using U2'. This looks untrue to me. Consider for example three actions A1, A2, A3, with these pay-offs:

U1(A1) = 1
U1(A2) = 2
U1(A3) = 2.1

U2(A1) = 1
U2(A2) = 2.001
U2(A3) = 2

U2'(A1) = 5
U2'(A2) = 10
U2'(A3) = 15

Imagine U2 and U2' are two different people's beliefs about the dollar amounts likely to come from playing strategies A1, A2, and A3. U2 is far more reasonable than U2' as a model of the outcomes, yet optimizing U2' results in a higher actual pay-off than optimizing U2. This result is similar to a no-free-lunch result.
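
To make the point concrete, here's a quick Python sketch (my own illustration, not from Hanson's post or the payoff tables above beyond the numbers themselves) that optimizes each proxy and reports the true U1 pay-off:

```python
# My own illustration: a proxy (U2) that tracks U1 closely everywhere can
# still lead to a worse true outcome than a proxy (U2') that is wildly off.

U1  = {"A1": 1.0, "A2": 2.0,   "A3": 2.1}   # true utility
U2  = {"A1": 1.0, "A2": 2.001, "A3": 2.0}   # close to U1 on every action
U2p = {"A1": 5.0, "A2": 10.0,  "A3": 15.0}  # far from U1 on every action

def chosen_action(proxy):
    """The action an agent picks when it optimizes the proxy utility."""
    return max(proxy, key=proxy.get)

for name, proxy in [("U2", U2), ("U2'", U2p)]:
    a = chosen_action(proxy)
    print(f"optimizing {name}: picks {a}, true pay-off U1 = {U1[a]}")

# optimizing U2:  picks A2, true pay-off U1 = 2.0
# optimizing U2': picks A3, true pay-off U1 = 2.1
```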

3. Despite the negative result in 2(b)(IV), our partial knowledge still proves useful in some way (as evidence, witness the fact that our intelligence, although limited, did manage to evolve). Presumably there's more symmetry in the actual world, which makes progressively more accurate beliefs more useful, at least within a certain range. Further delineating this range would be useful, so we know where to focus our efforts.

Saturday, April 11, 2009

Dieting tactics

There are multiple strategies in the fight against akrasia. One well-known meta-strategy is the one used by Ulysses against the Sirens, namely, controlling one's future environment so that one's future self is unable to compromise the present self's goals. If we can make it easy on the future self, perhaps this is even more effective. That would be more akin to Ulysses putting beeswax in his comrades' ears than to having himself tied to the mast, fully aware and tormented by the Sirens' song.

Some foods which plausibly decrease appetite (further discussion here):

1. Pine nuts
2. Green tea (caffeine free at night)
3. Olive oil, separated from meals by at least one hour on both sides (see the Shangri-La diet...tried this today and I felt very sick until I ate food one hour later; will re-try with a different olive oil)
4. Apples
5. Nicotine (possibly some downsides, especially to the gums...I'm undecided)

Robert Zahra

Friday, April 10, 2009

Overcoming Bias Summaries

In the interest of chunking more deeply, I'm going to take these Yudkowsky posts from Overcoming Bias and try to write a one-sentence summary of each. It would be useful if someone did the same for the other posters, probably starting with Robin Hanson.

1. The Martial Art of Rationality: Rationality is a technique to be trained.

2. Why truth? And...: Truth can be instrumentally useful and intrinsically satisfying.

3. ...What's a bias, again?: Biases are obstacles to truth seeking caused by one's own mental machinery.

4. The Proper Use of Humility: Use humility to justify further action, not as an excuse for laziness and ignorance.

5. The Modesty Argument: Factor in what other people think, but not symmetrically, if they are not epistemic peers.

6. I don't know.: You can pragmatically say "I don't know", but you rationally should have a probability distribution.

7. A Fable of Science and Politics: People respond in different ways to clear evidence they're wrong, not always by updating and moving on.

8. Some Claims Are Just Too Extraordinary: Certain repeated science experiments imply bayesian priors so extreme that you should believe scientific consensus above evidence from your own eyes, when they conflict.


9. Outside the Laboratory: Those who understand the map/territory distinction will *integrate* their knowledge, as they see the evidence that reality is a single unified process.

10. Politics is the Mind-Killer: Beware in your discussions that for clear evolutionary reasons, people have great difficulty being rational about current political issues.


11. Just Lose Hope Already: Admit when the evidence goes against you, else things can get a whole lot worse.


12. You Are Not Hiring the Top 1%: Interviewees are a biased sample of the talent pool, skewed toward those who are not successful or happy in their current jobs.


13. Policy Debates Should Not Appear One-Sided: Debates over outcomes with multiple effects will have arguments both for and against, so you must integrate the evidence, not expect the issue to be completely one-sided.


14. Burch's Law: Just because your ethics require an action doesn't mean the universe will exempt you from the consequences.


15. The Scales of Justice, the Notebook of Rationality: In non-binary answer spaces, you can't add up pro and con arguments along one dimension without risk of getting important factual questions wrong.

16. Blue or Green on Regulation?: Both sides are often right in describing the terrible things that will happen if we take the other side's advice; the universe is "unfair", terrible things are going to happen regardless of what we do, and it's our job to trade off for the least bad outcome.

17. Superstimuli and the Collapse of Western Civilization: As a side effect of evolution, super-stimuli exist, and as a result of economics, are getting and should continue to get worse.

18. Useless Medical Disclaimers: Medical disclaimers without probabilities are hard to use, and if the probabilities aren't there because some people can't handle having them there, maybe we ought to tax those people.

19. Archimedes's Chronophone: Consider the thought experiment where you communicate general thinking patterns which will lead to right answers, as opposed to pre-hashed content...

20. Chronophone Motivations: If you want to really benefit humanity, do some original thinking, especially about areas of application, and directions of effort.


21. Self-deception: Hypocrisy or Akrasia?: If part of a person--for example, the verbal module--says it wants to become more rational, we can ally with that part even when weakness of will makes the person's actions otherwise; hypocrisy need not be assumed.

22. Tsuyoku Naritai! (I Want To Become Stronger): Do not glory in your weakness; instead, aspire to become stronger, and study your flaws so as to remove them.

23. Tsuyoku vs. the Egalitarian Instinct: There may be evolutionary psychology factors that encourage modesty and mediocrity, at least in appearance; while some of that may still apply today, you should mentally plan and strive to pull ahead, if you are doing things right.

24. Statistical Bias: There are two types of error, systematic error and random variance error; by repeating experiments you can average out and drive down the variance error.

25. Useful Statistical Biases: If you know variance (noise) exists, you can intentionally introduce bias by ignoring some squiggles and choosing a simpler hypothesis, thereby lowering expected variance while raising expected bias; sometimes total error is lower, hence the "bias-variance tradeoff" technique.

26. The Error of Crowds: Variance decomposition does not imply majoritarian-ish results; this is an artifact of minimizing *square* error, and drops out using square root error when bias is larger than variance; how and why to factor in evidence requires more assumptions, as per Aumann agreement.

27. The Majority Is Always Wrong: Anything strictly worse than the majority opinion tends to get selected out, so the majority opinion is often the worst of the surviving alternatives, i.e., strictly superior to none of them.

28. Knowing About Biases Can Hurt People:

29. Debiasing as Non-Self-Destruction:

30. Inductive Bias:

31. Suggested Posts:

32. Futuristic Predictions as Consumable Goods:

33. Marginally Zero-Sum Efforts:

34. Priors as Mathematical Objects:

35. Lotteries:

36. New Improved Lottery:

37. Your Rationality is My Business:

38. Consolidated Nature of Morality Thread:

39. Feeling Rational:

40. Universal Fire:

41. Universal Law:

42. Think Like Reality:

43. Beware the Unsurprised:

44. The Third Alternative:

45. Third Alternatives for Afterlife-ism:

46. Scope Insensitivity:

47. One Life Against the World:

48. Risk-Free Bonds Aren't:

49. Correspondence Bias:

50. Are Your Enemies Innately Evil?:

51. Open Thread:

52. Two More Things to Unlearn from School:

53. Making Beliefs Pay Rent (in Anticipated Experiences):

54. Belief in Belief:

55. Bayesian Judo:

56. Professing and Cheering:

57. Belief as Attire:

58. Religion's Claim to be Non-Disprovable:

59. The Importance of Saying "Oops":

60. Focus Your Uncertainty:

61. The Proper Use of Doubt:

62. The Virtue of Narrowness:

63. You Can Face Reality:

64. The Apocalypse Bet:

65. Your Strength as a Rationalist:

66. I Defy the Data!:

67. Absence of Evidence Is Evidence of Absence:

68. Conservation of Expected Evidence:

69. Update Yourself Incrementally:

70. One Argument Against An Army:

71. Hindsight bias:

72. Hindsight Devalues Science:

73. Scientific Evidence, Legal Evidence, Rational Evidence:

74. Is Molecular Nanotechnology "Scientific"?:

75. Fake Explanations:

76. Guessing the Teacher's Password:

77. Science as Attire:

78. Fake Causality:

79. Semantic Stopsigns:

80. Mysterious Answers to Mysterious Questions:

81. The Futility of Emergence:

82. Positive Bias:

83. Say Not "Complexity":

84. My Wild and Reckless Youth:

85. Failing to Learn from History:

86. Making History Available:

87. Stranger Than History:

88. Explain/Worship/Ignore?:

89. Science as Curiosity-Stopper:

90. Absurdity Heuristic, Absurdity Bias:

91. Availability:

92. Why is the Future So Absurd?:

93. Anchoring and Adjustment:

94. The Crackpot Offer:

95. Radical Honesty:

96. We Don't Really Want Your Participation:

97. Applause Lights:

98. Rationality and the English Language:

99. Human Evil and Muddled Thinking:

100. Doublethink (Choosing to be Biased):

101. Why I'm Blooking:

102. Planning Fallacy:

103. Kahneman's Planning Anecdote:

104. Conjunction Fallacy:

105. Conjunction Controversy (Or, How They Nail It Down):

106. Burdensome Details:

107. What is Evidence?:

108. The Lens That Sees Its Flaws:

109. How Much Evidence Does It Take?:

110. Einstein's Arrogance:

111. Occam's Razor:

112. 9/26 is Petrov Day:

113. How to Convince Me That 2 + 2 = 3:

114. The Bottom Line:

115. What Evidence Filtered Evidence?:

116. Rationalization:

117. Recommended Rationalist Reading:

118. A Rational Argument:

119. We Change Our Minds Less Often Than We Think:

120. Avoiding Your Belief's Real Weak Points:

121. The Meditation on Curiosity:

122. Singlethink:

123. No One Can Exempt You From Rationality's Laws:

124. A Priori:

125. Priming and Contamination:

126. Do We Believe Everything We're Told?:

127. Cached Thoughts:

128. The "Outside the Box" Box:

129. Original Seeing:

130. How to Seem (and Be) Deep:

131. The Logical Fallacy of Generalization from Fictional Evidence:

132. Hold Off On Proposing Solutions:

133. Can't Say No Spending:

134. Congratulations to Paris Hilton:

135. Pascal's Mugging:

136. Illusion of Transparency:

137. Self-Anchoring:

138. Expecting Short Inferential Distances:

139. Explainers Shoot High. Aim Low!:

140. Double Illusion of Transparency:

141. No One Knows What Science Doesn't Know:

142. Why Are Individual IQ Differences OK?:

143. Bay Area Bayesians Unite!:

144. Motivated Stopping and Motivated Continuation:

145. Torture vs. Dust Specks:

146. A Case Study of Motivated Continuation:

147. A Terrifying Halloween Costume:

148. Fake Justification:

149. An Alien God:

150. The Wonder of Evolution:

151. Evolutions Are Stupid (But Work Anyway):

152. Natural Selection's Speed Limit and Complexity Bound:

153. Beware of Stephen J. Gould:

154. The Tragedy of Group Selectionism:

155. Fake Selfishness:

156. Fake Morality:

157. Fake Optimization Criteria:

158. Adaptation-Executers, not Fitness-Maximizers:

159. Evolutionary Psychology:

160. Protein Reinforcement and DNA Consequentialism:

161. Thou Art Godshatter:

162. Terminal Values and Instrumental Values:

163. Evolving to Extinction:

164. No Evolutions for Corporations or Nanodevices:

165. The Simple Math of Everything:

166. Conjuring An Evolution To Serve You:

167. Artificial Addition:

168. Truly Part Of You:

169. Not for the Sake of Happiness (Alone):

170. Leaky Generalizations:

171. The Hidden Complexity of Wishes:

172. Lost Purposes:

173. Purpose and Pragmatism:

174. The Affect Heuristic:

175. Evaluability (And Cheap Holiday Shopping):

176. Unbounded Scales, Huge Jury Awards, & Futurism:

177. The Halo Effect:

178. Superhero Bias:

179. Mere Messiahs:

180. Affective Death Spirals:

181. Resist the Happy Death Spiral:

182. Uncritical Supercriticality:

183. Fake Fake Utility Functions:

184. Fake Utility Functions:

185. Evaporative Cooling of Group Beliefs:

186. When None Dare Urge Restraint:

187. The Robbers Cave Experiment:

188. Misc Meta:

189. Every Cause Wants To Be A Cult:

190. Reversed Stupidity Is Not Intelligence:

191. Argument Screens Off Authority:

192. Hug the Query:

193. Guardians of the Truth:

194. Guardians of the Gene Pool:

195. Guardians of Ayn Rand:

196. The Litany Against Gurus:

197. Politics and Awful Art:

198. Two Cult Koans:

199. False Laughter:

200. Effortless Technique:

201. Zen and the Art of Rationality:

202. The Amazing Virgin Pregnancy:

203. Asch's Conformity Experiment:

204. On Expressing Your Concerns:

205. Lonely Dissent:

206. To Lead, You Must Stand Up:

207. Cultish Countercultishness:

208. My Strange Beliefs:

209. Posting on Politics:

210. The Two-Party Swindle:

211. The American System and Misleading Labels:

212. Stop Voting For Nincompoops:

213. Rational vs. Scientific Ev-Psych:

214. A Failed Just-So Story:

215. But There's Still A Chance, Right?:

216. The Fallacy of Gray:

217. Absolute Authority:

218. Infinite Certainty:

219. 0 And 1 Are Not Probabilities:

220. Beautiful Math:

221. Expecting Beauty:

222. Is Reality Ugly?:

223. Beautiful Probability:

224. Trust in Math:

225. Rationality Quotes 1:

226. Rationality Quotes 2:

227. Rationality Quotes 3:

228. The Allais Paradox:

229. Zut Allais!:

230. Rationality Quotes 4:

231. Allais Malaise:

232. Against Discount Rates:

233. Circular Altruism:

234. Rationality Quotes 5:

235. Rationality Quotes 6:

236. Rationality Quotes 7:

237. Rationality Quotes 8:

238. Rationality Quotes 9:

239. The "Intuitions" Behind "Utilitarianism":

240. Trust in Bayes:

241. Something to Protect:

242. Newcomb's Problem and Regret of Rationality:

243. OB Meetup:

244. The Parable of the Dagger:

245. The Parable of Hemlock:

246. Words as Hidden Inferences:

247. Extensions and Intensions:

248. Buy Now Or Forever Hold Your Peace:

249. Similarity Clusters:

250. Typicality and Asymmetrical Similarity:

251. The Cluster Structure of Thingspace:

252. Disguised Queries:

253. Neural Categories:

254. How An Algorithm Feels From Inside:

255. Disputing Definitions:

256. Feel the Meaning:

257. The Argument from Common Usage:

258. Empty Labels:

259. Classic Sichuan in Millbrae, Thu Feb 21, 7pm:

260. Taboo Your Words:

261. Replace the Symbol with the Substance:

262. Fallacies of Compression:

263. Categorizing Has Consequences:

264. Sneaking in Connotations:

265. Arguing "By Definition":

266. Where to Draw the Boundary?:

267. Entropy, and Short Codes:

268. Mutual Information, and Density in Thingspace:

269. Superexponential Conceptspace, and Simple Words:

270. Leave a Line of Retreat:

271. The Second Law of Thermodynamics, and Engines of Cognition:

272. Perpetual Motion Beliefs:

273. Searching for Bayes-Structure:

274. Conditional Independence, and Naive Bayes:

275. Words as Mental Paintbrush Handles:

276. Rationality Quotes 10:

277. Rationality Quotes 11:

278. Variable Question Fallacies:

279. 37 Ways That Words Can Be Wrong:

280. Gary Gygax Annihilated at 69:

281. Dissolving the Question:

282. Wrong Questions:

283. Righting a Wrong Question:

284. Mind Projection Fallacy:

285. Probability is in the Mind:

286. The Quotation is not the Referent:

287. Penguicon & Blook:

288. Qualitatively Confused:

289. Reductionism:

290. Explaining vs. Explaining Away:

291. Fake Reductionism:

292. Savanna Poets:

293. Joy in the Merely Real:

294. Joy in Discovery:

295. Bind Yourself to Reality:

296. If You Demand Magic, Magic Won't Help:

297. New York OB Meetup (ad-hoc) on Monday, Mar 24, @6pm:

298. The Beauty of Settled Science:

299. Amazing Breakthrough Day:

300. Is Humanism A Religion-Substitute?:

301. Scarcity:

302. To Spread Science, Keep It Secret:

303. Initiation Ceremony:

304. Hand vs. Fingers:

305. Angry Atoms:

306. Heat vs. Motion:

307. Brain Breakthrough! It's Made of Neurons!:

308. Reductive Reference:

309. Zombies! Zombies?:

310. Zombie Responses:

311. The Generalized Anti-Zombie Principle:

312. GAZP vs. GLUT:

313. Belief in the Implied Invisible:

314. Quantum Explanations:

315. Configurations and Amplitude:

316. Joint Configurations:

317. Distinct Configurations:

318. Where Philosophy Meets Science:

319. Can You Prove Two Particles Are Identical?:

320. Classical Configuration Spaces:

321. The Quantum Arena:

322. Feynman Paths:

323. No Individual Particles:

324. Identity Isn't In Specific Atoms:

325. Zombies:

326. Three Dialogues on Identity:

327. Decoherence:

328. The So-Called Heisenberg Uncertainty Principle:

329. Which Basis Is More Fundamental?:

330. Where Physics Meets Experience:

331. Where Experience Confuses Physicists:

332. On Being Decoherent:

333. The Conscious Sorites Paradox:

334. Decoherence is Pointless:

335. Decoherent Essences:

336. The Born Probabilities:

337. Decoherence as Projection:

338. Entangled Photons:

339. Bell's Theorem:

340. Spooky Action at a Distance:

341. Decoherence is Simple:

342. Decoherence is Falsifiable and Testable:

343. Quantum Non-Realism:

344. Collapse Postulates:

345. If Many-Worlds Had Come First:

346. Many Worlds, One Best Guess:

347. The Failures of Eld Science:

348. The Dilemma:

349. Science Doesn't Trust Your Rationality:

350. When Science Can't Help:

351. Science Isn't Strict Enough:

352. Do Scientists Already Know This Stuff?:

353. No Safe Defense, Not Even Science:

354. Changing the Definition of Science:

355. Conference on Global Catastrophic Risks:

356. Faster Than Science:

357. Einstein's Speed:

358. That Alien Message:

359. My Childhood Role Model:

360. Mach's Principle:

361. A Broken Koan:

362. Relative Configuration Space:

363. Timeless Physics:

364. Timeless Beauty:

365. Timeless Causality:

366. Einstein's Superpowers:

367. Class Project:

368. A Premature Word on AI:

369. The Rhythm of Disagreement:

370. Principles of Disagreement:

371. Timeless Identity:

372. Why Quantum?:

373. Living in Many Worlds:

374. Thou Art Physics:

375. Timeless Control:

376. Bloggingheads:

377. Against Devil's Advocacy:

378. Eliezer's Post Dependencies; Book Notification; Graphic Designer Wanted:

379. The Quantum Physics Sequence:

380. An Intuitive Explanation of Quantum Mechanics:

381. Quantum Physics Revealed As Non-Mysterious:

382. And the Winner is... Many-Worlds!:

383. Quantum Mechanics and Personal Identity:

384. Causality and Moral Responsibility:

385. Possibility and Could-ness:

386. The Ultimate Source:

387. Passing the Recursive Buck:

388. Grasping Slippery Things:

389. Ghosts in the Machine:

390. LA-602 vs. RHIC Review:

391. Heading Toward Morality:

392. The Outside View's Domain:

393. Surface Analogies and Deep Causes:

394. Optimization and the Singularity:

395. The Psychological Unity of Humankind:

396. The Design Space of Minds-In-General:

397. No Universally Compelling Arguments:

398. 2-Place and 1-Place Words:

399. The Opposite Sex:

400. What Would You Do Without Morality?:

401. The Moral Void:

402. Created Already In Motion:

403. I'd take it:

404. The Bedrock of Fairness:

405. 2 of 10, not 3 total:

406. Moral Complexities:

407. Is Morality Preference?:

408. Is Morality Given?:

409. Will As Thou Wilt:

410. Where Recursive Justification Hits Bottom:

411. The Fear of Common Knowledge:

412. My Kind of Reflection:

413. The Genetic Fallacy:

414. Fundamental Doubts:

415. Rebelling Within Nature:

416. Probability is Subjectively Objective:

417. Lawrence Watt-Evans's Fiction:

418. Posting May Slow:

419. Whither Moral Progress?:

420. The Gift We Give To Tomorrow:

421. Could Anything Be Right?:

422. Existential Angst Factory:

423. Touching the Old:

424. Should We Ban Physics?:

425. Fake Norms, or "Truth" vs. Truth:

426. When (Not) To Use Probabilities:

427. Can Counterfactuals Be True?:

428. Math is Subjunctively Objective:

429. Does Your Morality Care What You Think?:

430. Changing Your Metaethics:

431. Setting Up Metaethics:

432. The Meaning of Right:

433. Interpersonal Morality:

434. Humans in Funny Suits:

435. Detached Lever Fallacy:

436. A Genius for Destruction:

437. The Comedy of Behaviorism:

438. No Logical Positivist I:

439. Anthropomorphic Optimism:

440. Contaminated by Optimism:

441. Hiroshima Day:

442. Morality as Fixed Computation:

443. Inseparably Right; or, Joy in the Merely Good:

444. Sorting Pebbles Into Correct Heaps:

445. Moral Error and Moral Disagreement:

446. Abstracted Idealized Dynamics:

447. Arbitrary:

448. Is Fairness Arbitrary?:

449. The Bedrock of Morality:

450. Hot Air Doesn't Disagree:

451. When Anthropomorphism Became Stupid:

452. The Cartoon Guide to Löb's Theorem:

453. Dumb Deplaning:

454. You Provably Can't Trust Yourself:

455. No License To Be Human:

456. Invisible Frameworks:

457. Mirrors and Paintings:

458. Unnatural Categories:

459. Magical Categories:

460. Three Fallacies of Teleology:

461. Dreams of AI Design:

462. Against Modal Logics:

463. Harder Choices Matter Less:

464. Qualitative Strategies of Friendliness:

465. Dreams of Friendliness:

466. Brief Break:

467. Rationality Quotes 12:

468. Rationality Quotes 13:

469. The True Prisoner's Dilemma:

470. The Truly Iterated Prisoner's Dilemma:

471. Rationality Quotes 14:

472. Rationality Quotes 15:

473. Rationality Quotes 16:

474. Singularity Summit 2008:

475. Points of Departure:

476. Rationality Quotes 17:

477. Excluding the Supernatural:

478. Psychic Powers:

479. Optimization:

480. My Childhood Death Spiral:

481. My Best and Worst Mistake:

482. Raised in Technophilia:

483. A Prodigy of Refutation:

484. The Sheer Folly of Callow Youth:

485. Say It Loud:

486. Ban the Bear:

487. How Many LHC Failures Is Too Many?:

488. Horrible LHC Inconsistency:

489. That Tiny Note of Discord:

490. Fighting a Rearguard Action Against the Truth:

491. My Naturalistic Awakening:

492. The Level Above Mine:

493. Competent Elites:

494. Above-Average AI Scientists:

495. Friedman's "Prediction vs. Explanation":

496. The Magnitude of His Own Folly:

497. Awww, a Zebra:

498. Intrade and the Dow Drop:

499. Trying to Try:

500. Use the Try Harder, Luke:

501. Rationality Quotes 18:

502. Beyond the Reach of God:

503. My Bayesian Enlightenment:

504. Bay Area Meetup for Singularity Summit:

505. On Doing the Impossible:

506. Make an Extraordinary Effort:

507. Shut up and do the impossible!:

508. AIs and Gatekeepers Unite!:

509. Crisis of Faith:

510. The Ritual:

511. Rationality Quotes 19:

512. Why Does Power Corrupt?:

513. Ends Don't Justify Means (Among Humans):

514. Entangled Truths, Contagious Lies:

515. Traditional Capitalist Values:

516. Dark Side Epistemology:

517. Protected From Myself:

518. Ethical Inhibitions:

519. Ethical Injunctions:

520. Prices or Bindings?:

521. Ethics Notes:

522. Which Parts Are "Me"?:

523. Inner Goodness:

524. San Jose Meetup, Sat 10/25 @ 7:

525. Expected Creative Surprises:

526. Belief in Intelligence:

527. Aiming at the Target:

528. Measuring Optimization Power:

529. Efficient Cross-Domain Optimization:

530. Economic Definition of Intelligence?:

531. Intelligence in Economics:

532. Mundane Magic:

533. BHTV:

534. Building Something Smarter:

535. Complexity and Intelligence:

536. Today's Inspirational Tale:

537. Hanging Out My Speaker's Shingle:

538. Back Up and Ask Whether, Not Why:

539. Recognizing Intelligence:

540. Lawful Creativity:

541. Ask OB:

542. Lawful Uncertainty:

543. Worse Than Random:

544. The Weighted Majority Algorithm:

545. Bay Area Meetup:

546. Selling Nonapples:

547. The Nature of Logic:

548. Boston-area Meetup:

549. Logical or Connectionist AI?:

550. Whither OB?:

551. Failure By Analogy:

552. Failure By Affective Analogy:

553. The Weak Inside View:

554. The First World Takeover:

555. Whence Your Abstractions?:

556. Observing Optimization:

557. Life's Story Continues:

558. Surprised by Brains:

559. Cascades, Cycles, Insight...:

560. ...Recursion, Magic:

561. The Complete Idiot's Guide to Ad Hominem:

562. Engelbart:

563. Total Nano Domination:

564. Thanksgiving Prayer:

565. Chaotic Inversion:

566. Singletons Rule OK:

567. Disappointment in the Future:

568. Recursive Self-Improvement:

569. Hard Takeoff:

570. Permitted Possibilities, & Locality:

571. Underconstrained Abstractions:

572. Sustained Strong Recursion:

573. Is That Your True Rejection?:

574. Artificial Mysterious Intelligence:

575. True Sources of Disagreement:

576. Disjunctions, Antipredictions, Etc.:

577. Bay Area Meetup Wed 12/10 @8pm:

578. The Mechanics of Disagreement:

579. What I Think, If Not Why:

580. You Only Live Twice:

581. BHTV:

582. For The People Who Are Still Alive:

583. Not Taking Over the World:

584. Visualizing Eutopia:

585. Prolegomena to a Theory of Fun:

586. High Challenge:

587. Complex Novelty:

588. Sensual Experience:

589. Living By Your Own Strength:

590. Rationality Quotes 20:

591. Imaginary Positions:

592. Harmful Options:

593. Devil's Offers:

594. Nonperson Predicates:

595. Nonsentient Optimizers:

596. Nonsentient Bloggers:

597. Can't Unbirth a Child:

598. Amputation of Destiny:

599. Dunbar's Function:

600. A New Day:

601. Free to Optimize:

602. The Uses of Fun (Theory):

603. Growing Up is Hard:

604. Changing Emotions:

605. Rationality Quotes 21:

606. Emotional Involvement:

607. Rationality Quotes 22:

608. Serious Stories:

609. Rationality Quotes 23:

610. Continuous Improvement:

611. Eutopia is Scary:

612. Building Weirdtopia:

613. She has joined the Conspiracy:

614. Justified Expectation of Pleasant Surprises:

615. Seduced by Imagination:

616. Getting Nearer:

617. In Praise of Boredom:

618. Sympathetic Minds:

619. Interpersonal Entanglement:

620. Failed Utopia #4-2:

621. Investing for the Long Slump:

622. Higher Purpose:

623. Rationality Quotes 24:

624. The Fun Theory Sequence:

625. BHTV:

626. 31 Laws of Fun:

627. OB Status Update:

628. Rationality Quotes 25:

629. Value is Fragile:

630. Three Worlds Collide (0/8):

631. The Baby-Eating Aliens (1/8):

632. War and/or Peace (2/8):

633. The Super Happy People (3/8):

634. Interlude with the Confessor (4/8):

635. Three Worlds Decide (5/8):

636. Normal Ending:

637. True Ending:

638. Epilogue:

639. The Thing That I Protect:

640. ...And Say No More Of It:

641. (Moral) Truth in Fiction?:

642. Informers and Persuaders:

643. Cynicism in Ev-Psych (and Econ?):

644. The Evolutionary-Cognitive Boundary:

645. An Especially Elegant Evpsych Experiment:

646. Rationality Quotes 26:

647. An African Folktale:

648. Cynical About Cynicism:

649. Good Idealistic Books are Rare:

650. Against Maturity:

651. Pretending to be Wise:

652. Wise Pretensions v.0:

653. Rationality Quotes 27:

654. Fairness vs. Goodness:

655. On Not Having an Advance Abyssal Plan:

656. Formative Youth:

657. Markets are Anti-Inductive:

658. Tell Your Rationalist Origin Story... at Less Wrong:

659. The Most Important Thing You Learned:

660. The Most Frequently Useful Thing:

661. Posting now enabled on Less Wrong:

Interesting New Observation Selection Effect

Nick Bostrom's work on observation selection effects leads to his amusing explanation of why cars really do seem to go faster in the other lane. The reason, simply, is that you are more likely to find yourself in the slower, more crowded lane, because more drivers are in that lane!
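
Here's a quick simulation sketch of the lane effect (my own illustration, not Bostrom's; the 80/20 congestion split is an assumption):

```python
# A randomly sampled driver is more likely to sit in the crowded (slower)
# lane, so the "typical" observer sees the other lane moving faster.
import random

random.seed(0)
slow_lane_cars, fast_lane_cars = 80, 20   # assumed congestion levels
trials = 100_000

in_slow_lane = 0
for _ in range(trials):
    # Pick a driver uniformly at random from all cars on the road.
    car = random.randrange(slow_lane_cars + fast_lane_cars)
    if car < slow_lane_cars:
        in_slow_lane += 1

print(f"P(random driver is in the slower lane) ~ {in_slow_lane / trials:.2f}")
# ~ 0.80: most observers are in the lane that is moving slower.
```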

A similar observation was pointed out by Eliezer Yudkowsky about debates and their frustrating tendency not to converge quickly to agreement. Longer debates where agreement is not quickly reached are more likely to be observed simply because they take longer, potentially giving one the impression that people disagree more than they do.

Thursday, April 9, 2009

In favor of self-modelling

Occasionally people critique a field for becoming too aware of itself, but I'm currently fairly optimistic about the prospects for self-modelling, especially in realms of personal application, for reasons I'll explain shortly.

I've been observing a lot of self-reference recently, for example:

1. The Less Wrong community, with various posts pro and con on whether the stated goal of pursuing "extreme" rationality will actually help people achieve their goals.

2. Douglas Hofstadter's speculative linking of strange loops with consciousness. While we're on the subject, I currently think strange loops are likely relevant to consciousness in the sense of how neural networks come to be able to model themselves free of paradox, but I don't think this necessarily says much about the qualia issues. These are two separate problems, both of which we call consciousness.

3. The issues of model theory, which I've begun to try to compress here.

Here's why I think self-modelling will help to achieve one's goals:

1. Easier implementation of whuffie-like concepts. The ability to look upon oneself from the outside should make it clearer and easier to apply optimization techniques to one's life and goals...attempting personal modification seems to have much more potential the better a sense one has of what one is.

2. Keeping track of multiple levels of abstraction is intrinsically difficult, so being able to do it acts as a signalling mechanism for status enhancement. We can't deny the relevance of status in the world we currently live in.

basic lossy notes on model theory

model theory, starting here:

purpose: to explore semantics explicitly by creating a formalized model and studying its formal properties (i.e., to study semantics by investigating the syntactic elements of a *corresponding language*)

vs. proof theory: proof theory studies proofs as syntactic objects, whereas model theory studies semantics

s-homomorphism:

Löwenheim–Skolem theorem: if a (countable) theory has an infinite model, then it has a model of every infinite cardinality κ.

Absoluteness

finite model: a model with a finite domain, etc. (e.g. an undirected graph, where "A edge B" is the relation, defined by the interpretation, and the nodes are the domain)


isomorphism theorems:

categorical: only one model works, up to isomorphism (for first-order theories this is only possible for finite models; theories with infinite models can at best be categorical in a given cardinality)

first order logic: has "there exists" and "for all" quantifiers ranging over individual variables (not over predicates or sets), plus the usual connectives...

"satisfies": Tarski's definition of truth

first order theory: a set of sentences

structure (cf. universal algebra): {domain, signature, "interpretation" linking domain and signature}

signature: models the non-logical symbols of a language, both functions and relations (-, +, x, etc.)
logical symbols: true, false, and, or, not, etc.

domain / carrier / universe: elements

"interpretation": "and this is what the relation means"...i.e. explicit definition of the symbols by functionally defining it on the domain.... for example, can define "is an element of" ... this links in the semantics, before that the symbols are just symbols

government financial maneuverings

If we allow lossy compression, I might summarize this article as: "assume deception, imprecision, and incompetence."

Extrapolated volition

Objective-Subjective morality defined through some version of extrapolated volition looks more and more like the right answer to me. Some useful links:

originally defined

best discussion I've seen

some applications of the concept:

application to fashion sense

Tuesday, April 7, 2009

Utilitarianism requires answers about consciousness

I don't think the issues raised by utilitarianism (Pascal's mugging, whether negative utilitarianism is the right answer) can be resolved without solving the qualia problem. It seems like when we better know what pain is, we'll be in a better position to assess how bad it is. As it currently stands, our failure to understand consciousness means we have a shaky understanding of what even "I" means (therefore, debates of egoism vs. altruism, etc., seem likely to shift if we get a better explanation of consciousness).

I imagine that my extrapolated volition would have this additional information available to make the "right" choice. It seems like it will be hard to have clarity on these issues without significantly more scientific work being done (how do neurons work, etc.).

Sunday, April 5, 2009

Does rationality always win?

Does rationality always win? No. This is a fairly simple consequence of the no free lunch theorem. Cheaper lunch? Hopefully. Free lunch? No.

Further discussion on less wrong

Friday, April 3, 2009

comment on the connection between No Free Lunch theorems, the usefulness of rationality, the problem of induction, and skepticism


Related: You cannot automatically face reality.

Monday, March 30, 2009

Akrasia / picoeconomics

A very strong post: a structured explanation of weakness of will; the site should be further processed for application.

Sunday, March 29, 2009

Could living single increase productivity? Here's a pro argument. The cons need to be similarly investigated; anyone know where to look?

whuffie

life as video game

Create a scoring system and give yourself points based on that system...this appears to be addictive in a way similar to video games, and it seems like that human "weakness" could be harnessed to better achieve goals.

Whuffie, of course, is useful to the extent one has a good scoring system relative to the goal...the more precise one's models of oneself, the goal, and the other relevant parts of the problem domain, the more effective whuffie should be.
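
A minimal sketch of what such a scoring system might look like (the categories and point values here are hypothetical, just to illustrate the mechanic):

```python
# Track activities and award video-game-style points against a goal.
from datetime import date

POINTS = {"exercise": 5, "study_hour": 3, "junk_food": -4}  # assumed weights

log = []  # (date, activity) entries

def record(activity, when=None):
    """Log an activity; it must be one of the scored categories."""
    log.append((when or date.today(), activity))

def score():
    """Total points so far."""
    return sum(POINTS[activity] for _, activity in log)

record("exercise")
record("study_hour")
record("junk_food")
print(score())  # 5 + 3 - 4 = 4
```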

South Park

Commented on Marginal Revolution's South Park post. Takeaway: reducing abstract concepts such as corporations or securitization to more specific agent-based examples makes moral hazard and likely incompetence more apparent to human brain architectures.

Disambiguation

There are other people known as Rob Zahra / Robert Zahra on the web.

These are specifically me:


MySpace page

Less Wrong

Facebook

ZoomInfo

Transhumanist Meetup

Xing Career Profile

Amazon.com profile

Various brief mentions:

old college interview

Singularity Institute supporter

Harvard soccer, here and here

review of Nick Bostrom's book on anthropic selection effects.