I am concerned about how the next few years could decide the future of AI.
Mental health improves decision-making and social coordination.
I may be world-class at helping others outgrow persistent issues.1
⇒ I should focus on helping people in strategic AI positions.
Update: Most of my clients now work in frontier AI, including staff at Anthropic and six other organizations influential in AI.
Update 2: Noting that I have uncertainty about point #2 above, for reasons that are hard to articulate.
Update 3:
1. I don't expect you to believe this yet based solely on the publicly available information (see "Pay-on-results personal growth: first success" on LessWrong and "Outgrow Lifelong Insecurities! Pay for Results"). However, this is my inside view.
I'm curious: given the success of your Bounty Method so far, have you considered trying to replicate the knowledge or methodology in some way, such as training anyone else to exhibit some of the capabilities you're demonstrating here?

I'm guessing no (certainly, by traditional science's standards, it would be premature), but with the acceleration of the modern age, it might be worth considering.

Which I guess is a silly way of saying: if you're at all interested in teaching, I'd love to learn anything I can about how you are helping people.