
Dion Whitehead

2024-01-21

What are the logistical challenges in having whales be the spacefaring race? This makes sense because whales have evolved for low gravity, which is very harmful to humans, and live in dark and cold environments. Please use as much known physics and math as possible, for example the current rate of increase of mass delivered to space, to make a high-level estimate of the feasibility of getting a whale into space. Consider that the whale will need a minimum amount of water.

ChatGPT

The idea of whales as a spacefaring race presents a host of unique logistical challenges. Let's break down the major factors:

  1. Adaptation to Low-Gravity Environment: Whales, evolved for an aquatic environment, might theoretically be better suited to low gravity than land-dwelling creatures like humans. In water, whales experience buoyancy, which reduces the effect of Earth's gravity. However, space's microgravity environment is quite different from being buoyant in water. In microgravity, there's no resistance like water to help whales move or maintain their shape. Their bodies are adapted to be supported by water, and without it, they might face health issues.

  2. Life Support System: Whales require a vast amount of water, which serves as their living medium. A whale in space would need a spacecraft filled with water to simulate its natural habitat. This leads to significant challenges:

    • Mass of Water: Water is heavy (1 cubic meter weighs 1 ton). A blue whale, for instance, could require thousands of cubic meters of water. Let's do a rough calculation considering a smaller whale species, like a Minke whale, which might need a minimum of 1000 cubic meters of water:

    • $\text{Mass of water} = 1000\ \text{m}^3 \times 1000\ \text{kg/m}^3 = 1{,}000{,}000\ \text{kg}$

    • Delivery to Space: Current rockets can deliver about 20-25 tons (20,000-25,000 kg) to Low Earth Orbit (LEO). Assuming the most optimistic capacity of 25,000 kg per launch, we would need:

    • $\text{Number of launches} = \dfrac{1{,}000{,}000\ \text{kg}}{25{,}000\ \text{kg/launch}} = 40\ \text{launches}$

      This is for the water alone, not accounting for the spacecraft, life support systems, or the whale itself. (A quick code sketch of this arithmetic follows the list below.)

  3. Temperature and Pressure Control: Whales are adapted to specific pressure and temperature ranges. Any spacecraft would need to simulate these conditions, which adds to the complexity and mass of the spacecraft.

  4. Feeding and Waste Management: Whales consume large quantities of food and produce significant waste. Any space mission would need to address how to supply food and manage waste in a zero-gravity environment filled with water.

  5. Psychological and Health Concerns: The impact of space travel on the whale's psychology and physiology is unknown. Issues like bone density loss and muscle atrophy, seen in human astronauts, could also affect whales, albeit differently due to their aquatic nature.

  6. Cost and Resources: The cost of launching the required mass to space is enormous. Considering the current cost per kilogram to LEO and the mass of water and life support systems needed, the financial resources would be substantial.

  7. Engineering Challenges: Designing a spacecraft that can hold a whale, water, and life support systems is an unprecedented engineering challenge. It would require innovations in materials, propulsion, and life support systems.
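As a quick sanity check, the launch arithmetic above can be written out as a small TypeScript sketch. The figures (1000 cubic meters of water for a Minke-sized tank, 25,000 kg to LEO per launch) are the assumptions from the estimate, not measured values:

```typescript
// Rough feasibility arithmetic for lifting a whale-sized water tank to LEO.
// All constants are the assumptions from the estimate above, not measured values.
const WATER_DENSITY_KG_PER_M3 = 1_000; // 1 cubic meter of water weighs ~1,000 kg
const TANK_VOLUME_M3 = 1_000;          // assumed minimum for a Minke whale
const PAYLOAD_TO_LEO_KG = 25_000;      // optimistic heavy-lift capacity per launch

const waterMassKg = TANK_VOLUME_M3 * WATER_DENSITY_KG_PER_M3;        // 1,000,000 kg
const launchesForWater = Math.ceil(waterMassKg / PAYLOAD_TO_LEO_KG); // 40 launches

console.log(`Water mass: ${waterMassKg.toLocaleString()} kg`);
console.log(`Launches needed (water alone): ${launchesForWater}`);
```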

In conclusion, while whales have certain characteristics that might seem advantageous for space travel, the practical challenges are enormous. The mass of water needed, life support requirements, and engineering challenges make this a highly impractical endeavor with current technology and understanding of space travel.

Dion Whitehead

See disk space usage

GrandPerspective

SVG Icons (free, no watermark, decent library)

https://iconsvg.xyz/

https://www.svgrepo.com/

https://www.flaticon.com/

Image Editing

Pixelmator Pro

Screen capture, video of UI, tutorials, docs


Video Editing e.g. for webinars

https://www.techsmith.com/camtasia-pricing.html

Create animated text/messaging conversation

https://chat-animator.com/

Add AI ML to my app, integrated agents

This one is really good


Not as good as dify but still good


Forms

These look nice

https://www.fillout.com/

PDF Editing (when Preview etc isn’t enough)

https://www.pdfescape.com/

Dion Whitehead

2023-11-26

Storage is a solved problem

Storing blobs of data for your web application used to be more involved; I had to think about it. Now I don’t (much). When I build an app or website that needs some blob storage, my thoughts are:

  1. create a bucket (or whatever it’s called) with some cloud provider; it doesn’t have to be the biggest name, they’re all very reliable
  2. put stuff in
  3. get stuff out (a minimal sketch of 2 and 3 follows below)
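For concreteness, here is roughly what steps 2 and 3 look like against any S3-compatible bucket. This is a minimal sketch using the AWS SDK from deno; the bucket name, key, and region are placeholders, and the endpoint could point at any S3-compatible provider:

```typescript
// Minimal "put stuff in, get stuff out" against an S3-compatible bucket.
// Bucket, key, and region are placeholders; credentials come from the environment.
import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
} from "npm:@aws-sdk/client-s3";

const s3 = new S3Client({
  region: "us-east-1", // placeholder
  // endpoint: "https://example-s3-compatible-provider.com", // any S3-compatible provider works
});

// 2. put stuff in
await s3.send(new PutObjectCommand({
  Bucket: "my-app-blobs",
  Key: "results/run-42.json",
  Body: JSON.stringify({ ok: true }),
  ContentType: "application/json",
}));

// 3. get stuff out
const { Body } = await s3.send(new GetObjectCommand({
  Bucket: "my-app-blobs",
  Key: "results/run-42.json",
}));
console.log(await Body?.transformToString());
```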

I don’t much care where it is. Why? Because it’s a solved problem. Remote blob storage is:

  • very cheap, and getting cheaper
  • reliable [1]

At scale, however, you do have to think about cost, but those are pretty straightforward business calculations. At anything below very large data volumes, you don’t have to think much about it, and this is what makes it a solved problem.

It’s a solved problem in a similar way that nature has solved storing information: replication, with automated mechanisms for damage repair/reconciliation [2].

Compute is not a solved problem

Because you have to think about it. What I want:

I give you some application or workflow (for example, a machine learning agent, some program I have created, some tool), and from time to time it requires some level of computing power. You are able to automatically and safely connect that application to the right level of compute resources as needed.

  • If the program runs in the browser, I might be able to use the GPU, while the tab is open and running.
  • If the program is downloaded and installed, it has access to your entire computer, but because of that, security and parasitic programs become a problem.
  • If I make some cloud computing available, it suddenly becomes complex, with lots of decisions, but also with lots of scalable compute resources of different useful types.

Computing resources are inherently valuable, and often able to be converted to $$ efficiently via automation.

Obviously storage !== compute, but if compute were as easy, then I could distribute complex scientific simulations, revive them years later, and they would “just work”.

The team at https://metapage.io aims to solve that problem: compute as a simple commodity.

When we can treat compute as a commodity, we have more power over the compute providers. When a provider manages to make their system difficult to move away from, we lose bargaining power.

Personally, I default to https://www.digitalocean.com/. This isn’t a paid plug! They just do a great job of offering plenty of options, at the right complexity/resolution: not too many, not too few.

For my full stacks I’m using AWS (though not directly), and I would prefer not to.

References

[1] How data is lost in the cloud

https://spanning.com/blog/how-data-is-lost-in-the-cloud/

[2] Mechanisms of ionizing-radiation resistance

https://en.wikipedia.org/wiki/Deinococcus_radiodurans


Dion Whitehead

Scientific papers are static artifacts. You read them, but that is about the limit of your interaction.

You can embed code via links into e.g. PDFs, but code, like knowledge, fades into an insurmountable complexity barrier over time:

  • OS versions change, become outdated
  • documentation, already sparse, becomes incomprehensible
  • configuration and input files reach versions so old that no one even remembers them

Even papers dealing with purely computational research accumulate entropy over time.

Publishing journal articles as websites is a step in the right direction, but so far there is vast potential unrealized.

A new way of publishing

  • embed running code directly into the “article”
    • “articles” are really a combination of applications/workflows/websites
  • articles run in browsers, which are incentivized more than any other platform to run as much old stuff as possible ("Do not break the internet"), while remaining secure
  • rely on code modules that are open source, and published in sources that are meant to be “forever”

Metapages: publish entire reproducible applications as URLs

https://docs.metapage.io/docs

Metaframes: websites as visualization with editable code

The following is a visualization of a network. The code can be edited by you, and the updated code runs directly in your browser. To edit or copy the code, click in the top right of the frame below:

This is an example of a metaframe: an editable URL, where configuration/code/data is stored in the URL. A metapage is a set of possibly connected metaframes:

Example 2: python and visualization in the browser, completely self-contained, and editable by you

This is discussed in more detail here

Example 3: visualization, user interaction

The following is an embedded metapage application that shows a visual of the sun and allows you, the user, to rotate it with your hands (via your webcam):

It consists of three interconnected, communicating metaframes, together called a metapage. Below is the data flow:

Conclusion

There is a huge space for durable, high-information scientific publishing that can be shared, remixed, and allows both casual and deep interaction.

Dion Whitehead

This is a deeper dive into an example metapage that runs python and creates plots using matplotlib.

Concepts:

  • The entire user created application exists completely in the URL
    • When you edit the application below, you are editing the URL
    • Obviously there is a limit, but it’s quite high
    • The URL can be shared, and then edited by someone else
    • If you have this URL, you own it. I do not store any application state, in this example at least
  • Each of the boxes is a metaframe, a term used here to mean a separate webpage URL with a small library to listen to inputs and send outputs (because the application is a graph of metaframes); a rough sketch of this pattern follows the list below
    • Because the components are also just URLs, and those URLs are also editable, you can edit the components and the changes are persisted in the main parent URL
  • You can edit the global configuration
  • You can edit the individual components
    • Metaframes have their own internal editing overlays/menus
  • You can see the graph layout
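To make the “everything lives in the URL” and “inputs/outputs between boxes” ideas concrete, here is a rough sketch of the general pattern: serialize state into the URL fragment, and let embedded frames talk to the parent page via postMessage. This is an illustration only; the message shape and function names are made up and are not the actual metapages/metaframes API:

```typescript
// Illustrative only: how an app's state can live entirely in the URL,
// and how an embedded frame can send "outputs" to its parent page.
// The message format and function names here are hypothetical, not the metapages API.

type AppState = { code: string; inputs: Record<string, unknown> };

// Serialize state into the URL fragment, so sharing the URL shares the app.
// (A real implementation would also handle non-Latin1 characters and size limits.)
const saveStateToUrl = (state: AppState): void => {
  window.location.hash = btoa(JSON.stringify(state));
};

// Restore state from the URL fragment (e.g. when someone opens a shared link).
const loadStateFromUrl = (): AppState | null => {
  const hash = window.location.hash.slice(1);
  return hash ? (JSON.parse(atob(hash)) as AppState) : null;
};

// Inside an embedded frame: send an "output" to the parent page.
const sendOutput = (name: string, value: unknown): void => {
  window.parent.postMessage({ type: "output", name, value }, "*");
};

// In the parent page: listen for outputs and route them to the next frame's inputs.
window.addEventListener("message", (event: MessageEvent) => {
  if (event.data?.type === "output") {
    console.log(`output ${event.data.name}:`, event.data.value);
  }
});
```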

The example published application

This application runs python (in the browser via pyodide), generates some plots, and sends those as outputs to components that just display images.
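For a sense of what happens under the hood, here is a minimal sketch of running matplotlib via pyodide in the browser and getting the plot back as an image another component can display. It is illustrative only, not the exact code of the embedded metaframe:

```typescript
// Illustrative sketch: run python + matplotlib in the browser via pyodide,
// and get the plot back as a base64 PNG that another component can display.
import { loadPyodide } from "https://cdn.jsdelivr.net/pyodide/v0.24.1/full/pyodide.mjs";

const pyodide = await loadPyodide();
await pyodide.loadPackage("matplotlib");

const pngBase64: string = await pyodide.runPythonAsync(`
import base64, io
import matplotlib
matplotlib.use("AGG")          # render off-screen, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0, 1, 4, 9])
buf = io.BytesIO()
fig.savefig(buf, format="png")
base64.b64encode(buf.getvalue()).decode("utf-8")
`);

// The base64 string would then be sent as an output to an image-display component.
const img = document.createElement("img");
img.src = `data:image/png;base64,${pngBase64}`;
document.body.appendChild(img);
```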

You can take this, modify it, and share it. No backend servers needed (except for serving plain static assets). Embed in your blog, group websites, company docs, etc.

Click any of the hamburger menus to edit the individual components:

Dion Whitehead

Steps (actually, there is only one step: writing)

  1. I write blog posts in notion, structured how I like
  2. Automatically ~15 minutes later the website + blog is generated from notion, using docusaurus + docu-notion, and pushed to github pages
  3. There’s nothing I actually needed to do except write

That’s it. I just write. The rest is automated.


How it works

Source: https://github.com/dionjwa/dionjwa.github.io

A github action runs a few script commands:

  1. Using docu-notion (and a root notion page) the docusaurus blog markdown is generated from specified notion pages
    1. I also use a custom set of plugins for various pieces: https://www.npmjs.com/package/@metapages/docu-notion-plugins
  2. The docusaurus website is built
  3. Then deployed to github pages

It’s otherwise tricky to find a set of tools for writing/publishing a blog+resume with the following requirements (for me):

  • open source, or high data trust
  • able to output from notion. I’m too tired to convert anything, or to manage multiple publishing endpoints. I just want to write and have everything be automated
  • but also look good
  • where I just write, and do absolutely nothing else. No saving, no publishing step.
  • but also everything is backed up, with full version history

Published examples in the wild

Dion Whitehead

For building any project, you just need:

  • a platform agnostic reproducible environment: docker
  • an entrypoint to display and run commands: just
  • a way to share/update scripts: deno

The requirement is not just for one project at a single point in time, it’s for as MANY projects as possible over DEEP time. That’s why the setup has to be simple, reproducible, and sharable (bespoke things fade into the complexity barrier quicker).

Wasm

This will be a game changer: super fast to deploy, arbitrary code, running at the edge or in the browser, compiled from pretty much any language.

My prediction is that WASM will take over edge compute, and even heavy workloads.

Dion Whitehead

Problem: scripts and utilities are written for one repository, and you want to share them with other repositories.

  • python: it’s painful, and tied to the python ecosystem. You have to publish packages etc., which is slow
    • this applies to most other languages
  • specific languages mostly require a lot of specific install/tooling

With deno:

  • a single binary
  • URL imports, so you don’t ever have to manually install modules/libraries
  • URL imports can point to anywhere, e.g. a github repo
  • You can publish distinct packages, if you want, but you don’t have to
  • The entire npm ecosystem is available. Size matters. (A small example of URL and npm: imports follows this list.)
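A small example of what this looks like in practice. The std and npm imports below are real; the github raw URL is a hypothetical path standing in for “a script that lives in some other repository”:

```typescript
// share.ts — a script that can be imported from any other repository by URL,
// with no publish or install step.
import { assertEquals } from "https://deno.land/std@0.224.0/assert/mod.ts"; // versioned URL import
import { z } from "npm:zod@3"; // the npm ecosystem via npm: specifiers
// URL imports can point anywhere, e.g. a file in a github repo (hypothetical path):
// import { deploy } from "https://raw.githubusercontent.com/your-org/your-repo/main/ops/deploy.ts";

const Config = z.object({ name: z.string(), replicas: z.number().int().positive() });

export const validateConfig = (input: unknown) => Config.parse(input);

if (import.meta.main) {
  const cfg = validateConfig({ name: "api", replicas: 3 });
  assertEquals(cfg.replicas, 3);
  console.log("config ok:", cfg);
}
```

Run it directly with `deno run share.ts`, or import `validateConfig` from another repository by URL; nothing needs to be published first.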

Having used it for a while now, it definitely fits well into my ability to create / build / deploy servers as quickly and simply as possible, allowing me to effectively maintain a larger set of deployable code than before.

Dion Whitehead

just solves a set of complex problems:

I want a single command to give me a CLI menu, wherever I am

I’m tired and forgetful, and I just want this repository to build or test or deploy. I don’t want to remember all the specifics; I just want to get things done and move on.

When you have multiple repositories, you create scripts + tools for:

  • CI operations
  • Development
  • Testing
  • All kinds of other tasks

You want to organise these commands, so you put them in a Makefile or some custom CLI thing.

But:

  • the custom thing isn’t easily transferrable to other projects/repositories
  • Make is very specific, and has a bunch of ugly workarounds for doing simple things (e.g. .PHONY)

so you end up with a bunch of kludge.

Enter just.

just is exactly what I needed:

  • simple well formatted commands in any language
  • introspection
  • the utilities you need when munging strings/paths etc

Now ALL my projects have a justfile in the root

I expect the following:

  • Most common commands FIRST (i.e. NOT alphabetical order)
    • test, deploy / publish
  • Commands check for dependencies and libraries/modules at runtime, and automatically install them if needed (a small sketch follows this list)
    • Don’t make me write commands for no reason; there’s no reason not to automatically install e.g. npm modules if they are missing.
  • List various links e.g. to the github repo, to the published URL endpoint, to the docs
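As an example of the “install dependencies automatically” expectation, here is a small deno sketch a just recipe could call before doing its real work. The node_modules check is just one possible heuristic, and the file name is made up:

```typescript
// ensure-deps.ts — called at the top of just recipes so missing modules get
// installed automatically instead of requiring a separate "install" step.
const exists = async (path: string): Promise<boolean> => {
  try {
    await Deno.stat(path);
    return true;
  } catch {
    return false;
  }
};

if (!(await exists("node_modules"))) {
  console.log("node_modules missing, installing...");
  const { code } = await new Deno.Command("npm", { args: ["ci"] }).output();
  if (code !== 0) Deno.exit(code);
}
```

A recipe can then start with `deno run --allow-read --allow-run ensure-deps.ts` before running the actual command.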

I alias just to j so that I am just j'ing everywhere. It’s short, fast, and gets me reliably building/publishing even if I have forgotten all the details of the commands.

Sharing

just on its own doesn’t solve the problem of sharing complex commands. That’s where deno comes in. It’s a single binary, running Typescript. I know Typescript is not everyone’s cup of tea, but it’s typed, gets the job done, and the URL imports make sharing between repositories MUCH easier.

Basically before URL imports, I struggled with how to share complex build/publish/deploy scripts.

Just Words of Wisdom (To Myself) about just etiquette

Always include the full path of commands

If you document what <command> someone should run, e.g.

# And then you type: 'just <command>'

but you are </over/here> while you have to be </somewhere/else>, then please include the full path so there is never any doubt and you don’t have to go fd'ing: '</be/specific> just <command>'. It doesn’t have to be this exact formula, but try to remove this kind of friction.

Include links to important locations

See the links at the bottom; they are a click away.