- cross-posted to:
- programming@programming.dev
As a security and DevOps engineer, I’ve found HTMX to be such a pain in my butt lately.
Something are broken? -> Web devs blame WAF -> Me debugs and researches for hours when I has better stuff to do -> Finally me: WAF is fine. Is your broken JavaScript. Wut do? -> Web devs: Not know, write in HTMX, JS is abstracted, now we fix. -> 15 minutes later web devs: We fix! We do basic thing wrong! Now learn something new about HTMX. -> Me: Great. Thanks so much for that.
I really struggle to see where HATEOAS can be used. Obviously not for machine-to-machine uses, as others have pointed out. But even for humans it would lead to terrible interfaces.
If the state of the resource changes such that the allowable actions available on that resource change (for example, if the account goes into overdraft) then the HTML response would change to show the new set of actions available.
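To make the quoted behaviour concrete, here is a minimal sketch (plain Python, all names hypothetical) of a server-side render function whose hypermedia response is rebuilt from the account’s current state, so the withdraw action simply disappears once the account is overdrawn:

```python
# Minimal sketch (hypothetical names): the hypermedia response is rebuilt from
# the current resource state, so the advertised actions always match that state.
from dataclasses import dataclass

@dataclass
class Account:
    number: str
    balance: float  # a negative balance means the account is in overdraft

def render_account(acct: Account) -> str:
    """Return an HTML fragment listing the account and the actions it allows."""
    actions = [f'<a href="/accounts/{acct.number}/deposits">deposit</a>']
    if acct.balance >= 0:
        # Only a non-overdrawn account advertises the withdraw action.
        actions.append(f'<a href="/accounts/{acct.number}/withdrawals">withdraw</a>')
    items = "".join(f"<li>{a}</li>" for a in actions)
    return f"<div>Account {acct.number}: balance {acct.balance:.2f}<ul>{items}</ul></div>"

print(render_account(Account("12345", 100.0)))  # deposit and withdraw offered
print(render_account(Account("12345", -25.0)))  # overdraft: withdraw disappears
```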
So if I’m in overdraft, some actions are not available? Which means they are not shown at all? How can a user easily know that there are things they could do, if it weren’t for the fact that they are in a specific state? Instead of having disabled buttons and menus, with help text explaining why they are not usable, we just hide them? That can’t be right, can it? So how do we actually deliver a usable UX using HATEOAS? (Both options are sketched below.)
Or is it just meant for “exploration”, and real clients would not rely on the returned links? But how is that better than actual docs telling you the same thing, but much more clearly?
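For what it’s worth, the hypermedia constraint only says that the response carries the UI; it doesn’t dictate that unavailable actions must vanish. A tiny illustrative sketch (hypothetical names, not from the article) of what each of the two options raised above would look like:

```python
# Illustrative only: hiding and disabling are both just different fragments for
# the server to return; HATEOAS itself does not force either choice.
def render_withdraw_control(balance: float, hide_when_unavailable: bool = False) -> str:
    if balance >= 0:
        return '<button formaction="/withdrawals">Withdraw</button>'
    if hide_when_unavailable:
        return ""  # option 1: the action is simply absent from the response
    # option 2: keep the control visible but disabled, with an explanation
    return (
        '<button disabled title="Unavailable while the account is overdrawn">Withdraw</button>'
        "<p>Withdrawals are disabled until the overdraft is cleared.</p>"
    )
```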
Opinionated summary: Developers saw REST, picked the good parts, and ignored the rest (no pun intended). They still called it REST, for lack of a better word, even though things like HATEOAS were overkill for most applications.
Maybe I’m wildly misunderstanding something, not helped by the fact that I work very little with Web technologies, but…
So, in a RESTful system, you should be able to enter the system through a single URL and, from that point on, all navigation and actions taken within the system should be entirely provided through self-describing hypermedia: through links and forms in HTML, for example. Beyond the entry point, in a proper RESTful system, the API client shouldn’t need any additional information about your API.
This is the source of the incredible flexibility of RESTful systems: since all responses are self-describing and encode all the currently available actions, there is no need to worry about, for example, versioning your API! In fact, you don’t even need to document it!
If things change, the hypermedia responses change, and that’s it.
It’s an incredibly flexible and innovative concept for building distributed systems.
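To make “self-describing” concrete, here is a minimal sketch (Python standard library only; the entry URL and the whole traversal idea are hypothetical) of a client that is given nothing but an entry point and learns the currently available actions from the links and forms in the response rather than from out-of-band documentation:

```python
# Minimal sketch of a hypermedia-driven client: it knows only an entry URL and
# discovers the available actions by reading the links and forms in the response.
from html.parser import HTMLParser
from urllib.request import urlopen

class ActionCollector(HTMLParser):
    """Collect hrefs of <a> tags and actions of <form> tags from one page."""
    def __init__(self) -> None:
        super().__init__()
        self.actions: list[str] = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.actions.append(attrs["href"])
        elif tag == "form" and "action" in attrs:
            self.actions.append(attrs["action"])

def discover_actions(entry_url: str) -> list[str]:
    """Fetch one hypermedia response and return the actions it advertises."""
    with urlopen(entry_url) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        html = resp.read().decode(charset)
    collector = ActionCollector()
    collector.feed(html)
    return collector.actions

# print(discover_actions("https://bank.example/accounts/current"))  # hypothetical entry point
```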
Does that mean only humans can interact with a REST system? But then it doesn’t really deserve the qualifier of “application programming interface”.
The author actually agrees with this take, and even links to this post making it explicit: https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.html
It feels like he’s trying to say that something like Swagger should always be required. One of the things about SOAP, for example, was that it always had an auto-generated WSDL that you could consume to get everything. Quite a few REST endpoints were missing this when they were first developed.
But I do agree that “forms” and “HTML” are quite the opposite of an API.
Well I’m not missing the point then, that’s good to know :)
Okay - will start calling things REST2.