Hacker News

Being able to predict how your customers use your API has a strong impact on how you deliver the data behind the API.

If we're talking about highly interconnected objects in an API, it would be expensive (from a DB point of view) to arbitrarily add indexes on the off chance that someone may use a selector on that property.

Equally, when data is highly interconnected, by designing an API that focuses on particular consumption patterns, I'm able to add custom functionality, pre-processing, or highly customised queries/indexes to ensure the data is read as fast as possible.
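As a minimal sketch of that idea (using SQLite for brevity; the `orders` table and `idx_orders_customer_date` index are hypothetical names, not from the comment): instead of indexing every property a caller might someday filter on, you index only the known consumption pattern and verify the query actually uses it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders ("
    "id INTEGER PRIMARY KEY, customer_id INTEGER, created_at TEXT)"
)

# Known consumption pattern: "recent orders for one customer".
# One composite index tailored to that pattern keeps the read fast,
# with no cost paid for access paths nobody uses.
conn.execute(
    "CREATE INDEX idx_orders_customer_date "
    "ON orders (customer_id, created_at)"
)

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM orders "
    "WHERE customer_id = ? ORDER BY created_at DESC LIMIT 10",
    (42,),
).fetchall()
print(plan)  # the plan should report a search using idx_orders_customer_date
```

The same reasoning applies to any store: design the index (or materialised view, or cache) from the API's documented access pattern, not from the full cross-product of selectable properties.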

Poorly performing APIs are bad for the consumer, bad for the database, and bad for the art of data.

There's a tension between centralization and independence in software models that is very hard to get right.

Microservices, GraphQL, and Docker Swarm are examples of technologies that, done right, can be enabling, and, done wrong, will pull the ship to the bottom of the ocean.
