[{"content":"This is a continuation of CotomyElement Value and Form Behavior. The previous article focused on what values the browser actually submits. This time the focus is the form classes themselves: which one to use, what each class adds, and which methods are meant to be overridden in real screens.\nThe practical point is simple. Cotomy does not treat every form as one generic submit helper. It splits query navigation, API submit, entity-aware submit, and entity load-and-fill into different classes so the screen can keep one explicit runtime path.\nThe Form Line in Cotomy The current form line in the implementation is this:\nflowchart TD\nA[CotomyForm]\nB[CotomyQueryForm]\nC[CotomyApiForm]\nD[CotomyEntityApiForm]\nE[CotomyEntityFillApiForm]\nA --\u0026gt; B\nA --\u0026gt; C\nC --\u0026gt; D\nD --\u0026gt; E\nEach class adds one operational concern. CotomyForm standardizes submit interception. CotomyQueryForm turns that into query navigation. CotomyApiForm turns it into API submit with FormData. CotomyEntityApiForm adds entity identity handling. CotomyEntityFillApiForm adds load and fill behavior on top.\nCotomyForm CotomyForm is the base contract. It is not just a base class. It is the entry point where form submission becomes part of the screen runtime. It intercepts submit in initialize(), prevents native navigation, and calls submitAsync(). Its default method is get, and its default actionUrl is the current path plus query string.\nIn practice, you do not usually instantiate CotomyForm directly for business screens. You inherit from one of the concrete classes. What matters is the baseline behavior. initialize() wires the submit event once, reloadAsync() performs a full window reload by default, and autoReload controls whether page restore should call reloadAsync().\nThat makes CotomyForm the common lifecycle surface for all form variants.\nCotomyQueryForm Use CotomyQueryForm when the form should rewrite the URL and move the screen by query string. 
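As a rough illustration of how a query-navigation form rebuilds its URL, the sketch below merges existing query values with form values, drops empties, and returns the URL a form would navigate to. The helper name and shape are hypothetical, not Cotomy internals.

```typescript
// Hypothetical sketch of query navigation: merge current query values with
// form values, drop empty entries, and rebuild the URL. A real form would
// then assign the result to location.href.
function buildQueryUrl(actionUrl: string, formValues: Record<string, string>): string {
  const [path, query = ""] = actionUrl.split("?");
  const params = new URLSearchParams(query);
  for (const [name, value] of Object.entries(formValues)) {
    params.set(name, value); // form values override existing query values
  }
  for (const [name, value] of [...params.entries()]) {
    if (value === "") params.delete(name); // empty values are removed
  }
  const qs = params.toString();
  return qs ? `${path}?${qs}` : path;
}

console.log(buildQueryUrl("/customers?page=2&sort=name", { page: "1", keyword: "" }));
// → /customers?page=1&sort=name
```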
This is the right fit for search conditions, list filters, and paging inputs that belong to the page URL itself.\nimport { CotomyElement, CotomyPageController, CotomyQueryForm } from \u0026#34;cotomy\u0026#34;; class CustomerListPage extends CotomyPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { await super.initializeAsync(); this.setForm( CotomyElement.byId(\u0026#34;customer-search-form\u0026#34;, class extends CotomyQueryForm {})! ); } } CotomyPageController.set(CustomerListPage); The implementation always uses GET here. When submitAsync() runs, it reads the current actionUrl, merges existing query values with current form values, removes empty values, rebuilds the query string, and sets location.href.\nThat means this class is for navigation, not API transport. If the screen should stay on the same page and call fetch, move to CotomyApiForm instead.\nIf you need a custom target URL, override actionUrl.\nimport { CotomyQueryForm } from \u0026#34;cotomy\u0026#34;; class ProductSearchForm extends CotomyQueryForm { public override get actionUrl(): string { const category = this.attribute(\u0026#34;data-category\u0026#34;) ?? \u0026#34;all\u0026#34;; return `/products?category=${encodeURIComponent(category)}`; } } CotomyApiForm Use CotomyApiForm when the form should submit to an API endpoint but the screen is not using entity identity switching. 
Typical cases are feedback forms, operation dialogs, import forms, or one-off actions that are not edit screens for one resource.\nimport { CotomyApiForm, CotomyElement, CotomyPageController, } from \u0026#34;cotomy\u0026#34;; class FeedbackForm extends CotomyApiForm { public override get actionUrl(): string { return \u0026#34;/api/feedback\u0026#34;; } } class FeedbackPage extends CotomyPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { await super.initializeAsync(); const form = this.setForm(CotomyElement.byId(\u0026#34;feedback-form\u0026#34;, FeedbackForm)!); form.submitFailed(event =\u0026gt; { if (event.response.status === 422) { CotomyElement.byId(\u0026#34;feedback-status\u0026#34;)!.text = \u0026#34;Please review the input.\u0026#34;; } }); } } CotomyApiForm changes the default method from get to post, requires a concrete actionUrl, builds FormData from the real form element, and calls CotomyApi.submitAsync().\nThe important built-in behavior is not only transport. It also does three practical things.\nFirst, datetime-local values are converted before submit. The implementation rewrites them from local browser input format into an offset-aware string.\nSecond, API failures dispatch events. When CotomyApi throws a CotomyApiException, CotomyApiForm emits cotomy:apifailed and cotomy:submitfailed before rethrowing.\nThird, the screen can replace the API client. That is what apiClient() is for.\nThese are not convenience features. 
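For example, the datetime-local normalization mentioned above amounts to something like the following sketch. It assumes the minute-precision YYYY-MM-DDTHH:mm value format that datetime-local inputs produce, and it is illustrative only, not the exact Cotomy implementation.

```typescript
// Illustrative only: rewrite a datetime-local value ("YYYY-MM-DDTHH:mm")
// into an offset-aware string using the browser's local timezone.
function toOffsetDateTime(local: string): string {
  const date = new Date(local); // a string without a zone is parsed as local time
  const offsetMinutes = -date.getTimezoneOffset();
  const sign = offsetMinutes >= 0 ? "+" : "-";
  const abs = Math.abs(offsetMinutes);
  const hh = String(Math.floor(abs / 60)).padStart(2, "0");
  const mm = String(abs % 60).padStart(2, "0");
  // e.g. "2024-05-01T09:30" → "2024-05-01T09:30:00+09:00" when the browser runs in JST
  return `${local}:00${sign}${hh}:${mm}`;
}
```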
They ensure that submit behavior stays consistent across screens.\nimport { CotomyApi, CotomyApiForm } from \u0026#34;cotomy\u0026#34;; class AdminTaskForm extends CotomyApiForm { public override apiClient(): CotomyApi { return new CotomyApi({ baseUrl: \u0026#34;/admin\u0026#34;, headers: { \u0026#34;X-Screen\u0026#34;: \u0026#34;task-runner\u0026#34;, }, }); } public override get actionUrl(): string { return \u0026#34;/api/tasks/run\u0026#34;; } } If the server expects a non-default method, override method.\nimport { CotomyApiForm } from \u0026#34;cotomy\u0026#34;; class ApprovalForm extends CotomyApiForm { public override get actionUrl(): string { return \u0026#34;/api/approvals\u0026#34;; } protected override get method(): string { return \u0026#34;patch\u0026#34;; } } CotomyEntityApiForm Use CotomyEntityApiForm when the form is tied to one entity and the submit path should change between create and update. This is the class that turns one form into a POST-or-PUT form based on whether the form already has an entity key.\nThe entity key lives on data-cotomy-entity-key. When the key is missing, the default method is post. When the key exists, the default method becomes put. The actionUrl also changes. If the action is /api/users and the entity key is 42, the effective actionUrl becomes /api/users/42.\n\u0026lt;form id=\u0026#34;user-edit-form\u0026#34; action=\u0026#34;/api/users\u0026#34; data-cotomy-entity-key=\u0026#34;42\u0026#34;\u0026gt; \u0026lt;input type=\u0026#34;text\u0026#34; name=\u0026#34;user[name]\u0026#34; /\u0026gt; \u0026lt;/form\u0026gt; import { CotomyElement, CotomyEntityApiForm, CotomyPageController } from \u0026#34;cotomy\u0026#34;; class UserEditPage extends CotomyPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { await super.initializeAsync(); this.setForm( CotomyElement.byId(\u0026#34;user-edit-form\u0026#34;, class extends CotomyEntityApiForm {})! 
); } } On a 201 Created response, the class also tries to read the Location header and store the generated entity key back onto the form. That behavior matters for screens that start as create and continue as update without rebuilding the screen contract.\nThe main override point here is still method when the server contract is different.\nimport { CotomyEntityApiForm } from \u0026#34;cotomy\u0026#34;; class LegacyOrderForm extends CotomyEntityApiForm { public override get actionUrl(): string { return \u0026#34;/legacy/orders\u0026#34;; } protected override get method(): string { return this.entityKey ? \u0026#34;patch\u0026#34; : \u0026#34;post\u0026#34;; } } This is also the point where you should be strict about boundaries. If the API does not follow an entity-oriented URL contract, it is usually cleaner to step back to CotomyApiForm than to fight the entity-aware behavior. If the API does not represent a single stable resource over time, forcing it into CotomyEntityApiForm usually creates drift instead of reducing it.\nCotomyEntityFillApiForm CotomyEntityFillApiForm is the form for edit or detail screens that need both submit and load. 
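The Location-header handling described above can be pictured with a small standalone sketch. The helper is hypothetical and only shows the idea of recovering a generated key from a 201 response, not the actual CotomyEntityApiForm code.

```typescript
// Illustrative only: extract the trailing path segment of a Location header
// (e.g. "/api/users/42" → "42") so a form could store it back on itself,
// for instance as data-cotomy-entity-key.
function entityKeyFromLocation(location: string): string | undefined {
  const path = location.split("?")[0].replace(/\/+$/, "");
  const key = path.substring(path.lastIndexOf("/") + 1);
  return key ? decodeURIComponent(key) : undefined;
}

console.log(entityKeyFromLocation("/api/users/42")); // → 42
```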
It extends CotomyEntityApiForm and adds one more path:\ninitialize() schedules loadAsync() on window ready loadAsync() sends GET to loadActionUrl when canLoad is true fillAsync() writes the response into matching inputs renderer().applyAsync(response) updates data-cotomy-bind targets successful submit also calls fillAsync(response) This is the form type you use when the screen should stay aligned with one entity over time.\nimport { CotomyElement, CotomyEntityFillApiForm, CotomyPageController, } from \u0026#34;cotomy\u0026#34;; class CustomerForm extends CotomyEntityFillApiForm {} class CustomerPage extends CotomyPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { await super.initializeAsync(); this.setForm(CotomyElement.byId(\u0026#34;customer-form\u0026#34;, CustomerForm)!); } } The default load condition is whether the form already has an entity key. That is often enough for ordinary edit pages, but the real extension points are in the protected methods.\nOverriding loadActionUrl loadActionUrl defaults to actionUrl. Override it when the save endpoint and load endpoint are different.\nimport { CotomyEntityFillApiForm } from \u0026#34;cotomy\u0026#34;; class CustomerProfileForm extends CotomyEntityFillApiForm { public override get actionUrl(): string { return \u0026#34;/api/customers\u0026#34;; } protected override get loadActionUrl(): string { return `/api/customer-profiles/${encodeURIComponent(this.entityKey!)}`; } } Overriding canLoad Override canLoad when load should wait for more than just an entity key. 
This is useful when the screen needs a mode flag, permission flag, or another prerequisite before the initial GET.\nimport { CotomyEntityFillApiForm } from \u0026#34;cotomy\u0026#34;; class InvoiceForm extends CotomyEntityFillApiForm { protected override get canLoad(): boolean { return !!this.entityKey \u0026amp;\u0026amp; this.attribute(\u0026#34;data-mode\u0026#34;) === \u0026#34;edit\u0026#34;; } } Overriding bindNameGenerator fillAsync() fills inputs by their name attribute and also uses CotomyViewRenderer for bind targets. The bindNameGenerator() hook exists so both paths use the same naming contract.\nimport { CotomyDotBindNameGenerator, CotomyEntityFillApiForm, ICotomyBindNameGenerator, } from \u0026#34;cotomy\u0026#34;; class DotNameCustomerForm extends CotomyEntityFillApiForm { protected override bindNameGenerator(): ICotomyBindNameGenerator { return new CotomyDotBindNameGenerator(); } } That matters when your screen uses names like customer.name instead of customer[name].\nOverriding renderer renderer() returns a CotomyViewRenderer built from the form and the bind name generator. Override it when the non-input reflection layer needs a different renderer setup.\nimport { CotomyEntityFillApiForm, CotomyViewRenderer, } from \u0026#34;cotomy\u0026#34;; class SummaryForm extends CotomyEntityFillApiForm { public override renderer(): CotomyViewRenderer { return new CotomyViewRenderer(this, this.bindNameGenerator()); } } In many screens, you will not need to override renderer() at all. The practical override point is usually bindNameGenerator(), not renderer() itself.\nAdding custom fillers CotomyEntityFillApiForm already includes filler behavior for datetime-local, checkbox, and radio. 
If one input type needs custom write behavior, register another filler during initialize().\nimport { CotomyEntityFillApiForm } from \u0026#34;cotomy\u0026#34;; class UserSettingsForm extends CotomyEntityFillApiForm { public override initialize(): this { super.initialize(); this.filler(\u0026#34;date\u0026#34;, (input, value) =\u0026gt; { input.value = String(value ?? \u0026#34;\u0026#34;).slice(0, 10); }); return this; } } The key point is that filler customization is local to the form. Cotomy does not force one global fill rule for every input type.\nWhat This Form Does Not Do There is one boundary worth keeping explicit. CotomyEntityFillApiForm does not automatically fill multiple select inputs. The implementation skips select elements with multiple and also skips names ending in [] while walking object properties.\nThat is intentional. Array synchronization patterns vary too much across projects, so the core form keeps that part explicit instead of pretending there is one universal answer.\nChoosing the Right Form The key question is not the data shape, but the operational path the screen must keep stable.\nIf the screen goal is URL navigation, use CotomyQueryForm. If the goal is API submit without entity identity, use CotomyApiForm. If the form should switch between create and update by entity key, use CotomyEntityApiForm. If the screen also needs initial load and response-to-input reflection, use CotomyEntityFillApiForm.\nThat separation is what keeps Cotomy forms practical. 
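That decision path can also be written down as a tiny sketch. The helper below is purely illustrative; it just restates the selection guidance above in code form rather than describing any real Cotomy API.

```typescript
// Illustrative decision helper restating the form-selection guidance.
type ScreenNeed = {
  navigatesByQuery?: boolean; // search conditions, filters, paging
  entityKeyed?: boolean;      // create/update switching by entity key
  loadsEntity?: boolean;      // initial load and response reflection
};

function suggestFormClass(need: ScreenNeed): string {
  if (need.navigatesByQuery) return "CotomyQueryForm";
  if (need.loadsEntity) return "CotomyEntityFillApiForm";
  if (need.entityKeyed) return "CotomyEntityApiForm";
  return "CotomyApiForm";
}

console.log(suggestFormClass({ loadsEntity: true })); // → CotomyEntityFillApiForm
```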
The class you choose already says what kind of runtime path the screen is allowed to take.\nUsage Series This article is part of the Cotomy Usage Series, which focuses on concrete runtime behavior and day-to-day API usage.\nSeries articles: CotomyElement in Practice , CotomyElement Value and Form Behavior , CotomyForm in Practice, CotomyApi in Practice , and Debugging Features and Runtime Inspection in Cotomy .\nConclusion Cotomy forms are easier to use when you do not flatten them into one generic abstraction. Each class exists to keep one runtime concern explicit: navigation, API submit, entity identity, or entity load and reflection. This separation is not about abstraction. It is what prevents multiple informal paths from emerging as the screen grows.\nThe most useful override points are actionUrl, method, loadActionUrl, canLoad, bindNameGenerator, and local filler registration. If those stay aligned with the real screen contract, the form stays small and the page controller does not need to absorb transport logic.\nPrevious article: CotomyElement Value and Form Behavior Next article: CotomyApi in Practice ","permalink":"https://blog.cotomy.net/posts/usage/cotomy-form-in-practice/","summary":"A practical guide to Cotomy form classes, including QueryForm, ApiForm, EntityApiForm, and EntityFillApiForm, with real override points and usage examples.","title":"CotomyForm in Practice"},{"content":"Overview Cotomy v2.0.0 is a major release centered on TypeScript 6 support and a refreshed build and test toolchain for the v2 line. The first follow-up release, v2.0.1, only updates the README to document Node.js 20.19.0 or later and to align the release-note guidance for the v2 series.\nChanges This release updates the core development dependencies to TypeScript 6, Vitest 4, jsdom 29, and webpack-cli 7. 
It also switches internal DOM identity generation from cuid to @paralleldrive/cuid2 and adds explicit rootDir settings in the TypeScript build configuration to keep the build compatible with TypeScript 6.\nFollow-up v2.0.1 is not a separate functional change release. It is a small documentation follow-up that updates the README with the Node.js 20.19.0 or later requirement and adds v2 release-note information, so the current install target can be treated as v2.0.1 while the major technical change remains v2.0.0.\nInstall npm install cotomy@2.0.1 Links https://github.com/yshr1920/cotomy/releases/tag/v2.0.0 https://github.com/yshr1920/cotomy/releases/tag/v2.0.1 https://cotomy.net ","permalink":"https://blog.cotomy.net/posts/releases/cotomy-2-0-0-release/","summary":"Major release that aligns Cotomy with TypeScript 6, followed by a small v2.0.1 documentation update.","title":"Cotomy v2.0.0"},{"content":"Previous article: The Birth of the Page Controller Why I had to think so much about forms at all I have mentioned form architecture many times in other articles, but I have not properly written down why it grew into several abstraction layers.\nThat structure did not come from a desire to make forms academically elegant. It came from the fact that web application structure felt deeply unnatural to me for a long time.\nMy own development background began with Visual C++ and Visual Basic 6, then continued through C# and Java Swing before serious web work became central. Because of that background, the difference between desktop applications and web applications was not a small implementation detail. It was a structural shock.\nIn desktop development, a form is usually a stable screen object. The screen exists, events are attached to it, and the code around it has one visible owner. In the web, especially in the years when I was building many business systems without a comfortable framework base, the structure felt fragmented. 
HTML existed in one place, JavaScript in another, request handling somewhere else, and the real screen flow emerged only after those pieces happened to cooperate.\nThat was the part I found difficult to accept.\nEven now, in the range of teams I personally know, desktop-oriented teams often do not move to the web quickly. And when they do, many of them buy expensive third-party control suites that let them treat web screens somewhat like desktop screens. I understand that reaction very well.\nWhy ASP.NET still mattered to me even when I could not use it Looking back, ASP.NET had a very strong appeal for exactly this reason. The idea that you could define events in a way that felt much closer to desktop development, and have the surrounding wiring largely standardized, was genuinely innovative. What I valued there was not the idea of reviving Web Forms as-is, but the fact that event flow had an explicit owner instead of being left as scattered page convention.\nI should say this clearly: I am not a Microsoft partisan. I have simply spent many years in environments where Microsoft products were practical and useful.\nIf I had been in a position where I could choose ASP.NET and C# freely from the beginning, I might not have struggled with web architecture in the same way. But in reality, I was often in situations where even adopting C# itself was difficult. That meant I could not rely on the strong architectural assistance those products offered.\nAt the same time, I also suspect that if I had received too much support too early, I might have understood the web less deeply. And if that had happened, Cotomy probably would not exist.\nThe first web frustration was not the DOM itself When I first started struggling with web development, the biggest irritation was not HTML. 
It was JavaScript.\nJavaScript certainly supported object-oriented programming in some sense, but for a long time it did not feel like a language that naturally encouraged the kind of structured object-oriented work I wanted. jQuery made DOM operations much easier, and I depended on it heavily for a period, but it did not give me the kind of application structure I was looking for.\nThe source of my dissatisfaction was simple. There was no class-based structure that I could trust as the real shape of the screen.\nThat changed when I adopted TypeScript. Once I could define stable classes, I built ElementWrap, which later became CotomyElement, and then extended that foundation into several specialized directions.\nAt that point, the question stopped being how to manipulate the DOM more conveniently. The question became how to represent a screen as a consistent unit at all.\nWhy form inheritance looked wrong at first When I began to shape the form layer, I hesitated.\nIn desktop application development, I rarely saw form classes deeply inherited in a meaningful way. Perhaps some teams did that, but I did not personally encounter it often, and I was not eager to do it myself. At most, an application might define a shared base form for a common toolbar or status bar. Even that kind of inheritance can be controversial, so if I had only been thinking with desktop habits, I probably would not have designed a multi-layer form hierarchy.\nBut the web changed the problem.\nThe real difference was not visual controls. It was the client-server relationship. Once that difference became the center of the problem, forms stopped being one thing.\nThe split was not only about transport. 
It was about three different axes that kept colliding in real screens: transport, meaning GET navigation versus POST submission versus API calls; identity, meaning whether the screen already owned an entity key; and lifecycle, meaning whether the screen navigated away or updated itself in place.\nA form could navigate with GET. A form could submit data with POST. A form could call an API through Ajax and remain on the same screen.\nThose were not cosmetic variations. They were different runtime behaviors.\nA single form abstraction breaks down because GET navigation, POST submission, and API-driven updates do not share the same lifecycle. Treating them as one leads to duplicated logic, inconsistent state transitions, and unclear ownership.\nIn practice, this means the same screen starts breaking in ordinary ways. After one submit, reload may behave like GET navigation while the latest save happened through Ajax, the URL no longer explains the current state, and the screen ends up with two state sources: what the DOM currently shows and what the last API response implied. This is the point where a single form abstraction stops being a simplification and becomes a hiding place for contradictory behavior.\nThat is why I ended up classifying forms around transport and lifecycle rather than only around screen appearance. And that is also why the result became a hierarchy rather than a loose set of composable helpers. An execution path is not an optional decoration on top of a form. It is the form\u0026rsquo;s structural meaning. If that meaning is assembled too freely at runtime, informal paths reappear and the same instability returns under a different style.\nWhy one base form was still necessary Even after that classification, I still wanted every one of those paths to remain recognizably a form.\nThat was important to me. 
If GET-based search, POST submission, and Ajax submission were all treated as unrelated ad hoc techniques, the architecture would split again immediately.\nSo I introduced one base class whose role was simply to make form ownership explicit. In the current Cotomy source, that base is CotomyForm. It extends CotomyElement, keeps the screen rooted in an actual form element, standardizes initialization, and defines submitAsync as the one required boundary.\nFrom there, specialized subclasses diverge by behavior.\nThe current class structure in Cotomy The current hierarchy in src/form.ts looks like this. What matters in this diagram is not visual form type. It shows responsibility separation through runtime behavior.\nclassDiagram\nclass CotomyElement\nclass CotomyForm {\n#method: string\n+actionUrl: string\n+autoReload: boolean\n+initialize() this\n+reloadAsync() Promise~void~\n+submitAsync() Promise~void~\n}\nclass CotomyQueryForm {\n#method: string = \u0026#34;get\u0026#34;\n+submitAsync() Promise~void~\n}\nclass CotomyApiForm {\n+actionUrl: string\n#method: string = \u0026#34;post\u0026#34;\n+formData() FormData\n+submitAsync() Promise~void~\n}\nclass CotomyEntityApiForm {\n+entityKey: string | undefined\n+actionUrl: string\n#method: string\n}\nclass CotomyEntityFillApiForm {\n+reloadAsync() Promise~void~\n#loadActionUrl: string\n#canLoad: boolean\n#bindNameGenerator() ICotomyBindNameGenerator\n+renderer() CotomyViewRenderer\n}\nclass CotomyApi\nclass CotomyViewRenderer\nCotomyElement \u0026lt;|-- CotomyForm\nCotomyForm \u0026lt;|-- CotomyQueryForm\nCotomyForm \u0026lt;|-- CotomyApiForm\nCotomyApiForm \u0026lt;|-- CotomyEntityApiForm\nCotomyEntityApiForm \u0026lt;|-- CotomyEntityFillApiForm\nCotomyApiForm ..\u0026gt; CotomyApi : submits\nCotomyEntityFillApiForm ..\u0026gt; CotomyViewRenderer : renders\nThe key point is that the hierarchy is not trying to model visual form types. It is modeling runtime responsibilities. 
CotomyQueryForm fixes the transport path for navigation, CotomyApiForm fixes the submission path, CotomyEntityApiForm fixes how identity changes the operation, and CotomyEntityFillApiForm fixes the load and render path on top of that. That is how the three axes of transport, identity, and lifecycle are turned into explicit runtime boundaries.\nEach form layer owns exactly one responsibility: navigation through GET, submission through POST or PUT, entity identity, or load and render. Mixing these responsibilities inside one abstraction was the source of instability.\nWhy the hierarchy had to keep growing The first major split was simple.\nCotomyQueryForm exists because query navigation should stay a GET concern. It rebuilds query parameters and navigates with the resulting URL.\nCotomyApiForm exists because API submission has a different lifecycle. It gathers FormData, normalizes datetime-local inputs, submits through CotomyApi, and exposes failure events.\nCotomyEntityApiForm adds another distinction that became essential in business CRUD screens. Creation and update often target the same logical resource, but they differ by whether the screen already has an entity key. That is why the current implementation switches between POST and PUT automatically based on the data-cotomy-entity-key attribute.\nOlder web systems often pushed everything through POST, and I worked in periods where that was still common enough. But I do not think that is where the design should stop. If create and update are different operations, it is better to use them in a form the runtime can understand explicitly. 
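That explicit create-versus-update split can be sketched in isolation. The helper below is hypothetical, not the actual CotomyEntityApiForm source; it only mirrors the POST-or-PUT switching driven by an entity key.

```typescript
// Hypothetical sketch: derive the HTTP method and action URL from an entity
// key, mirroring switching based on the data-cotomy-entity-key attribute.
function resolveEntitySubmit(
  action: string,
  entityKey: string | undefined,
): { method: string; url: string } {
  if (!entityKey) {
    return { method: "post", url: action }; // create path
  }
  return {
    method: "put",                                      // update path
    url: `${action}/${encodeURIComponent(entityKey)}`,  // e.g. /api/users/42
  };
}
```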
That also matches the more natural business expectation that an existing record keeps its identity rather than pretending every save is the same kind of operation.\n\u0026lt;form action=\u0026#34;/api/users\u0026#34; data-cotomy-entity-key=\u0026#34;42\u0026#34;\u0026gt; \u0026lt;input name=\u0026#34;name\u0026#34; /\u0026gt; \u0026lt;/form\u0026gt; With that attribute present, CotomyEntityApiForm treats the form as an update and builds the action URL by appending the key. Without it, the same form goes through the create path.\nThat behavior matters because it keeps one CRUD screen inside one structural model. I did not want create screens and edit screens to become different species of frontend code. In business applications, it is normal to want one screen model for both create and edit. But at the HTTP level they are still different operations. CotomyEntityApiForm exists because I wanted that difference to stay explicit without splitting the whole screen into separate architectures.\nWithout that boundary, create and update flows tend to split in quiet ways. Submit handling diverges, reload paths follow different assumptions, and the same screen starts carrying multiple informal rules about which endpoint shape applies in which state.\nWhy loading created another abstraction layer Even that was still not enough.\nThe next problem was loading.\nIn desktop software, it is natural for a form to load and hold data directly in a way that feels local to that screen. Web applications are more awkward. If the frontend starts defining its own parallel data shape too aggressively, the same practical data contract gets restated in too many places.\nI understand perfectly well why DTOs exist. I use them too. My objection was never to an intermediate layer itself. 
The thing I disliked was defining the same effective shape again and again across server load handling, screen expansion, and submit flow when the screen was still functionally one vertical slice of the same business feature. The practical problem is that the same screen can easily turn into three overlapping definitions: a load DTO, a submit DTO, and a UI state model that has to keep the other two aligned.\nThat concern is weaker in large SPA systems, because those systems are often genuinely organized around explicit API contracts between more independent frontend and backend layers. But many of the systems I build are not like that. They are made of many business features that remain vertically sliced and operationally local.\nThat is why CotomyEntityFillApiForm appeared. It adds loadAsync, fillAsync, and renderer-based reflection so one form can both submit and refill itself through one consistent path.\nThis was the concrete failure pattern I wanted to remove. When submit and reload travel through separate routes, DTO assumptions, DOM state, and form state start drifting apart. The same business screen then has to be re-understood through several overlapping mechanisms instead of one execution model.\nIt also means the framework can say something very explicit: if a screen is fundamentally an entity-oriented form, then load, fill, submit, and UI reflection belong to one family of behavior.\nThe override points were part of the design from the beginning Another important point is that I did not want this form model to become rigid. Business systems always contain exceptions.\nSo the hierarchy did not only classify behavior. It also exposed stable override points.\nThe goal was not to create more abstractions, but to remove informal paths. Each form type exists to make one execution path explicit and predictable.\nOne example is when the load path should not exactly match the submit path. 
CotomyEntityFillApiForm currently defaults loadActionUrl to actionUrl, and canLoad to whether an entity key exists. But both are designed to be overridden.\nimport { CotomyEntityFillApiForm } from \u0026#34;cotomy\u0026#34;; class CustomerCodeForm extends CotomyEntityFillApiForm { protected override get canLoad(): boolean { return !!this.attribute(\u0026#34;data-customer-code\u0026#34;); } protected override get loadActionUrl(): string { const code = this.attribute(\u0026#34;data-customer-code\u0026#34;); return code ? `/api/customers/by-code/${encodeURIComponent(code)}` : this.attribute(\u0026#34;action\u0026#34;)!; } } That pattern is important when an existing endpoint uses a natural identifier or another route convention that does not match the default surrogate-key path.\nAnother real extension point is the bind name generator. CotomyEntityFillApiForm uses CotomyViewRenderer internally, and the current implementation exposes bindNameGenerator for exactly that reason.\nimport { CotomyDotBindNameGenerator, CotomyEntityFillApiForm, ICotomyBindNameGenerator } from \u0026#34;cotomy\u0026#34;; class DotNameForm extends CotomyEntityFillApiForm { protected override bindNameGenerator(): ICotomyBindNameGenerator { return new CotomyDotBindNameGenerator(); } } And when submission itself needs a different identifier path, the entity-level action and method can also be overridden.\nimport { CotomyEntityApiForm } from \u0026#34;cotomy\u0026#34;; class LegacyCustomerForm extends CotomyEntityApiForm { public override get actionUrl(): string { const code = this.attribute(\u0026#34;data-customer-code\u0026#34;); const base = this.attribute(\u0026#34;action\u0026#34;)!; return code ? `${base}/by-code/${encodeURIComponent(code)}` : base; } protected override get method(): string { return this.attribute(\u0026#34;data-customer-code\u0026#34;) ? \u0026#34;put\u0026#34; : \u0026#34;post\u0026#34;; } } The important part here is not just flexibility. 
It is that flexibility stays inside a known class boundary instead of escaping into per-screen improvisation.\nThat is also where Cotomy\u0026rsquo;s value becomes practical. It removes some freedom on purpose. Screens are not encouraged to invent their own submit route, their own reload contract, or their own entity identification rule every time. Those choices are narrowed into a small number of regular paths. Cotomy did not gain value by increasing the number of form classes. It gained value by reducing the number of execution paths a screen is allowed to have.\nIn concrete terms, submit is routed through one form-owned path, load and submit stay inside the same form family, entity identity is attached to the form through DOM attributes, and render reflection is tied to the same load boundary instead of being scattered. That loss of freedom is what makes the screen model harder to break. State transitions become easier to predict, and debugging cost goes down because fewer informal routes are available. It also reduces state inconsistency, because one form keeps one execution boundary and structurally limits how mutation paths can diverge.\nWhy this architecture is not trying to fit every frontend style I do not think the form classes prepared in Cotomy are a natural fit for very large SPA applications.\nThat is not false modesty. It is simply a matter of target shape.\nWhat I usually build is a large number of CRUD-oriented screens for many entity types, plus surrounding screens that still mostly show one coherent slice of information at a time. In that kind of system, the architectural question is not how to make one giant client state graph elegant. The question is how to keep a large system locally understandable.\nIn many frontend approaches, these concerns are distributed across components, hooks, and API layers. 
Cotomy instead centralizes them into a small number of form types to reduce coordination cost.\nThat is what the form hierarchy was for.\nIt let me keep GET navigation, API submit, entity identification, loading, filling, and rendering inside a small number of repeatable structures. That reduced the number of ways a screen could become strange.\nMy own conclusion now Looking back, I think this part of Cotomy was not born from theory. It was born from discomfort.\nI had too much desktop instinct to be satisfied with loose frontend scripting. But I also had too much direct web experience by that point to pretend the browser behaved like Windows Forms.\nSo the only realistic option was to define a web-native structure that still gave me explicit class ownership. That is what the form abstraction became.\nIn my view, this was the right decision for the kind of business systems I build. It gave me one more way to make each screen understandable as a local unit instead of a loose agreement between markup, JavaScript, and network calls. That is also why it does not fully resemble typical SPA design. 
It was built for server-led business screens that still need explicit client-side structure, not for a frontend architecture that assumes the client owns everything.\nDevelopment Backstory This article is part of the Cotomy Development Backstory, which traces how Cotomy\u0026rsquo;s architecture emerged through real project constraints.\nSeries articles: Building Systems Alone , Early Architecture Attempts , The First DOM Wrapper and Scoped CSS , API Standardization and the Birth of Form Architecture , Page Representation and the Birth of a Controller Structure , The CotomyElement Constructor Is the Core , Dynamic HTML Boundaries in CotomyElement , Reaching Closures to Remove Event Handlers Later , The Birth of the Page Controller , and The Birth of the Form Abstraction.\nNext article: Data Binding as a Structural Problem is currently being drafted.\n","permalink":"https://blog.cotomy.net/posts/development-backstory/10-the-birth-of-the-form-abstraction/","summary":"Why Cotomy ended up with multiple form layers, and how that structure came from the gap between desktop application habits and web runtime reality.","title":"The Birth of the Form Abstraction"},{"content":"Previous article: Introducing Project Templates for Razor Pages In the previous article, I introduced what the Cotomy Razor Pages templates are for and how Standard and Professional are positioned. This time, I want to focus on something more practical: what you actually receive after purchase, how that package is structured, and what the correct start path looks like.\nThis template is not just a sample application bundled into an archive. It is a structured base for building business applications on Cotomy and Razor Pages, and the structure of the package is part of that design. The package is distributed as a ZIP file, the contents differ slightly by plan, and the correct way to begin is defined by the package itself. 
The template reference page is here:\nhttps://cotomy.net/razorpages/ The ZIP file is a distribution package, not your working solution After purchase, you download the archive that matches your edition, such as cotomy-razorpages-project-template-standard.zip or cotomy-razorpages-project-template-professional.zip, and extract it. What appears after extraction is the distribution package. That distinction matters because the extracted root is not yet the application workspace you will continue developing in.\nThe first thing to understand is that this root folder is a handoff package. It contains the material used to create or study a solution, but it is not itself the final working solution that should become your real project. The package includes the solution templates, the scaffolding templates, the reference source, and the documentation that defines how to proceed.\nThat is why the first action should always be to open the root README.md. The README is not supplementary reading. It defines the actual setup order.\nWhat is in the extracted package At the root level, the package contains the pieces needed to understand the product and start from the intended path. In practice, the important directories are docs, source, and templates, together with the root README and the edition-specific template package file.\nThe package structure is built around four roles. The README defines the setup order. The solution template package is what enables solution generation through dotnet new. The templates folder contains the smaller scaffolding templates used later for adding segments and pages. The source folder contains reference workspaces that show what the generated structure looks like in a concrete form. The docs folder explains how to use what was generated.\nThat is also why copying folders from source is not the recommended first step. 
The product itself is centered on the project templates, and the source area is attached as supplementary material so you can inspect or refer to a concrete workspace when needed. The intended route is still README first, then template registration, then solution generation.\nStandard and Professional are different in structure, not only in feature count The easiest way to understand the two editions is not by reading them as a simple feature checklist. The more useful view is structural.\nStandard edition Standard is the lighter package. It focuses on the UI runtime, page structure, and the integration points that let you connect your own backend decisions.\nIn the bundled material, Standard gives you Core as the shared foundation, UISample as the UI reference, AuthSample as the minimum authentication reference, the templates folder, the source folder, and the documentation set. In the generated starter workspace, the minimum structure is the main host project plus Core. In the generated sample workspace, the main host project is joined by Core, UISample, and AuthSample.\nWhat Standard does not include is equally important. It does not include a persistence layer, it does not include the Professional EF Core data setup, and it does not give you a packaged full-stack application baseline. If your system already has backend decisions or if persistence should remain your own responsibility, that is exactly why Standard exists.\nProfessional edition Professional starts from the same base and adds the missing full-stack pieces. The additional structural units are DataModel, Auth, and EFCRUDSample.\nThat changes the package from a UI and integration baseline into a full-stack starting point. In the Professional starter workspace, the minimum structure already includes the main host project, Core, DataModel, and Auth. 
In the Professional sample workspace, UISample and EFCRUDSample are added so that you can inspect the full connection from UI to persistence.\nThe practical difference is simple. Standard is frontend plus integration boundary. Professional is a full-stack base that already includes the persistence and authentication side needed to begin from a more complete business application skeleton.\nREADME is the real entry point The most important rule in the package is straightforward: do not improvise the setup order. The root README defines the real path, and skipping it causes failures immediately.\nThis is not theoretical. The package documentation explicitly warns that reordering the initial steps leads to startup and environment failures. If you run the application before setting the signing key, startup fails. If you skip the frontend build path embedded in the workspace instructions, the UI is not in a valid runnable state. In Professional, if SQL Server is not running and healthy before the migration and run path, the database steps fail immediately.\nThis article therefore should not be used as a replacement for README. Its purpose is narrower. It explains why the package is structured the way it is and why the flow is extract, read, register templates, generate a solution, and then continue from the generated workspace README.\nWhat to install before you start One thing worth checking before you even run the template commands is whether the base tools are already available on your machine. The bundled documentation under docs assumes this order as well, and it is better to confirm the environment early than to discover it halfway through setup.\nFor both editions, the minimum starting point is .NET SDK 10.x together with Node.js LTS and npm. In the Professional README, the packaged setup path is written around SQL Server and EF Core migrations, so Docker and the dotnet ef tool are also part of that baseline entry route. 
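The baseline above can be confirmed with a quick pre-flight check before running any template commands. This is a minimal sketch, not part of the package: the tool names simply follow the baseline listed above, and check_tool is a hypothetical helper name, not something the README defines.

```shell
#!/bin/sh
# Pre-flight sketch: report which baseline tools are on PATH.
check_tool() {
  # Prints "<name>: found" or "<name>: missing".
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
  fi
}
# Tool list follows the edition baseline described in the article.
# docker matters only on the documented Professional default path, and
# dotnet ef is a global tool, so verify it via the dotnet CLI separately.
for tool in dotnet node npm docker; do
  check_tool "$tool"
done
```

The script only reports status; the package README remains the source of truth for the exact versions and edition-specific verification steps.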
That said, this should be read as the documented default path for the bundled setup, not as a claim that EF Core can only be used with that database choice. If you intend to run Professional against a different database, read those environment steps in terms of your actual provider and runtime setup rather than copying the SQL Server path literally. VS Code is also the recommended editor in the documentation, not because it is mandatory, but because the generated workspaces already include tasks and guidance that fit that editor well.\nThe safest recommendation here is still the ordinary one. If .NET is not installed yet, use the official installer from Microsoft. If Node.js and npm are not installed yet, use the standard LTS installer from nodejs.org. If you are following the default Professional path documented in the package, install Docker Desktop by the usual installer route and add dotnet ef with the standard global tool command shown in the README. If your Professional environment uses a different database path, replace those infrastructure-specific steps with the equivalent setup for that database before continuing. That is enough for an entry point. It is better to follow the package README for the actual verification and edition-specific steps than to over-specify OS-by-OS installation detail in this article.\nIn other words, this article should tell you what to have ready, but the package documentation should remain the source of truth for the exact setup sequence.\nInstall the templates before creating a solution The package supports two kinds of template installation. First, install the solution template package that matches your edition. 
Second, install the shared scaffolding templates from the templates folder.\nInstall only the package that matches your edition.\n# Standard dotnet new install ./Cotomy.Templates.Standard.nupkg # Professional dotnet new install ./Cotomy.Templates.Professional.nupkg # Shared templates dotnet new install ./templates In actual use, you install the edition package that matches the archive you purchased, not both editions at once. The shared templates folder enables the smaller generators used after solution creation.\nOnce installed, the available commands fall into two different groups, and it is better not to mix them conceptually.\nThe first group is for creating the solution itself.\ndotnet new cotomy-standard-starter -n \u0026lt;SolutionName\u0026gt; -o \u0026lt;SolutionName\u0026gt; dotnet new cotomy-professional-starter -n \u0026lt;SolutionName\u0026gt; -o \u0026lt;SolutionName\u0026gt; dotnet new cotomy-standard-sample -n \u0026lt;SolutionName\u0026gt; -o \u0026lt;SolutionName\u0026gt; dotnet new cotomy-professional-sample -n \u0026lt;SolutionName\u0026gt; -o \u0026lt;SolutionName\u0026gt; In these examples, SolutionName means the name you want to give your generated solution and main project folder. These are the commands that generate the initial workspace. They decide the solution shape, the main project name, the root layout, and the edition boundary from the start. The starter commands create the minimum working base. The sample commands create the runnable reference workspaces.\nThese two groups serve completely different purposes and should not be confused.\nThe second group is for extending a solution that already exists.\nThese are not alternative ways to start a project. 
They are follow-up templates used after the starter solution has already been created.\nFor this package, I prepared dotnet new templates not only for creating the solution itself but also for adding a segmented project structure and a full page set that already includes the TypeScript entry point.\nThe word SegmentName matters here. In .NET, the application can be split into multiple projects, and this template set assumes feature-level separation through Razor Class Libraries. At the same time, if Pages paths with the same names are repeated across projects, routing becomes harder to keep clear. That is why the templates intentionally put pages under a segment-shaped path and keep that segment name visible in the generated structure.\ncotomy-rcl dotnet new cotomy-rcl -n \u0026lt;SegmentName\u0026gt; SegmentName means the target project or business area, such as Sales.\ncotomy-rcl creates a new segment as a Razor Class Library. In practical terms, that means a new business area such as Sales or Inventory gets its own project, its own Pages tree, and its own Cotomy-compatible view imports. That separation is important because the segment is not just a folder. It is a project boundary that keeps one screen area grouped as a reusable unit inside the solution.\nThis screenshot shows that cotomy-rcl appears in the IDE UI after installation. Even so, this package expects you to run it from the command line in the workspace root. Here the UI path is basically just an error path, so do not use it from that screen.\ncotomy-page dotnet new cotomy-page -n \u0026lt;PageName\u0026gt; -P \u0026lt;SegmentName\u0026gt; dotnet new cotomy-page -n \u0026lt;PageName\u0026gt; -P \u0026lt;SegmentName\u0026gt; -D \u0026lt;SubDirectory\u0026gt; # example dotnet new cotomy-page -n CustomerEdit -P Sales -D Admin/Master PageName means the page you want to add. SegmentName means the target segment project, such as Sales. 
SubDirectory means an optional nested folder under Pages within that project.\nThe target segment must already exist, typically created with cotomy-rcl, before using cotomy-page.\ncotomy-page works inside a segment that already exists. It generates the page as a co-located unit of Razor, PageModel, scoped CSS, and TypeScript controller entry point under the segment\u0026rsquo;s Pages path. That matters because a Cotomy page is not only markup. The intended structure is one page name tied to its server logic, page-scoped styling, and client-side controller behavior from the beginning.\nThe P option is the segment project folder name, and at the same time it is the segment folder name used under Pages. The D option adds an optional nested directory under Pages/SegmentName/, so the generated page lands in a shape such as Pages/Sales/Admin/Master/CustomerEdit.cshtml.\nIn practice, leaving that structure as generated is usually the safest choice. One practical reason is that keeping the project name in the page path helps avoid confusion when the same cshtml name exists in more than one project. If that structure still bothers you, the realistic alternatives are to move only the main project pages to a different organization of your own, or to stop using the template for that page and create the cshtml set manually.\nThis screenshot shows that solution templates also appear in the IDE UI. They are visible there, but they must not be used for solution creation in this package. The same warning applies to cotomy-page as well: after installation, these templates will appear in the IDE template UI, but they are designed to be used from the command line. If you try to use that UI flow here, you are again stepping into the error path rather than the supported path.\nThis is the recommended route because it keeps naming, solution shape, and project structure aligned with the package design. 
Starting by copying the source folder bypasses that initialization path and loses the main advantage of using a project template in the first place.\nCreate a solution through dotnet new After template installation, the next step is solution generation. This is where the package stops being a distribution and starts becoming your own workspace.\nThere are two different generated solution types. Starter is the minimum real project base. Sample is the reference implementation used to inspect how the pieces are meant to fit together. Starter is for beginning your own application. Sample is for understanding and verifying the included structure.\nUsing dotnet new here is important for reasons that are easy to miss if you only look at the generated files afterward. The template binds the solution name, the main project name, the workspace layout, and other generated paths together from the same input. That avoids the manual rename drift that usually appears when people start from a copied reference workspace and then retrofit their own application identity afterward.\nRun the following command in an empty working directory, not inside the extracted package.\nFor example, to create a new Standard solution from the terminal:\ndotnet new cotomy-standard-starter -n MyBusinessApp cd MyBusinessApp code . This creates a new solution with your chosen name and opens the generated workspace in VS Code. For Professional, replace the template name with cotomy-professional-starter and continue with the Professional README steps for the environment side.\nImportant: do not use the Visual Studio or VS Code template UI When you install the templates, Cotomy Standard, Professional, RCL, and page-related templates can appear in the Visual Studio or VS Code template selection UI. 
That visibility is expected because the IDE is only reflecting what was installed.\nThe important point is that this UI must not be used for this package flow, whether you are creating the solution or trying to add an RCL segment or page from those screens.\nIf you use the UI, the workflow fails in two different ways. For solution creation, the solution structure can break. In particular, the IDE flow can generate an additional solution file at an unintended location, which produces duplicated solution roots and a directory layout that no longer matches the package design. For cotomy-rcl and cotomy-page, the issue is simpler: in this package flow, using the IDE UI is basically just an error path instead of the supported command-line path from the workspace root.\nThis is a structural failure, not a cosmetic one. Once the IDE flow creates that extra solution in the wrong place, the relationship between the solution root, the main project folder, and later segment generation breaks. The workspace no longer matches the assumptions built into the template. That is the failure pattern shown in the screenshots above: the templates are visible in the UI, but using the UI for solution creation leads to the wrong shape.\nBecause of this, the rule is strict. Always create the solution by running dotnet new from the terminal. Do not use the Visual Studio or VS Code template selection UI for solution creation in this package.\nIn shorter form: if you use the UI, the solution structure can break. Use dotnet new only.\nThe source folder is attached for reference, not as the recommended start path The source folder is included intentionally, but it is not the center of the product. The main product is the project template itself. 
The source area is attached so that you have a concrete workspace you can open immediately and use as reference material when you want to inspect the packaged structure directly.\nIf you start from the source folder, you bypass the template initialization process. That means the solution name is already fixed, the project structure is not aligned with your own application identity, and future extensions through the templates may not match as cleanly as they do in a correctly generated workspace.\nBecause of that, the source folder should be treated as reference material only, not as the starting point for actual development. The cleaner route remains template installation followed by solution generation with your own project name.\nAfter generation, switch to the generated README Once dotnet new has created your solution, the setup instructions move with it. Every generated workspace contains its own README, and that README becomes the operational guide for the next stage.\nThat is where you follow the concrete environment steps for your generated workspace, including authentication key configuration, build preparation, and run order. In Professional, it is also where the SQL Server and migration path matters. This article stops before those details on purpose because repeating them here would only duplicate instructions that already belong to the generated workspace.\nDocs are part of the product, not optional extras The docs folder should be treated as part of the template itself, not as something you open only when you are stuck. README gives you the entry order, but docs give you the usage map once you are inside the template ecosystem.\nSamples.md is the place to read when you want to understand what the included sample projects are meant to teach. TemplateUsage.md is the practical guide for adding segments and pages through the bundled generators. UnitTest.md defines the testing direction. 
The core-classes folder is the local reference area for the main Cotomy-side classes used from the template.\nThere is also an important division of responsibility between docs and cotomy.net. The docs folder explains how to use this template package and how its generated structure is meant to be extended. cotomy.net is the broader reference for how Cotomy itself behaves at runtime. One explains template usage. The other explains the runtime library and its design.\nWithout the docs, you can run the template, but you cannot fully understand or extend it correctly.\nThe template is designed with the assumption that these docs are read alongside actual development, not only when problems occur.\nCorrect flow The correct path is short and strict.\nExtract the ZIP package. Open the root README. Install the templates. Create your solution with dotnet new. Move to the generated workspace README. Use docs when you need template-specific guidance. If you keep that order, the package makes sense quickly. If you skip it, the structure looks larger and more confusing than it really is.\nNext article: Building the Runtime Environment for the Standard Edition\n","permalink":"https://blog.cotomy.net/posts/razor-pages-templates/understanding-the-cotomy-razor-pages-project-template-structure-and-setup/","summary":"This article explains how the Cotomy Razor Pages template package is distributed, why the root README is the first thing to open, how Standard and Professional differ structurally, and why the correct start path is template install followed by solution generation.","title":"Understanding the Cotomy Razor Pages Project Template Structure and Setup"},{"content":"This continues from Standardizing CotomyPageController for Shared Screen Flows .\nIn daily CRUD-heavy business application development, screens tend to drift when each one is built as a separate local solution. 
Search conditions, edit flows, read-only displays, and API loading rules gradually diverge, and after that even small changes start requiring more reading than they should.\nIn CRUD-heavy business applications, I usually treat screens as three groups. They are search screens, detail and edit screens, and screens that do not really belong to either of those categories.\nFor ordinary CRUD work, the first two are usually the main path. Search screens list and narrow down records. Detail and edit screens load one record, show its current state, and post changes back.\nThere are also read-only screens that only display data. When the target is not input elements, I usually use the renderer. And if a screen is only for display, I do not always use a form at all. Sometimes I call CotomyApi directly and apply the response to the page.\nThe code below shows these patterns in the simplest possible shape. The entity uses only Id and Name. The search sample keeps only a plain search form. The edit sample uses only input elements. The read-only sample renders values into table cells. This article stays at that category level on purpose. Combining those categories into one CRUD workflow is a separate step, and I treat that as the subject of the next article.\nOne Pattern, Three Shapes The practical structure is usually the same, even though the screens look different.\nFor each screen type, the input path, load path, render path, and main boundary line up like this. Search uses query string input, a server request for loading, server rendering, and the URL-driven screen as its boundary. Edit uses form submit through the API, an API GET for loading, input fill for rendering, and the screen plus API contract as its boundary. Read-only uses no form, an API GET for loading, the renderer, and the display boundary. That is why these screens are easy to standardize later. The category changes, but the screen shape usually does not.\nMost business screens are not unique interaction problems. They are repeated operational patterns.\nThe reason I keep these categories separate is that each one has a different data flow and a different ownership boundary. 
Search screens are request-driven, edit screens are entity-driven, and read-only screens are display-driven. If those responsibilities are mixed too freely, the screen becomes harder to understand and state consistency becomes harder to preserve over time.\nSo the practical rule is simple. Each screen category should keep one fixed data flow and one clear ownership boundary. Search screens should remain URL-driven, edit screens should remain API-driven, and read-only screens should remain display-driven unless there is a concrete reason to do otherwise. Mixing those patterns too casually increases complexity and makes state inconsistency more likely.\nCotomy matters here because the screen can stay in one DOM-first model even when the interaction style changes. Query forms, API forms, renderer-based display, and page-level orchestration all stay within the same screen boundary instead of forcing different UI models for different operations. That is the main reason this article is not only a general Razor Pages style note. Form, API, renderer, and page controller behavior are designed to stay inside one consistent screen model. This categorization also exists to avoid the kind of state fragmentation described in the Problem series.\nSearch Screens For search screens, I usually use the query string and let the server perform the filtering and list rendering. That keeps the URL state aligned with the screen state, and it makes reload, bookmark, and back-navigation behavior easy to understand. I keep this pattern on the server side because search results are naturally request-driven. Users expect the URL to represent the current filter state, and server rendering keeps that relationship explicit. In real systems, this screen category usually grows beyond a single keyword field. Paging, sort order, and other search conditions are often added to the same request boundary. That is another reason I keep these screens on query strings. 
When users change conditions or move between pages, I usually want that state to remain in browser history as-is so back navigation and forward navigation continue to work naturally.\nSearch Screen Razor Page This Razor Page keeps the search form minimal and renders the list on the server side. The first cell holds the entity key, and the row transition is triggered from that cell on the client side.\n@page \u0026#34;/sample-entities\u0026#34; @model SampleEntitySearchModel \u0026lt;h1\u0026gt;Sample Entities\u0026lt;/h1\u0026gt; \u0026lt;form id=\u0026#34;sample-entity-search-form\u0026#34; method=\u0026#34;get\u0026#34;\u0026gt; \u0026lt;label for=\u0026#34;keyword\u0026#34;\u0026gt;@nameof(SampleEntitySearchQuery.Name)\u0026lt;/label\u0026gt; \u0026lt;input id=\u0026#34;keyword\u0026#34; name=\u0026#34;@nameof(SampleEntitySearchQuery.Name)\u0026#34; value=\u0026#34;@Model.Query.Name\u0026#34; /\u0026gt; \u0026lt;button type=\u0026#34;submit\u0026#34;\u0026gt;Search\u0026lt;/button\u0026gt; \u0026lt;/form\u0026gt; \u0026lt;table\u0026gt; \u0026lt;thead\u0026gt; \u0026lt;tr\u0026gt; \u0026lt;th\u0026gt;@nameof(SampleEntityListItem.Id)\u0026lt;/th\u0026gt; \u0026lt;th\u0026gt;@nameof(SampleEntityListItem.Name)\u0026lt;/th\u0026gt; \u0026lt;/tr\u0026gt; \u0026lt;/thead\u0026gt; \u0026lt;tbody\u0026gt; @foreach (var entity in Model.Entities) { \u0026lt;tr\u0026gt; \u0026lt;td data-entity-id=\u0026#34;@entity.Id\u0026#34;\u0026gt;@entity.Id\u0026lt;/td\u0026gt; \u0026lt;td\u0026gt;@entity.Name\u0026lt;/td\u0026gt; \u0026lt;/tr\u0026gt; } \u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; Search Screen Page Model The PageModel only reads the query string and asks the repository for the filtered list. 
The persistence side is intentionally abstracted away because the important part here is the screen pattern.\nusing Microsoft.AspNetCore.Mvc; using Microsoft.AspNetCore.Mvc.RazorPages; public class SampleEntitySearchModel : PageModel { private readonly ISampleEntityRepository _repository; public SampleEntitySearchModel(ISampleEntityRepository repository) { _repository = repository; } public SampleEntitySearchQuery Query { get; private set; } = new(); public IReadOnlyList\u0026lt;SampleEntityListItem\u0026gt; Entities { get; private set; } = []; public async Task OnGetAsync([FromQuery] SampleEntitySearchQuery query) { Query = query; Entities = await _repository.SearchAsync(query.Name); } } public class SampleEntitySearchQuery { public string Name { get; set; } = string.Empty; } public class SampleEntityListItem { public int Id { get; set; } public string Name { get; set; } = string.Empty; } public interface ISampleEntityRepository { Task\u0026lt;IReadOnlyList\u0026lt;SampleEntityListItem\u0026gt;\u0026gt; SearchAsync(string name); Task\u0026lt;SampleEntity?\u0026gt; GetByIdAsync(string id); Task\u0026lt;SampleEntity\u0026gt; InsertAsync(SampleEntity entity); Task\u0026lt;SampleEntity\u0026gt; UpdateAsync(SampleEntity entity); } public class SampleEntity { public string Id { get; set; } = string.Empty; public string Name { get; set; } = string.Empty; } Search Screen TypeScript The TypeScript side initializes CotomyQueryForm and adds one click handler for the first table cell. Clicking that cell moves to the edit screen for the selected entity.\nimport { CotomyElement, CotomyPageController, CotomyQueryForm } from \u0026#34;cotomy\u0026#34;; CotomyPageController.set(class extends CotomyPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { await super.initializeAsync(); this.setForm( CotomyElement.byId\u0026lt;CotomyQueryForm\u0026gt;( \u0026#34;sample-entity-search-form\u0026#34;, class extends CotomyQueryForm {} )! 
); this.body.onSubTree( \u0026#34;click\u0026#34;, \u0026#34;td[data-entity-id]\u0026#34;, (event: Event) =\u0026gt; { const cell = (event.target as HTMLElement).closest(\u0026#34;td[data-entity-id]\u0026#34;); const id = cell?.getAttribute(\u0026#34;data-entity-id\u0026#34;)?.trim(); if (!id) { return; } location.href = `/sample-entities/edit/${encodeURIComponent(id)}`; } ); } }); In practice, this pattern works well when the screen should remain URL-driven. The state stays visible in the address bar, paging and condition changes stay in browser history, and the server remains responsible for the list output.\nDetail And Edit Screens For detail and edit screens, I usually load and save through API calls. The page itself owns the screen boundary, but the data flow is handled through the API form. That keeps create and update inside one screen pattern without pushing the whole interaction back into server postback flow. The reason is practical. Once a user is editing a record, I do not want the whole page lifecycle to depend on full postback refresh. API-driven load and save keep the screen responsive while still preserving one clear endpoint contract.\nEdit Screen Razor Page This page keeps the edit form to plain input elements only. The screen decides whether it is new or existing from the route value and passes that key to Cotomy through the form attribute.\n@page \u0026#34;/sample-entities/edit/{id?}\u0026#34; @model SampleEntityEditModel @{ var entityKey = string.IsNullOrWhiteSpace(Model.EntityKey) ? (RouteData.Values[\u0026#34;id\u0026#34;]?.ToString() ?? string.Empty) : Model.EntityKey; var isNew = string.IsNullOrWhiteSpace(entityKey); } \u0026lt;h1\u0026gt;@(isNew ? 
\u0026#34;New Sample Entity\u0026#34; : \u0026#34;Sample Entity Detail\u0026#34;)\u0026lt;/h1\u0026gt; \u0026lt;form id=\u0026#34;sample-entity-edit-form\u0026#34; action=\u0026#34;/api/sample-entities\u0026#34; data-cotomy-entity-key=\u0026#34;@entityKey\u0026#34;\u0026gt; \u0026lt;div\u0026gt; \u0026lt;a href=\u0026#34;/sample-entities\u0026#34;\u0026gt;Back to List\u0026lt;/a\u0026gt; \u0026lt;button type=\u0026#34;submit\u0026#34;\u0026gt;Save\u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;div id=\u0026#34;edit-status\u0026#34;\u0026gt;\u0026lt;/div\u0026gt; \u0026lt;table\u0026gt; \u0026lt;tbody\u0026gt; \u0026lt;tr\u0026gt; \u0026lt;th\u0026gt;@nameof(SampleEntitySaveRequest.Id)\u0026lt;/th\u0026gt; \u0026lt;td\u0026gt; \u0026lt;input name=\u0026#34;@nameof(SampleEntitySaveRequest.Id)\u0026#34; value=\u0026#34;@entityKey\u0026#34; readonly /\u0026gt; \u0026lt;/td\u0026gt; \u0026lt;/tr\u0026gt; \u0026lt;tr\u0026gt; \u0026lt;th\u0026gt;@nameof(SampleEntitySaveRequest.Name)\u0026lt;/th\u0026gt; \u0026lt;td\u0026gt; \u0026lt;input name=\u0026#34;@nameof(SampleEntitySaveRequest.Name)\u0026#34; /\u0026gt; \u0026lt;/td\u0026gt; \u0026lt;/tr\u0026gt; \u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; \u0026lt;/form\u0026gt; Edit Screen Page Model The PageModel only exposes the route key to the Razor Page. Load and save are handled through the API form, so the page model stays small.\nusing Microsoft.AspNetCore.Mvc.RazorPages; public class SampleEntityEditModel : PageModel { public string EntityKey { get; private set; } = string.Empty; public void OnGet(string? id) { EntityKey = id?.Trim() ?? string.Empty; } } Edit Screen API Controller The controller keeps the contract simple. 
GET returns one entity for load, POST creates a new record, and PUT updates an existing one.\nusing Microsoft.AspNetCore.Mvc; using System.Text.Json.Serialization; [ApiController] [Route(\u0026#34;api/sample-entities\u0026#34;)] public class SampleEntitiesApiController : ControllerBase { private readonly ISampleEntityRepository _repository; public SampleEntitiesApiController(ISampleEntityRepository repository) { _repository = repository; } [HttpGet(\u0026#34;{id}\u0026#34;)] public async Task\u0026lt;ActionResult\u0026lt;SampleEntityResponse\u0026gt;\u0026gt; Get(string id) { var entity = await _repository.GetByIdAsync(id); if (entity is null) { return NotFound(); } return Ok(new SampleEntityResponse { Id = entity.Id, Name = entity.Name }); } [HttpPost] public async Task\u0026lt;ActionResult\u0026lt;SampleEntityResponse\u0026gt;\u0026gt; Post([FromForm] SampleEntitySaveRequest request) { var entity = new SampleEntity { Id = request.Id.Trim(), Name = request.Name?.Trim() ?? string.Empty }; var saved = await _repository.InsertAsync(entity); return CreatedAtAction(nameof(Get), new { id = saved.Id }, new SampleEntityResponse { Id = saved.Id, Name = saved.Name }); } [HttpPut(\u0026#34;{id}\u0026#34;)] public async Task\u0026lt;ActionResult\u0026lt;SampleEntityResponse\u0026gt;\u0026gt; Put(string id, [FromForm] SampleEntitySaveRequest request) { var entity = await _repository.GetByIdAsync(id); if (entity is null) { return NotFound(); } entity.Name = request.Name?.Trim() ?? 
string.Empty; var saved = await _repository.UpdateAsync(entity); return Ok(new SampleEntityResponse { Id = saved.Id, Name = saved.Name }); } } public class SampleEntitySaveRequest { [JsonPropertyName(nameof(Id))] public string Id { get; set; } = string.Empty; [JsonPropertyName(nameof(Name))] public string Name { get; set; } = string.Empty; } public class SampleEntityResponse { [JsonPropertyName(nameof(Id))] public string Id { get; set; } = string.Empty; [JsonPropertyName(nameof(Name))] public string Name { get; set; } = string.Empty; } Edit Screen TypeScript The TypeScript side is where the GUI behavior is wired. CotomyEntityFillApiForm handles the load and submit cycle, while the page controller only connects the screen and shows simple status text.\nimport { CotomyApiResponse, CotomyElement, CotomyEntityFillApiForm, CotomyPageController } from \u0026#34;cotomy\u0026#34;; CotomyPageController.set(class extends CotomyPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { await super.initializeAsync(); const status = this.body.first(\u0026#34;#edit-status\u0026#34;); if (!status) { return; } this.setForm(CotomyElement.byId(\u0026#34;sample-entity-edit-form\u0026#34;, class extends CotomyEntityFillApiForm { public override initialize(): this { if (this.initialized) { return this; } super.initialize(); this.apiFailed((event) =\u0026gt; { status.text = `API failed. status: ${event.response.status}`; }); this.submitFailed((event) =\u0026gt; { status.text = `Submit failed. 
status: ${event.response.status}`; }); return this; } protected override async fillAsync(response: CotomyApiResponse): Promise\u0026lt;void\u0026gt; { await super.fillAsync(response); if (!response.ok || !response.available) { return; } const entity = await response.objectAsync\u0026lt;{ Id: string; Name: string }\u0026gt;({ Id: \u0026#34;\u0026#34;, Name: \u0026#34;\u0026#34; }); status.text = `Loaded ${entity.Id} ${entity.Name}`.trim(); } protected override async submitToApiAsync(formData: FormData): Promise\u0026lt;CotomyApiResponse\u0026gt; { const response = await super.submitToApiAsync(formData); if (!response.ok || !response.available) { return response; } const entity = await response.objectAsync\u0026lt;{ Id: string; Name: string }\u0026gt;({ Id: \u0026#34;\u0026#34;, Name: \u0026#34;\u0026#34; }); status.text = response.status === 201 ? `Created ${entity.Id}` : `Saved ${entity.Id}`; return response; } })!); } }); In practice, this pattern works well when one screen needs to load, edit, and save the same entity without falling back to full postback refresh. The screen stays predictable because load and save move through one API contract while the page controller still owns the screen entry.\nRead-Only Screens For read-only screens, server rendering is usually the more natural default. If the screen only needs to show one entity and the display can be completed in one server response, rendering it on the server is often the simpler choice. That is especially true for publicly visible pages, where crawlability and predictable first render matter more.\nEven so, I do not think read-only screens must always stay fully server-rendered. There are cases where I still consider the renderer pattern shown here. 
For example, I may want the screen to follow the same client-side structure as edit screens, or I may need to assemble the display from multiple entities, smart-enum style values, or other API-driven data that is easier to combine after the page is already loaded.\nIn those cases, I usually avoid wrapping the screen in a form and call CotomyApi directly. Then I apply the response through the renderer. That keeps the screen in a display-only category while still allowing the UI to share patterns with other screens.\nRead-Only Razor Page This screen does not use a form. It only defines the display area and the bind targets that will receive the API response.\n@page \u0026#34;/sample-entities/view/{id?}\u0026#34; @model SampleEntityViewModel \u0026lt;h1\u0026gt;Sample Entity View\u0026lt;/h1\u0026gt; \u0026lt;div\u0026gt; \u0026lt;label for=\u0026#34;view-id\u0026#34;\u0026gt;@nameof(SampleEntityResponse.Id)\u0026lt;/label\u0026gt; \u0026lt;input id=\u0026#34;view-id\u0026#34; value=\u0026#34;@Model.EntityKey\u0026#34; /\u0026gt; \u0026lt;button type=\u0026#34;button\u0026#34; id=\u0026#34;load-button\u0026#34;\u0026gt;Load\u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;table id=\u0026#34;sample-entity-view\u0026#34;\u0026gt; \u0026lt;tbody\u0026gt; \u0026lt;tr\u0026gt; \u0026lt;th\u0026gt;@nameof(SampleEntityResponse.Id)\u0026lt;/th\u0026gt; \u0026lt;td data-cotomy-bind=\u0026#34;@nameof(SampleEntityResponse.Id)\u0026#34;\u0026gt;\u0026lt;/td\u0026gt; \u0026lt;/tr\u0026gt; \u0026lt;tr\u0026gt; \u0026lt;th\u0026gt;@nameof(SampleEntityResponse.Name)\u0026lt;/th\u0026gt; \u0026lt;td data-cotomy-bind=\u0026#34;@nameof(SampleEntityResponse.Name)\u0026#34;\u0026gt;\u0026lt;/td\u0026gt; \u0026lt;/tr\u0026gt; \u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; Read-Only Page Model The read-only page model is only responsible for the initial route value. 
The actual data load is done from TypeScript.\nusing Microsoft.AspNetCore.Mvc.RazorPages; public class SampleEntityViewModel : PageModel { public string EntityKey { get; private set; } = string.Empty; public void OnGet(string? id) { EntityKey = id?.Trim() ?? string.Empty; } } Read-Only TypeScript This is the simplest display-only pattern. The page calls CotomyApi directly and applies the response to the table through CotomyViewRenderer.\nimport { CotomyApi, CotomyElement, CotomyPageController, CotomyViewRenderer } from \u0026#34;cotomy\u0026#34;; CotomyPageController.set(class extends CotomyPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { await super.initializeAsync(); const loadButton = this.body.first(\u0026#34;#load-button\u0026#34;); const input = this.body.first(\u0026#34;#view-id\u0026#34;); const view = this.body.first(\u0026#34;#sample-entity-view\u0026#34;); if (!loadButton || !input || !view) { return; } const renderer = new CotomyViewRenderer(view); loadButton.on(\u0026#34;click\u0026#34;, async (event: Event) =\u0026gt; { event.preventDefault(); const id = input.value.trim(); if (!id) { return; } const response = await new CotomyApi().getAsync(`/api/sample-entities/${encodeURIComponent(id)}`); await renderer.applyAsync(response); }); } }); This kind of structure is usually worth considering when read-only display is not really simple output anymore. If one screen needs multiple API values, shared client-side formatting rules, or the same screen composition rules used elsewhere in the application, renderer-based display can still be the more practical option even without edit behavior.\nIn practice, this pattern works well when display is still screen logic even though there is no submit path. 
It avoids forcing a form model where none is needed, while still letting the screen stay close to the same client-side conventions as the rest of the application.\nThe Main Pattern In my own systems, I often turn these categories into base classes. Even then, the core point is not the inheritance itself. The important part is that most screens can be implemented with the same pattern once the category is clear.\nThat consistency matters in large business applications. When the same screen category always follows the same structure, people do not need to read deeply just to understand how the screen works. They can focus on the domain-specific part instead.\nIn practice, that is one of the main reasons I keep returning to these patterns. Most business screens are not unique interaction problems. They are repeated operational patterns, and repeating the same implementation shape makes those systems easier to build, easier to understand, and easier to maintain.\nMore directly, this is the direction in which I implemented Cotomy itself. I wanted to build the CRUD-centered business applications I work on every day more quickly, while still allowing a reasonably rich interface where screens can search, edit, and display API results without collapsing back into scattered DOM code. The way I approached that was to think through the recurring CRUD patterns first, then implement class structures that fit those patterns. That is why Cotomy has this shape. It is not a general-purpose abstraction invented first and applied later. It is a structure that came from repeatedly building the same kinds of business screens and trying to make those screens faster to implement, easier to understand, and easier to keep consistent.\nThis is also the same direction I use in the project templates. The examples in this article are intentionally minimal so the screen pattern stays visible. 
The templates follow the same screen categories, but they organize them further with shared classes and common infrastructure so they can work as a practical application foundation rather than only as isolated examples.\nIn short, the rule is not to make every screen look the same. It is to keep each category internally consistent so that screens remain predictable, onboarding cost stays lower, and the same business application can keep growing without every screen inventing its own interaction model. That predictability reduces cognitive load for both the original author and the next person reading the screen.\nUse the search pattern when the screen mainly filters and lists records with URL state. Use the edit pattern when one entity must be loaded and mutated through an API-driven form. Use the read-only pattern when the screen mainly displays values and has no real input responsibility.\nPractical Guide This article is part of the Cotomy Practical Guide, which focuses on hands-on usage patterns for the framework.\nSeries articles: Working with CotomyElement , CotomyPageController in Practice , Standardizing CotomyPageController for Shared Screen Flows , and Building Business Screens with Cotomy.\nNext Next article: Handling Validation and Error Display with Cotomy, focusing on how to keep field validation, submit errors, and API failures inside one consistent screen flow.\nLinks Previous: Standardizing CotomyPageController for Shared Screen Flows . 
More posts: /posts/ .\n","permalink":"https://blog.cotomy.net/posts/practical-guide-4-building-business-screens-with-cotomy/","summary":"A practical guide showing how I usually structure search screens, edit screens, and read-only screens in Razor Pages with Cotomy.","title":"Building Business Screens with Cotomy"},{"content":"The Cotomy project templates for Razor Pages are now available.\nThey are for developers building CRUD-heavy business applications, internal tools, and other long-lived screen-based systems on Razor Pages, especially when a full SPA stack would add more complexity than value. If that describes your project, the main benefit is straightforward: you can skip rebuilding the same application foundation again and start from a working baseline instead.\nI created them to remove the repetitive setup work that appears at the start of these projects. Instead of rebuilding the same page architecture, form handling baseline, API wiring pattern, page lifecycle structure, authentication entry points, and CRUD foundation each time, the goal is to begin from a working base and move directly into project-specific implementation.\nThis release starts with two editions, Standard and Professional, and each edition includes both Starter and Sample workspaces.\nWhat the Templates Include The main practical value is simple: there is less foundation work you need to do yourself before you can start building screens. The biggest time savings are usually the same three areas: page architecture, form and API flow, and authentication plus persistence baseline. In the Professional edition, those last pieces are already included before feature work begins.\nStandard is the lighter option. It is intended for projects that already have a backend design or need only the UI-side foundation. 
It includes the screen architecture, form and renderer infrastructure, dialog and side panel components, webpack-based TypeScript integration, and an authentication hook that can be connected to your own application design.\nProfessional includes everything in Standard and adds the parts that usually take longer to rebuild correctly in business systems. That includes Cookie authentication, login and logout flow, role-based authorization structure, EF Core and SQL Server integration, and CRUD templates for Product, Order, and User screens.\nIf your main goal is to start building application screens immediately instead of first rebuilding the same project skeleton again, that is the reason to use the template.\nStarter and Sample The packaged templates include both Starter and Sample workspaces.\nStarter is the default starting point for new development. It gives you the minimal project structure without filling the workspace with reference segments, so it is the right choice when you want to begin building your own application immediately.\nSample exists for a different reason. It is there so you can run the code, inspect the included screens, and understand how the base structure is meant to be used in practice. In Standard, that means UI and minimal authentication flow references. In Professional, it extends further into authentication, CRUD, persistence, and the connection between those layers.\nThe main recommendation is still to start real development from Starter. At the same time, the Sample workspace is ordinary source code, so you can read it, modify it, and reuse the parts you need. 
If adapting the sample helps you move faster while you are still understanding the structure, that is a reasonable way to explore the template, even though Starter is the cleaner default for a new application.\nWhich Edition to Choose The edition choice is mostly a question of whether you want the database-related foundation included from the beginning.\nIf you only need the UI and page foundation, or if your project already has its own API, authentication, or persistence conventions, choose Standard. It keeps the UI layer and page structure in place while leaving the backend boundary under your control.\nIf you need the authentication and persistence baseline included from the beginning, choose Professional. It is the more natural choice when you want login, CRUD, validation flow, and persistence structure to exist before feature work begins.\nThe current packaged setup is built around EF Core and SQL Server, and that is the path documented in the template itself. Because the persistence layer is based on EF Core, other supported databases should also be adaptable with relatively limited changes, but the packaged configuration and startup guidance currently assume SQL Server.\nIf you want to build the database layer yourself, if you want to use something outside the relational database path, or if you already have your own persistence library and conventions, Standard is usually the better fit. In practical terms, Standard is for teams that want the UI and page foundation without adopting the packaged data side, while Professional is for teams that want that baseline included from the beginning.\nMost projects that are building a new business application from scratch will likely fit Professional more naturally. 
Standard is more appropriate when the backend is already established or when your project has infrastructure decisions that should remain your own.\nWhy I Started With Razor Pages These templates are aimed at the kind of applications where server-rendered screens are still the more natural structure: internal systems, CRUD-heavy screens, and operational tools that benefit more from predictable page boundaries than from SPA-style routing and client-side state layers.\nThat is the same environment in which the underlying structure was designed. The purpose here is not to turn Razor Pages into something else. It is to provide a stronger starting structure for projects that already fit Razor Pages well.\nHow to Get It The templates are available through the Cotomy Store.\nStore: https://store.cotomy.net Reference site: https://cotomy.net/razorpages/ The reference site explains the differences between Standard and Professional in more detail. Purchase is handled through the store, and setup guidance is included with the downloaded package.\nIf you already know you want to begin with a structured Razor Pages business application base, the practical next step is to choose the edition based on one question. If you want the data and authentication baseline included, choose Professional. If you want to keep the backend and persistence side under your own control, choose Standard.\nIf this is the kind of system you build regularly, and too much early project time keeps disappearing into rebuilding the same base structure before real feature work begins, this is the point where the template is meant to help.\nWhat Comes Next This article is only the starting point. 
In the articles that follow, I plan to explain the templates in more detail, including how to use Starter, how to read the Sample workspace effectively, and how the initial structure is meant to be extended during real application development.\nNext article: Understanding the Cotomy Razor Pages Project Template Structure and Setup ","permalink":"https://blog.cotomy.net/posts/razor-pages-templates/introducing-project-templates-for-razor-pages/","summary":"Cotomy project templates for Razor Pages are now available in Standard and Professional editions. They are designed for CRUD-heavy business applications where teams want to avoid rebuilding page structure, form flow, authentication, and persistence from scratch.","title":"Introducing Project Templates for Razor Pages"},{"content":"This is the eleventh post in Problems Cotomy Set Out to Solve. This continues from Synchronizing UI and Server State .\nIn the previous post, I described how Ajax made UI and server state drift apart once load and save stopped sharing one contract. The next problem is closely related, but more specific:\nthe real failure is usually not where state is stored. It is that the mutation path is scattered.\nServer Rendering Was Cruder, but Structurally Safer In systems built with server-side rendering and ordinary POST, major structural failures were less likely to appear. That model had many usability limits. Browser history could become awkward. Progress indication during submission was limited. Input error handling also had fewer options than modern asynchronous screens.\nEven so, the operational path was relatively honest. The same request flow usually loaded the screen, accepted input, validated the result, and rendered the next state. 
That did not guarantee correctness, but it reduced one important category of failure: the same Entity was less likely to be interpreted through several unrelated update paths.\nDespite that, I now think simple CRUD-oriented business systems should still use Ajax in many cases. The interaction model is usually better. But once Ajax is introduced, the implementation must become more disciplined. Otherwise, the screen gains flexibility at the price of structural drift.\nThe Core Problem Is Mutation Path, Not State Location It is easy to describe the problem as state duplication across the server and the client. That is not wrong, but it is not the deepest issue.\nData can exist in several places without immediately causing failure. The real problem begins when the system no longer defines where changes are allowed to enter.\nIn a typical CRUD screen, the same Entity can change through several paths:\nflowchart TB Load[\u0026#34;Initial load\u0026#34;] Input[\u0026#34;User input\u0026#34;] Save[\u0026#34;Form save\u0026#34;] Reload[\u0026#34;Ajax reload\u0026#34;] UI[\u0026#34;Local UI interaction\u0026#34;] Entity[\u0026#34;Same Entity state\u0026#34;] Load --\u0026gt; Entity Input --\u0026gt; Entity Save --\u0026gt; Entity Reload --\u0026gt; Entity UI --\u0026gt; Entity If those paths are independent, the screen becomes difficult to reason about. One path updates visible inputs. Another updates display-only DOM. Another updates server-side truth. Another patches the screen after save.\nAt that point, the issue is not simply that state is distributed. The issue is that mutation entry points are distributed.\nThat is the structural definition I wish I had much earlier. The screen becomes unreliable when the same thing can change from many places without a shared contract.\nA mutation path is the defined entry point through which state is allowed to change. If this is not defined, the system is not just complex. It is structurally ambiguous. 
Without enforcing this boundary, every handler becomes a potential mutation path, and the system silently loses its structure.\nThe Frontend Starts Learning Too Much Once Ajax-based load and save are introduced, one familiar pattern appears very quickly. The frontend starts knowing more about data structure than it should.\nAt first, this looks harmless. A screen loads JSON, then jQuery or another client-side layer places values into inputs and display blocks. But that means the client now needs to know property names, value shape, and special handling rules.\nThen the same screen saves through another path. Validation rules may still live on the server, but the client already knows enough about the data shape that it starts collecting more logic around it. Soon the DTO is effectively declared twice: once in the server-side contract, and again in the frontend behavior that fills, reads, and patches the screen.\nPart of the reason this bothered me so much is that I had already worked with three-tier client-server systems around SOAP. Those systems also declared DTO-like structures in multiple places. But the contract was at least communicated through WSDL, so each side had a more explicit way to share the same structure and generate or verify types against it.\nThat did not remove every integration problem, of course. But the structure felt more formally announced. Compared with that, Ajax-heavy Web screens often looked as if the same data shape was being reinterpreted locally without an equally strong contract. For anyone who has worked through those older three-tier systems, I think this difference is fairly easy to recognize.\nThat is where structure begins to collapse. The server should hold authoritative data and business rules. The frontend should mainly handle presentation and input assistance. But once the frontend becomes a second interpreter of the same Entity, the change surface is no longer readable.\nThe immediate cost is not always a dramatic bug. 
More often, it appears when fields are added, when display rules change, or when one endpoint response evolves slightly. The problem is maintenance visibility. No one can easily see the full impact range of a change.\nComponent Frameworks Narrow One Kind of Chaos I did look into React and Vue when continued frontend growth around jQuery and older alternatives became intolerable. I did not end up using them in real work, so this is not an implementation report from production use.\nEven so, one point was already clear to me. A component-oriented framework is a strong help when the frontend itself is large and complex. Hiding direct DOM manipulation behind a state-driven rendering model makes the screen easier to understand.\nThis is especially true when the frontend is not just a thin screen layer, but an independent large system in its own right. In that kind of architecture, the frontend is not merely a visual continuation of one server-side business flow. It becomes its own substantial runtime structure, with its own internal screen composition, state transitions, and coordination burden.\nIn that situation, a component model is very persuasive. It gives the frontend an internal architecture that can stand on its own. That is different from a business screen whose main role is still to connect server-side operations, input, and display through one relatively narrow request-and-response-oriented flow.\nWhen data loaded through Ajax is expanded into the screen, it is easier to understand a model where memory state changes first and the UI follows that change. The actual implementation is never as simple as the idea sounds, but the idea itself is still easier to reason about than many direct DOM patches.\nThis matters because a component model often narrows mutation paths inside the frontend. State change is expected to enter through a smaller number of routes. 
That is a real architectural advantage.\nComponent frameworks reduce chaos inside the frontend by narrowing mutation paths. But they do not define how those paths relate to server-side state. As a result, the system may be internally consistent on the client, while still being structurally split across the application. They define how state changes inside the frontend, but not where change should enter the system as a whole.\nBut a component model does not solve the whole business-screen problem. It mainly solves frontend-internal consistency. Server synchronization is still a separate issue. And in CRUD-centered systems, forcing every screen into a large SPA model can split one business function into two architectures: server-side business flow and frontend application flow.\nThat split is often too heavy for the actual job. Many business systems do not need a massive SPA. What they need is a way to keep asynchronous load, fill, save, and render within one understandable structure.\nOne reason I developed and published Cotomy was that I could not find many options that fit that middle space well enough.\nMost Business Screens Fall Into a Small Number of Types In ordinary business systems, screens usually belong to one of three categories.\nThere are screens for searching data. There are screens for showing and editing one record. And there are screens that do not fit either shape and must be designed more individually.\nSearch screens should usually be server-rendered in the first place. One major reason is that search conditions and paging fit naturally into the query string. That means the current search can be retained in the URL, revisited later, and shared with someone else without inventing a separate client-side state model.\nThat property matters a great deal in business systems. Search is often not just an interaction. 
It is also a reference point that needs to be reopened, bookmarked, compared, or passed to another person.\nFor that kind of screen, a server-side query flow is usually the more natural architecture. There is often little reason to take on extra frontend complexity for something that already fits the Web\u0026rsquo;s ordinary address model.\nOf course, after arriving through that URL, the client can still perform additional filtering or other small interactions if necessary. But search screens usually do not require the frontend to become an independent application with its own heavy coordination model. In most cases, it is more natural for the search and the rendering to happen within the same request that the URL already represents.\nEdit screens are different. That is where Ajax becomes more useful, because save behavior, validation feedback, and user interaction benefit much more directly from asynchronous flows.\nSpecial screens still need individual design.\nBut the important point is not the category label by itself. The important point is that each category should have a defined mutation path. If a search screen, an edit screen, and a special coordination screen all mutate data through arbitrary local handlers, then the category distinction no longer helps the architecture. Otherwise, the same Entity ends up being mutated through unrelated mechanisms depending on the screen type.\njQuery Made the Mutation Problem Hard to Name When I was first building these kinds of edit screens with jQuery, I wrote the logic that expanded loaded data directly into the screen. That meant the client-side code had to know the data format explicitly. Even if field names rarely changed, adding properties meant updating several places.\nSo I moved toward automatic filling through attributes such as name and related screen metadata. 
That idea itself is not unusual. It is a fairly ordinary way to reduce repetitive screen code.\nThe real value was not just convenience. It was that screen-specific code became smaller, while common fill behavior could be applied across many screens through one route. The server continued to decide the data shape. The frontend no longer needed custom field-by-field expansion for every form.\nThe problem is that jQuery still did not give me a satisfying structural unit for this. Common behavior and screen-specific behavior kept drifting apart. Page-level behavior and form-level behavior were hard to treat as one readable object. Updates were event-driven, local, and easy to add, but hard to classify.\nThat is the main reason jQuery-based screens became difficult for me. The problem was not simply syntax. The problem was that state change entry points were hard to define clearly. The difficulty was not writing code, but identifying where change was supposed to happen.\nNaming the Flow Was More Important Than DRY JavaScript has become much easier to use in an object-oriented style than it used to be. If the language environment had looked then as it does now, I might not have felt the same pressure to adopt TypeScript so early.\nBut at that time, given the browser environments my clients actually used, class-style JavaScript was much less readable in practice. With jQuery, event-oriented code still looked more approachable, even though it was structurally weaker.\nAfter I started building the predecessor of Cotomy in TypeScript, I began standardizing forms at a fairly early stage. That work was not only about code reuse. It was about assigning names and meaning to the flow of CRUD processing.\nLoad, fill, submit, validate, and render should not exist as accidental local procedures. They should exist as recognized mutation paths.\nThat distinction matters. When a structure is named, it becomes easier to preserve. 
When it is preserved, a larger system can still be edited by a small team without each screen turning into a private architecture.\nHow Cotomy Tries to Keep the Path Understandable This is the context in which Cotomy\u0026rsquo;s form structure matters to me. CotomyForm defines one submission lifecycle. CotomyApiForm turns that into an API submission path using FormData. CotomyEntityApiForm keeps the Entity key on the form and adjusts the request method based on whether identification already exists. CotomyEntityFillApiForm then loads data through the API when the page is ready, fills matching inputs, and projects values to display-oriented DOM through its renderer.\nThis does not remove all complexity. It does something narrower and more practical. It tries to keep update paths recognizable.\nThe point is not that state must live in only one place. The point is that mutation should enter through a small number of named structures instead of scattered handlers.\nThat was the real design pressure. I did not just want less repetitive code. I wanted fewer unofficial ways for one screen to become current. The goal is not abstraction, but to prevent mutation from escaping into unnamed paths.\nConclusion: Define the Entry Point of Change The deeper problem in CRUD screens is not merely client state, server state, or DOM state existing at the same time. The deeper problem is that the same Entity is often allowed to change through too many unrelated routes.\nServer-side postback systems hid many UX problems, but they at least kept one main execution path. Ajax-based systems improve interaction, but only if mutation paths are defined well enough that load, fill, save, and render still belong to one understandable model.\nThat is the axis I consider most important now. The question is not only where state is stored. The question is where state is allowed to change. A system without defined mutation entry points is not just hard to maintain. It is impossible to reason about. 
State can be duplicated. Mutation paths cannot be undefined.\nProblem Series This article is part of the Cotomy Problem Series, which examines recurring structural failures in business UI design.\nSeries articles: HTML and CSS as One Unit , Form Submission as Runtime , Screen Lifecycle and DOM Stability , Form State and Long-Lived Interaction , API Protocols for Business Operations , Runtime Boundaries and Operational Safety , UI Intent and Business Authority , Binding Entity Screens to UI and Database Safely , Screen State Consistency in Long-Lived UIs , Synchronizing UI and Server State , and Where State Actually Changes.\nNext Next planned: Why Business Systems Rarely Need SPA Frameworks\n","permalink":"https://blog.cotomy.net/posts/problem-11-where-state-actually-changes/","summary":"In CRUD screens, the core problem is often not where state is stored, but whether the mutation path is defined. Once load, input, save, and reload are allowed to diverge, the screen becomes difficult to reason about.","title":"Where State Actually Changes"},{"content":"This is the tenth post in Problems Cotomy Set Out to Solve. This continues from Screen State Consistency in Long-Lived UIs .\nIn the previous post, I focused on state ownership inside long-lived screens. That naturally leads to the next issue:\nhow the UI and the server fall out of agreement once one screen starts loading and saving through separate paths.\nOnce load and save paths diverge, UI and server state consistency becomes a structural problem.\nA Practical Starting Point The first Web system I built for my own independent work was an e-commerce site for second-hand goods. It was outside my company work, and I built it with very limited frontend experience. The stack was PHP, Smarty, and jQuery on top of an OSS package that already matched the business shape I needed.\nThat project influenced my early Web architecture more than I understood at the time. 
Because I had to customize the package heavily, I ended up learning not only how to make pages appear, but also how fragile a screen becomes once several different update paths accumulate around one business flow.\nWhy Server-Side Postback Was More Stable Than It Looked That first e-commerce site was built almost entirely around server-side POST and server-rendered output. As far as I remember, it did not rely on Ajax for the core flow.\njQuery was there, because at the time it was everywhere, but the client-side processing stayed thin. The server controlled almost all display and registration behavior. There were small client-side helpers such as postal-code-based address lookup and screen switching by select value, but they did not own the business state of the screen.\nThat model had obvious UI limitations. Still, it had one major advantage: the path that displayed data and the path that saved data were almost the same path.\nAs a result, the screen could certainly have bugs, but it was less likely to show one thing while the server believed another. Unless I introduced a defect myself, the screen usually did not drift into strange visible behavior. That was a very practical form of reliability.\nThe Moment Ajax Changed the Problem Later, I built a new system for operational reporting that had to run on smartphones. If I could have chosen freely, I would rather have built it as a separate desktop client in a three-tier client-server model. But smartphone-based reporting was a hard requirement, and distributing a dedicated application was not realistic in that environment.\nSo I built it as a Web system.\nAround that time, I had already grown to dislike the ordinary pattern of filling a form and posting the whole page. One reason was simple: after submission, browser back behavior became awkward. That was enough reason for me to move registration to Ajax.\nThe problem is that Ajax submission is not difficult by itself. 
What becomes difficult is everything around it.\nOnce saving happens through Ajax, the frontend suddenly owns more of the coordination problem. And if loading still happens differently from saving, the screen ends up with two interpretations of the same data.\nThat is where synchronization problems begin.\nIf the screen loads through one rendering path and saves through another, it becomes possible for the same value to appear differently before and after save. Those inconsistencies are particularly dangerous because they are easy for tests to miss.\nWhy This Became Hard to Control I built several systems with Smarty and jQuery. Smarty itself was not the real problem. It was just one server-side rendering approach.\nThe harder part was trying to keep one screen coherent while DOM operations, event handlers, and Ajax callbacks kept accumulating. That is manageable while the screen is small. But when the screen grows, procedures scatter and side effects begin to define the runtime more than the screen structure itself.\nI tried many ways to organize that over time. Most of them eventually required patching and then produced another problem. The least fragile approach I found was to treat almost everything as events and continue attaching application-specific attributes as event targets.\nEven so, it did not scale well enough for me. The systems were not especially large, yet I kept spending too much time investigating how the current behavior had been assembled from execution order and local side effects.\nThat experience clarified something important for me: the main difficulty was not simple lack of frontend knowledge. The deeper issue was that JavaScript and jQuery alone did not give me a structure I could trust for larger business screens without accumulating procedural drift.\nSynchronization Is Not Just an Update Problem The central failure pattern was not merely that Ajax existed. 
It was that load and save stopped sharing one operational contract.\nThe old server-side postback model had one important discipline: the server received the request, decided the state, and rendered the next screen. Once Ajax entered the screen, that discipline disappeared unless I rebuilt it explicitly.\nIf the real goal is to build a large client-side application with rich internal state, component-oriented frameworks such as React provide a very strong model. They give teams a disciplined way to structure rendering, state flow, and UI composition at that scale.\nBut that is not automatically the right fit for every business system. When the job is to build many small operational screens, each tied closely to server-side behavior, that model can become heavier than the screen itself requires. It also moves more of the screen construction boundary to the client, which can create another kind of split between server-side business flow and UI behavior.\nSo the issue is not whether component frameworks are good. The issue is whether the screen actually needs a client-side application model that large.\nThe structural problem can be stated more directly: UI state and server state exist independently. Load and save paths do not share a single contract. Synchronization is not enforced by the runtime. Consistency depends on implicit timing and local patches.\nThe problem can be summarized like this:\nflowchart LR ServerLoad[\u0026#34;Server load path\u0026#34;] UI[\u0026#34;Visible UI and inputs\u0026#34;] AjaxSave[\u0026#34;Ajax save path\u0026#34;] Patch[\u0026#34;Local DOM patch\u0026#34;] ServerLoad --\u0026gt; UI UI --\u0026gt; AjaxSave AjaxSave --\u0026gt; Patch Patch --\u0026gt; UI This looks small, but it creates a structural split. 
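The patch step in that diagram can be sketched directly. All names here are hypothetical; the point is only that the label and the inputs are updated by two different mechanisms:

```typescript
// Anti-pattern sketch with hypothetical names: load renders through one
// function, while the save callback patches a single field directly.

interface Screen { statusLabel: string; inputs: Map<string, string>; }

// The official load path: one response renders the whole screen.
function renderFromServer(
  screen: Screen,
  data: { status: string; values: Record<string, string> },
): void {
  screen.statusLabel = data.status;
  for (const [name, value] of Object.entries(data.values)) {
    screen.inputs.set(name, value);
  }
}

// The informal save path: the callback updates only what it happens to know
// about, so the label and the inputs now age at different rates.
async function saveWithLocalPatch(
  screen: Screen,
  send: () => Promise<{ status: string }>,
): Promise<void> {
  const res = await send();
  screen.statusLabel = res.status; // local DOM patch; inputs are left as-is
}
```

After `saveWithLocalPatch`, the label reflects the save response while the inputs still reflect the last full load: two fragments of one screen, kept current by unrelated mechanisms.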
If the Ajax response is patched into the screen informally, the save path and the initial load path are no longer the same system.\nThat is exactly where the UI and the server start disagreeing.\nWhat TypeScript Changed for Me TypeScript did not solve architecture by itself, but it gave me a workable foundation for building one.\nThe important change was not only static typing. It was that I could finally structure the frontend as code I could continue to maintain. Classes, explicit contracts, and controlled composition made it far easier to stop scattering behavior across ad hoc handlers.\nAfter moving to TypeScript, I eventually rebuilt most of the frontend of the systems I managed over roughly five years. That involved risk and some unreasonable effort, but the alternative was to continue carrying systems whose complexity kept increasing without limit.\nAt that point, rebuilding was the safer long-term decision.\nHow Cotomy Narrows the Synchronization Surface Cotomy was one answer to that experience. Its role is not to eliminate all frontend state. Its role is to narrow the number of unofficial paths through which a screen can change.\nAnother important point is that Cotomy does not try to solve this by adding a separate client-side store model for each screen.\nThat matters not only because it reduces state fragmentation, but also because it keeps the screen itself readable as one recognizable unit. A screen is understood through a small number of explicit parts: page entry, page-level coordination, form lifecycle, and rendering.\nThis is important for more than reuse. It makes the overall runtime easier to understand intuitively. Instead of each screen becoming its own local architecture, screens can be read through the same structural classification. That lowers the cost of both maintenance and diagnosis.\nThis is also why the form hierarchy matters.\nCotomy does use inheritance, but not only as a code-sharing technique. 
It helps screens fall into recognizable operational categories.\nA query form, an entity detail form, and a fill-and-render form are not just different implementations. They represent different kinds of screens with different runtime roles.\nThat matters because the screen stops being an arbitrary collection of event handlers. It becomes a member of a small number of understandable screen types. In practice, that makes the architecture easier to read, easier to extend, and easier to diagnose.\nThis matters even more in larger projects. When screens are kept within a small number of recognizable shapes, the system becomes dramatically easier to scale in practice. Teams can build, review, and maintain more screens without each one becoming a new local architecture.\nThere is a tradeoff here. A stricter screen model can reduce how much freedom is available for highly custom visual expression, and in some cases that can work against building the most cognitively refined UI for a specific screen.\nBut under real constraints such as limited budget, limited time, and limited human resources, this kind of structural control is often the more effective choice. It increases the size of the system a team can realistically sustain.\nThe synchronization contract becomes easier to see when the screen is read as a runtime sequence. Page entry initializes the controller. Ready marks the point where startup has completed. If the form already has an entity key, the screen can load current data through the API, fill matching inputs, and apply the same response to display-oriented DOM. If the form submits a newly created entity and receives 201 Created, the key is read from the Location header and later submissions move onto the update path. That is not a universal state system. It is a narrower contract that keeps create, load, fill, render, and update closer to the same model.\nIn the actual implementation, CotomyPageController is registered once through the page entry point. 
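That startup boundary can be sketched as a minimal standalone class. The names below are hypothetical, not Cotomy's actual API; the sketch shows only the one property that matters here, that nothing observes ready before asynchronous initialization completes:

```typescript
// Illustrative sketch only, not Cotomy's actual API: a minimal page
// controller that gives a screen one defined startup boundary.

type ReadyListener = () => void;

class SketchPageController {
  private readyListeners: ReadyListener[] = [];
  private isReady = false;

  // Subclasses override this; it must complete before "ready" is raised.
  protected async initializeAsync(): Promise<void> {}

  // Listeners registered before startup finishes wait; listeners registered
  // afterwards run immediately, so there is no second startup path.
  public onReady(listener: ReadyListener): void {
    if (this.isReady) listener();
    else this.readyListeners.push(listener);
  }

  // The single page entry point: run initialization, then raise ready once.
  public async startAsync(): Promise<void> {
    await this.initializeAsync();
    this.isReady = true;
    for (const listener of this.readyListeners) listener();
    this.readyListeners = [];
  }
}
```

A listener attached after startup still runs, which is what lets later arrivals join the same boundary instead of inventing their own startup fragments.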
Its initializeAsync method runs from the page load path, and the ready event is triggered only after that initialization completes. That gives the screen a defined startup boundary instead of many unrelated startup fragments.\nForm handling is also narrowed. CotomyForm standardizes submission as one lifecycle. CotomyApiForm turns the form into a FormData-based API submission path. CotomyEntityApiForm keeps the entity identifier on the form and switches from POST to PUT when the server has issued identity through the Location header.\nMost importantly for this topic, CotomyEntityFillApiForm does not invent a separate client state store. When an entity key is present, it loads through the API when the page becomes ready, fills matching inputs, and applies response data to display elements through its renderer. That means the same response model can feed both input state and visible display state through one explicit runtime path.\nThis is also why the distinction between load path and save path matters so much. If one Ajax callback updates only a status label, another path fills inputs, and a third path redraws display fields, the UI may look coherent only by accident. Cotomy does not prevent every bug, but it tries to make those paths explicit enough that the screen can return to one recognizable contract after each operation.\nThis is the design point that mattered to me. The screen should not be kept in agreement by many unrelated jQuery patches. It should be brought back into agreement by a smaller number of named structures with defined ownership.\nThe Boundary Still Matters This does not mean Cotomy solves synchronization by hiding the server behind a frontend-only abstraction. The server remains the authority for persisted business data.\nCotomy only narrows the UI-side execution paths. The page controller owns startup and page-level orchestration. The form owns submit and reload behavior. 
The renderer owns projection into display-oriented DOM.\nThat boundary matters because it keeps the responsibility readable. If the screen shows the wrong persisted value, the server contract or reload path should be examined first. If inputs fail to reflect the response correctly, the form fill path is the next place to inspect. If a display block differs from the input model, the renderer path is the likely source.\nThat is far easier to reason about than a screen assembled from scattered handlers and local patches.\nConclusion: Synchronization Needs a Shared Contract My early Web systems taught me two different lessons.\nServer-side postback was clumsy, but it stayed structurally honest because load and save were close to the same runtime path. Ajax improved interaction, but it also made it easy for the UI and the server to drift apart unless the screen had a clear synchronization contract.\nThat is why this problem belongs in the series. The real issue is not whether the screen uses Ajax. The issue is whether loading, saving, filling, and rendering are part of one understandable model.\nCotomy\u0026rsquo;s response is deliberately narrow. It does not try to turn the whole frontend into a universal client-state machine. It tries to define the page entry, form lifecycle, and rendering path clearly enough that a business screen can stay explainable after years of maintenance.\nFor me, that was the real requirement. I did not need more local patches. I needed a structure that made synchronization failures easier to prevent, easier to detect, and easier to repair.\nThe practical rule is simple. Load and save must share a contract. State mutation must have a defined entry point. 
UI must not update through informal paths.\nProblem Series This article is part of the Cotomy Problem Series, which examines recurring structural failures in business UI design.\nSeries articles: HTML and CSS as One Unit , Form Submission as Runtime , Screen Lifecycle and DOM Stability , Form State and Long-Lived Interaction , API Protocols for Business Operations , Runtime Boundaries and Operational Safety , UI Intent and Business Authority , Binding Entity Screens to UI and Database Safely , Screen State Consistency in Long-Lived UIs , and Synchronizing UI and Server State.\nNext Next: Where State Actually Changes ","permalink":"https://blog.cotomy.net/posts/problem-10-synchronizing-ui-and-server-state/","summary":"Server-side postback screens were limited, but they kept one execution path. Once Ajax became the main update mechanism, keeping display, input state, and server truth aligned became a structural problem.","title":"Synchronizing UI and Server State"},{"content":"This is the ninth post in Problems Cotomy Set Out to Solve. This continues from Binding Entity Screens to UI and Database Safely .\nIn the previous post, I focused on how Entity structure and screen controls can drift apart when their contract is implicit. That boundary problem leads naturally to the next one:\nhow state breaks over time inside a screen that stays alive and keeps handling user work.\nHere, long-lived does not mean only a large SPA with a permanent client-side store. It also means ordinary business screens that stay open, return from browser history, or keep handling edits and reloads without becoming a fresh server render each time.\nThe Problem That Kept Repeating This article comes from a recurring implementation problem, not from an interest in state management as an abstract topic.\nThe core point should be stated early: state inconsistency is usually not a rendering issue. 
It is an ownership problem across multiple state holders.\nI kept running into the same kinds of failures in business screens. Ajax-based partial updates would leave the DOM in an inconsistent state. Some parts of the screen were rendered on the server, while other parts were rendered or patched on the JavaScript side, and keeping those outputs aligned became harder than it should have been. After a client-side state change, the next update sometimes reflected a different server interpretation than the one the screen appeared to assume.\nThe problem became worse when storage entered the picture. Once localStorage or similar browser storage was used, the system no longer had just the server and the visible screen. It had yet another place where state could survive, drift, and later be read back into the UI through a completely different path.\nOne reason I take this seriously is that I have seen this fail in real outsourced development. In that case, browser-side storage was introduced much more aggressively than the screen actually required. As a result, state no longer had one understandable path. Some values came from the server, some from the current DOM, and others from client-side storage restored through separate logic. Once those paths drifted apart, it became extremely difficult to trace the source of inconsistencies or repair the behavior incrementally. In the end, the screen had to be rebuilt.\nThat experience made one thing very clear to me: unnecessary client-side persistence is not a harmless convenience. It can become a major source of operational complexity.\nThat does not mean every form of temporary client-side recovery is wrong. If a team deliberately stores draft input to protect the user from a crash or an accidental refresh, that can be a reasonable local decision. 
But it should remain a narrowly defined recovery path, not a second business authority that silently competes with the server and the live screen.\nAt that point, even something as ordinary as \u0026ldquo;show the current information on this screen\u0026rdquo; stopped being one operation. It became a coordination problem across multiple state holders.\nThat was one of the pressures behind the design direction that later became Cotomy. The issue was not that state existed. The issue was that business screens were accumulating too many informal state paths.\nA Screen Usually Does Not Need That Much Shared Client State In many business systems, there are surprisingly few screens that truly need broad JavaScript-side shared state across the application. Most screens are narrower than that. They mainly receive a request, show data, accept an edit, send a response, and then reflect the result.\nWhen that is the real job of the screen, a request-and-response model is often easier to understand than a large client-side state model. It is easier to teach, easier to inspect, and easier to debug.\nReal teams are trying to move a project toward a business goal, not to spend limited time learning additional layers of client-side complexity that the application does not actually need. If a screen can be understood and maintained through a request-and-response model, forcing it into a broader shared-state architecture can increase the learning burden without improving the business outcome.\nSo the issue for me was not \u0026ldquo;state on the client is bad.\u0026rdquo; It was that business screens often did not benefit from the amount of shared client state they were being asked to carry.\nThe more important distinction is whether a feature is designed as one end-to-end business function across backend and frontend, or as a frontend application with its own independent state model.\nIn the kinds of systems I had been building, many screens belonged to the first category. 
The frontend existed mainly to express business data, accept edits, and connect screen behavior to server-side operations. In that environment, server interpretation, visible DOM state, form inputs, and partial updates remain close to the same operational path. So inconsistency appears directly as a screen-level reliability problem.\nIn a more independent frontend application, the same consistency problem does not disappear, but its center of gravity changes. The main concern shifts more toward the relationship between the client-side state model and the API contract.\nThe Three-Layer State Problem Once this became clear, the underlying structure was easier to name.\nIn long-lived screens, state is usually spread across three layers.\nThe DOM holds current inputs, visible labels, and element attributes. JavaScript memory holds controller-level variables, registered handlers, temporary runtime objects, and sometimes browser-side cached values. The server holds the authoritative business state.\nIf localStorage or similar browser persistence is used, JavaScript-side state also gains a longer-lived storage path, which makes the distinction between temporary runtime state and quasi-persistent client state even more important.\nIf the synchronization model between these layers is unclear, the screen may appear correct for a while and then gradually become unreliable.\nThe structure can be summarized like this:\nflowchart TB Server[\u0026#34;Server state\u0026lt;br/\u0026gt;authoritative business truth\u0026#34;] JS[\u0026#34;JavaScript memory\u0026lt;br/\u0026gt;runtime coordination\u0026#34;] DOM[\u0026#34;DOM state\u0026lt;br/\u0026gt;visible and input state\u0026#34;] Init[\u0026#34;Initialization\u0026#34;] Submit[\u0026#34;Form submission\u0026#34;] Reload[\u0026#34;API reload\u0026#34;] Render[\u0026#34;Renderer update\u0026#34;] Server --\u0026gt; Reload --\u0026gt; JS JS --\u0026gt; Render --\u0026gt; DOM DOM --\u0026gt; Submit --\u0026gt; Server Init 
--\u0026gt; JS Init --\u0026gt; DOM Cotomy tries to make those transitions less informal than they usually become in Ajax-heavy screens. The page controller is registered once at page entry. Its initializeAsync flow runs first, and the page reaches ready only after that startup boundary completes. If the browser restores the page from bfcache, the restore path can reload forms again instead of assuming the old screen state is still trustworthy.\nThat matters because a restored screen is one of the easiest places for stale state to hide. The DOM may still look current even when the business data is no longer current. If restore behavior is not part of the model, teams often end up treating a cached screen as if it were still authoritative.\nThis is also where stale server data becomes part of the same problem. If another user or process has already changed the Entity on the server, a restored tab can look valid while already being behind the current business truth. That is why the server remains the authority and restore behavior must be able to start from reload instead of trusting the old screen.\nWhy These Breakdowns Keep Appearing The usual failures are predictable.\nPartial DOM replacement causes the visible screen to diverge from runtime objects that were created earlier. Server-rendered sections and JavaScript-rendered sections drift because they are updated by different rules. Inputs and display-only bind targets stop matching after an update. Screen-local caches survive longer than the server data they were based on. Late responses overwrite newer intent because the update path is not clearly owned.\nThese are not separate categories of clever bugs. They are symptoms of missing ownership rules.\nOnce a screen lives longer than one render cycle, the main question is no longer where state can be stored. The real question is which layer is allowed to own which kind of state, and which path is allowed to change it.\nDOM state is interaction state. 
It represents what is currently visible or currently typed.\nJavaScript memory is runtime coordination state. It should hold page-level control flow, temporary flags, and references needed to run the screen.\nServer state is business state. It is the only place that should be trusted for persisted business truth.\nProblems start when these roles are mixed.\nIf DOM text is treated as business truth, reload behavior becomes unreliable. If JavaScript objects are treated as permanent truth, partial updates create stale memory. If server responses are merged into the screen without a defined update path, different parts of the UI stop agreeing with each other.\nThe issue is not that multiple layers exist. The issue is that long-lived screens need an explicit contract for how they cooperate.\nMutation Paths Matter as Much as Ownership Ownership alone is not enough. A screen also needs defined mutation paths.\nInitialization is one path. Form submission is another. API reload is another. Renderer-driven projection into display state is another.\nIf state can also change through informal handlers, fragment-level patches, and browser-storage restoration with no common rule, the screen stops being predictable even when each local update looks reasonable.\nA simple example is a screen that mostly follows one update path, but still contains a small bypass such as a jQuery handler that directly rewrites one status span after an Ajax call. That local patch may look harmless, but it updates visible DOM outside the screen\u0026rsquo;s normal mutation path. Once the next form reload or renderer update happens, that span can disagree with the rest of the screen because it was never part of the official state transition model.\nIn Cotomy terms, the practical question is whether the screen changes through named paths or through accidents. CotomyPageController owns startup and restore timing. CotomyForm owns submit entry. 
CotomyEntityFillApiForm owns API load, input fill, and renderer application. Once those roles are explicit, a stale value is easier to trace back to the specific path that produced it.\nCotomy does not try to make every unofficial DOM write physically impossible. It does not place the page inside a sandbox that forbids direct mutation. Its design goal is narrower and more practical: to make the official paths recognizable enough that informal bypasses stand out as architectural exceptions instead of blending into the normal model.\nWhy Re-Rendering Does Not Define the Model A full re-render can hide some symptoms, but it does not define ownership. It only overwrites the screen again.\nIn business systems, screens often contain partial edits, dialogs, side panels, query forms, and server-driven updates that happen at different times. Even if a rendering strategy refreshes visible output, the core question still remains:\nwhat is the trusted source for each kind of state, and what event is allowed to replace it.\nState libraries do not automatically answer this either. They can centralize storage, but they do not decide architectural boundaries by themselves. If a team still mixes UI concerns, runtime concerns, and server authority, the same drift simply moves to a different API surface.\nCotomy\u0026rsquo;s response to this is deliberately narrow. It tries to reduce the number of unofficial paths through which a screen can become current, so page entry, form submission, API reload, and display update are handled through a smaller and more explicit runtime structure.\nThat structure is also visible in the implementation. The page controller initializes on load and then raises ready. Entity fill forms listen to that ready point, load only when an entity key is present, fill matching inputs, and then apply the same response to display elements through the renderer. 
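Under the assumption of an injected transport instead of real fetch calls, that contract can be reduced to one standalone sketch. Every name here is hypothetical; it is not Cotomy's implementation:

```typescript
// Illustrative sketch with hypothetical names: one entity-aware form path
// covering load, fill, display projection, and create-vs-update switching.

interface SketchResponse {
  status: number;
  headers: Map<string, string>; // e.g. "location" -> "/api/items/42"
  body: Record<string, string>;
}

type Transport = (method: string, url: string) => Promise<SketchResponse>;

class SketchEntityForm {
  constructor(
    private readonly baseUrl: string,             // e.g. "/api/items"
    private entityKey: string | null,             // null until the server issues one
    private readonly inputs: Map<string, string>, // stand-in for named inputs
    private readonly render: (data: Record<string, string>) => void,
  ) {}

  // Load only when a key exists, fill matching inputs, and pass the same
  // response to the renderer so inputs and display share one source.
  public async loadAndFillAsync(transport: Transport): Promise<void> {
    if (this.entityKey === null) return;
    const res = await transport("GET", `${this.baseUrl}/${this.entityKey}`);
    for (const [name, value] of Object.entries(res.body)) {
      if (this.inputs.has(name)) this.inputs.set(name, value);
    }
    this.render(res.body);
  }

  // Without a key this is a create (POST); once the server answers
  // 201 Created, adopt the key from Location so later submits become PUT.
  // (Payload construction is omitted; only the path switching is shown.)
  public async submitAsync(transport: Transport): Promise<void> {
    const method = this.entityKey === null ? "POST" : "PUT";
    const url = this.entityKey === null ? this.baseUrl : `${this.baseUrl}/${this.entityKey}`;
    const res = await transport(method, url);
    if (res.status === 201) {
      const location = res.headers.get("location") ?? "";
      this.entityKey = location.split("/").pop() ?? null;
    }
  }

  public get key(): string | null { return this.entityKey; }
}
```

The design point is that one object owns key adoption, input fill, and display projection, so a create, a reload, and an update all pass through the same named path.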
That is a concrete attempt to keep load, fill, and display projection inside one understandable path instead of several unrelated patches.\nIn sequence, the model is meant to work like this. Page entry starts the controller. initializeAsync completes before ready is raised. If the browser restores the page, restore logic can start from reloadAsync instead of trusting the old DOM. When the form has an entity key, the form load path retrieves current server data, fillAsync applies that response to inputs, and the renderer applies the same response to display-only elements. The important point is not the individual APIs by themselves. It is that startup, restore, load, fill, and render are forced back into one ordered path.\nWhat This Means in Practice The practical rule is simple.\nThe page controller should own screen startup and page-level coordination. Forms should own submission and input filling behavior. Renderers should own projection from response data into display-only DOM. The server should remain the authority for persisted business state.\nIn a typical edit screen, that means startup should not be split across several unrelated document-ready handlers. It should begin from one page entry, reach ready once, and then let the form handle load or submit according to whether the screen already has an entity key. If the screen returns from browser history, restore logic should decide whether the form must reload rather than trusting the visible DOM at face value.\nThis division is important because it makes stale state easier to detect.\nIf a value is wrong in a display block, the renderer path is the first place to inspect. If duplicated submission happens, the form initialization path is a likely source. 
If a restored page shows outdated business data, the issue is not \u0026ldquo;frontend state\u0026rdquo; in general but the relationship between restore behavior, reload behavior, and server authority.\nThat kind of diagnosis is only possible when the ownership model is explicit.\nLong-Lived UI Requires Trust Rules The deeper point is that not all state deserves the same level of trust.\nTyped input can be trusted as the current interaction state of the form. A local runtime flag can be trusted only for the lifetime of the current screen instance. Business truth should be trusted only from the server-side operation result or reload path.\nWhen these trust levels are left implicit, maintenance becomes guesswork. Teams start patching symptoms instead of controlling the model.\nLong-lived UI therefore needs more than convenient state access. It needs rules for ownership, update paths, and trust boundaries.\nThere is also a practical design tradeoff here. In the browser, everything visible on the screen is ultimately expressed through the DOM. Some modern UI models intentionally hide direct DOM state behind an abstract state layer, and that can be a reasonable responsibility boundary.\nBut in smaller business screens, that boundary is not always a net gain. If the screen mainly exists to connect server-side operations to visible form state, adding another generalized state layer can increase coordination cost without adding equivalent operational value. It creates one more place where stale values, delayed updates, and ownership ambiguity can accumulate.\nThat does not mean abstraction is wrong. It means the state model should match the job of the screen. When the screen is small and operationally narrow, hiding the DOM behind an extra state system can become architectural inflation rather than protection.\nDesign Rule The rule I take from this is simple.\nEach state layer must have a single responsibility. Each mutation must have a defined entry point. 
No state should be updated through informal paths.\nThat is the real issue behind long-lived UI inconsistency. Without those rules, the screen gradually stops being explainable.\nConclusion: State Consistency Is an Architectural Rule Screen state consistency is not mainly a question of convenience APIs. It is a question of structural control.\nThe problem that pushed me toward this design was not a desire for more state tools. It was repeated difficulty keeping server-rendered output, JavaScript-updated output, browser-stored values, and visible DOM state in agreement on ordinary business screens.\nBusiness screens become unstable when DOM state, runtime memory, and server state are all treated as if they were equally authoritative. They become more stable when each layer has a clear role and updates are forced through explicit lifecycle and form protocols.\nThat is why this problem belongs in the series. After lifecycle, form continuity, runtime boundaries, and intent separation, the next unavoidable question is state ownership.\nIf ownership is undefined, state drift becomes normal. 
If ownership is explicit, long-lived screens become easier to reason about, debug, and keep reliable over time.\nThe next step is the synchronization problem itself: how UI state and server state should be brought back into agreement once they inevitably diverge.\nProblem Series This article is part of the Cotomy Problem Series, which examines recurring structural failures in business UI design.\nSeries articles: HTML and CSS as One Unit , Form Submission as Runtime , Screen Lifecycle and DOM Stability , Form State and Long-Lived Interaction , API Protocols for Business Operations , Runtime Boundaries and Operational Safety , UI Intent and Business Authority , Binding Entity Screens to UI and Database Safely , and Screen State Consistency in Long-Lived UIs.\nNext Next: Synchronizing UI and Server State ","permalink":"https://blog.cotomy.net/posts/problem-9-screen-state-consistency-long-lived-uis/","summary":"State failures in business UIs are usually not isolated bugs. They appear when DOM state, in-memory state, and server state have no explicit ownership and synchronization rules.","title":"Screen State Consistency in Long-Lived UIs"},{"content":"This note continues from Object-Oriented Thinking for Entity Design .\nIntroduction I started thinking seriously about database design only after I had already begun working as a professional engineer.\nFor a long time, I designed keys differently for each table. At the time, that felt completely ordinary. In many teams, I suspect it still does.\nOnly much later did I learn that this style is usually called natural key design.\nI do not think that approach is strange or irrational. In fact, it is probably the most human way to think about identity at first. What makes a record unique is usually decided by business requirements, so it feels natural to use that business meaning directly as the key.\nAn order record is a simple example. In many systems, the order number is the obvious identifier. 
A daily work report can be different. If several sites each submit one report per day, then site code plus date may feel like the most natural identifier. That way of thinking is easy to understand because it follows the shape of the work itself.\nFor a long time, I had no strong reason to question it.\nWhy I Moved Away from Natural Keys I did not move to surrogate keys because I had studied the theory deeply, or because natural keys had already caused some dramatic failure.\nThe actual trigger was much more practical.\nAt one point, I planned and built a system for a related line of business that handled many kinds of operational reports and requests. One can think of it as something in the general area of building maintenance or security operations, where many different people, contractors, and sometimes customers submit information, and managers need to review that information to make decisions.\nThis was not just a support task where I inherited someone else\u0026rsquo;s structure. I was driving the design myself, so I had room to rethink the whole shape of the data flow.\nAn important part of the context was that the source systems could also be adjusted. More than half of them were systems I had built myself, and for the others I still knew the developers or the responsible departments well enough that coordination was easy. So I was not trapped by a fixed external contract. If the cross-system design needed a small adjustment on the registration side, that was realistic.\nThe goal was cross-domain search. I wanted to gather reports, requests, and other operational records from multiple systems into one place and let managers search across them as one body of information.\nThat was where a normal relational approach started to feel wrong. If the data stayed split into separate tables by source or by business type, then it stopped being truly flat from a search perspective. 
I could still search each table, of course, but I could not treat the incoming data as one unified searchable surface without building a more artificial structure on top of it.\nThis system was not intended for heavy clerical processing, and it did not need aggregation. It was mainly for collecting and searching operational information. Because of that, I started considering a key-value or document-style database, even though that was not the mainstream choice then and probably still is not the mainstream choice now for this kind of internal system.\nThat is why I decided to use what was then called Azure DocumentDB together with Azure Search. Today those products are called Azure Cosmos DB and Azure Cognitive Search, but at the time they still had their earlier names.\nThe key question became difficult at exactly that point. Once I tried to treat all of those records as one flat searchable body of data, it stopped making sense to let every incoming record carry a completely different key shape derived from its original business meaning.\nLooking back, that experience clarified one design point very sharply. Once record shapes stop being uniform, natural-key assumptions also stop being uniform. If the schema is heterogeneous but the system still needs to store, update, index, and search records through one common flow, then identity has to be separated from business structure.\nA Different Kind of Data Model Changed the Key Question Using a key-value or document-style database changed the design pressure immediately.\nThe incoming data was roughly divided into three groups: daily reports, monthly reports, and records that arrived without any date-driven cycle at all. I could have forced dates into the key shape for consistency, but that started to feel wrong very quickly. 
If some data had no real relationship to a date, putting a date into its identifier just to preserve one naming pattern felt artificial.\nThat was the point where I stopped trying to derive identity directly from the business meaning of each record.\nInstead, I standardized record identity on GUID values.\nThat choice fit the system better. In that system, the original business identifiers were not the primary meaning of the stored record anymore. They were attributes of the record. The record itself needed one stable identity so it could be stored, updated, indexed, and searched in one flat structure. From that perspective, letting the key shape change every time the source system changed would only have added implementation cost without adding real value.\nIt also fit the tools I was using at the time well enough that the implementation cost was low.\nI was aware that people sometimes debate whether GUID collisions are truly impossible. I did not need philosophical certainty there. In that system, if a collision had somehow surfaced as an exception, retrying from the client side would have been enough. Most registrations were triggered automatically from other systems rather than typed manually by end users, so even a defensive retry path for that unlikely case would have been easy to add. The system\u0026rsquo;s primary purpose was search, not financial settlement or inventory control, so the operational risk was acceptable.\nThe broader context mattered too. Data was entered through several other systems, and those systems still used relational databases. Writing the generated GUID back to those source systems made it possible to update DocumentDB and Azure Search in the same transactional flow from the application side. That was enough to avoid the kind of inconsistency that would actually have been dangerous for the product.\nWhy That Design Felt More Rational Than I Expected That system was not large in volume. 
It handled roughly one thousand records per day at most.\nAt that scale, almost any reasonable database technology could have worked. I was not solving a scale problem that only Cosmos DB and Azure Search could solve.\nBut the architecture still made sense because it matched the actual goal. Reports from multiple systems could be treated as flat searchable data. Full-text search became straightforward. Record-level sharing with other users also became easier because each record had one stable identifier that did not need to carry business meaning inside itself.\nWhat mattered was not that the system was globally distributed or massively scalable. What mattered was that identity had become simpler.\nThat experience stayed with me.\nReturning to Relational Databases Later I returned to relational database design for internal business systems.\nBy then, I already knew that I liked working with records whose primary identity did not depend on a specific business number, date pair, or composite requirement. When I came back to MySQL-based design, it became obvious that relational databases did not force me to abandon that comfort.\nA table can have one primary key and still define separate unique indexes for the business-level rules that actually matter.\nThat was the moment the idea clicked for me in a much more stable way. The system could use one surrogate key for record identity, while business uniqueness could still be enforced where needed.\nOnly after I had already settled into that style did I look up the terminology and learn that surrogate key was the established name for it.\nWhy Surrogate Keys Feel Better in Application Code The most obvious benefit for me is that record identity becomes structurally uniform.\nIn my own systems, I usually manage entities through a shared base class. When every entity has the same kind of Id, I can treat record identity as one stable concern across the whole application. 
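In TypeScript terms, that uniform identity can be sketched roughly as follows. This is an illustrative sketch, not Cotomy or Entity Framework code; EntityBase and sameRecordAs are hypothetical names.

```typescript
// Illustrative sketch (not Cotomy, not EF): a shared base class where every
// entity carries the same kind of Id, so record identity is one uniform concern.

abstract class EntityBase {
  constructor(public readonly id: string) {}

  // Two records refer to the same stored entity only when the entity type
  // and the single surrogate Id both match.
  sameRecordAs(other: EntityBase): boolean {
    return this.constructor === other.constructor && this.id === other.id;
  }
}

class Customer extends EntityBase {}
class Order extends EntityBase {}

const c1 = new Customer("guid-1"); // a GUID in practice
const c2 = new Customer("guid-1");
const o1 = new Order("guid-1");

c1.sameRecordAs(c2); // true: same type, same Id
c1.sameRecordAs(o1); // false: same Id, but a customer is not an order
```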
I do not need a different equality rule for each table just because one entity is identified by order number, another by site code plus month, and another by some external reference number.\nThat improves the code in a very practical way. Identity becomes simpler to reason about. Base classes remain cleaner. Cross-cutting code does not need to know the business rule for every individual table just to answer the question of whether two records refer to the same stored entity.\nI still check the type as part of identity reasoning, of course. A customer with a certain Id and an order with the same Id are not the same thing. But once the entity type is known, the record key itself can stay simple.\nBusiness Uniqueness Still Matters Using surrogate keys does not mean business uniqueness disappears.\nDaily and monthly data still need date-related constraints. Some records still need a master code, customer-controlled number, or another domain-specific identifier. But in a relational database, that is exactly what unique indexes are for.\nI do not see that as a compromise. I see it as clearer separation of responsibility.\nThe primary key answers one question: which stored record is this.\nA unique index answers another question: which business condition must never be duplicated.\nThose are related questions, but they are not always the same question.\nAn order record is a good example. In one of my systems, the order itself is uniquely identified by its order number. But some customers also have their own inquiry number or management number and contact us using that identifier instead. Ignoring that number because it is not the official primary identifier would be an application defect. If it matters operationally, it should be modeled and, where necessary, protected with uniqueness rules.\nSurrogate keys do not prevent that. 
They make it easier to express it without overloading record identity itself.\nThat distinction matters because I do not think of business keys as meaningless. I think of them as constraints and lookup handles rather than as the one universal identity that every layer of the system must carry. A screen, an API endpoint, and an entity instance usually need one concrete way to point to one stored record. Business rules often need more than that, but that is a different responsibility.\nLooking at it now, I think this is part of a broader design principle. Business rules are often specific, changeable, and different across domains. System structure usually benefits from being more stable than that. For me, surrogate keys became valuable not only because they simplify record identity, but because they help separate changeable business uniqueness from the parts of the application that benefit from staying structurally uniform.\nWhy Relationships Become Easier to Change The biggest advantage is not elegance. It is lower coupling.\nWhen related tables reference a surrogate key, the link field itself carries almost no domain meaning. It is only an identity bridge. That makes relationships less fragile.\nIf I later discover that a business-level uniqueness rule was incomplete, and I need to add another field to a unique index, that change usually does not force a redesign of every foreign key relationship in the system. The record identity can stay the same while the business constraint becomes more precise.\nThat is a very practical form of safety during development. Design mistakes still happen, but the blast radius is smaller.\nA Small but Real Benefit in URLs Another benefit is smaller, but I still value it.\nWhen natural keys are composite, URLs often drift toward query strings such as /orders?orderNo=\u0026hellip; or /work-reports?siteCode=\u0026hellip;\u0026amp;month=\u0026hellip;.\nThere is nothing inherently wrong with that. 
But for ordinary CRUD screens, I find it clearer when one record is addressed by one path segment, such as /orders/{id}.\nThat is also closer to how I prefer to structure APIs and screens in practice. Search conditions belong in query strings. A request for one concrete record feels cleaner when it uses a stable path identity instead.\nThis is partly a matter of taste, but it is not only taste. A uniform URL shape removes unnecessary variation from both frontend and backend code.\nFor me, this is not only a database design preference. It is a boundary design decision for the whole application. If one record is always addressed through one stable identifier, then the database key, the API path, the screen URL, and the entity form can all follow the same shape.\nThis is the principle I would state more directly now. Record identity should be single, stable, and independent of business meaning. Business identifiers should be treated as constraints and lookup keys, not as the primary identity of a stored record. Once that separation is made, the UI, API, and persistence layers can all rely on the same identity shape. Without a single stable identity, the layers cannot align. Without that single identity shape, the layers do not line up cleanly, and composite identity begins to distort the boundary between them.\nWhy This Also Fits Cotomy\u0026rsquo;s Entity Form Design This preference also lines up with the current Cotomy entity-form flow.\nCotomyEntityApiForm reads one entity key from data-cotomy-entity-key, appends that single key as the final path segment of the action URL, and switches from POST to PUT when the key exists. On a 201 Created response, it reads the Location header and stores the generated key back onto the form.\nThat flow assumes one stable record identifier. It is a good fit for the way I now design business systems, because the UI does not need to understand a composite business identifier just to edit one entity. 
Business-level uniqueness can stay in validation rules and database constraints where it belongs.\nIf composite natural keys were pushed directly into this kind of UI flow, the complexity would spread immediately. The URL shape stops being uniform. Hidden fields increase because the UI has to carry multiple identifying values. POST to PUT transitions stop being a simple one-key decision. Reload behavior also becomes heavier because re-fetching one record now depends on reconstructing multiple business conditions instead of reusing one stable identity. That is exactly the kind of cross-layer distortion I prefer to avoid.\nIs This Unusual From conversations and projects around me, I still get the impression that this style is not especially common in Japan. I often see designs where business identifiers remain close to the center of table identity, especially in internal business systems.\nAt the same time, when I look at major frameworks used internationally, surrogate-key-centered design does not look unusual at all. Many popular frameworks and ORMs assume one primary identifier such as Id by default, and then let developers define additional uniqueness constraints separately where business rules require them.\nSo I do not think of this as an eccentric design choice. It feels less like a special preference and more like one reasonable way to separate stored record identity from business-level uniqueness.\nWhere This Fits Best This approach fits best in CRUD-oriented business systems where screens, APIs, and persistence all operate on one entity at a time. It is especially comfortable when the UI, URL structure, and API routes all benefit from one stable identifier shape.\nI would be more careful when the same identifier must remain primary across external system boundaries, when the business key is legally or contractually authoritative, or when the natural key is guaranteed to be immutable and is already the true operational identity. 
In cases like those, keeping the natural key at the center may still be the more honest design. Outside them, I would not put a natural key at the center by default.\nConclusion I did not arrive at surrogate keys by starting from database theory.\nI arrived there because one search-oriented system forced me to stop embedding business meaning into every record identifier, and once I experienced that simpler identity model, I did not want to go back.\nNatural keys still make sense as business constraints. In some cases, they are exactly the rules that matter most.\nBut today I treat record identity and business uniqueness as different concerns. A surrogate key identifies the stored entity. Unique indexes protect the business rules. That separation is not only a preference. It is what allows the database, API, URL structure, and entity form to keep the same shape, remove unnecessary branching, and stay structurally aligned across the application.\nDesign Series This article is part of the Cotomy Design Series.\nSeries articles: CotomyElement Boundary , Page Lifecycle Coordination , Form AJAX Standardization , Inheritance and Composition in Business Application Design , API Exception Mapping and Validation Strategy , Why Modern Developers Avoid Inheritance , Inheritance, Composition, and Meaningful Types , Designing Meaningful Types , Object-Oriented Thinking for Entity Design , and Entity Identity and Surrogate Key Design.\nPrevious article: Object-Oriented Thinking for Entity Design ","permalink":"https://blog.cotomy.net/posts/design/10-entity-identity-and-surrogate-key-design/","summary":"Why I moved from natural keys to surrogate keys, and why that change made both database design and application code easier to manage.","title":"Entity Identity and Surrogate Key Design"},{"content":"Previous article: Shared Foundation Layout for Razor Pages and Cotomy Up to the previous note, I summarized the structure I use when I develop with Cotomy and Razor Pages. 
In truth, the exact shape of the structure is not the most important thing to me. What matters to me is how to get through a large amount of daily work efficiently and with fewer mistakes.\nThe structure I described helps with development, but it also shortens maintenance and later investigation quite noticeably. The biggest effect, however, is that most of the systems I have built since a certain point now follow almost the same structure. There are small differences, of course. Some older systems still use an earlier form of what later became Cotomy. But the broad arrangement is now stable across most of my work.\nThat kind of consistency obviously improves day-to-day work. But there is another problem I wanted to solve, and adopting Razor Pages helped me solve it. This time, I want to write about that problem.\nThe problem is how to keep one consistent naming path from persisted data, in other words from database tables, all the way to the screen the user actually touches.\nThis way of thinking is especially effective in form-centered SSR applications, particularly in CRUD-heavy business systems. It works best when the practical unit of change is the screen, because the server-rendered form and the UI behavior usually change together. I do not think it applies equally well to every frontend architecture. In more independent SPA structures or highly dynamic UI products, different tradeoffs often become more important.\nThe same field used to have too many names If I build in a strongly separated SPA style, the server side and client side are treated as distinct systems, and the frontend is implemented against API contracts. That is a rational approach. It is also a strong way to build very large applications with more dynamic UI behavior.\nBut the systems I build, and the kind of systems I imagine people building with Cotomy, are usually not that kind of product. They are not trying to be flashy. 
They need to be steady, durable over long operation, able to handle complexity, and at the same time avoid creating unnecessary complexity.\nFor a long time, the problem that kept bothering me in web development was that the database field name, the Entity property name, the name attribute on the screen, and the property name read by JavaScript were all disconnected. To access one field, the same concept had to be declared again and again as strings or program identifiers at several different points in the route.\nAnd those declarations were usually made separately.\nThat means the same business fact could drift by small errors. One place used one spelling. Another place used a slightly different spelling. Another place changed and one more did not. During development, that is already annoying. In a large system, when only one small part is reading data incorrectly, the place where the defect appears can be far away from the place where the naming mismatch was introduced. That makes investigation slower than it should be.\nMany of these issues can eventually be found by testing. But that is not really the point. The problem is that the cost of detecting, tracing, and rechecking them keeps accumulating. Even when the defect is not severe, the structure keeps generating small verification costs that add up over time.\nIn desktop-style systems or older two-tier client-server systems, this kind of mismatch often surfaces earlier because the compiler catches more of it. On the web, especially when strings are passed across multiple layers, the same problem tends to survive longer.\nWhat I liked in older connected tools There was a period when I often chose Microsoft Access for solo development.\nAccess had many problems. I do not want to romanticize it. But it had one remarkable strength. When I built screens and reports from database data, the implementation was connected in a way that made simple naming mismatches much less likely. 
It was not easy to mistype a field name in one place and silently drift away from the data definition in another.\nBecause of that, when I developed systems in Access, there were times when I could implement more than ten different reports in only a few days. This was long before AI assistance existed, which impressed me quite a lot at the time.\nThat speed was not only because naming stayed connected. Access also had very strong built-in reporting features, and that was a major part of why so many reports could be produced in such a short time.\nI saw the same kind of strength again in older SOAP-based three-tier client-server systems. When the client-side DTOs were generated from WSDL, the route from server contract to client-side structure was already connected by the toolchain. That did not make those systems simple overall, but it did mean this particular naming problem was much less likely to occur. The client was not retyping every field contract by hand.\nSo what I wanted to recover later was not Access itself, and not SOAP itself either. What I wanted to recover was that same structural quality. I wanted the route from data to screen to stay connected instead of being manually re-declared at every step.\nEntity Framework solved the first half Before I moved to Razor Pages, this problem kept bothering me. Razor Pages did not solve everything by itself, but it gave me an environment where the missing pieces could be lined up properly.\nThe first large improvement came from Entity Framework.\nOnce I moved to a code-first style, I could implement the Entity class first and let that definition drive the database structure. In that arrangement, the program and the database become connected in a much tighter and more reliable way. At least on the server side, the gap between program structure and database structure becomes much smaller.\nThat alone made development far more comfortable for me.\nThe point is not only convenience. 
It changes where the authoritative name lives. If the Entity definition is the source, then the database no longer evolves as a separate naming world that the application must constantly reinterpret.\nThe remaining problem was the screen Even after that, one large problem remained.\nNo matter whether I use Cotomy or not, data usually moves from frontend to server through forms and inputs. The HTTP method may be GET, POST, or something else, but the practical route is still often a form with fields, and the field name is decided by the name attribute.\nThat name may point to a simple property, one field inside a nested object, or one array element. But structurally it is the same problem. If the names at that stage drift away from the names used on the server side, the route is broken again.\nC# has a feature that was extremely important for me here: nameof.\nIf there is a property named PartnerId on Order, then nameof(Order.PartnerId) gives me the string PartnerId. That sounds small, but to me it solved one of the most persistent problems in web development. It is also one of the major reasons I have continued developing in C#.\nIn practice, almost every form input has a name attribute. I set those names with nameof whenever I reasonably can.\n\u0026lt;input name=\u0026#34;@nameof(UserEntity.Name)\u0026#34; /\u0026gt; \u0026lt;input name=\u0026#34;@nameof(UserEntity.Email)\u0026#34; /\u0026gt; \u0026lt;input name=\u0026#34;@nameof(UserEntity.Phone)\u0026#34; /\u0026gt; That one habit changes a surprising amount.\nNow the form is no longer inventing its own field vocabulary. The screen is borrowing the property name from the C# side instead of retyping it as an unrelated string.\nThis does not mean DTOs are always unnecessary. There are cases where I still define DTOs explicitly. In particular, I think external API boundaries, security boundaries, and versioned contracts often justify a clearer DTO layer. 
For internal business screens, though, that DTO layer can often be reduced or omitted.\nWhen I do define DTOs or projection shapes, they are usually edited intentionally as part of the contract, which makes mistakes easier to notice. In C#, it is also sometimes enough to project only the fields to expose through an anonymous object rather than sending a broader model as-is, or to match fields automatically by property names. So even when DTOs exist, the naming problem is still much easier to control than in a structure where each layer re-declares names independently by hand.\nRazor Pages made this route feel much more natural This is where Razor Pages helped me a great deal.\nBecause the screen is server-rendered, the frontend does not need to define a second full model just to know what field names exist. The server emits the form with its name attributes already decided. The client side can then submit those names back as they are, or use the same names to fill values later.\nThat does not mean the frontend can never know anything about the data. Some pages still need page-specific display logic. But in many ordinary business screens, the client side does not need to redefine the data structure just to move values between the API response and the DOM. The HTML already contains the route.\nThat is an important difference for me.\nThe names do not have to be rediscovered separately on the client. They are already present on the screen that the server rendered.\nOf course, this has limits. If the UI becomes highly interactive, if the frontend is released independently, or if the API boundary must be versioned and controlled separately, then a stronger DTO boundary and a more explicit client-side model can be the correct choice. I do not think this article describes a universal answer.\nHow Cotomy fits into this Cotomy does not solve database design. 
It does not define server-side models. Its role is narrower than that.\nWhat Cotomy gives me is a way to continue the same naming path on the UI side without adding another unnecessary model layer.\nCotomyEntityApiForm handles entity-oriented API submit flow. It manages the entity key and switches from POST to PUT when an entity key is present.\nCotomyEntityFillApiForm extends that behavior with load and fill support. When the form has an entity key, it can load data from the API on page ready. And when it fills the form, it applies values to inputs by matching their name attributes.\nThat behavior is visible in the implementation itself. CotomyEntityFillApiForm searches for inputs, textareas, and selects by name and fills matching fields from the API response. For non-input display elements, it uses CotomyViewRenderer and data-cotomy-bind.\nThis also includes nested objects and indexed data when the naming rule stays consistent. In the sample structure, names such as profile.code, details[0].itemCode, Lines[0].Quantity, or the more general shape of Order.Items[0].Name can still be matched because the route is expressed through names instead of through a separately rebuilt client-side model. With nested names, the name itself becomes the path, so structural changes on the server side can affect the UI route directly.\nIn practice, that means I can build a screen around names already present in the DOM. Those names may come directly from entity-side definitions, or from DTOs and projection shapes prepared on the server side. 
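To make the name-as-path idea concrete, the matching rule can be sketched as a small resolver in TypeScript. This is an illustrative sketch only, not the actual CotomyEntityFillApiForm implementation; `resolveByName` and the fill loop in the trailing comment are assumed names.

```typescript
// Illustrative sketch only: resolve a value from an API response using a
// form-field name path such as "profile.code" or "details[0].itemCode".
// The real Cotomy implementation may differ; resolveByName is an assumed name.
function resolveByName(data: unknown, name: string): unknown {
  // Turn "details[0].itemCode" into the segments "details", "0", "itemCode".
  const segments = name
    .replace(/\[(\d+)\]/g, ".$1")
    .split(".")
    .filter(s => s.length > 0);

  let current: any = data;
  for (const segment of segments) {
    if (current === null || current === undefined) return undefined;
    current = current[segment];
  }
  return current;
}

// Filling a form then reduces to: for each named input, look up the value
// by its name attribute and assign it.
// document.querySelectorAll<HTMLInputElement>("input[name]").forEach(input => {
//   const value = resolveByName(response, input.name);
//   if (value !== undefined && value !== null) input.value = String(value);
// });
```

Under this rule, a structural rename on the server changes the name attribute the server emits, and that same changed name is what the fill step matches against, which is the single-route property described above.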
The important point is that the client does not need to invent a separate naming layer again.\n\u0026lt;form id=\u0026#34;user-form\u0026#34; action=\u0026#34;/api/user\u0026#34;\u0026gt; \u0026lt;input name=\u0026#34;@nameof(UserEntity.Name)\u0026#34; /\u0026gt; \u0026lt;input name=\u0026#34;@nameof(UserEntity.Email)\u0026#34; /\u0026gt; \u0026lt;span data-cotomy-bind=\u0026#34;@nameof(UserEntity.Name)\u0026#34;\u0026gt;\u0026lt;/span\u0026gt; \u0026lt;span data-cotomy-bind=\u0026#34;@nameof(UserEntity.Email)\u0026#34;\u0026gt;\u0026lt;/span\u0026gt; \u0026lt;/form\u0026gt; import { CotomyEntityFillApiForm, CotomyPageController } from \u0026#34;cotomy\u0026#34;; CotomyPageController.set(class extends CotomyPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { this.setForm(CotomyEntityFillApiForm.byId( \u0026#34;user-form\u0026#34;, CotomyEntityFillApiForm )!); } }); The important point is not that the TypeScript side now knows the whole Entity definition. The important point is that it usually does not have to recreate the naming contract by hand.\nThe form submits by the names already attached to the inputs. The fill process restores values by those same names. And display-only elements can be updated through data-cotomy-bind using the same response object.\nThis is why I care so much about name consistency When each layer reads data with its own independently declared names, the system becomes harder to trust.\nOne mismatch may only break one small area. But in a large system, that small area may be physically far from the source of the problem. 
The code that writes the name, the code that sends the request, the code that reads the response, and the place where the user notices the failure may all be different.\nThat is exactly the kind of structure that slows investigation down.\nWhat I wanted instead was a route where the same meaning could travel from persisted data to the screen with as little reinterpretation as possible.\nIn my current style, the route is roughly this.\nEntity Framework keeps the persisted model and the database definition aligned. Razor Pages emits form fields using names borrowed from C#. CotomyEntityFillApiForm uses those names again when loading and filling. CotomyViewRenderer uses the same response object for display bindings.\nThat does not remove all mistakes, of course. No structure can do that. But it removes a whole category of mistakes created only by duplicated naming declarations. Names alone are not sufficient for total safety. Type mismatches, null handling, nested structure drift, and partial update behavior still need to be considered separately.\nWhere Cotomy stops This boundary is important.\nCotomy is not replacing Entity Framework. It is not deciding the Entity model. It is not designing the API contract. And it is not business logic.\nWhat it does is continue the UI-side part of the route once the server has already decided what names exist on the screen. In other words, Cotomy helps me avoid rebuilding the same mapping yet again on the frontend.\nThat is why I think of this less as a frontend convenience and more as a consistency mechanism in the overall application path.\nAt the same time, I do not mean that every page must be completely ignorant of data shape. Some pages still need extra UI behavior and may inspect a few response properties directly in page-specific code. 
The important point is that the normal form and display path does not require a second full declaration of the data model just to move values around.\nEven when extra screen-specific behavior is needed, I do not think that justifies rebuilding the whole data contract on the client side. In many cases, it is enough to define only the small interface that the extra behavior actually needs. That keeps the implementation local, keeps the impact surface small, and reduces the risk introduced by the additional code itself.\nThis also means I am not against view-specific transformation itself. Screens still need formatting, display-only values, and other presentation decisions. The point is simply that those concerns should be added deliberately at the edges where they are needed, instead of forcing the whole route from persistence to the screen to fork into a second broad naming system too early.\nAlthough I am explaining this through Entity Framework, Razor Pages, and Cotomy, I do not think the underlying idea belongs only to that stack. The broader principle is to keep naming contracts connected for as long as practical, and to introduce extra translation layers only where they are truly required.\nClosing What I wanted for a long time was simple to say and surprisingly hard to achieve in ordinary web development: I wanted the same field to keep the same identity from persistence to the screen.\nEntity Framework solved the gap between program structure and the database. Razor Pages let the server render the field names directly into the screen. And Cotomy let the UI continue using those names for load, fill, submit, and display without demanding another parallel client-side model in the ordinary case.\nFor me, this has had a very large effect. 
It reduced mistakes, reduced the amount of declaration I had to repeat, and made later investigation much easier when something still went wrong.\nMore than anything else, it made the route from stored data to actual screen behavior feel connected again.\nThe next thing worth writing about is what Entity Framework changed after that first comfort. Once data design and implementation were brought together, the longer-term effects on migration work, schema change tracking, and operational maintenance became impossible for me to ignore.\nC# Architecture Notes This article is part of the Cotomy C# Architecture Notes, which reflect on backend and project-structure decisions around business systems.\nSeries articles: Why I Chose C# for Business Systems and Still Use It , From Global CSS Chaos to Scoped Isolation , Unifying Data Design and Code with Entity Framework , How I Split Projects in Razor Pages Systems , Integrating TypeScript into Razor Pages with Cotomy , Shared Foundation Layout for Razor Pages and Cotomy , and Consistent Data Flow from Persisted Models to the Frontend.\nNext article (planned): how Entity Framework changed not only my data design style, but also migration work, change tracking, and long-term operations.\n","permalink":"https://blog.cotomy.net/posts/csharp-architecture/07-consistent-data-flow-from-persisted-models-to-the-frontend/","summary":"Why I care about keeping names consistent from Entity Framework models to Razor Pages forms and Cotomy, and how that reduces mistakes in long-lived business systems.","title":"Consistent Data Flow from Persisted Models to the Frontend"},{"content":"Previous article: Integrating TypeScript into Razor Pages with Cotomy In the previous note, I summarized the structure I currently use when developing with Razor Pages. There I explained that I place page-level TypeScript entry points at each endpoint. 
This time, I want to continue from that point and write about the shared foundation that sits behind those entry points.\nOnce page-level TypeScript starts increasing, some common base becomes unavoidable. A page controller base class, form helpers, rendering helpers, and shared UI parts all need one place to live. The practical question is not whether a shared foundation exists. The question is where that foundation should be placed and what boundary it should belong to.\nMy first idea was wrong When I first started building this style of system, I placed the TypeScript foundation in a separate folder directly under the solution root. Each page entry point imported shared modules from there.\nThe structure looked roughly like this.\nSolution ts controller.ts form.ts renderer.ts MainProject Pages Sales Orders Index.cshtml Index.cshtml.ts That approach did provide code sharing. In that narrow sense, it worked.\nBut the problems were obvious once the system grew.\nFirst, page entry points became heavily affected by the internal folder structure of that shared TypeScript area. If the foundation was reorganized, imports across the application were dragged along with it. That is not what I want from a module that claims to be independent.\nSecond, even if the classes inside that folder were designed with some abstraction, the module itself was not truly independent. The application was still reaching directly into the internal file arrangement of the shared area. That creates friction for refactoring and makes the so-called foundation much less portable than it appears at first.\nThird, if a foundation is really meant to be a foundation, it is natural to want to use it in multiple systems. But a shared folder sitting under one solution and being imported through local path assumptions is not a stable form for reuse. It is only a local convenience.\nAt that point I had to admit that the structure was not merely imperfect. 
It reflected a mistaken boundary.\nThe real mistake was how I drew the line This is related to the boundary problem I have mentioned several times in this series.\nIf a project is large, if staffing is abundant, and if server-side work and frontend work are handled by different specialists, then drawing a boundary between server-side and frontend development can be reasonable. In that situation, separate ownership and separate project structure often have organizational value.\nBut that is not the reality of most of my projects.\nIn practice, it is usually just me, or at most a couple of people joining partially. In that environment, I cannot develop the server side and frontend as if they were independent delivery worlds. I have to build through the server and through the frontend as one continuous path.\nWhen I inserted an unnecessary line between those areas, the result was a distorted structure. One project ended up depending on multiple foundations living in physically separate places. The server-side base lived in one place. The frontend base lived elsewhere. Yet the actual work still crossed both every time one screen was implemented.\nThat was the real problem.\nFrom a pure dependency point of view, the frontend foundation built on Cotomy does not need Razor Pages specifically. The server could be written in PHP or Ruby and the Cotomy side could still work. On the other side, the server is ultimately deciding what HTML to emit. In that abstract sense, there is no forced implementation dependency between them.\nBut when I think specifically about building a web application with Razor Pages and Cotomy, I do not think those sides should be treated as two unrelated foundations by default. 
I think they should be treated as one application foundation with different responsibilities inside it.\nSo I moved the TypeScript foundation into Core That is why I eventually decided to place the TypeScript foundation inside Core.\nAfter all, in Razor Pages projects and RCL projects, I am already writing TypeScript that belongs to those projects. Putting the shared frontend base outside the project structure at the solution root was strange from the beginning.\nTo make this clearer, Core in this structure is not a dump for anything shared. I treat it as the shared application foundation across server-side and frontend concerns. It may contain common page controller bases, shared form helpers, shared rendering helpers, and other infrastructure that supports multiple screens. But it should not contain business logic, because business rules belong to the application and domain side. And it should not contain page-specific behavior, because page-specific flow belongs to the screen entry that owns that endpoint. Core is the side many other parts depend on, so it is important to keep it thin and stable instead of letting responsibilities accumulate there.\nThe structure I now prefer looks more like this.\nSolution Core Core.csproj _front src index.ts controller.ts form.ts renderer.ts elements.ts UISample UISample.csproj Pages UISample FormBase.cshtml FormBase.cshtml.ts FormEntity.cshtml FormEntity.cshtml.ts I created a folder named _front under Core and placed the shared TypeScript there.\nThe underscore is a small detail, but it is intentional. I did not want the TypeScript area to visually blend into the C# folders. The shared foundation is part of the same project boundary, but it still helps me to distinguish the frontend source area immediately when I scan the tree.\nThis arrangement made the overall structure much easier for me to understand. Core became the place where the common foundation lives across both C# and TypeScript. 
Individual page entry points still stay beside each Razor Page, but the reusable base they depend on belongs to Core rather than to a detached solution-level directory.\nThe structural failure of the old solution-root approach can be stated more directly. It created path dependence, because page entries were forced to know where the shared frontend files physically lived. It leaked internal structure, because application code imported through file layout instead of a public module surface. And it widened refactoring impact, because reorganizing the shared area immediately changed import paths across the screen layer. For example, moving one shared controller file or reorganizing the shared folder forced import-path changes across many page entries even when the public behavior had not changed at all. That is why I no longer think of it as a minor inconvenience. It is a boundary mistake.\nThe next problem was how pages should import Core Once I moved the shared frontend foundation into Core, I had to decide how page entry points should reference it.\nIf a page entry point directly imports internal files under Core/_front/src, the same structural problem returns immediately. The entry point becomes coupled to the internal file layout of the shared foundation.\nSo I did not want imports like this to become the public style.\nimport { AppPageController } from \u0026#34;../../../Core/_front/src/controller\u0026#34;; Instead, I wanted page code to import Core as Core.\nIn the structure I use now, that is exactly what happens. 
The public entry for the frontend side of Core is Core/_front/src/index.ts, and the page scripts import from @core.\nimport { AppPageController } from \u0026#34;@core\u0026#34;; import { CotomyPageController } from \u0026#34;cotomy\u0026#34;; CotomyPageController.set(class extends AppPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { await super.initializeAsync(); } }); This is the same shape used by libraries distributed through npm. External code imports from the package surface, not from arbitrary private files inside the package.\nThe important point is not the alias syntax itself. The important point is that the public surface is explicit.\nIn this structure, Core/_front/src/index.ts re-exports the types that pages are supposed to consume.\nexport { AppPageController, ListPageController } from \u0026#34;./controller\u0026#34;; export { DialogSurface, ProcessingArea, SidePanel } from \u0026#34;./elements\u0026#34;; export { CheckInputElement, EntityDetailForm, InputElement, SelectElement, TextAreaElement } from \u0026#34;./form\u0026#34;; export { AppViewRenderer, LookupEvent } from \u0026#34;./renderer\u0026#34;; That means page-level scripts depend on the Core surface, not on the internal arrangement behind that surface.\nHow the @core import is configured The point is not just nicer import syntax. I also want TypeScript resolution and bundler resolution to follow the same public surface. If those rules drift apart, the source code may look stable while the actual build becomes fragile. 
Keeping both layers aligned means @core remains a real module boundary rather than just a TypeScript convenience.\nIn my setup, tsconfig.pages.json maps @core to Core/_front/src/index.ts.\n{ \u0026#34;compilerOptions\u0026#34;: { \u0026#34;baseUrl\u0026#34;: \u0026#34;.\u0026#34;, \u0026#34;paths\u0026#34;: { \u0026#34;@core\u0026#34;: [\u0026#34;Core/_front/src/index.ts\u0026#34;] } } } And in webpack.pages.config.cjs, the alias is resolved so page entry points under each Pages folder can import the same module consistently during bundling.\nresolve: { extensions: [\u0026#34;.ts\u0026#34;, \u0026#34;.js\u0026#34;], alias: { cotomy: cotomyCjs, \u0026#34;@core$\u0026#34;: coreIndex, \u0026#34;@core\u0026#34;: coreRoot } } This part matters because it keeps the import rule stable at both the TypeScript level and the bundler level. The page author writes import { AppPageController } from \u0026ldquo;@core\u0026rdquo;; and does not need to know where the internal file actually lives. Just as importantly, when the internal files inside Core move, I only need to update the public surface and the build mapping in one place instead of chasing import changes across many page entries.\nThat is exactly the level of indirection I wanted.\nPage entries still stay at the endpoint boundary None of this changes the rule from the previous article.\nThe page-specific TypeScript file still belongs beside the Razor Page file. In this arrangement, webpack scans each project and treats Pages/**/*.cshtml.ts as page entry points. Core is not replacing page-level entry points. Core is providing the shared base those entry points stand on.\nThat distinction is important.\nThe entry point belongs to the page because the page is the endpoint boundary. The shared controller classes, form helpers, and UI building blocks belong to Core because they are part of the common application foundation.\nThis arrangement works most naturally when the practical unit of change is the screen. 
In that situation, one implementation task usually reaches the Razor Page, the page model, the rendered HTML, and the page TypeScript together. If the practical unit of change is instead an independently released frontend layer, then a different structure can make more sense. Put more simply, if the change unit is the screen, integration is usually the more natural choice. If the change unit is the frontend as its own product, separation is usually the more natural choice.\nOnce I organized it this way, the structure became much more coherent. The project no longer depends on page scripts reaching into a detached TypeScript area. Instead, the common base is part of Core, and page entry points consume that base through a defined public surface.\nWhy I am writing about this mistake so openly I am writing about this because it is a good example of a boundary error.\nA class or a method needs semantic independence, not just technical separation. I think projects are similar. A project should also have a meaningful boundary. If the way I split projects does not match the way development actually flows, then the structure is only pretending to be clean.\nIn some systems, it may still make sense to treat server-side and frontend as separate units. I do not deny that. But when I think about ordinary web application development, especially in a small team or solo environment, I find it easier to understand the whole development cycle when the primary split is between business responsibilities and shared foundation rather than between server path and frontend path.\nThis point is not universal. I think it applies most naturally to small and mid-sized Razor Pages systems, especially when the same developer, or a tightly aligned team, is implementing one screen across both server-side and frontend work. In a larger organization with separate specialist teams, a different split may be more appropriate. 
I do not think this article describes a rule for every web project.\nIf I cut the system mainly by where the code runs, even though one screen usually changes across both sides at once, the structure becomes harder to follow. The separation does not really remove dependency in practice. Instead, it raises the cost of working across the boundary, because design, implementation, and investigation still have to cross it repeatedly. If I cut the system mainly by business role and foundation role, then the route from requirement to implementation tends to stay more visible.\nThere are also cases where the stronger split is the better design. If the frontend is independently released, owned by a separate team, or stabilized around a fixed API contract, then separating the frontend foundation more aggressively can be the correct choice. My point here is narrower than that. In the kind of Razor Pages system I am describing, the change unit is usually the screen rather than an independently evolving frontend product.\nWhat I want here is not a server side and frontend that are tightly entangled or strongly dependent on each other. That is not the point. I simply want the structure to let me naturally relate those sides when I am designing a screen, investigating behavior, or tracing a problem across one piece of work. I want them to stay understandable as parts of one application without forcing them into one mixed responsibility.\nThat is why I now treat Core as one foundation that spans both C# and TypeScript, while still keeping the UI boundary itself clear. Cotomy remains on the UI side. Business logic still does not belong there. The goal is not stronger coupling. The goal is to avoid splitting the application into physically separate worlds in a way that makes normal development, investigation, and design thinking less natural than it needs to be.\nI should also be clear about reuse. 
In this article, Core is an application foundation, not a general-purpose library intended for arbitrary projects. It is the shared base of one application structure. It may contain code that is reused widely inside that application, but that does not make it a cross-project library by itself. If true cross-project reuse becomes the main goal, then the reusable portion should be extracted and published as its own package instead of remaining only as part of Core.\nClosing At first, I thought placing shared TypeScript under the solution root was a reasonable way to centralize frontend code. It did centralize it, but it also exposed the wrong boundary and made page entry points depend too much on internal file layout.\nMoving the shared frontend foundation into Core solved that problem more cleanly. It let Razor Pages and Cotomy form one coherent application foundation, while still preserving page-level entry points at each endpoint. And by exposing the shared frontend base through index.ts and the @core alias, page scripts now depend on Core as a module rather than on private file paths.\nFor me, that made the whole structure feel less distorted. The server side and frontend side are still different responsibilities, but they now live in a form that better matches how I actually build business web applications.\nThe next thing to explain is the data flow that sits on top of this structure. 
Once the shared foundation is in place, the remaining question is how persisted models, page models, and frontend state should stay aligned without turning into a naming mess.\nC# Architecture Notes This article is part of the Cotomy C# Architecture Notes, which reflect on backend and project-structure decisions around business systems.\nSeries articles: Why I Chose C# for Business Systems and Still Use It , From Global CSS Chaos to Scoped Isolation , Unifying Data Design and Code with Entity Framework , How I Split Projects in Razor Pages Systems , Integrating TypeScript into Razor Pages with Cotomy , Shared Foundation Layout for Razor Pages and Cotomy , and Consistent Data Flow from Persisted Models to the Frontend .\nNext article: Consistent Data Flow from Persisted Models to the Frontend , about keeping one naming path consistent from persisted models to Razor Pages forms and Cotomy bindings.\n","permalink":"https://blog.cotomy.net/posts/csharp-architecture/06-shared-foundation-layout-for-razor-pages-and-cotomy/","summary":"Why I stopped placing shared TypeScript infrastructure at the solution root and moved it into Core so Razor Pages and Cotomy could form one coherent application foundation.","title":"Shared Foundation Layout for Razor Pages and Cotomy"},{"content":"Previous article: How I Split Projects in Razor Pages Systems In the previous note, I wrote about how I split projects in a Razor Pages system. This time, I want to move one layer closer to the screen and explain how I integrate TypeScript into that structure.\nThis blog is about Cotomy, and I am the author of Cotomy, so in practice most of the systems I build now use Cotomy or an earlier form of the same idea. The interesting question is not whether TypeScript exists in the system. The question is how to place it so the Razor Pages boundary remains understandable instead of getting blurred by frontend infrastructure.\nTo keep the scope clear from the beginning, I should separate three things. 
Cotomy itself provides a page-level UI boundary through CotomyPageController. Keeping page-local TypeScript beside Razor Pages files is not a Cotomy requirement, but an application architecture rule I use around it. At the same time, the basic idea of colocating page-specific files is not limited to Cotomy. I think that part generalizes reasonably well to Razor Pages development in general.\nCotomy is page-based, so I treat the page as a real unit Cotomy is built around CotomyPageController as a page-level control boundary. In actual use, I call CotomyPageController.set with a subclass for the page entry and let that class gather page-local initialization and form coordination.\nThe important point for me is not only technical possibility, but meaning. A page controller should represent a page.\nTechnically, I can reuse the same controller class across multiple pages. I can also create a more specialized base class for list screens or edit screens and pass that type directly. In some real screens, the subclass I add does not even contain extra logic yet.\nEven so, I still prefer to define one subclass per page. If the page exists as an endpoint, I want the page to exist as a type.\nI know this can look unnecessary when the subclass is empty. But to me, the class still has a clear purpose. It exists to represent that page as an independent unit. If it currently contains no additional implementation, that only means the page does not need extra behavior yet. It does not mean the page has no identity of its own.\nThis is consistent with a point I keep repeating in this blog. A type is not only a container for code. It is also a way to state meaning and boundary. When I define a page-specific controller class, I am making that page explicit in the codebase, even before that page grows its own behavior later.\nAn empty subclass is also a practical extension point. 
It keeps the page searchable as a type, gives later changes one obvious place to land, and makes it easier for both humans and AI tools to identify where page-specific behavior is supposed to belong. In that sense, it works as a fixed point for change scope, not just as a symbolic type.\nThat is partly architecture and partly personal insistence. Cotomy requires a page controller boundary, but defining one named subclass per page is my own policy on top of that. I do not think reusing the same base class directly is a serious problem. There is little practical harm in doing so. But when I come back to a system later, I want the page boundary to remain visible in code, not only in routing.\nThe practical problem appears when pages start to branch Once I follow one page, one page controller, the next problem is file placement.\nA Razor Pages system normally grows with nested folders and endpoint structure like this.\nSolution MainProject Pages Index.cshtml Sales Orders Index.cshtml Confirm.cshtml Shipments Index.cshtml Confirm.cshtml If I create one page controller per screen, the natural place for the page TypeScript is beside the page file with the same name.\nSolution MainProject Pages Index.cshtml Index.cshtml.ts Sales Orders Index.cshtml Index.cshtml.ts Confirm.cshtml Confirm.cshtml.ts Shipments Index.cshtml Index.cshtml.ts Confirm.cshtml Confirm.cshtml.ts For me, this is the stable shape. The endpoint, the markup, and the page controller stay physically close to each other.\nThat proximity improves work in very ordinary ways. Access becomes simple. When I implement or modify a screen, I do not need to jump across multiple unrelated directories just to follow one unit of behavior. The relevant files are already nearby, so development becomes more direct and less error-prone. 
It also reduces the chance that a page-specific script is forgotten or left uncreated because the expected location is already part of the screen structure itself.\nThis also matters after implementation. During testing and after release, tickets usually begin from a screen. Someone reports that a specific page behaves incorrectly, displays the wrong data, or fails in one operation. When that happens, I want the investigation path to be obvious. If the page file and the page-level TypeScript are colocated, the route from reported screen to relevant implementation stays short and predictable. That has real value not only during development, but also in maintenance and incident response.\nEven if Cotomy did not exist, I would still consider this kind of colocated layout a strong option in Razor Pages. Cotomy gives me a page controller model that fits the arrangement well, but the benefit of reducing search distance and keeping screen responsibility visible is broader than one framework.\nWhy I do not want a separate frontend tree The alternative is obvious. I could create a separate frontend folder and gather all TypeScript there.\nSolution MainProject Pages ... FrontEnd Index.ts Sales Orders Index.ts Confirm.ts Shipments Index.ts Confirm.ts I understand why this structure exists. If frontend work and server-side work are assigned to different people or different teams, separating those trees can be organizationally reasonable. My first large web project in ASP.NET was close to that reality, and at that time it actually helped me. Most of my background was in C# and Windows application development, so JavaScript felt difficult to me in a very direct way. A loosely constrained language was simply harder for me to handle well then. To be fair, it is still not the kind of thing I would describe as relaxing even now.\nThat is why I do not think this is a universal right-or-wrong question. 
If everything is colocated around the page, small teams will naturally tend toward one person carrying one feature from server to frontend. For a solo developer or a small team with full-stack expectations, that is often a benefit. But if team size is larger, or if members have clearly different strengths, a more separated structure can be the better fit. The best arrangement changes with team scale and with the technical range the people involved can actually cover.

But when one person or a small team is building one screen end to end, that separated frontend tree becomes an anti-pattern for me. The screen is one unit of work, yet the files that define that unit are pushed apart. Every modification becomes a small search task. The server-side endpoint is in one place, the rendered markup is in another, and the page control logic is somewhere else again. That may sound minor, but repeated hundreds of times, it becomes real friction.

The cost is not only editing speed. When a bug appears, tracking the responsibility chain becomes slower because one screen no longer has one obvious implementation area. The same separation also makes AI-assisted editing less reliable, because the model sees one fragment and more easily invents the rest of the structure incorrectly. Over time, the practical result is that the screen boundary becomes weaker even though the endpoint boundary still exists.

My rule is simple: keep the page script beside the page

So I keep the TypeScript file in the same location as the cshtml file and give it the same base name. Once I do that, one screen becomes easier to read as one object.

That also fits how I think about CotomyPageController.
Even if the actual registration is short, the screen still has one visible control point.

```typescript
import { CotomyPageController } from "cotomy";

export class OrderConfirmPageController extends CotomyPageController {
    protected override async initializeAsync(): Promise<void> {
        await super.initializeAsync();
    }
}

CotomyPageController.set(OrderConfirmPageController);
```

This example is intentionally small. The point is not that every page needs much code. The point is that the page has its own name and its own control boundary.

The shared foundation matters more than the page script itself

Of course, placing TypeScript beside each page only works if the infrastructure supports it. I do not want every screen to manually specify a script path or manually register a build entry one by one.

So in my own systems, I build a small shared foundation around that rule. The server side can derive the page script path from the endpoint path. The frontend build can scan the page tree and treat colocated TypeScript files as page entries. The layout can load only the script that belongs to the current endpoint.

In concrete terms, the foundation usually does three jobs for me. It resolves a convention-based path from the current Razor Pages endpoint. It lets the build treat matching .cshtml.ts files as entry points without manual registration. And it keeps script injection at the layout level so each page only receives the frontend code that belongs to that page.

That is enough to make the rule tangible. In practice, this usually means the layout or a shared Razor helper resolves the current page script path from the endpoint and emits the corresponding script tag, while the frontend build discovers page-local .cshtml.ts entries by convention under the Pages tree. I do not need each author to remember webpack entries, script tags, or path mapping details for every screen.
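The build-side job described above can be sketched as a small discovery step. This is a hedged illustration under names of my own choosing, not part of Cotomy: it walks the Pages tree and collects colocated `.cshtml.ts` files as bundler entries keyed by route-like path.

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical helper: scan a Razor Pages tree and return a bundler entry
// map, so no page has to be registered by hand. The key convention
// ("Sales/Orders/Confirm" for Sales/Orders/Confirm.cshtml.ts) is
// illustrative, not a Cotomy or webpack requirement.
function discoverPageEntries(pagesRoot: string): Record<string, string> {
  const entries: Record<string, string> = {};
  const walk = (dir: string): void => {
    for (const name of fs.readdirSync(dir)) {
      const full = path.join(dir, name);
      if (fs.statSync(full).isDirectory()) {
        walk(full); // recurse into nested page folders
      } else if (name.endsWith(".cshtml.ts")) {
        // Key by the path relative to the Pages root, without the suffix.
        const rel = path.relative(pagesRoot, full);
        const key = rel.replace(/\.cshtml\.ts$/, "").split(path.sep).join("/");
        entries[key] = full;
      }
    }
  };
  walk(pagesRoot);
  return entries;
}
```

A webpack or esbuild configuration could feed this map straight into its entry option, and the layout then emits one script tag for the key that matches the current endpoint.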
The shared foundation absorbs that coordination work once.

More importantly, this kind of rule becomes much stronger when the project foundation supports it from the beginning. I now prefer to make this structure part of the shared base itself, so page authors do not have to remember it manually every time.

Once that foundation exists, adding screen-specific TypeScript becomes almost mechanical. I place the file where the page already lives, and the rest follows the rule.

This is the same kind of idea I wrote about in the previous note when I discussed project boundaries. I do not want developers to remember structure by discipline alone. I want the structure to be easy to follow because the system itself keeps the rule in place.

The real value is continuity of thought

I have written this many times in other articles, but I think it matters enough to repeat. Human attention does not keep a large structure in active memory as well as we like to imagine. Some people are exceptionally good at relating distant files and distant concerns in their head, but I do not design around rare ability.

I design around ordinary limits, including my own.

The same problem now appears with AI-assisted development, but in a different form. A human struggles to keep too many distant parts in mind at once. An AI often fails in the opposite way. It reads the part of the system it has been shown, then fills the missing whole with a plausible guess. Depending on the model and the workflow, that guess can look convincing while still being structurally wrong. Colocation helps here as well, because the page-level context I need to hand to an AI becomes much simpler. In many cases, I can say that the relevant implementation is this Razor Page and the files beside it, instead of assembling a scattered set of references by hand.

If one screen is spread across too many locations, the mental cost of understanding that screen rises immediately.
That cost does not show up as a dramatic architectural failure. It shows up as slower edits, more re-checking, more context loss, and more accidental omission. Keeping Razor Pages markup and page-local TypeScript together is one of the simplest ways I know to reduce that cost.

Where Cotomy ends and application infrastructure begins

This boundary is important.

Cotomy itself provides the page controller model and the page-level UI boundary. It does not dictate how a Razor Pages application should discover page scripts, how a bundler should scan the project, or how the server should emit script tags. Those are application-level integration decisions.

That separation is good. It lets Cotomy stay focused on page lifecycle and screen coordination while the host application decides how frontend assets should be organized and loaded.

In other words, the one page, one controller idea is part of Cotomy's page model. The rule that same-name TypeScript files sit beside cshtml files is my application architecture decision built around that model.

Closing

When I integrate TypeScript into Razor Pages, I do not want a parallel frontend world. I want the TypeScript side to remain attached to the same endpoint boundary that already exists on the server side.

For that reason, I keep page-local TypeScript beside each Razor Page, define one page controller type per page even when the class is nearly empty, and build a small shared foundation so the loading rule stays automatic instead of manual.

That arrangement has improved my working speed noticeably because it preserves one screen as one visible unit.

More importantly, it makes the active scope of the current task easier for me to grasp, even though I do not think of myself as someone with especially exceptional cognitive capacity. Because the relevant range is easier to see before I start changing things, regressions caused by insufficient consideration have decreased compared with how I used to work.
In practical terms, this has made it possible for me to build larger systems than before without the structure collapsing so easily under its own complexity.

At the same time, this alone is not enough to build a large system. In most real applications, various recurring behaviors end up being standardized into a shared foundation. The exact shape depends on what kind of system is being built and what architectural direction the project takes. How I organize that shared foundation while still keeping Razor Pages and Cotomy boundaries explicit is something I want to dig into in the next article.

In practice, I treat this as a small set of enforceable rules.

- Each Razor Page must have its own visible page controller type.
- Page-specific TypeScript must live beside the corresponding Razor Page and must follow the page name exactly.
- Pages must not manually register script tags or bundler entries one by one.
- Shared behaviors must belong in the foundation, not in repeated page-by-page improvisation.

C# Architecture Notes

This article is part of the Cotomy C# Architecture Notes, which reflect on backend and project-structure decisions around business systems.

Series articles: Why I Chose C# for Business Systems and Still Use It, From Global CSS Chaos to Scoped Isolation, Unifying Data Design and Code with Entity Framework, How I Split Projects in Razor Pages Systems, Integrating TypeScript into Razor Pages with Cotomy, and Shared Foundation Layout for Razor Pages and Cotomy.

Next article: Shared Foundation Layout for Razor Pages and Cotomy

Permalink: https://blog.cotomy.net/posts/csharp-architecture/05-integrating-typescript-into-razor-pages-with-cotomy/

How I Split Projects in Razor Pages Systems

Previous article: Unifying Data Design and Code with Entity Framework

In
the previous note, I wrote about data boundaries. This time, I want to focus on project boundaries inside a Razor Pages system and how I usually split them. This is not a comparison of patterns or a claim about the single best structure. It is simply the way I currently look at project structure so the system remains understandable and maintainable while I continue operating it alone.

Solution and project boundaries are still one of the best parts of C#

In C#, there is a clear distinction between a solution and a project, so individual functions or technical responsibilities can become separate projects while the solution gathers them into one development space. Other ecosystems can build something similar, and npm workspaces are an obvious example, but when I open a .NET solution in an IDE, the classification remains especially easy to understand at a glance. Since moving back to Razor Pages development, I have relied on this separation continuously, not only because it is technically possible, but because it helps me keep the system organized in a form my own brain can continue to handle. That visual clarity matters more than people sometimes admit.

The smallest Razor Pages starting point

In a small system, the starting point is simple. When I create a Razor Pages solution in .NET, I usually begin with a solution folder and one main Razor Pages application project.
At that point, the whole application is still a single project.

```mermaid
flowchart TD
    S["BusinessSystem.sln"]
    A["BusinessSystem.Web<br/>Razor Pages app"]
    P["Pages"]
    W["wwwroot"]
    PR["Program.cs"]
    AP["appsettings.json"]
    S --> A
    A --> P
    A --> W
    A --> PR
    A --> AP
```

If I am only making a very small website, I could keep adding files there and continue without much trouble, but if the site is simple enough that it only needs static pages, I probably would not be using C# in the first place. Once the system needs to handle server-side data, rules, authentication, persistence, or shared operational behavior, I have to decide where the application foundation should live. That is why I usually add a class library project called Core very early.

Adding Core before the system becomes messy

Core is where I place the application foundation that should not be buried directly inside the main web project. The exact contents differ by system, but the role is stable. It is the place where cross-cutting application structure begins to take shape.

At a simple stage, it may look like this.

```mermaid
flowchart TD
    S["BusinessSystem.sln"]
    W["BusinessSystem.Web<br/>Razor Pages app"]
    C["Core<br/>Class Library"]
    CD["Data"]
    CA["Auth"]
    S --> W
    S --> C
    C --> CD
    C --> CA
```

I suspect many Razor Pages systems end up with a similar structure even if the naming differs. In projects I joined in the past, some teams used names such as 00.Core so the common foundation would appear near the top in the IDE.
It is not beautiful naming, but from the viewpoint of visual organization, it is effective.

I also usually create a separate class library called DataModel because the data model defines the structure of the system's target domain itself, rather than one business function among many. For that reason, I normally do not split DataModel by business area. It remains one project that expresses the overall domain structure of the system.

So even in a relatively small application, I often move fairly quickly to a three-part baseline: the main Razor Pages application, Core, and DataModel. I separate them early because the foundation and the domain structure become easier to reason about when they are visible as distinct units.

Why this mattered to me so much

When I was developing in PHP, structure became a serious problem once the system reached a certain size, and Razor Pages does not magically erase that problem. If I look only at one project, it can still become crowded and hard to understand. What helps is the relationship between the solution and its multiple projects, because it becomes visually obvious in the IDE that the system is made of several large units rather than one flat mass of files. Each project also contains its own internal folder structure, so it behaves like a semi-independent function boundary. That is extremely important to me because a project is not only a compilation unit, but also a way to keep responsibility visible.

I often use Razor Class Libraries as real feature boundaries

To take advantage of this separation, I do not stop at class libraries. I also use Razor Class Libraries when I want to package shared or independent screen functions. In a small system, putting everything into the main project is acceptable, and I have done it myself, because there is no need to split things just for the sake of appearing sophisticated. But in a larger system, one project becoming too large is a serious long-term problem.
It makes continued feature addition and modification harder than it needs to be, so for me the question is not whether to split, but how to split.

The first split pattern: by business function

One way I often divide projects is by business function. I saw this kind of classification many times in larger Japanese enterprise development, including SES-style projects, so I suspect it is a fairly common pattern at least in Japan. It has obvious strengths: each domain area can accumulate its own knowledge, progress can be understood per area, responsibility sharing is easier, and cooperation between multiple people also becomes easier because the system is already grouped by business meaning.

Here is a simplified example based on a real system I built for field maintenance operations centered on cleaning work. The actual system was larger, but I am simplifying it here both for explanation and to avoid exposing the business too directly.

```mermaid
flowchart TD
    S["Field Maintenance System.sln"]
    Core["Core"]
    DataModel["DataModel"]
    Hygiene["Hygiene and Cleaning Management"]
    Schedule["Work Scheduling"]
    Daily["Daily Cleaning Operations"]
    Monthly["Monthly Cleaning Operations"]
    Closing["Closing Confirmation"]
    Escalation["Issue Tracking and Escalation"]
    Damage["Facility Damage Reports"]
    Patrol["Patrol Inspection Reports"]
    S --> Core
    S --> DataModel
    S --> Hygiene
    S --> Escalation
    Hygiene --> Schedule
    Hygiene --> Daily
    Hygiene --> Monthly
    Hygiene --> Closing
    Escalation --> Damage
    Escalation --> Patrol
```

The point is simple. At the top level of the solution, I place the foundation projects and then the major business projects organized by area.
If the business later expands into adjacent operations, I can add new projects while preserving the relative independence of the existing ones. For business systems, that is a very practical advantage.

Why that model does not always fit solo development

That said, this approach does not always match my current working reality because I now build most systems alone. In team development, one of the biggest advantages of business-function splitting is that assignment becomes easier and different people can own different domains. But when I am the only developer, that benefit is much smaller. The structure is still valid, but the strongest advantage of that pattern is no longer available to me, so I often choose a different split.

The second split pattern: by actor

What I now do more often is split by actor. Here is another simplified example, this time from an order management system I developed in the past.

```mermaid
flowchart TD
    S["Order Management System.sln"]
    Core["Core"]
    DataModel["DataModel"]
    Sales["Sales Management"]
    Customer["Customer Information"]
    Quote["Quotation Information"]
    Order["Order Information"]
    Product["Product Management"]
    Item["Product Information"]
    Stock["Inventory Information"]
    Shipping["Shipping Management"]
    Shipment["Shipment Information"]
    S --> Core
    S --> DataModel
    S --> Sales
    S --> Product
    S --> Shipping
    Sales --> Customer
    Sales --> Quote
    Sales --> Order
    Product --> Item
    Product --> Stock
    Shipping --> Shipment
```

This example is also simplified, but the atmosphere should be clear enough. The system manages orders, products, inventory, and shipment flow.
More importantly, each project can be understood as a cluster of use cases tied to a particular actor's work. Seen from that angle, the split looks more like this.

```mermaid
flowchart LR
    SalesStaff((Sales Staff))
    OfficeStaff((Office Staff))
    DevelopmentStaff((Development Staff))
    QualityStaff((Quality Staff))
    ProductionStaff((Production Staff))
    WarehouseStaff((Warehouse Staff))
    UC1(["Maintain customer information"])
    UC2(["Prepare quotations"])
    UC3(["Register orders"])
    UC4(["Maintain product definitions"])
    UC5(["Check inventory"])
    UC6(["Prepare shipments"])
    UC7(["Confirm shipment targets"])
    subgraph SalesMgmt["Sales Management"]
        UC1
        UC2
        UC3
    end
    subgraph ProductMgmt["Product Management"]
        UC4
        UC5
    end
    subgraph ShippingMgmt["Shipping Management"]
        UC6
        UC7
    end
    SalesStaff --> UC2
    SalesStaff --> UC3
    OfficeStaff --> UC1
    OfficeStaff --> UC2
    OfficeStaff --> UC3
    DevelopmentStaff --> UC4
    DevelopmentStaff --> UC5
    QualityStaff --> UC4
    QualityStaff --> UC5
    ProductionStaff --> UC4
    ProductionStaff --> UC5
    WarehouseStaff --> UC6
    WarehouseStaff --> UC7
```

Sales Management covers the use cases around maintaining customer information, preparing quotations, registering orders, and reviewing the progress of deals. Product Management covers the use cases around maintaining product definitions, checking stock conditions, and coordinating the information needed by development, quality control, and production. Shipping Management covers the use cases around preparing shipments, confirming shipment targets, and completing the operational flow in the warehouse.

In reality, systems are more detailed and messier than this.
Even so, the actor boundary is often clearer in day-to-day operation than a pure business-function taxonomy, because it maps more directly to the work people actually perform.

Why actor-based splitting is often easier for me

The practical advantage is authorization, but only because I design it that way from the beginning. When I split Razor Class Library projects, I follow a rule that each project owns the first path segment, so Sales Management lives under /sales/ and Shipping Management lives under /ship/. Then I build the authorization foundation on top of that rule so permissions can be granted and checked by segment. In other words, this is not a lucky side effect of project splitting. I intentionally make the segment boundary and the authorization boundary match.

That matters a lot in solo development because the rule is structural. I do not need to reconsider permission strategy screen by screen every time I add something. The project boundary already tells me where the authorization boundary should be. Data is still shared through DataModel, of course, and a function used by one actor may still need to read or reference data primarily maintained by another actor's area. But that access can be limited through domain models and APIs rather than by collapsing all screens into one project.

I do not know what the global mainstream is for this kind of project split. What I often saw in Japanese IT work was permission control configured screen by screen or actor by actor, and many real services also allow very detailed permission settings. That flexibility has value. But for a system I need to keep building alone over time, simplicity matters more. If a simple structure is available, I would rather use it. That is why actor-based project splitting has worked well for me.

Closing

This note focused on how I split projects in C# systems. The answer is not universal.
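The segment rule described above can be sketched as a tiny check. In my real systems this check lives in the C# authorization foundation; the TypeScript below is only a hedged illustration under names of my own choosing, showing how small the rule is once the first path segment and the permission key are the same thing.

```typescript
// Hypothetical sketch: authorization by first path segment.
// "/sales/orders/confirm" belongs to the "sales" area, so a user granted
// "sales" may access every screen under that project. Names are
// illustrative, not part of Cotomy or any real foundation API.
function areaOf(requestPath: string): string {
  const segments = requestPath.split("/").filter(s => s.length > 0);
  return segments[0] ?? "";
}

function canAccess(requestPath: string, grantedAreas: ReadonlySet<string>): boolean {
  return grantedAreas.has(areaOf(requestPath));
}

// Example: a warehouse user granted only the "ship" area.
// canAccess("/ship/shipments", new Set(["ship"]))  -> true
// canAccess("/sales/orders",   new Set(["ship"]))  -> false
```

Because the project boundary and the segment boundary coincide, granting an actor one area grants exactly one project's screens, with no per-screen configuration.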
It depends on team size, operational structure, and what kind of clarity the system needs most.

For me, the stable starting point is a Razor Pages main project, a Core project, and a DataModel project. From there, I usually choose either business-function boundaries or actor boundaries, depending on what will keep the structure easiest to operate over time.

I also build these systems with Cotomy, even though Cotomy itself is TypeScript-first. But that part deserves its own explanation. In the next note, I want to write about how I integrate Cotomy into this kind of split Razor Pages structure without blurring the project boundaries I rely on.

C# Architecture Notes

This article is part of the Cotomy C# Architecture Notes, which reflect on backend and project-structure decisions around business systems.

Series articles: Why I Chose C# for Business Systems and Still Use It, From Global CSS Chaos to Scoped Isolation, Unifying Data Design and Code with Entity Framework, How I Split Projects in Razor Pages Systems, and Integrating TypeScript into Razor Pages with Cotomy.

Next article: Integrating TypeScript into Razor Pages with Cotomy

Permalink: https://blog.cotomy.net/posts/csharp-architecture/04-how-i-split-projects-in-razor-pages-systems/

Previous article: Use Cases and Alignment in Solo Development

I do not think development methodologies should be followed as if they were laws. Not Agile, not object-oriented design principles applied as a moral code, not any particular school of thought about how good software is built. I have followed these things when they made sense. I have abandoned them when they did not.
Treating a methodology as something that must be applied correctly regardless of context is, in my experience, one of the more reliable ways to make a project worse.

Some environments simply do not support the assumptions those methods make. You can understand Agile clearly, believe in its reasoning, and still find that it does not function inside a particular organization because the structural prerequisites for it do not exist. That is not a personal failure. It is a mismatch between method and environment. The correct response is not to try harder to apply the method. It is to adapt.

What I have learned, over time, is that what actually moves projects forward is reading the real constraints accurately and adjusting behavior to match them. Not applying theory and hoping reality will cooperate. The constraints are always organization, people, feedback availability, and time. Those determine what is actually possible. Methods that ignore them will fail, regardless of how sensible they are on paper. Reliability in this context does not come from following the right process. It comes from staying in the project continuously enough to catch problems before they compound, keeping the work moving through interruptions rather than executing a methodology correctly. Continuity is the precondition. Everything else is secondary.

The Environment That Changed My Thinking

The situation that taught me this most clearly was not a large corporate project. It was a job at a small company with no technical staff to speak of. I was, in practical terms, the entire development function. There was no engineering manager to escalate to, no architect to consult with, no peer review process because there were no peers. There was a business owner with a clear idea of what the system needed to do and very little patience for explanations of why software development takes time.

The company had real operational problems that software could solve. That part was clear.
The people who were supposed to help think through requirements were exhausted. Not temporarily, not unusually — exhausted in the way that becomes the background condition of a job, the state that stops feeling like a problem because it has been the normal state for long enough. When daily work consumes all available capacity, there is no remaining attention for thinking about how daily work could be different. The possibility of improvement is not something they were skeptical about. It was something they had stopped considering entirely.

Getting feedback was structurally difficult. Not because people were uncooperative, but because everyone was operating at capacity. A formal review cycle assumed you had someone who could reliably show up for a meeting, engage with what was shown, and respond with considered input. That was not available. What was available was the person who had no time to look at what I built showing up to interrupt my work with something unrelated — and always, somehow, having plenty of time for that.

Documentation existed, but I want to be precise about what that meant in practice. I wrote things. I sent them. Occasionally they were acknowledged. But acknowledged is not the same as read, and read is not the same as used. When I produced a document to confirm shared understanding after a discussion, it would come back approved without any evidence of engagement. The approval was reflexive, not considered. That kind of blind sign-off does not create alignment. It creates a paper trail that implies alignment while the actual understanding on each side remains divergent. If anything, it made things worse, because I briefly believed the gap had been closed.

Iterative delivery ran into a different version of the same problem. Even when I delivered something and waited for a response, the response often did not come in a usable form. People did not always know how to articulate what was wrong.
They could feel that something was off, but converting that feeling into a clear description of what should be different is a skill, and it requires a kind of reflective engagement that was not available here. What happened instead was simpler and more final: if something did not feel right, people stopped using it. Not a correction. Not a complaint. Just quiet disengagement. That is the hardest kind of feedback to work with, because by the time you notice it, the damage is already done.

What I Actually Did

What I actually ended up doing looked nothing like what I would have designed from first principles.

The first thing I standardized was the development environment and design patterns, and I did it not because it was architecturally satisfying but because I was the only person who would ever touch this code, and I needed to be able to pick up from where I had been without losing the thread. Without consistent structure, resuming work after an interruption meant rebuilding context from scratch every time, and interruptions in this environment were constant. If I left a screen half-finished for two days and came back to it, a consistent pattern meant I could continue rather than spend twenty minutes re-reading my own code to remember what I had been trying to do. Consistency became a survival mechanism. If every screen followed the same pattern, I could context-switch more quickly. If the data access layer followed a predictable contract, I could move without rethinking from scratch. Standardization was not a quality goal in the abstract sense. It was a working memory management strategy. And in a solo environment, working memory is the only resource that cannot be replaced.

The communication approach changed as well. I started calling almost every day.
Not because there was always something urgent to discuss, but because I had learned that the only reliable channel was one that felt like ordinary contact rather than a formal request for input. I would frame it as a quick status update, mention something specific that had just been built, and let the conversation drift. The useful information came out sideways, embedded in casual exchanges, not in structured feedback sessions. It felt inefficient by any reasonable definition of the word. It was not. The alternative, waiting for scheduled meetings that produced nothing actionable, was the thing that was actually inefficient.

There was a specific category of difficult person I learned to avoid. Not every stakeholder was worth pursuing for detailed feedback. Some were reliably negative regardless of what was shown. Others would agree to anything in the moment and contradict it a week later, apparently without any memory of having agreed. A few were the kind of people who ask questions not to understand but to perform the act of asking — involvement as visibility, not as contribution. I stopped chasing those conversations entirely. I acknowledged their comments, gave nothing actionable back, and built without them. That sounds harsh. I stopped caring about that. The alternative was letting the noisiest people in the room determine what got built for the people who actually had to use it every day.

The larger change was in planning orientation. I had started this project trying to operate with something like Agile rhythm, on the theory that iterative delivery would allow requirements to emerge naturally and reduce late-stage correction costs. Within two months I had stopped. The feedback loop that iterative delivery depends on did not exist. I was iterating, but no one was engaging with the iterations in a way that produced useful input.
The delivery cadence was generating anxiety about whether things were moving rather than generating calibrated feedback about whether things were right.

What replaced it was a much more structured upfront planning phase. I mapped the workflows explicitly before writing code. I asked narrow, specific questions about particular scenarios during informal calls and transcribed the answers immediately. I built a mental model of the full system before building any part of it, because I had learned that partial delivery without that context would be misinterpreted. The irony of this was not lost on me. I had moved, in some respects, toward a more waterfall-shaped process, not because I believed waterfall was superior, but because waterfall's assumptions matched what was actually available.

Why Agile Did Not Work There

It is worth being honest about why it happened this way.

Agile, as a way of working, assumes things. It assumes that the people who will provide feedback can be reliably present and engaged across multiple cycles. It assumes that embracing change is something the organization has the social and operational capacity to do, rather than a phrase that sounds good but means nothing when the same person is simultaneously managing a daily operational crisis and being asked to review a software prototype. It assumes that trust between developer and stakeholder is built iteratively, which it can be, when both parties have time to build it.

None of these assumptions were true in that environment. This is not a criticism of Agile as a methodology. The premises are sound under the conditions the method was designed for. The problem is that those conditions are specific, and they are not universal.
Presenting Agile as the obviously correct approach for software development in general means ignoring the fact that most small organizations, and a surprising number of large ones, do not operate in ways that satisfy those premises.\nPart of what makes this easy to miss is that the IT industry, and the companies that work closely with it, tend to produce people who are actually capable of this kind of engagement. A stakeholder who has worked in a technical environment, or alongside developers for years, often does have the vocabulary and the cognitive habits to participate meaningfully in iterative development. In that world, Agile\u0026rsquo;s assumptions do not feel like assumptions at all. They feel like obvious facts about how reasonable adults operate. The environment I was in was simply not that world. The people there had never had any reason to develop those habits, and there was nothing wrong with them for not having done so.\nThe more theoretical objection I had to my own situation was that I felt vaguely like I was doing it wrong. The literature suggests iteration and adaptation. The practice I was converging on felt rigid and predetermined. It took longer than it should have to accept that the discomfort was not a sign of failure but a sign that the theoretical model I had internalized did not apply here. The people who wrote about iterative development were not describing this room, this company, these constraints. They were describing a condition that sometimes exists in the industry. I was working somewhere that condition did not exist, and spending time trying to force it into existence was not useful.\nThe Method Is a Tool What I actually concluded from all of this is something that resists reduction to a simple rule, which is probably correct because the situation does not admit of a simple rule.\nThe right method is the one that fits the constraints. 
Not the one that is most respected in the current period of the industry, not the one that the people doing the best work in high-resource environments happen to use, not the one that aligns with the belief system you arrived with. The constraints are the facts. The method is a tool. Match the tool to the facts.\nIn some environments, frequent informal contact will get you further than any structured review process. In others, written documentation is essential because people will not be available when questions arise. In some organizations, upfront planning prevents more problems than it creates. In others, the requirements are genuinely too uncertain to plan in detail and iteration is the only rational response. Reading which situation you are in, and adjusting accordingly rather than insisting on a pre-selected method, is the actual skill.\nI no longer feel conflicted about having abandoned Agile on that project. The project moved forward. The software got built and used. The client\u0026rsquo;s operational problems improved. That is the goal. Not adherence to process.\nThe same principle applies to every other choice that could be framed as methodology. Object-oriented design is valuable when the problem structure benefits from it and the team has the capacity to maintain the abstraction. When those conditions are absent, the value disappears and you are left paying the cost without receiving the benefit. No principle, applied rigidly without reading context, produces good results consistently.\nThis is not a relativistic position. Some approaches are genuinely better than others in most conditions. But the qualifier matters. Even good approaches fail when the environment contradicts the assumptions they depend on. 
And in practice, you are always operating inside a specific environment with specific constraints, not inside the idealized context that methodological literature tends to assume.\nThe people who describe methodology in the abstract are usually describing the best-case version of a situation they have been lucky enough to work in. That is not wrong. It is just incomplete. The complete version includes the environments where those conditions do not exist, where the feedback loop is broken, where the stakeholders cannot tell you what they want until they see what they did not want, where the project has to move forward anyway. Most of the interesting problems are in that version.\nThe project moves forward or it does not. That is what matters. Everything else is in service of that.\n","permalink":"https://blog.cotomy.net/posts/misc/how-i-work-solo-without-losing-reliability/","summary":"I do not follow development methodologies rigidly. Some environments break them entirely. What actually moves projects forward is adapting to the real constraints in front of you, not applying theory.","title":"How I Work Solo Without Losing Reliability"},{"content":"Previous article: Designing Software for AI Code Generation The Risk That Actually Kills Projects When I think back over the projects that went seriously wrong, the pattern I find most consistently is not a design failure.\nThe design was sometimes mediocre. That is true. But mediocre design rarely killed the project on its own. What actually caused the damage was something earlier and less visible: the people involved had stopped sharing a clear understanding of what was being built.\nThat sounds simple. In practice it is surprisingly hard to maintain.\nWhen I was working within Japan\u0026rsquo;s contract engineering services structure, I experienced this problem mostly from one side. A requirement would be handed down, and its ambiguity would only surface after implementation was finished. 
The fix would arrive labeled as a specification change, and everyone would quietly pretend that this had been the intent all along. That was an uncomfortable ritual, but it was still a ritual. There were established procedures for absorbing the gap.\nSolo development with a client is different. There is no handoff ceremony to partially contain the misunderstanding. If what you built is not what the client meant, that gap belongs to you entirely. And by the time it becomes visible, you are usually far further into development than the misalignment deserved.\nMisalignment Is Not a Technical Problem I want to be precise about what I mean by misalignment, because it is easy to confuse it with other problems.\nMisalignment is not a disagreement about technical implementation choices. It is something simpler and more dangerous: the client and the developer no longer agree on what the thing being built is actually supposed to do, and neither party may realize this until the costs of correction are already high.\nI have walked into this situation several times. It does not appear dramatically at first. It arrives looking like a requirements change. The client introduces a scenario you assumed was out of scope. They reject behavior you were certain was obviously correct. They describe a screen in a way that reveals they had been imagining something structurally different from what you built.\nThe engineer replacement arguments I sometimes encounter online seem to me to largely miss this point. Whether AI writes more code or less code does not change the fact that someone has to understand what to build before building it. I think those arguments are mostly nonsense. But the frustration underlying them is real: software keeps delivering something other than what people actually needed, and the explanation offered is usually technical, while the actual failure was often not.\nDesign matters. 
I have written about the importance of structural decisions across several articles in the design series, and I stand behind that work. Weak entity modeling causes real damage. Poor lifecycle coordination produces screen behavior that becomes fragile under pressure. But design problems typically appear after a project has some momentum. Misalignment can destroy a project before it earns any momentum at all.\nHow My Own Practice Changed I started treating use cases as an early deliverable because of a specific kind of failure that I could not ignore.\nI had completed something technically solid, and the client was still clearly dissatisfied. Not because the software was broken or slow. Because it did not match what they had been picturing. What they had been picturing was not written down anywhere. I had not asked. I had moved from a general description of the need into architecture, and then into implementation, and the result was internally consistent and technically reasonable, while still being wrong about what mattered.\nThat failure cost more to repair than it would have cost to prevent. And it was not the first time.\nUse cases forced me to stop treating alignment as something that would happen naturally. A use case, written at the right level of detail, describes what a person does inside the system, in sequence, in plain language. Not how the system is constructed. Not what data model underlies it. Just what happens, step by step, from the perspective of the person using it.\nA non-engineer can read a use case description. They may not be able to parse a class diagram or follow a sequence diagram without guidance. They may not be able to interpret a wireframe with confidence. But a clearly written use case is something a client who has never participated in software development can follow and respond to. 
They can check it against their own mental model and tell you whether it matches what they need.\nThat feedback, obtained before a single screen exists, prevents the most expensive category of late-stage correction. Use cases can be shared early precisely because they do not depend on the implementation. They describe intent, not structure. That makes them readable at the stage when changing direction is still cheap.\nBuilding It Together There is a second reason I now treat use cases as the first deliverable, and it is less about accuracy and more about how projects feel to the people who commissioned them.\nStakeholders need to sense that something is taking shape. When a project runs for months and the only visible output is \u0026ldquo;we are working on it,\u0026rdquo; anxiety builds. That anxiety changes the working relationship in ways that are hard to recover from. Clients begin asking more questions than they would otherwise. They suggest changes they might not have raised if they felt more confident. They start second-guessing earlier decisions.\nBut I think there is something more important than progress visibility alone, and it took me a while to clearly separate the two.\nWhat clients actually need is not only to feel that the project is moving. They need to feel that the thing being built is theirs, that they participated in shaping it, and that the direction reflects decisions they made intentionally rather than decisions that were made for them. The difference matters. A client who watched the project progress from the outside will still show up at delivery with a different mental model than the developer. A client who was involved in clarifying what each flow does will arrive at delivery with ownership of those decisions.\nUse cases create that involvement before any implementation exists. When a client reads a use case description and corrects something in it, they are not just giving feedback. They are participating in design. 
That participation changes how they receive the final result. They recognize it as something they built with you, not something delivered to them.\nUse cases, written clearly and shared early, make this kind of involvement practical.\nWhen a client can read a description of what each actor does in each major flow, they can understand what is in progress. When those descriptions are paired with rough mockups, they can begin imagining the final system. When the use cases are structured so that additional flows can be introduced naturally, they can see how the system will grow.\nI think of this as three things that need to be true at once. The work has to be understandable by the client, not only by the developer. It has to be imaginable in its final shape, not only describable in abstract terms. And it has to feel expandable in a way the client can sense, so that changes and additions feel like natural progression rather than disruptions to a finished thing.\nWhen this goes well, something else happens too.\nA client who has been reading and responding to use cases throughout the project starts approaching mockups differently. They are no longer looking at a screen layout for the first time and trying to figure out what it means. They already hold a partial mental model of how each flow works. The mockup becomes confirmation of something they already partially understand, rather than a new thing to interpret from scratch. That changes the quality of feedback they can give, and it changes the kind of questions they ask.\nThere is also a harder problem that many engineers have learned to quietly accept: most clients do not have a precise understanding of what software can and cannot do. That gap produces feedback that can be frustrating on both sides. Clients ask for things that require significant structural changes without realizing the cost. They reject limitations that are not arbitrary but are inherent to the approach being used. 
This is not a failure of intelligence. It is the expected result of never having had a way to build that understanding.\nUse cases do not solve this completely. I would not want to overstate what any process can do. But when a client has been actively reading flow descriptions and asking questions throughout the project, they do tend to develop a working sense of what the system does at each point. And from that, some understanding of what it would take to make it do something different tends to follow. It is not always enough. But it is more than what you get when the system appears as a finished object at delivery.\nNone of these three things require a working system. They require that thinking be made legible. Use cases are the most direct path to making that happen early enough to matter.\nWhen Requirements Change Anyway I want to be honest about where use cases stop helping.\nThey do not prevent clients from changing what they want. Late-stage reversals still happen, and some of them are large. In Japanese development culture there is an expression I use internally when this happens: ちゃぶ台返し, which describes the kind of reversal where the whole table gets turned over and everything has to begin again.\nThat risk remains real. What use cases do is reduce the probability of the worst version of that scenario, which is the reversal that happens because the client only now discovered that they never properly understood what was being built. If use cases were shared, reviewed, and agreed upon at an early stage, the conversation around a late change shifts. A reversal becomes harder to frame as a correction to a misunderstanding. It becomes more clearly a change of direction, and a change of direction is a more honest and negotiable thing than a discovered failure.\nThere is another side to this worth naming. Sometimes a difficult change cannot simply be refused. 
The system as designed may be technically correct but operationally wrong, and refusing to accommodate the change might mean delivering something that no one will actually use. That is its own kind of failure, and it is one that engineers often underweight.\nWhen a client has been involved from the beginning through use cases, that conversation changes shape as well. They are not arriving with a demand and waiting for a verdict. They understand enough of the constraints to think alongside you. And clients who know their own business deeply will sometimes come up with approaches that a developer would not have found alone. I have experienced this more than once. The client\u0026rsquo;s domain knowledge, once the technical situation is made legible to them, becomes a resource rather than an obstacle. The problem becomes a shared one, and shared problems tend to get better solutions.\nThat is not a guarantee either. But it is considerably more likely to happen when the client has been a participant throughout than when they have been a passive recipient of updates.\nThe Real Goal Is Building Together Solo development is not especially hard at the implementation level. A single capable developer can build large and complex systems. The hard part is keeping the direction aligned with the person who ultimately needs to use and live with the result. In solo development, there is no team layer to absorb that misalignment, no handoff ceremony to surface the discrepancy. The gap belongs entirely to the developer, and it grows quietly until delivery makes it visible.\nThis kind of direct alignment is most achievable at the scale where the developer and the client can communicate without heavy intermediaries. Once a project grows large enough that the client side becomes a corporate IT department, or the commercial relationship adds layers of vendors and subcontractors in between, the mechanics change entirely. 
The direct line disappears, and with it much of what makes this approach work. What I am describing here applies most directly to the scale where that direct line still exists.\nEverything I have written about in this series, from compact screen structure to careful entity design , is about building systems that hold together structurally and remain maintainable over time. All of that matters. But it matters at a later stage. Before structure can matter, the direction has to be right. And the direction is only reliably right when the client is not a passive recipient but an active participant in deciding what is being built.\nThat is the real goal. Not use cases specifically, but building together. Use cases are how I consistently get there. They are not a formal methodology I follow rigidly, and they are not the only possible approach. They are the tool that has most reliably prevented the kind of failure that no amount of good architecture can fix.\nWhen it works well, everyone gains something from it. The client ends up with a system that reflects genuine decisions they made, not a product they received and are now learning to interpret. The people who actually operate the system day to day see their real workflows represented, because those workflows were part of the conversation from the start. And the developer works with clearer direction, receives fewer late-stage corrections that require structural rework, and delivers to a client who already understands what they have.\nThat is as close to a genuine win for everyone involved as solo development tends to get. Not because use cases are magic, but because building things together tends to produce better outcomes than building things for people.\nAlignment does not maintain itself. Left unattended, it decays quietly until the distance becomes visible only at delivery. It must be enforced, and enforcing it requires something legible enough for both sides to engage with. 
That is exactly why I rely on use cases.\nIf that alignment breaks down, good architecture will not save the project. No amount of careful design will save a project that is building the wrong thing.\nNext article: How I Work Solo Without Losing Reliability ","permalink":"https://blog.cotomy.net/posts/misc/use-cases-and-alignment-in-solo-development/","summary":"Solo development does not fail because of design alone. It fails when the shared understanding of what to build drifts. Use cases are the simplest and most effective tool to prevent that.","title":"Use Cases and Alignment in Solo Development"},{"content":"This note continues from Why Modern Developers Avoid Inheritance , Inheritance, Composition, and Meaningful Types , and Designing Meaningful Types .\nIntroduction In the previous articles, I discussed why many modern developers tend to avoid inheritance, and why inheritance and composition need to be placed in the right architectural layer.\nI do not personally reject inheritance as a concept. Cotomy itself uses inheritance in its framework foundation, and I still think inheritance is useful when the base type has stable meaning.\nAt the same time, when I look back at the business systems I have actually built, I notice something very simple. In entity design, inheritance almost never survives.\nThis article is about that specific point.\nScope of This Article This is not an attempt to explain object-oriented theory from the beginning.\nThere are many ways to interpret object-oriented design, and I do not claim that my own practice should be treated as a universal rule.\nWhat I want to describe here is narrower than that. 
This is a summary of how I have tended to model entities in real system development, and why inheritance usually did not survive for long when I tried to introduce it there.\nSo the focus is practical rather than theoretical.\nHow Modeling Usually Starts in Real Projects When I start a new system, the first step is usually not class design.\nNormally I begin by identifying the business domain, talking with the people involved, and organizing the required operations. I suspect many engineers would recognize roughly the same flow in their own work. In a reasonably formal project, that often leads to use cases and use case descriptions before the model is fixed.\nIn smaller internal systems, however, the process is sometimes lighter than that.\nIf I am building a small system for internal use and I already understand the business domain well, I may not write every use case in a formal document. I may postpone class diagrams as well. In some cases, I implement entity classes directly while the model is still being clarified through the screen and operation flow.\nMore recently, AI tools have made it much easier to reconstruct documentation from an implemented system. Because of that, there are times when I knowingly take a shortcut: implement the system first, and organize the use cases or model documentation afterward.\nI do not think that is the right approach for larger systems. But when the whole system is small enough to build in one or two months, the cost of a design mistake is limited, and iteration can be faster than trying to freeze the entire model too early.\nThe important point is not whether the process looks formal. 
The important point is whether the model becomes clear enough before it spreads through the system.\nWhy Entity Modeling Matters So Much Once the business domain and the goal of the system are understood, the next step is to design the model.\nThat part matters more than almost anything else.\nWhat the system represents, how that information is persisted, how screens read it, and how operations change it are all built on top of the model. If the model is unstable, every screen and every API becomes harder to keep consistent.\nFor that reason, entity design is one of the areas I treat most carefully.\nWhy Inheritance Rarely Survives There Cotomy uses inheritance in the framework itself. CotomyElement and CotomyForm are obvious examples, and that structure is intentional because those classes represent stable roles in the UI layer.\nEntity design usually does not feel like that.\nI may consider inheritance briefly when I first see similar fields across several entities. But in real business systems, those similarities are often shallow. Once the operational meaning of each entity becomes clearer, the common base class usually starts to feel forced.\nI have created such inheritance structures before, but they were often removed later.\nA Typical Business Example A simple example is an internal order management system with three entities: Estimate, Order, and Shipment.\nAt first glance, they look similar. Each one has line items, and each line item may contain a product code, quantity, and amount.\nThat superficial similarity can make inheritance look attractive. It is easy to imagine a shared transaction base with a shared line type underneath it.\nThe Inheritance Model That Looks Reasonable at First This is the kind of structure that an experienced engineer would probably reject very quickly, but that can still look quite reasonable when design experience is limited. 
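Written out as code, the tempting shared-base shape is easy to produce. The sketch below is illustrative TypeScript, not code from any actual system described here; the field types and constructors are assumptions added for the example.

```typescript
// The shared-base model that looks reasonable at first glance.
// Field names follow the example entities; types are assumptions.
class Transaction {
  constructor(public id: number, public date: string) {}
}

class TransactionLine {
  constructor(
    public productCode: string,
    public quantity: number,
    public amount: number,
  ) {}
}

// Each business document becomes a thin subclass of the shared base.
class Estimate extends Transaction {}
class Order extends Transaction {}
class Shipment extends Transaction {}

class EstimateLine extends TransactionLine {}
class OrderLine extends TransactionLine {}
class ShipmentLine extends TransactionLine {}
```

On paper, every duplicated field disappears, which is exactly what makes the shape persuasive before the operational differences surface.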
I remember it feeling more persuasive to me earlier on than it does now.\nclassDiagram class Transaction { id date } class TransactionLine { productCode quantity amount } class Estimate class Order class Shipment class EstimateLine class OrderLine class ShipmentLine Transaction \u0026lt;|-- Estimate Transaction \u0026lt;|-- Order Transaction \u0026lt;|-- Shipment TransactionLine \u0026lt;|-- EstimateLine TransactionLine \u0026lt;|-- OrderLine TransactionLine \u0026lt;|-- ShipmentLine Estimate \u0026#34;1\u0026#34; *-- \u0026#34;*\u0026#34; EstimateLine Order \u0026#34;1\u0026#34; *-- \u0026#34;*\u0026#34; OrderLine Shipment \u0026#34;1\u0026#34; *-- \u0026#34;*\u0026#34; ShipmentLine At first glance, this feels reasonable. The names are familiar, the shared fields look obvious, and the duplication appears to disappear neatly.\nIf the design discussion stops at field similarity, this kind of diagram can easily look like a good object-oriented model.\nWhy That Model Usually Breaks Down The first problem is naming.\nI have a hard time finding a base concept that remains both meaningful and stable across Estimate, Order, and Shipment. Calling them all Transaction may look convenient, but the meaning is already unstable. An estimate is not the same kind of thing as an order in operational terms, and a shipment is not simply another version of the same concept.\nThis is where the meaningful type question becomes important again. Even if several entities share some fields, that does not mean there is a meaningful base entity waiting to be discovered.\nThe Domain Differences Matter More Than the Shared Fields The real problem becomes clearer once the business differences are examined.\nAn estimate may have the same product appear more than once for different price options. An order usually expects each product to appear only once. 
A shipment goes further: its lines represent physical dispatch units, not copies of order lines, and what gets shipped in what grouping is driven by logistics operations rather than the order structure itself.\nThese are not variations on the same concept. Each entity carries different rules, a different operational identity, and a potentially different structural shape from the start. The line items look similar, but they do not mean the same thing: they represent different stages of business operation with different rules, different identities, and different future change directions.\nLooking at the three entities separately makes this structural divergence easier to see.\nclassDiagram class Estimate { estimateNo date } class EstimateLine { productCode quantity unitPrice } class Order { orderNo date } class OrderLine { lineNo productCode quantity amount } class Shipment { shipmentNo orderNo shipDate } class ShipmentLine { orderLineNo shippedQuantity trackingNumber } Estimate \u0026#34;1\u0026#34; *-- \u0026#34;*\u0026#34; EstimateLine Order \u0026#34;1\u0026#34; *-- \u0026#34;*\u0026#34; OrderLine Shipment \u0026#34;1\u0026#34; *-- \u0026#34;*\u0026#34; ShipmentLine The differences are visible once each entity stands on its own. EstimateLine does not require a unique row key at the product level, because the same product may appear more than once for different quantity tiers. OrderLine uses a line number to uniquely identify each row, since the same product is not expected to appear twice. ShipmentLine links back to the original order line and carries a tracking number, which neither estimate nor order lines need.\nThat is why the inheritance line starts to become dangerous. The shared structure suggests a stronger conceptual unity than the domain actually provides.\nWhy the Shared Base Becomes a Liability One might ask whether a shared base class used only as a holder for line items could have worked. 
If the line structures were truly identical across all three entities, that argument might have some weight. But that condition is rare in practice. And even when the structures happen to look identical at the start of a project, the moment an inheritance relationship is committed to, it introduces a structural constraint that is difficult to undo once screens, services, and persistence rules have formed around it.\nLater, when the estimate logic grows in one direction and the shipment logic grows in another, the base class becomes a place where old assumptions remain embedded in the structure. The coupling is not only about shared code. It is about shared shape.\nInheritance is strong structural coupling. It fixes both structure and meaning at the same time. Once several screens, services, and persistence rules depend on that hierarchy, separating the entities again becomes expensive.\nIf the domain is likely to diverge, that strength becomes a cost rather than a benefit.\nWhat Actually Gets Reused That does not mean reuse disappears.\nIn practice, reuse often exists somewhere else. For example, the UI parts used to display or edit line items may be shared. In the order management system I described, I do share a CotomyElement subclass that handles the visual structure of a line item row. That shared part works across estimate, order, and shipment screens without any of those entities needing to carry an inheritance relationship to each other.\nThat kind of reuse feels much safer because the shared concern is local and visible. It does not pretend that the entities themselves all mean the same thing.\nThe Structure I Prefer Instead When I want to share behavior across entities, I usually prefer interfaces, composition, or just independent classes that happen to follow the same rule.\nInterfaces are especially useful when the shared concern is behavioral rather than structural. 
Unlike inheritance, they do not require the classes themselves to share one structural line. That makes them easier to apply across different entity types when only a narrow cross-cutting operation needs to be shared.\nIn practice, I usually define an interface when some part of the system needs to treat several entity types under the same narrow rule. In other words, the interface is often driven by a real cross-cutting requirement rather than by a desire to make the model look more abstract.\nA simplified version of that idea looks like this.\nclassDiagram class ILineItemContainer { lineItems } class ILineItem { productCode quantity } class Estimate { lineItems } class EstimateLine { productCode quantity unitPrice } class Order { lineItems } class OrderLine { lineNo productCode quantity amount } class Shipment { lineItems } class ShipmentLine { orderLineNo productCode shippedQuantity trackingNumber } Estimate ..|\u0026gt; ILineItemContainer Order ..|\u0026gt; ILineItemContainer Shipment ..|\u0026gt; ILineItemContainer EstimateLine ..|\u0026gt; ILineItem OrderLine ..|\u0026gt; ILineItem ShipmentLine ..|\u0026gt; ILineItem Estimate \u0026#34;1\u0026#34; *-- \u0026#34;*\u0026#34; EstimateLine Order \u0026#34;1\u0026#34; *-- \u0026#34;*\u0026#34; OrderLine Shipment \u0026#34;1\u0026#34; *-- \u0026#34;*\u0026#34; ShipmentLine This kind of structure lets shared logic work at two levels. Container-level logic can depend on a line collection contract, while line-level logic can depend on a smaller shared interface for the fields that really are common.\nThat is one of the main advantages of interfaces here. They let me define only the common operation or contract that needs to be shared, without pretending that the full class structure is also common.\nAt the same time, the concrete line types remain separate. 
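The same structure can be sketched in TypeScript. This is illustrative code rather than the actual entities of any system mentioned here; the field types and the totalQuantity helper are assumptions added for the example.

```typescript
// Narrow contracts shared across otherwise independent entities.
// Names follow the diagram; types and the helper are illustrative.
interface ILineItem {
  productCode: string;
  quantity: number;
}

interface ILineItemContainer {
  lineItems: ILineItem[];
}

// The concrete line types keep their own shapes and do not
// inherit from each other; they only satisfy the narrow contract.
class EstimateLine implements ILineItem {
  constructor(
    public productCode: string,
    public quantity: number,
    public unitPrice: number,
  ) {}
}

class OrderLine implements ILineItem {
  constructor(
    public lineNo: number,
    public productCode: string,
    public quantity: number,
    public amount: number,
  ) {}
}

class Estimate implements ILineItemContainer {
  public lineItems: EstimateLine[] = [];
}

class Order implements ILineItemContainer {
  public lineItems: OrderLine[] = [];
}

// Container-level logic depends only on the contract,
// not on which concrete entity it is handed.
function totalQuantity(container: ILineItemContainer): number {
  return container.lineItems.reduce((sum, line) => sum + line.quantity, 0);
}
```

A function like totalQuantity works against Estimate and Order alike, while EstimateLine and OrderLine keep their own structures and stay independent of each other.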
That is important because the lines are not interchangeable even if some operations can treat them through the same narrow contract.\nThat freedom matters more to me than removing a little duplication from the class definitions.\nConclusion Avoiding inheritance in entity models is not the same thing as rejecting object-oriented programming.\nThe real issue is that business entities often represent different meanings even when they look superficially similar. When that is true, forcing them into a shared base class usually creates more trouble than it removes.\nThat is why, in my own work, inheritance appears much more naturally in framework foundations than in entity design. The framework roles stay stable. Business entities often do not.\nFor the same reason, Cotomy does not assume entity inheritance in its application patterns.\nDesign Series This article is part of the Cotomy Design Series.\nSeries articles: CotomyElement Boundary , Page Lifecycle Coordination , Form AJAX Standardization , Inheritance and Composition in Business Application Design , API Exception Mapping and Validation Strategy , Why Modern Developers Avoid Inheritance , Inheritance, Composition, and Meaningful Types , Designing Meaningful Types , and Object-Oriented Thinking for Entity Design.\nPrevious article: Designing Meaningful Types Next article: Entity Identity and Surrogate Key Design ","permalink":"https://blog.cotomy.net/posts/design/09-object-oriented-thinking-for-entity-design/","summary":"Why inheritance is rarely used when designing entities in business systems.","title":"Object-Oriented Thinking for Entity Design"},{"content":"This is a continuation of CotomyApi in Practice . The previous article focused on transport behavior and exception handling. This time the focus is how debugging is actually done when developing with Cotomy and how runtime behavior can be inspected without changing application code.\nIntroduction One design concern when creating Cotomy was debuggability. 
When I first started using TypeScript, debugging was often harder than writing the code itself. Older JavaScript environments created a large gap between the TypeScript source and the transpiled output. In the browser, the code that actually ran often did not resemble the TypeScript code I had written, so stepping through behavior became confusing.\nModern editors such as VS Code improved this situation, but in practice the setup was not always simple. Debug configurations were often fragile, browser attachment could be unreliable, and the result was not always worth the friction for normal screen development. Because of that experience, I thought carefully about debugging when the internal utilities were reorganized into Cotomy.\nIn practice, browser debugging is still the most reliable method. Cotomy now targets ES2020 or later, so the distance between TypeScript and executed JavaScript is much smaller than it used to be. Modern browsers are easier to debug directly, and many screens contain only a small amount of custom TypeScript, so source-level debugging is less painful than before.\nThat does not remove the need for runtime inspection. When a form submits unexpected values, when a page controller initializes in the wrong order, or when API-bound HTML is filled incorrectly, logs are still useful. That is why Cotomy includes a small debug logging mechanism instead of trying to replace normal browser tools.\nDebug Logging Philosophy Debug information should not always be emitted. In some business systems it may not be catastrophic if logs remain enabled, but that is usually not the right default. During development, detailed runtime information is helpful. In production, unnecessary log output should usually be minimized.\nCotomy handles this by categorizing debug logs. The framework does not treat debugging as one permanent on or off switch for all runtime behavior. 
Instead, it separates logs by concern so that API traffic, form payload inspection, page initialization, and other runtime areas can be enabled independently.\nCotomy Debug Logging System The implementation is centered on the debug classes documented in the Cotomy reference. See CotomyDebugFeature . Cotomy defines a CotomyDebugSettings class and a CotomyDebugFeature enum. The mechanism is deliberately small. It does not use a remote logger or a framework-wide diagnostics pipeline. It reads debug flags directly from localStorage.\nThe storage key prefix is cotomy:debug. There are two levels of enablement.\nGlobal enable uses the key cotomy:debug. If that value is true, all debug categories are considered enabled.\nFeature-specific enable uses keys such as cotomy:debug:api or cotomy:debug:formdata. When a specific feature key is true, that category logs even if the global flag is not enabled.\nThe class methods are thin wrappers around those keys. enableAll() sets the global key to true. disableAll() sets the global key to false. enable(feature) sets one feature key to true. disable(feature) sets one feature key to false. clear() removes the global key only. clear(feature) removes the key for that specific feature.\nBecause the state lives in localStorage, developers can toggle debugging directly from the browser console. That is the practical goal of the design. Runtime inspection can be turned on at the browser level without editing page code, rebuilding, or adding temporary console statements into the application.\nDebug Feature Categories The current categories are defined by the CotomyDebugFeature enum. See CotomyDebugFeature . Their meaning becomes clearer when you trace where they are used in the framework. Each category is listed below with its purpose and a typical scenario.\nApi: logs request and response behavior around CotomyApi operations. Typical scenario: inspecting request URLs, request bodies, response bodies, or error responses during submit and load flows.\nFill: logs form input fill operations in CotomyEntityFillApiForm. Typical scenario: checking which input name is being filled and what value is being written into it.\nBind: logs data binding into elements that use data-cotomy-bind. Typical scenario: inspecting why displayed text does or does not match API response data.\nFormData: logs the FormData entries generated from a form before submit. Typical scenario: confirming the exact submitted payload, especially with datetime-local conversion and browser-native form behavior.\nHtml: logs CotomyElement creation paths that build HTML or apply scoped CSS. Typical scenario: inspecting how a CotomyElement was created from HTML, tag metadata, or inline scoped CSS.\nPage: logs CotomyPageController initialization. Typical scenario: confirming when the page controller starts its initializeAsync flow on load.\nFormLoad: logs entity form load warnings around entity key handling. Typical scenario: investigating cases such as an existing entity key combined with a 201 Created response.\nThese categories are intentionally narrow. They are tied to concrete runtime boundaries inside Cotomy rather than broad generic logging levels.\nEnabling Debug Logs The simplest way to enable logs is from the browser console by setting localStorage flags directly. 
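To make the two levels of enablement concrete, here is a minimal sketch of the check that this key scheme implies. It is a hypothetical reimplementation, not the actual Cotomy source, and a Map stands in for localStorage so the snippet is self-contained:

```typescript
// Hypothetical stand-in for window.localStorage so the sketch runs anywhere.
const storage = new Map<string, string>();

const PREFIX = "cotomy:debug";

// Two levels of enablement: the global key first, then the per-feature key.
function isDebugEnabled(feature: string): boolean {
  if (storage.get(PREFIX) === "true") return true;        // global enable
  return storage.get(`${PREFIX}:${feature}`) === "true";  // feature-specific enable
}

storage.set("cotomy:debug:api", "true");
console.log(isDebugEnabled("api"));      // true (feature key)
console.log(isDebugEnabled("formdata")); // false

storage.set("cotomy:debug", "true");     // global flag covers every category
console.log(isDebugEnabled("formdata")); // true
```

Because the flags are read at check time rather than cached at startup, toggling a key in devtools takes effect on the next framework operation without a reload of the application code.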
That works because CotomyDebugSettings reads those keys each time it checks whether a category is enabled.\nEnable all debug output:\nlocalStorage.setItem(\u0026#34;cotomy:debug\u0026#34;, \u0026#34;true\u0026#34;); Disable all debug output:\nlocalStorage.setItem(\u0026#34;cotomy:debug\u0026#34;, \u0026#34;false\u0026#34;); Enable one feature:\nlocalStorage.setItem(\u0026#34;cotomy:debug:api\u0026#34;, \u0026#34;true\u0026#34;); localStorage.setItem(\u0026#34;cotomy:debug:formdata\u0026#34;, \u0026#34;true\u0026#34;); Disable one feature:\nlocalStorage.setItem(\u0026#34;cotomy:debug:api\u0026#34;, \u0026#34;false\u0026#34;); Clear the global flag through the framework API:\nCotomyDebugSettings.clear(); CotomyDebugSettings.clear(CotomyDebugFeature.Api); CotomyDebugSettings.clear(CotomyDebugFeature.FormData); Reset all debug-related keys manually from the browser console:\nlocalStorage.removeItem(\u0026#34;cotomy:debug\u0026#34;); localStorage.removeItem(\u0026#34;cotomy:debug:api\u0026#34;); localStorage.removeItem(\u0026#34;cotomy:debug:formdata\u0026#34;); These correspond to the same storage keys that CotomyDebugSettings uses through enableAll(), disableAll(), enable(feature), disable(feature), and clear(). The important point is not the class call itself. The important point is that runtime inspection can be switched on from devtools without modifying application code.\nHow Debugging Works in Practice The normal workflow is simple.\nOpen browser devtools. Enable the relevant debug category in the console. Reproduce the behavior on the screen. Inspect the emitted logs in the console. Chrome is my usual development browser, and in practice this is still the fastest path. If I need to step through code, I use the normal browser debugger. 
If I need to inspect runtime boundaries such as generated FormData, bind targets, page initialization, or API response handling, I enable the relevant Cotomy debug category and reproduce the screen flow.\nThis combination is important. Cotomy does not assume that logs alone are enough, and it also does not assume that source-level stepping is enough. Browser debugging handles execution flow. The debug categories make runtime state easier to inspect at the exact framework boundaries where business screens usually become confusing.\nConclusion Cotomy does not try to replace browser debugging tools. The practical answer is still to use the browser debugger for execution flow and stack inspection.\nWhat Cotomy adds is a small runtime logging layer around clear framework boundaries: API transport, form payload creation, binding, HTML generation, page initialization, and form loading. That fits the broader design direction of the framework. The goal is predictable DOM behavior, explicit runtime boundaries, and optional inspection when the screen needs it.\nWith that combination, developers can inspect runtime behavior with minimal tooling friction.\nUsage Series This article is part of the Cotomy Usage Series, which focuses on concrete runtime behavior and day-to-day API usage.\nSeries articles: CotomyElement in Practice , CotomyElement Value and Form Behavior , CotomyForm in Practice , CotomyApi in Practice , and Debugging Features and Runtime Inspection in Cotomy.\nPrevious article: CotomyApi in Practice ","permalink":"https://blog.cotomy.net/posts/usage/debugging-features-and-runtime-inspection-in-cotomy/","summary":"How debugging is handled in Cotomy, from browser debugging to localStorage-based runtime log categories.","title":"Debugging Features and Runtime Inspection in Cotomy"},{"content":"Previous article: Working Safely with AI Coding Agents Vibe Coding and Real Systems Lately, I keep seeing people argue that software can be built with vibe coding alone. 
I do not reject that idea completely. For very small tools, especially things that are short-lived and not especially risky, AI can often get you somewhere useful surprisingly quickly, and I also use it that way myself from time to time.\nWhat I do not think is that this truth extends very far. Once the target becomes a real business system, the conditions change. Security, long-term maintenance, and data integrity all start to matter in a different way. And repeated modification matters much more than the first successful demo.\nThe place where that difference feels most obvious to me is the earliest design stage. In foundational design, there is rarely one obviously correct answer waiting to be discovered. The right structure depends on how the system will actually be operated, what the users are comfortable with, what shape the existing system already has, and even how much capacity the internal information systems team has to support it later.\nThose are not only technical questions. They are questions about work style, trade-offs, constraints, and ownership. That is why I do not think it is very realistic to ask AI to decide the broad design by itself. It can help once a direction already exists. But deciding that direction still feels like a human responsibility to me.\nHow Much of My Own Code Comes From AI If I had to describe it as a rough feeling, I think around eighty percent of the code output in my recent work now comes from AI or AI agents in one form or another. I could probably push that number even higher if I wanted to.\nEven so, I still tend to write the core parts myself. That includes foundational design decisions and important entity design. Part of the reason is simple. Those parts shape the meaning of the system, and if I stop understanding them directly, the convenience I gain later starts to feel fragile.\nThere is another reason as well, and in practical terms it may matter just as much. 
If I stop writing important parts of the program consciously by myself, I think my coding ability would fall fairly quickly. And once that falls, it becomes much harder to judge whether the code produced by AI is actually good, only superficially plausible, or quietly dangerous.\nThat matters even more when something breaks. AI may help investigate a problem, but it will not always be able to do that. And even when it can help, someone still has to decide what information matters, which logs are relevant, which files are probably involved, and what should be treated as signal rather than noise. To do that well, I think it is important to keep myself in a state where I can still build the system directly.\nTo borrow a comparison from people operating at a far higher level than I am, I have heard that even though aircraft can land automatically now, pilots still often land manually. I mention that partly as a joke, but only partly. Delegating work and retaining skill are not the same thing.\nWhat Keeps Feeling Wrong What keeps bothering me about AI coding is not simply that AI makes mistakes. Human engineers make mistakes all the time as well. The deeper issue, as I currently see it, is that AI does not stably hold broad design intent. It works from a limited local range and then produces code that is often locally convincing.\nWhen I look at online discussions around AI coding or vibe coding, I also keep noticing a difference in tone that is hard to ignore. This is only my own impression, so I do not present it as a measured fact. Still, people who say that AI can now build everything, or that it already replaced the need for real engineering judgment, often do not sound like people who have spent much time dealing with long-term maintenance, production operation, or the consequences of software decisions after release. 
By contrast, people who are more cautious, meaning they actively use AI but do not feel comfortable handing everything over to it, often sound like people who have spent more time living with exactly those consequences.\nThat may simply reflect the difference between building something once and having to live with it afterward. If you have spent enough time maintaining software, it becomes much easier to feel how dangerous locally plausible code can become over time. That is where this limitation starts to matter.\nRepeated maintenance is where the real risk begins to appear. A change can look reasonable inside the visible area while still damaging the larger structure. Human engineers often feel a vague discomfort before they can fully explain it. They think something feels wrong here, or this may become dangerous later. That kind of unease comes partly from long experience.\nAI generally does not have that kind of feeling. It can describe patterns. It can imitate caution. But that is not the same thing as actually sensing structural danger in the way an experienced owner often does.\nThere is another part of this that I find hard to ignore. AI will sometimes ignore direct instructions, ignore AGENTS-style personalization, ignore existing implementation that should obviously be respected, and sometimes even ignore checks that are sitting right next to the area being changed. I cannot say exactly what triggers that shift. But I do think there are moments when the whole output suddenly becomes less stable, less careful, and more half-finished than it was just before.\nContext Is Narrower Than It Looks One thing becomes obvious very quickly when using coding agents such as Codex. The amount of history and project context they actually work from is much smaller than people sometimes imagine.\nI do not say that as a complaint. The tools are still useful enough that I use them heavily. 
But they are not continuously understanding the whole project in the way a human maintainer with long ownership gradually does. They are working from a moving slice, and sometimes that slice is enough while sometimes it clearly is not.\nPart of me also suspects that this is not merely a temporary weakness of the tools. If left completely unconstrained, AI systems would probably keep consuming more and more resources. And if these tools are going to exist as commercial products that ordinary people can actually use, then their working range has to be limited somehow. So my guess is that they are built quite intentionally around a balance between the amount of context and computation they are allowed to consume and the practical value they are expected to return.\nThat point matters because fluency can easily create the illusion of broad understanding. The answer sounds coherent, so it is easy to imagine that the model is holding the whole system together in its head. I do not think that is usually what is happening.\nWhat makes this more irritating is that the answer will sometimes sound as if the model checked the implementation or read the relevant documents carefully when in fact it did nothing of the kind. It simply fills the gap with a plausible-sounding guess. That happens far more often than I would like.\nWhy I Prepare Instruction Documents This is one reason I often use ChatGPT to prepare instruction documents for Codex or Copilot. Longer conversations help me accumulate and refine design intent before the coding agent starts editing files.\nI cannot prove this in any strict documented way, and I do not want to overstate it. Still, ChatGPT often feels as if the effect of accumulated conversation history is larger than what the immediate prompt alone would suggest. At least in actual use, there are times when a response seems influenced by a broader accumulated interaction rather than only the text directly in front of it. 
I remember at least one case, probably around the GPT-4 period, where it brought up information that had not appeared anywhere in that chat, and the response gave me the impression that something from earlier interaction had carried over into the answer.\nThere is another practical point as well. With the VS Code connection and similar tooling, I can to some extent control what information is brought closer to the working context and what is not. So my impression is that the result is influenced not only by the immediate prompt but also by a larger body of accumulated information that I can partially steer into view.\nThe document is not there because I enjoy paperwork. It is there because I need some way to compress design intent into a form the agent can actually follow. Once I started working this way, the amount of implementation that ignored the intended design dropped significantly.\nThat does not mean documents solve everything. They do not remove the need for review. But they do reduce the chance that implementation starts from an underspecified idea and then drifts toward a convenient but wrong structure.\nWhat That Means for Design Once I started thinking about it that way, the design side of the problem stopped feeling especially abstract. If AI only sees a relatively narrow area each time it works, then I think the software also has to be shaped so that each change can stay relatively narrow. At least right now, that still seems to me like the most practical response.\nPut a little more plainly, compact feature boundaries matter more. A system does not become safe merely because prompts improve. It becomes safer when the structure itself limits how far one local mistake can spread.\nI do not mean that structure alone solves everything. Of course it does not. But if one feature can be understood and edited in a compact unit, the agent is less likely to damage distant parts of the system without anyone noticing. 
That matters a great deal in AI-assisted development.\nWhy Razor Pages Plus Cotomy Fits This Well In my own system, a screen is commonly built around files such as:\nPage.cshtml\nPage.cshtml.cs\nPage.cshtml.css\nPage.cshtml.ts\nThat structure brings the elements needed for one screen into the same area. Markup, server-side page handling, screen-specific styling, and client behavior stay close together, so the feature boundary is naturally compact.\nI should also explain why I keep talking about screens here. Whether a system is written in a strongly object-oriented style or not, many of the tickets that appear during integration testing or after release still begin from a screen. That is where users notice the problem, that is where operators report it, and that is usually where investigation starts. So even if the underlying cause is deeper, the practical entry point for debugging and correction is often the screen boundary.\nThe model still matters, of course. If the model is weak, the whole system becomes unstable. But if the model has been examined with reasonable care, I do not think it is common for broad model-level revisions to happen constantly afterward. And when they do happen, the situation is usually serious enough that the team has to respond with full force anyway. At that point, the question is no longer whether AI happened to produce especially elegant local code. The problem is that the system itself now requires a large structural correction.\nI should be clear here. I did not originally choose that structure for AI. I ended up there because I needed to write a large amount of CRUD, while also needing some parts of the system to remain server-rendered so they could be indexed by search engines. 
I also needed customer-facing features and partner-facing features to coexist safely without turning into a mess, and I wanted the work to stay organized in a way that made progress easier to measure and schedules easier to plan.\nRazor Pages was important in that process. I did not first invent the full structure in my head and then go looking for a framework that matched it. Rather, I was trying to solve those practical constraints, arrived at Razor Pages because its page boundary fit that kind of work, and then kept developing the client-side structure from there. So the current shape was not designed from nothing by me alone. It emerged through that development path.\nLooking back, I think Razor Pages itself also helped push the work toward a cleaner classification of CRUD behavior. List, detail, create, edit, and related screen transitions could be treated more explicitly as page-level units, and that made it easier to organize both the server side and the client-side behavior around the same operational boundary. Cotomy then grew on top of that reality, including the TypeScript side, rather than replacing it with a completely unrelated model.\nAnother part that mattered was the .NET solution and project structure itself. Managing multiple projects inside one solution was not merely an IDE convenience for me. For a large business system, it provided a practical way to split responsibilities into physical boundaries and keep the whole system from collapsing into one oversized unit.\nThat kind of decomposition is not unique to .NET. Other ecosystems also have their own monorepo and workspace models. Still, in my own experience, the solution and project structure in .NET made that separation especially explicit and operationally useful when the system became large and CRUD-heavy.\nThat was the real motivation. It was not designed as some AI-era pattern from the beginning. I was simply trying to build a large system without getting lost while building it. 
But in practice it seems to work quite well with AI coding agents. The agent can usually inspect a smaller and more meaningful set of files, and the working area for one change tends to stay relatively compact. Because of that, the impact of a single modification is often easier to reason about.\nThat matters more than it may sound. When the files for one screen are scattered too widely, the agent has more room to miss something important. When they stay close together, the local working set becomes easier for both the agent and the human reviewer to hold. And because so much real maintenance work begins from a screen-level ticket, that compactness is useful in ordinary operations, not only in AI-assisted implementation.\nA Similarity to Japanese SI Structure Thinking about this also brings back memories of older Japanese SI work. I do not mean that in any nostalgic way. There were many parts of that world that I disliked very strongly, and even now I still think much of that structure was unhealthy.\nStill, one part of it looks a little different to me now. What I remember is not an elegant theory but the feel of the work itself. Design and coordination sat on one side, implementation sat on the other, and the implementation work was usually divided into pieces small enough to hand out screen by screen.\nAt the time, I mostly saw the bad side of that. It made the whole system harder to see. It encouraged local optimization. And it often produced a rigid way of working that was unpleasant to live with. I still think all of that is true.\nBut I also understand something now that I did not appreciate as much back then. That shape was also a way to keep development under control. If the work is divided into units small enough to assign, review, and estimate, then progress becomes easier to read and schedules become easier to plan. 
The final result also depends a little less on the strengths or weaknesses of one particular implementer.\nPart of what I wanted in my own system was exactly that kind of stability. Even though I mostly work alone, there are still cases where I bring in help, and I do not want the result to depend too much on the personal style or skill level of whoever happens to touch one part of it. To be honest, that remains true even when the worker is an AI agent rather than another person.\nThat is one reason AI agents feel oddly familiar to me. They do not feel like broad-ownership designers. They feel closer to very fast implementers, or maybe coders would be the more precise word. They can produce a large amount of code quickly, they usually have no malicious intent, and yet they can still do damage that a human would not normally produce in quite the same way.\nSo when I think about the engineer directing the work and the AI agent carrying out a large share of the implementation, I cannot help seeing some resemblance to that older structure. The scale is different. The risks are different. But the relationship between the side that controls the work and the side that executes it does not feel entirely new to me.\nDesign Matters More, Not Less So after thinking about all of this for a while, I do not end up feeling that design matters less because AI writes more code. What I feel is almost the opposite.\nIf the structure is weak, AI will only help that weakness spread faster. And if the structure is strong, AI can move quickly without letting one local change turn into a wider mess. That is why I keep coming back to the same point. The tool mostly accelerates whatever structure is already there.\nSo at this point, I no longer find myself wondering whether AI can write code at all. 
What stays with me instead is a different question: what kind of software structure lets AI move quickly without turning every local change into a wider risk?\nConclusion AI is still developing, and I assume the tools will change a great deal from here. The limits we see today may shift. The workflows may shift with them.\nSo I do not think this article points to any final answer. The situation is still moving too quickly for that. Something much better, or much more efficient, may appear and change the development environment again.\nEven so, I do not really expect that part to go away. Whatever tools appear, engineers will still end up having to think about the design and operating model that best fits the environment they are actually working in. If anything, the AI era seems more likely to force that question forward than to remove it.\nNext article: Use Cases and Alignment in Solo Development ","permalink":"https://blog.cotomy.net/posts/misc/designing-software-for-ai-code-generation/","summary":"AI coding agents are useful, but they work from narrow context rather than stable architectural intent. That makes software design and compact feature boundaries more important, not less.","title":"Designing Software for AI Code Generation"},{"content":"Previous article: How AI Will Change Software Development Work Why I Do Not Let AI Agents Drive Alone I wrote in the previous articles that AI agent coding is powerful, but not naturally safe. I still think that is the right starting point.\nIn practice, AI agents do not output especially clean code by default. They are very strong at the fine-grained parts of implementation. They are also strong at repeating similar work without getting bored, and that alone has enormous value. But if I simply hand over everything and wait for a complete result, I think the project will eventually break.\nThe reason is not only that the models make mistakes. 
My current view is that they are structurally unable to fully hold the whole design intent of a long-lived system in the same way a human owner does. But they can approximate that state for a while. They can sometimes do it impressively. But understanding the whole architecture, retaining the meaning of each boundary, and designing new behavior while respecting that intent still does not seem to be what these systems fundamentally do.\nMaybe some of this improves as available resources expand. Maybe some of it improves more than I expect. But unless there is some real paradigm shift, I suspect the core limitation does not change very much. That is why my main concern is not how to make AI code faster. It is how to make AI coding safer.\nI Start With ChatGPT, Not With Coding In my recent development style, I usually start by explaining the business domain and the thing I want to build to ChatGPT. That is where I begin to solidify the specification.\nWhy ChatGPT first? My impression is that it handles long conversational history better than most coding agents such as Codex or Copilot. I do not present that as a proven fact. It may simply be my own impression. I treat it as a workflow-level observation, not as a benchmark comparison between products. Still, in actual use, ChatGPT often feels better at pulling together earlier discussions, older context, and even information that seems to have carried over from other conversations.\nThat matters more than it may sound. In the earlier article, I wrote about problems such as ignoring the intended framework structure or slipping into implementation that violates the architectural boundary I wanted. Those problems seemed to happen less often once I stopped starting directly from the coding agent and instead used ChatGPT first to organize the work.\nSo I now use ChatGPT as the place where I shape the problem before any agent starts touching the repository. 
That one change alone seems to have reduced a lot of avoidable friction.\nConversation First, Then a Design Document The conversations I have while clarifying the idea are not disposable. I treat them as accumulated knowledge. Once I feel that enough knowledge has been built up, I let ChatGPT produce the final requirement summary.\nAt that stage I may also provide documents or other materials, as long as they are safe to send outside. Then I review the generated requirement summary, correct details, tighten the wording, and continue until it feels close enough to the real intent.\nThe next step is important. I do not ask Codex or Copilot to write code yet. First, I ask for an instruction document for the coding agent. That document is still not code. It is a design document.\nThis distinction matters a lot to me. If I skip directly from a vague idea to implementation, then I am asking the model to perform design implicitly while coding. That is exactly the kind of situation where AI starts making local decisions that look efficient but drift away from the intended structure.\nWhat I Still Write Myself Even with this workflow, there are parts I still write myself in principle. The clearest example is the entity classes.\nI usually write those on my own. The reason is simple. They are part of the foundation of system design, and if there is a problem there that I do not understand, it can turn into a future bug that becomes very difficult to deal with.\nThere is another reason as well. Even if AI could write those classes perfectly with no mistakes at all, I still think the trial and error involved in writing them myself has value. That is where I often notice things that are easy to miss if I stay one level too far above the actual structure. I do not want to give that up casually.\nSo for me, this is not only about reducing AI mistakes. 
It is also about deciding which parts of development should remain direct contact points between the system and my own thinking.\nDesign Documents Need To Align Human and Agent Attention The design document itself often includes diagrams. When needed, I have Codex or Copilot produce documents that include class diagrams, sequence diagrams, and similar materials. I use Mermaid for that.\nThe point is not decoration. The point is alignment. Those diagrams help both me and the AI agent hold the same mental picture of the feature before coding begins. Because of that, the output has to be readable in a way that suits me, not merely machine-generated.\nThere is also a cultural background here that I should mention. In Japanese system integrators, design documents are often made in Excel. A common style is to narrow the columns until the cells become almost square and then use the sheet like a layout canvas. I disliked that style from the beginning of my career. I hated it quite a lot, in fact. Even when I was less senior and had less authority, that style always felt to me like a bad way to think and a worse way to communicate.\nAfter I moved into a position where I had more control over how development was done, I generally switched to Word for written documents. For UML and similar drawings, I used dedicated tools, exported the result as images, and pasted them into the document. Outside Japan, I imagine many teams use wikis instead. That is a reasonable choice. But personally, even writing Markdown by hand often felt like a chore, so for a long time my own practical compromise was Word plus drawing tools.\nAI Removed Most of the Friction of Documentation That changed dramatically with AI. The inconvenience of writing Markdown and the inconvenience of producing Mermaid diagrams both became much smaller. 
On this point, I do not think there is much room for doubt.\nRight now, for system development documentation, excluding revisions to old projects that already have a legacy format, I no longer use Word, Excel, or dedicated drawing software at all. AI agents produce Markdown and Mermaid documents quickly and at surprisingly high quality.\nThis has had a bigger effect on my workflow than I expected. Documentation used to be something I knew was important but still experienced as resistance. Now it is much easier to keep the documents alive while the system is changing. There was also a period when agile became fashionable in Japan, although I am not sure how many teams were actually practicing anything that deserved that name. Some people even argued that agile meant not writing documentation at all. Of course that was never true. But I think the fact that such an idea could sound attractive at all says something real about the way documentation often feels to engineers. It can create enough friction that people become tempted by obviously wrong shortcuts.\nThat matters because the value of documentation is not just that it exists. Its value depends on whether it stays synchronized with the actual implementation and with the current design intent.\nAfter That, the Main Worker Becomes the Agent At that stage, the work is still centered on improving the design materials rather than writing code. Usually there are multiple rounds of adjustment. I refine the Markdown. I refine the diagrams. I refine the instruction document until I think the output has reached a practical level of quality.\nWhen it seems complete, I do not trust one model alone. I review it with another model. If I used Codex for one pass, I may ask Copilot to review it. And because GitHub Copilot itself can use multiple models, I sometimes use that flexibility as well when I want another angle on the same material. 
Since the macOS version of ChatGPT can also work with VS Code, I may ask ChatGPT to review the same material as well. That cross-check raises the quality further.\nOnly when the design document reaches a level I consider good enough, and the major problems have been resolved or at least reduced to an acceptable level, do I ask the coding agent to implement the feature.\nAfter that comes code review and refactoring. At this point, the number of outright behavioral defects tends to drop significantly. That contributes to speed, of course. But the larger gain may be that I end up with maintained documents that do not substantially drift away from the code. That should be normal in principle, but I doubt that many projects in the world have consistently achieved it in practice.\nAnd when those documents do exist, they are extremely useful. They help when explaining the system to a new member. They help when explaining the system to my future self. They help because they reduce the cost of remembering why the code exists in the shape it has.\nWhat I Deliberately Do Not Put Into the Design Document Even in this workflow, the design document is not meant to describe everything. I generally do not describe details such as how every method should be split. Sometimes a sequence view naturally implies part of that structure, but even then the goal is to express the broad shape of the implementation, not to prescribe every local method boundary.\nThat may be one reason a certain type of annoyance still appears so often. Even though I explicitly prohibit it in AGENTS.md, I still frequently get those private methods that do not represent real abstraction and merely move logic sideways. The code becomes more fragmented without becoming more meaningful.\nSo I do not think documents alone solve this. They reduce the risk a great deal. 
They do not remove the need for review.\nSafer Does Not Mean Slow Even with all of these precautions, I think the risk involved in AI-driven development decreases quite a lot. Of course, I do not believe this produces the kind of speed improvement claimed by people who say everything can simply be thrown at AI with no trade-offs. My experience is not that extreme.\nEven so, the improvement is still more than enough to matter. It is large enough that I think any engineer doing serious development should be thinking about how to build a disciplined workflow around these tools.\nFor me, the point is not maximum raw speed. It is a balance between speed and quality that avoids severe problems. By that standard, the current workflow feels rational.\nThe Other Half of Safety Is Architecture Even so, process is only one side of the problem. The other side is architecture.\nIn some ways, AI-driven development resembles the older Japanese system integrator split I discussed in the previous article, where the System Engineer handled design and the Programmer handled implementation. If I use that old distinction, AI-driven development sometimes feels like a world in which every engineer is expected to behave as a System Engineer.\nIf that is true, then there may be something worth learning from Japanese system integrator design practices. In my own experience with SES and other system integrators, many of the projects I worked on used object-oriented languages, but not all of them were designed in what I would call a truly object-oriented way. I often worked in smaller teams, or in teams whose headcount looked larger than their actual development capacity, such as twenty people with only five doing real implementation and the rest mainly testing. In those situations, more deliberate OOP design often made sense.\nBut when I think back to projects that were not like that, I often remember something different. 
The classes were used more like containers for functions than as carefully meaningful models. And, more importantly, the processing was often split very strictly by screen. Each person owned a screen. Sometimes even an external team was given only the related screens and nothing else.\nThat approach has an obvious weakness. It makes the whole system harder to understand. It also makes it easier for problematic SQL, deadlock-prone access patterns, or poor performance decisions to be created separately in each screen until the total situation becomes hard to control. In that sense, I do think the more deliberate OOP-oriented designs I used in other projects had real justification. They were not only about elegance. They were also a way to keep those concerns from scattering too freely. But that screen-split approach also has one very large advantage. Changes to one screen are less likely to spread into others.\nNarrow Change Surfaces Matter More With AI That advantage matters more in AI-driven development than it once did. What is frightening about AI agents is not only that they can break the project. If they break it badly and obviously, then the answer is simple. Revert it. If I commit frequently, I can throw away the changes with git.\nThe more dangerous case is the half-broken state. The system still appears to work. Most of the behavior still looks normal. But somewhere inside it now contains a severe problem that was introduced by a partially correct change.\nTo reduce the chance of that kind of failure, I think it is better when the impact range of one modification is as narrow as possible. The less an agent has to consider outside the target feature, the lower the risk that it silently damages some distant part of the system.\nWhy I Think AI May Push Design Away From Huge Client-Side Surfaces SPA frameworks such as React remain extremely popular. That makes sense. 
But I also think AI may push some design thinking back toward non-SPA structures.\nI do not mean that the Japanese system integrator style of splitting everything mechanically by screen should simply be copied. I do not think that is the answer. The data still benefits from being defined as models. And I suspect AI also works better when it can rely on explicit model structure.\nBut if both the model layer and the screen layer are organized by feature, and each feature contains only the amount of implementation it actually needs, then the amount an AI agent has to reason about becomes smaller. That lowers the risk to the whole system.\nIn other words, I think architectures with narrower feature boundaries and smaller change surfaces may become more attractive in the AI era, even if the exact form differs from older Japanese screen-oriented systems.\nWhy Design Matters Even More Now So when I think about safe AI-driven development, I do not end up with a conclusion about prompts alone. I end up with a conclusion about engineering.\nAI-driven development does not reduce the need for design ability. It increases it. The better the design, the safer and more productive AI becomes. The worse the design, the faster the damage spreads.\nThat is why I think the quality of design matters even more now than it did before. The tool became stronger. So the cost of weak structure also became stronger.\nAt the same time, I also suspect that the design most suitable for human development and the design most suitable for AI-assisted development will not always be exactly the same. I do not think I can say with confidence yet what that new optimum looks like. 
But I do think it is a question worth pursuing seriously.\nNext article: Designing Software for AI Code Generation ","permalink":"https://blog.cotomy.net/posts/misc/working-safely-with-ai-coding-agents/","summary":"A practical reflection on how I use ChatGPT, Codex, and Copilot to design first, code second, and reduce the risk of AI-driven development.","title":"Working Safely with AI Coding Agents"},{"content":"Previous article: Real Problems I Encountered When Developing With AI Agents Early ChatGPT Usage When ChatGPT first appeared, I would not say that it immediately transformed my coding work in some dramatic way. It mattered quickly, but in a narrower sense at first.\nThe way I used it in those early days was fairly ordinary. I asked it to generate small methods. I used it to discuss possible design directions when I had not yet decided which structure I wanted. And I used it as a practical research tool when I needed to investigate libraries, tools, small implementation details, or operational procedures outside my current focus.\nThat stage was useful, but it still felt like assistance around development rather than a fundamental change in development itself. The coding work still centered on me, and the AI mostly acted as a fast conversational helper around the edges.\nOne detail that people often focus on now is freshness of knowledge. In practice, I did not feel that as a major problem in those early workflows. If I needed the latest factual precision, I could verify it by other means. For the kinds of work I was actually doing most often, slight lag in freshness was not what mattered most. What mattered was whether the answer helped me move forward.\nThe Copilot Moment The first moment that really changed how coding itself felt was GitHub Copilot.\nI found it while moving from Visual Studio for Mac to VS Code. 
At that time, the end of support for Visual Studio for Mac had been announced, and that announcement pushed me to start shifting my C# work toward VS Code. Copilot was not the main reason for that move. I discovered it more or less by chance while trying to rebuild a usable daily environment.\nThen I saw it completing code directly in the IDE. That moment had a very specific kind of impact on me. It was not abstract anymore. It was not a chat window on the side. The code was appearing where I was already working, at the moment I was trying to write it.\nEven when the results were often wrong, the workflow was obviously faster. That part was clear almost immediately. You did not need a benchmark to see it. You could feel the difference in your hands.\nWhat especially struck me was how much it helped when I returned to development after being busy with other work. In my actual life, development is not the only thing I do. There are stretches where business tasks, operational issues, and other responsibilities occupy most of my attention. After a period like that, returning to code usually requires a kind of rehabilitation period. You have to load the structure back into your head. You have to remember your own patterns again. You have to recover your development rhythm.\nCopilot shortened that recovery period in a very practical way. It did not replace understanding, but it helped me re-enter the flow much faster. That was one of the earliest points where I thought: this is not just a convenience feature. This changes the shape of actual work.\nThe ChatGPT and Copilot Period After that, there was roughly a year in which most of my AI-assisted development was built around ChatGPT and GitHub Copilot.\nThat combination became my normal working environment. ChatGPT handled design discussion, idea shaping, research support, and occasional code generation. 
Copilot handled the direct coding rhythm inside the editor.\nCopilot chat and inline command features existed by then, but I did not use them especially heavily. My practical usage stayed centered on the more direct completion-oriented flow.\nLooking back now, I sometimes wonder whether early forms of agent-style development might already have been possible in some limited sense. Perhaps the boundary was not as sharp as it now appears in hindsight. But even if that is true, I do not think the practical feeling was the same. The real shift came later, when the tool stopped being mostly predictive assistance and started behaving more like a delegated worker.\nThe Arrival of AI Coding Agents That next shift came with AI coding agents.\nOnce those tools became usable in everyday development, the workflow changed again. Now the tool was no longer only completing local code or answering local questions. It could modify an entire solution, run commands, manage git operations, inspect the project more broadly, and help with environment setup.\nAt that point the scale changed. The range of actions changed. And because of that, the practical value changed as well.\nNow engineers often use tools in the broad family of Codex-style agents, Claude Code, and other coding agents that operate with similar ambitions. The names will continue to vary. The details may differ. But the general direction is already clear.\nUnless there are strict security constraints, it is becoming difficult to justify not using such tools. That does not mean blind trust is justified. It certainly is not. But refusing the tools entirely begins to look less like discipline and more like unnecessary self-limitation.\nJapanese Engineering Structure Thinking about where this may lead also makes me think about the structure of engineering work in Japan.\nWhat I am describing here is based on my own experience, and I do not claim it represents the entire industry in a complete or current way. 
Still, in the system integration work I saw, there was often a recognizable split between roles that were treated as more like System Engineer work and roles that were treated as more like Programmer work.\nThe System Engineer side was expected to handle design, specification coordination, and communication. The Programmer side was expected to implement.\nThere were also pricing differences between those roles. That distinction was not merely conceptual. It appeared in how work was valued and billed.\nAt the same time, the reality was often less clean than the names suggested. Programmers were frequently expected to do more than the role title implied. Responsibility could expand beyond nominal boundaries without compensation or authority expanding at the same rate.\nThat historical context matters. I do not think the structure existed only because people misunderstood software work. Part of it also reflected something simpler than people now remember: writing the software itself consumed a tremendous amount of sustained time and labor.\nProgramming as Intellectual Manual Labor Because of that, I have often thought that programming historically functioned as a kind of intellectual manual labor.\nI do not mean that dismissively. I mean that a very large amount of value came from sustained, detailed, repetitive implementation effort. That effort was intelligent, but it was still labor-heavy in a very concrete way.\nThe separation between engineer and programmer existed partly because programming itself was so time consuming. Someone had to think at the higher structural level. Someone had to absorb the enormous cost of turning that structure into working code.\nAI agents reduce the weight of that labor significantly. Not to zero, and not safely without supervision, but significantly enough that the old balance between design and implementation work may not remain stable. 
And to be fair, the need for supervision is not unique to AI. The same is true when humans are involved. At the same time, AI agents have a different risk profile from human teams, including no interpersonal retaliation dynamics and no intent-driven disclosure behavior.\nA Small Team Hypothesis This leads me to a hypothesis I keep returning to.\nI think development productivity per engineer tends to decrease as team size grows. That is not because more people are useless. It is because communication cost grows relentlessly.\nWith one person, there is no coordination cost. With two or three people, coordination is still manageable. With five to ten people, communication becomes heavy enough that it starts changing the nature of the work itself.\nAt that point, meetings, alignment, interpretation, handoffs, and review structures consume a meaningful share of energy. Sometimes they consume far too much of it. When the team grows, shared context also starts to fragment, and the cognitive load required just to stay aligned becomes part of the cost of development itself. Coordination work begins to grow faster than productive work.\nSo my current hypothesis is that the optimal team size may often be around three people. Not always, and not universally, but often. That size is still small enough for shared context to remain alive while being large enough to divide work meaningfully. I also think three can be better than two for another reason. With two people, the discussion can easily stay trapped in a direct back-and-forth. In Japanese there is a proverb, three people together have the wisdom of Monju, which expresses the idea that a third perspective can create a disproportionate increase in insight.\nAI agents change this equation because they reduce part of the mechanical work that previously required multiple programmers. 
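One way to make the growth of communication cost concrete is the classic pairwise-channel count often associated with Brooks: with n people, the number of person-to-person channels is

```latex
\text{channels}(n) = \binom{n}{2} = \frac{n(n-1)}{2},
\qquad 2 \mapsto 1,\quad 3 \mapsto 3,\quad 5 \mapsto 10,\quad 10 \mapsto 45
```

A three-person team carries three channels; a ten-person team carries forty-five. That quadratic growth is one way to read the intuition that coordination work grows faster than productive work.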
In other words, the implementation labor that once justified larger teams may shrink, while the coordination cost of those teams remains. If AI agents continue improving, then teams of that size may be able to produce systems that previously required much larger organizations. That possibility seems very real to me.\nAI and Software Review One of the more interesting things about AI is that it often makes mistakes while coding and yet performs surprisingly well at review.\nOne of the real problems in solo development or very small-team development is that it is hard to recognize the problems you caused yourself. In larger teams, that weakness is usually offset by having someone else review the work. That is one of the classic ways software quality has been maintained.\nI have seen AI review point out design inconsistencies, logical problems, and code quality issues that were worth taking seriously. That contrast is interesting. And that is exactly why AI review matters so much. It can take over part of the role that, in a larger organization, would normally be handled by another engineer. At the same time, I do not think that removes the need for final human judgment on domain invariants, security-critical decisions, and accountability-heavy trade-offs.\nWhy would a model that often writes questionable code also review fairly well? I do not think I can answer that rigorously. But I suspect it may relate to the statistical nature of generative models. Review is closer to pattern recognition, and in practice to anomaly detection, than to original construction. Generative models are fundamentally optimized for recognizing patterns in large amounts of data.\nWhen reviewing code, the model does not need to invent a structure from scratch. It only needs to detect inconsistencies, unusual patterns, or deviations from common design structures. 
That kind of task appears to align surprisingly well with how these models actually work.\nThat is only a working thought, not a final conclusion. Still, the contrast is real enough that I no longer think of AI only as a generator. It is also a reviewer, and sometimes a very useful one.\nFuture Outlook So what kind of engineer is likely to succeed in the AI era?\nI think it will be the engineer who can imagine the full system architecture rather than only a local code fragment. It will be the engineer who can communicate with customers, extract the real operational problem, produce and share design materials with AI support, and then drive development forward quickly using AI tools. And if smaller teams really do become more important, then each member will need to be a relatively independent presence. They will need to share the business problem, understand it, think through a solution, and actually execute development with a meaningful degree of autonomy.\nIt will also be the engineer who can connect software to the customer's actual business growth. That matters more than people sometimes admit. A system is not valuable because code exists. A system is valuable because it changes the customer's operation in a useful direction.\nI also think many systems will continue running primarily on cloud infrastructure. That direction already feels normal. If that is combined with strong AI assistance, then small teams, and in some cases even individual engineers, may be able to deliver systems that previously required much larger organizations.\nI do not say that with blind optimism. There will still be mistakes. There will still be weak designs, security failures, and bad decisions. The difficulty does not disappear. But the shape of the work is changing, and I think cautious optimism is justified.\nAt the same time, I do have one concern. Small teams often do not have much room to train junior engineers patiently. 
AI can support learning in many ways, but I do not think it easily replaces the kind of sustained human guidance that helps someone grow through real work. And unless the person is proactive, there is only so much the tool can do. The problem is that truly proactive people are not the majority. So while I think AI may make very small teams more powerful, I also think it leaves a real question about whether the next generation of engineers will be developed well enough.\nAI will not eliminate engineers. Instead, it may raise the value of people who can think structurally, communicate clearly, and move quickly without losing control. The engineer who succeeds in that environment may not be the one who writes the most code. It may be the one who can understand the whole system, engage with real business problems, and guide both humans and AI toward a coherent design.\nWhen the mechanical part of programming becomes cheaper, design decisions, system understanding, and engagement with real business problems become more valuable. The center of gravity of software work may shift, and with it the role of engineers. Less manual coding. More responsibility for the direction of the system.\nNext article: Working Safely with AI Coding Agents ","permalink":"https://blog.cotomy.net/posts/misc/how-ai-changes-software-development-work/","summary":"A personal reflection on how ChatGPT, Copilot, and AI coding agents changed my workflow and why they may also change the structure of software development teams.","title":"How AI Will Change Software Development Work"},{"content":"Previous article: AI in Real Development Work Introduction Since the Codex extension in VS Code appeared and became usable enough for everyday work, I have been using AI agents regularly in development. 
Looking at the whole process rather than isolated demo moments, I think my overall productivity became roughly two to three times what it was before AI agents.\nThat is already a very large change. At the same time, I do not think it should be described carelessly. Some individual tasks really can become dozens of times faster. If I need a rough screen prototype, a repetitive TypeScript form adjustment, or a batch of small mechanical edits, the time difference can become almost absurd. But total development productivity does not increase by that same ratio.\nThe reason is simple. AI agents rarely produce the exact result I want on the first try. Sometimes they break existing code. Sometimes they touch unrelated areas. Sometimes they produce something that looks correct in the UI while silently damaging the actual behavior underneath.\nThis article is about those kinds of situations. Most of the work I am describing was for internal company systems, so I need to omit or soften certain concrete details where necessary. Even so, the patterns themselves are real.\nUnexpected Modifications by AI Agents The first kind of problem I ran into was unexpectedly broad modification. An AI agent would sometimes change parts of the system that had nothing to do with what I actually instructed.\nI noticed this very clearly while developing a company system that manages the flow from order entry to shipping. This was in the latter half of 2025. It was also one of the first truly full-scale systems I built with Cotomy.\nWithout AI agents, I think that system probably would have taken around a year to finish. With AI agents, most of it was completed in about four months. That difference is too large to dismiss. The productivity benefit was real, visible, and practically important.\nAt the same time, there were many cases where bugs came from changes I never asked for at all. 
That became one of the most unnerving parts of AI-driven development.\nOrder Screen Incident One of the clearest examples was the order screen. On that screen, the user can select a business partner, quotation data that determines the pricing basis, a destination, and a requester. Depending on how the user arrives there, the business partner or quotation data may already be specified before the screen opens.\nIf either of those values has already been specified, it must not be editable on the order screen. For safety, the business partner and quotation are treated as non-changeable once they have been fixed in that flow. If one of them is registered incorrectly, the rule is to enter the reason, invalidate that record, and create a new one instead. There is server-side validation for this, but the UI must also prevent editing in the first place.\nBy that point, the basic behavior of the screen was already working, and I was in the phase of tightening the detailed behavior. The instruction I gave to Codex was not vague. I specified the relevant fields explicitly, and in practice I also attached fairly detailed conditions to when editing should be disabled. At first glance, the result looked correct. The screen showed the expected values, and the disabled state appeared to be working.\nThen testing exposed a serious problem. Once the field was disabled, its value was no longer submitted.\nThe technical reason was annoying in a very specific way. The selection component in question was implemented as a class derived from CotomyElement. It reads several data attributes during initialization and dynamically generates the internal input elements and other required DOM nodes from there. The display-side behavior for a disabled state already existed. Codex ignored all of that architecture and simply disabled the visible element that happened to be on screen.\nAs a result, the input element required for FormData submission effectively did not exist. 
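For context, this is standard HTML behavior rather than a Cotomy-specific quirk: a disabled form control is excluded when the browser builds the submission data. The following is a minimal sketch of that rule with hypothetical field names, not actual Cotomy code; the usual workaround is readonly, or mirroring the value into a hidden input, so the value still submits.

```typescript
// Sketch of the browser's serialization rule: disabled controls
// contribute nothing to the submitted data.
interface Control {
  name: string;
  value: string;
  disabled: boolean;
}

// Mirrors how FormData is constructed from a form's controls.
function serialize(controls: Control[]): Record<string, string> {
  const data: Record<string, string> = {};
  for (const c of controls) {
    if (c.disabled) continue; // a disabled control is silently skipped
    data[c.name] = c.value;
  }
  return data;
}

// Hypothetical order-screen fields: the locked partner never reaches the server.
const submitted = serialize([
  { name: "partnerId", value: "P-001", disabled: true },
  { name: "requester", value: "tanaka", disabled: false },
]);
console.log(submitted); // partnerId is missing from the output
```

This is exactly why the screen can look correct while the submission is already broken: the visible state and the serialized state diverge.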
The visible label text was filled, so the UI looked fine, but the value itself was never sent to the server.\nThat kind of bug is especially frustrating. The screen looks correct. The label is there. The user thinks the value exists. And underneath all of that, the submission is already broken. What made me especially angry was that it populated only the visible text, as if it were trying to hide the problem instead of solving it. It is the kind of result that makes you stare at the screen and feel your mood drop immediately.\nSecond Unintended Modification Further testing revealed another issue on the same screen. The order screen also contains destination address and requester fields. The destination address master has a default requester associated with it, but the requester itself still needs to remain editable.\nI had no intention of making the requester non-editable, and of course I gave no such instruction. Nevertheless, Codex applied the same modification pattern to every component of the same shape. As a result, the requester field also became completely uneditable.\nThis bug was found during testing, so no production damage occurred. Still, it is deeply unsettling to find bugs in parts of the screen you never asked to change. I think many engineers will immediately understand that feeling.\nWhat made it worse was that my instruction had been extremely explicit. I included the exact field names, and in reality the conditions I gave were even more detailed than that. Even so, the change expanded to other fields without that expansion being reported clearly in the chat output. There are certainly human engineers who make that kind of unilateral generalization too. But I had not expected an AI, which presumably wants to conserve resources wherever possible, to behave in quite that way. 
That was one of the moments when I started to understand, at a more visceral level, that AI agents can generalize the wrong thing with great confidence.\nImplementation Experience With an Item Master Another case came from the item master. In that system, I chose to treat products and materials as the same entity inside the item master. I do not think that is the kind of design people would normally adopt. But this system had many intermediate products, and shipping those intermediates directly was not unusual. Because of that, I wanted to handle them more flatly, and for this system I still consider that decision rational.\nAt the same time, I think there are probably many more cases where this shape would not be an appropriate general master design. That is important context, because I do not want this article to sound as if I am presenting that master shape as a general recommendation.\nThe item master contained several categories such as product items, novelty products, labels, boxes, and other materials. Each category had different fields. For example, GTIN existed only for product items and novelty products, and novelty products were registered with an internal ID.\nThe actual screen control was more complicated than that summary suggests, but Codex could produce working behavior after only a few rounds of instruction. In that sense, it was undeniably useful. But the resulting code needed heavy refactoring.\nThat is the point I do not want to hide. I am not programming as a hobby. The system has to remain maintainable whether AI is available or not. If the structure is not resilient to change, then even AI-assisted modification will eventually hit a limit. Because I already understood that much at the time, I always performed code review. 
And in many cases, I ended up refactoring a large portion of the generated code myself.\nCode Quality Problems If I had to say what the problem was in one phrase, it would simply be this: the code was dirty.\nOne of the worst patterns was a large number of private methods that were called only once and did nothing except move a piece of logic sideways. Those methods did not have an independent meaning as screen behavior. They simply scattered the code, and once that kind of thing begins to accumulate, following the logic becomes extremely difficult.\nOf course, giant methods that contain everything are also bad. That is simply true. Honestly, refactoring from that state would have been easier. In modern environments, especially with strong refactoring tools, restructuring that kind of code is often more straightforward than cleaning up fake modularity that never had any real meaning.\nVariable naming also caused friction. Many generated variable names were far too long. They were not technically wrong, but they forced too much effort into simply reading the code.\nIgnoring Coding Rules AI agents also ignored coding style rules surprisingly often.\nI have my own rules for writing this kind of code. For example, in PageController code I usually place a private field and the property that exposes it close together, instead of gathering all member variables at the top of the file. I do that because I dislike increasing the distance between things that conceptually belong together. The same idea appears in CotomyElement-related code as well, where I usually implement properties with lazy initialization using ??= on first access.\nThose rules were not hidden. They were present in my instructions, in the surrounding code, and often in the concrete examples I gave. Even so, Codex frequently ignored them and generated a completely different shape. 
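To make that rule concrete, here is a minimal standalone sketch of the style I ask for. The class and member names here are hypothetical, and `SearchFormStub` merely stands in for a real Cotomy form class; only the field-next-to-property layout and the `??=` lazy initialization on first access reflect the actual rule.

```typescript
// Minimal stand-in so the sketch is self-contained; in real code this
// would be a CotomyElement-based form class.
class SearchFormStub {
  public static created = 0;
  public constructor(public readonly id: string) {
    SearchFormStub.created++;
  }
}

// Hypothetical page class showing the style rule: the private field and
// the property that exposes it sit directly next to each other, instead
// of all fields being gathered at the top of the file.
class OrderEditPage {
  private _searchForm?: SearchFormStub;
  private get searchForm(): SearchFormStub {
    // ??= assigns only while the field is still undefined,
    // so the form is built once, on first access.
    return this._searchForm ??= new SearchFormStub("order-search");
  }

  public get searchFormId(): string {
    return this.searchForm.id;
  }
}

const page = new OrderEditPage();
const id1 = page.searchFormId;                 // first access builds the form
console.log(id1, SearchFormStub.created);      // order-search 1
const id2 = page.searchFormId;                 // cached; no second instantiation
console.log(id2, SearchFormStub.created);      // order-search 1
```

The point is locality: a reader who finds the property immediately sees the field it caches, and the `??=` form keeps construction cost out of `initialize()`-style setup code.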
Private member variables would be gathered far away from the properties that used them, and sometimes it would even create a separate area just to assign a batch of members together. The result was not necessarily impossible to fix, but it meant the first output often failed at the level of code organization even when the rough behavior looked usable.\nThis happened often enough that I stopped expecting the correct implementation on the first attempt. The agent could become useful very quickly, but precise compliance with the intended coding style was much less reliable.\nDesign Misunderstandings Another recurring problem was design misunderstanding at the type level.\nThere were times when Codex failed to generate valid TypeScript around CotomyElement.byId. The generic parameter of byId must extend CotomyElement. Even so, Codex sometimes used HTMLElement as the generic parameter.\nThis was not a subtle issue. It was easily verifiable from the type definitions in node_modules. That made the mistake especially frustrating, because it was not the kind of thing that required deep inference or hidden project knowledge.\nLater, I strengthened AGENTS.md and that reduced the frequency of this specific error. But in the early stages of development, many frustrations had exactly this flavor. Unused variables would appear for no real reason. Form classes would be defined in unrelated locations. Pieces of code that should have lived together would be separated without any architectural logic behind the separation.\nI also felt that tools like Codex and ChatGPT tended to emphasize responsibility boundaries more when reviewing code than when writing it. When generating code, they often seemed much less strict. Meaningless commonization and strange naming were almost routine.\nAI Development Myths Online, people often say that anyone can build systems with AI now. 
Some even say that engineers themselves will disappear.\nI think those claims are mostly built on a very small set of examples. When people actually show what they built, it is often something extremely simple, like a calculator, a clock, or some other tiny standalone program.\nThat is not the same thing as building a large system with durable structure. And it is very far from building one safely.\nIn my experience, building a large system with robust design using AI alone is still nowhere near present reality. Maybe that becomes possible in a very distant future, but at least from where I stand now, that goal is still so far away that it is not even clearly visible yet. Security-related areas are especially dangerous if handled without deep understanding. Given that I already encountered problems like the ones described above, I do not want to imagine what happens when someone delegates sensitive security behavior to AI without understanding it deeply.\nConclusion Even after all of these complaints, I still think AI provides extraordinary productivity. It is also true that AI helps me reach levels of quality that I probably could not have reached alone.\nThe issues I described here are relatively minor in one sense. They are not evidence that AI agents are useless. I think they are problems that can be addressed through stronger design discipline, better instructions, better review habits, clearer architectural boundaries, and improvement in my own engineering skill.\nBut that does not make them trivial. If my own engineering skill is weak, or if the system design is vague, AI does not save me from that weakness. It amplifies it. At the same time, if I am seriously trying to improve my engineering skill, AI can give remarkable support because of the sheer amount of knowledge it can bring into the conversation.\nThat is why I still think improving my own engineering skill and system design remains essential. AI is a powerful tool. 
But how it is used, and whether that power becomes real leverage or just accelerated disorder, still depends on the engineer.\nNext article: How AI Will Change Software Development Work ","permalink":"https://blog.cotomy.net/posts/misc/why-ai-agents-break-software/","summary":"A practical account of the real implementation and maintenance problems I ran into while using AI agents for internal business system development.","title":"Real Problems I Encountered When Developing With AI Agents"},{"content":"Previous article: Early Architecture Attempts Opening When ChatGPT suddenly became a global topic from late 2022 into 2023, I was extremely busy with company work. I did not have the spare attention to follow every news cycle in detail, but I still remember very clearly that it had taken over the conversation almost everywhere.\nThere was also a small side detail that was strangely memorable in Japan: a surprising number of people said ChatGTP when they meant ChatGPT. It was a minor joke at the time, but it also showed how quickly the name had spread beyond people who normally follow software closely.\nEven before I had fully organized my thoughts about it, I had already concluded that it was something I needed to try in real work.\nChatGPT Changed My Work Before It Changed My Code I started using ChatGPT soon after it became impossible to ignore. At least in my memory, it felt as if only a few days passed before the paid tier arrived and serious usage started to separate itself from casual curiosity.\nWhat I remember first is the GPT-3.5 period. It was already useful, but it still felt unstable, uneven, and sometimes oddly shallow. Then GPT-4 arrived, and I remember being genuinely surprised by how much the answer quality improved. It was not perfection, but it crossed a line where the tool started to feel materially different from ordinary search and ordinary code suggestion.\nIt was also not especially stable in those days. 
There were mornings when I would wake up, sit down to work, open ChatGPT, and find that it was down. On such days, my motivation for the entire morning dropped more than I would like to admit. That is half a joke, but only half.\nWhat the Early ChatGPT Phase Was Actually Good For At the beginning, my use was centered almost entirely on the browser UI. I would ask questions, request code, copy useful fragments, and then adapt them by hand.\nBut if I look back honestly, the main value for me was not dumping coding tasks into the chat and waiting for finished answers. The deeper value was design assistance. I could present an idea that existed only vaguely in my head, ask questions about structure, trade-offs, and alternatives, and use the conversation to turn rough intuition into something more concrete.\nCode generation mattered, but design clarification mattered more. In that phase, AI was already affecting my engineering work, even when it was not directly writing large amounts of code.\nSearch Work Started Moving Toward AI My work has never been limited to system development alone. I work as an internal systems engineer inside a non-IT company, but a meaningful part of my actual work is not directly about systems at all. Depending on the situation, that can include sales-related work, product and production management, procurement, quality control for consumer household goods, document preparation, and other ordinary business responsibilities that have little to do with software itself.\nBecause of that, a large portion of my day has always involved finding out how to do something. That behavior gradually shifted away from ordinary search engines and toward AI. Once GPT-4-level answers became normal, and later once web-assisted answers improved, that shift became much more visible.\nI also used generative AI in areas that had little to do with programming. 
One practical example was writing an SDS for one of the household products our company handles when there was no one inside the company who was already used to that work. AI helped me understand the general format, the kind of writing expected, and how to investigate the necessary information. In the end, some of that research still had to be done through more traditional means, including going to the library, but without AI the path to getting that document done would have been much less clear.\nAnother example was preparing reskilling training material in a field outside my own specialty. Expert support was available only in limited amounts, so I used AI to reduce how much expert time I needed while still building training material at a reasonably high quality. In both cases, AI did not remove the need for judgment. It reduced the amount of blind searching and let me use scarce human support more efficiently.\nI still use normal search when I need source verification, precise documents, or conflicting viewpoints. And depending on the task, I may also ask the same question to multiple AI tools and compare the answers. But for a very large part of practical knowledge work, asking AI became the faster first move. In that sense, ChatGPT changed how I worked before it changed how I coded.\nCopilot and the Era of Narrow but Powerful Assistance At that time, I was still using Visual Studio for Mac on my Mac. When the end of support became clear, I moved my daily work more seriously to VS Code. That migration also made it natural to start using the early GitHub Copilot more actively.\nCompared with what people now call AI agents, the early Copilot experience was extremely narrow. Even so, it reduced development cost quite a lot.\nIt was especially strong in a very specific kind of task: cases where the method contract was already clear, but the implementation itself was tedious. 
If the input and output were well defined, Copilot could often help meaningfully at the method level. For repetitive CRUD API work and similar routine implementation, it made development noticeably easier.\nStill, I think it is important to describe that period correctly. Copilot was not replacing software development. It was removing some of the most tedious and mistake-prone parts inside software development, and that was already a very large benefit.\nWhen ChatGPT Started Looking at My Editor I think this was around 2024, when ChatGPT on macOS entered a more useful phase for coding work through the Work with Apps feature and later expansion of supported coding applications. That mattered because the interaction stopped being only a detached browser conversation.\nOnce AI could look at files while I was working in VS Code, the practical pattern changed. I started selecting which files to expose, which context to emphasize, and how to create a situation in which the model could move with less ambiguity.\nThat was still not full agentic coding in the sense we now use the term, but it was already a major shift. The AI was no longer responding only to manually pasted excerpts. It was beginning to work while seeing more of the real local context.\nAround Claude Code and Codex Sometime around the middle to later part of 2025, I also tried Claude Code through the kinds of editor and terminal integrations that were available around that period. My impression at the time was mixed. To be clear, this was not because I thought Claude Code was unusable or weak. By that time, it was already clearly usable enough for real work, and many engineers were using it seriously. The main issue was simpler and more personal: it did not fit my preferences very well. In particular, in the version I tried, it occupied one of the editor panes, and that alone was enough to make the experience feel wrong for me. 
Even though it felt fast, and in some cases faster than the alternatives, the overall fit in terms of working style, UI friction, and cost balance was not comfortable enough for me to keep using it continuously, so I stopped after a relatively short trial.\nI say that carefully because these tools were changing quickly. Later forms of native editor integration became more polished, and I do not want to pretend that my earlier trial represented the final state of the product.\nThe tool that fit me more naturally in practice was Codex. Sometime in late 2025, when using it from the VS Code sidebar started feeling familiar, it matched my habits better. It felt closer to the flow I already knew from Copilot Chat, so the barrier to adopting it was lower. In practical terms, Codex became the first tool I used seriously for day-to-day agentic coding.\nAt this point, I mainly use Codex, ChatGPT, and GitHub Copilot together. There are cases where one tool is enough. But there is also real value in using different models for review, asking similar questions in parallel, or comparing how each tool approaches the same change.\nBefore Agents, the Risk Was Relatively Small Before the current generation of coding agents, the range of work AI could really take over was limited. Because of that, the risk was also relatively limited.\nThe developer still had to understand the whole system, define the details, make the structural decisions, and translate those decisions into code. AI helped with one important part of that process: it could take over some of the most annoying and error-prone local work.\nThat alone was enough to raise my effective development speed substantially. If I had to describe it as a feeling rather than a measured number, it often felt like more than a fifty percent improvement. 
And because the instructions were still fairly simple, when the answer was wrong I could usually correct it with a small follow-up.\nAgentic Coding Changed the Scale of Both Speed and Damage When AI tools became agents, the situation changed completely.\nNow they could move across the project, inspect multiple files together, make coordinated edits, summarize the changes, and present the result in a form that was much closer to delegated work than to suggestion-based assistance. That gave me productivity from a different dimension.\nAt the same time, it also gave me code contamination from a different dimension.\nThat stronger wording is intentional. The gain is real, but the failure mode is also real. The same tool that can compress hours of mechanical work can also spread bad structure across multiple files faster than an ordinary human mistake usually does.\nWhat makes this worse, in my view, is not only the agent itself but also a weakness on the human side. When an enormous batch of changes arrives all at once, it becomes surprisingly difficult to maintain enough energy to verify every part of it properly. People tell themselves they will review it carefully, but in practice some part of the change is often waved through because the total volume is already exhausting. I cannot prove that this is the central reason damage spreads, but I do think it is one of the real reasons agentic coding can become more dangerous than it first appears.\nVibe Coding Can Produce Software, but Not Automatically Designed Software What is sometimes called \u0026ldquo;vibe coding\u0026rdquo; is a relatively recent phrase. I do not reject it outright. If the goal is to produce something that works, especially at small scale, then yes, AI can often get you surprisingly far. I also use that style myself for disposable shell scripts and very small programs. 
In that kind of case, what matters is often the result rather than the internal elegance, and sometimes I do not need to inspect the inside very much as long as the output is correct. For that kind of short-lived work, especially when there is little chance of causing a security incident or some other serious operational problem, vibe coding can be extremely well suited.\nBut working software is not the same thing as designed software.\nAI can generate a moving system. It does not automatically generate a system whose boundaries, responsibilities, extension points, and failure handling were shaped with clear intent. That difference matters more as the software becomes larger and more long-lived.\nFor a small tool, a rough shape may be enough. For business systems that will be modified repeatedly over time, it usually is not.\nWhy Design Remains Necessary This is the center of the issue as I currently see it.\nIf a system is not designed with clear intent, then repeated modification will eventually break it, regardless of whether the system is large or small. AI does not remove that rule. If anything, it can accelerate the path toward that outcome.\nTake a typical order management system. An AI agent can often produce the minimum working behavior without much trouble. But without design discipline, the processing starts concentrating into one oversized function. Similar logic gets duplicated across screens and endpoints, and awkward shared helpers appear that solve one local problem while damaging another part of the system.\nAfter that, every change makes the structure a little more complicated. No one sees the whole shape clearly anymore. Eventually the system turns into debt, even if it still technically runs. In other words, the real problem is that software complexity keeps increasing unless something deliberately holds it in check.\nThat is why I do not think the core problem is whether AI can generate code. 
The core problem is whether the software is being shaped by explicit design decisions or only by local convenience.\nCan AI Build Everything by Itself There are people who believe AI can build software entirely on its own. I do not want to dismiss that possibility in absolute terms.\nIn fact, over roughly the last year, a fairly large share of the code I produce has come from AI in one form or another. So this is not an argument from distance.\nMy current view is simply that such output becomes practically valuable only when someone is still holding the whole picture, understanding the details, making design decisions, and directing the work. Even if model quality keeps improving, I suspect the basic issue does not disappear. The scale at which things break may move upward, but the tendency for software complexity to keep increasing does not vanish.\nThat is my present view, not a final law. I am open to being proven too pessimistic. But I have not yet seen enough in practical work to conclude otherwise. And even if enough experience eventually makes these problems far more manageable, I think that would still mean we are only shifting toward a different kind of required skill. In that case as well, the people using AI would still need to build techniques, discipline, and working methods of their own.\nFinite Resources and the Problem of Experience Another reason I remain cautious is more operational than philosophical. As long as these services are provided at prices ordinary users can actually pay, the computational resources available to each request are finite.\nThat means an AI system cannot always hold or process all of the information that might matter beyond the immediately relevant slice. It can make very good proposals based on learned knowledge. 
What I am less convinced about is whether it can reuse accumulated experience in the same practical way a human engineer does across long periods of ownership and maintenance.\nMy guess is that this is not only a model-quality issue. At a more fundamental level, current AI still seems to me like a way of converting computation into results. Because of that, I suspect the problems I am talking about may become more manageable and more resistant to failure, but not truly disappear.\nThat is also why I separate this from speculation about a genuinely different technological basis. Maybe a completely different computing paradigm could change the picture. For example, if something like quantum computing became practical and general enough, perhaps the situation could change. Whether that would actually solve the problem is far from clear. But as long as we are talking about the broad family of systems we use now, I do not think resource limits are a minor detail.\nWhat AI Agents Keep Getting Wrong in My Own Framework This became clearer after I built Cotomy and then used Cotomy to build several systems. Working that way gave me a concrete environment in which I could watch AI agents help productively while also creating recurring structural problems.\nOne thing I learned very quickly is that instructions are not the same as compliance. I can write rules in AGENTS files. I can repeat those rules in prompts. And still, they are sometimes ignored.\nCotomy is a good example. CotomyElement does expose its underlying DOM element through the element property. But the framework is not designed around treating raw HTMLElement access as the normal primary path for ordinary screen work. Even so, AI will casually reach for direct HTMLElement handling if that seems locally convenient. Because of that, I sometimes need to audit for DOM policy violations explicitly.\nThe same thing happens with coding style. 
I sometimes prefer very local definitions using anonymous classes because they keep certain kinds of UI logic close to the place where it is used. AI agents have a tendency to normalize that into a different style even when I did not ask for such a change.\nI have also seen type-level misalignment. For example, where a generic flow is clearly intended to work with a CotomyElement subclass, an agent may still try to put an HTMLElement-oriented type into the design because that looks acceptable from a narrower local reading.\nNone of these mistakes are especially mysterious. They look to me like the result of incomplete retention of project-wide rules under finite context and finite resources, combined with a tendency to choose local optimization when global intent is not held firmly enough.\nAn Interim Conclusion At this point, I think the problem is wider than hallucination. AI also reproduces many of the ordinary design mistakes and convention violations that humans make.\nTo let coding agents do large amounts of work safely, you need high-quality instructions and a software structure that gives the agent a rational path to follow without constant ambiguity. In other words, the age of AI does not make design less important. It makes design more important.\nClosing This article is only an introduction to the problem as I see it. In the next few articles, I want to look more directly at why AI agents break software, how to let them work more safely, and what software design might need to look like in an era where code generation is normal.\nI am not interested in denying the value of AI. AI has already delivered productivity and even quality gains that would have been very difficult to obtain in the past. I expect that benefit to continue growing.\nBut everything depends on how it is used. For engineers, thinking seriously about how to use AI well is no longer optional. 
It is becoming part of the job itself.\nNext article: Real Problems I Encountered When Developing With AI Agents ","permalink":"https://blog.cotomy.net/posts/misc/ai-in-real-development-work/","summary":"A practical reflection on how AI changed my daily work, why coding agents raise both productivity and risk, and why design matters even more now.","title":"AI in Real Development Work"},{"content":"This note continues from Why Modern Developers Avoid Inheritance and Inheritance, Composition, and Meaningful Types .\nIntroduction In the previous article, I introduced the idea of meaningful types as a practical way to think about inheritance and composition.\nThat idea is simple to state, but it still leaves an important question behind. How do developers actually recognize a meaningful type while designing a system?\nI do not think that judgment appears immediately when someone first learns object-oriented programming. In many cases, it develops slowly through experience, after seeing both useful inheritance and misleading inheritance in real work.\nWhy Inheritance Often Feels Unnatural at First When I first learned object-oriented programming, inheritance did not feel impossible to understand, but it did feel difficult to use naturally.\nI could follow the syntax. I could understand what a base class was supposed to be. But when I tried to design real structures for myself, it was not always obvious where inheritance genuinely belonged.\nThat difference matters. Understanding the language feature is not the same thing as having design intuition for it.\nIt takes time before the structural role of inheritance starts to become visible. Until then, inheritance can feel like something that exists in textbooks and framework code, but not something that naturally appears in everyday design decisions.\nI sometimes suspect that this is one reason inheritance is so often avoided in modern frontend work. 
Many developers may simply not have had enough chances to build intuition for it.\nThere is probably another reason as well. Modern composition-oriented design is often strong enough to let teams build quite large features without falling into obvious structural collapse. If composition already provides a practical way to scale a screen, a feature, or a local UI structure safely, then the pressure to learn inheritance deeply is naturally weaker than it may once have been.\nThere may also be a broader shift in how frontend developers think about structure in the first place. In many modern frontend environments, especially those shaped strongly by React and Vue, developers may not think in classes very often at all. React now teaches function components and Hooks as the modern default, and Vue 3 recommends Composition API rather than class-based components. That does not mean classes disappeared everywhere. Angular, for example, still uses TypeScript classes directly. Even so, across a large part of the frontend world, class-centered design is no longer the main mental model.\nIf most design experience happens in that kind of environment, the situations where inheritance feels structurally natural may remain hard to see. It is not only that inheritance is difficult. 
It is also that many developers can continue building useful systems without being forced to develop much intuition for either inheritance or class-based design itself.\nState Modeling That Looks Like Type Modeling Part of the confusion comes from the kinds of modeling examples developers encounter early on.\nI remember seeing designs where different runtime states of an object were expressed as subclasses, even though the object itself remained the same kind of thing.\nAn estimate, for example, might be represented with subclasses such as UnderReviewEstimate and CompletedEstimate.\nclassDiagram class Estimate class UnderReviewEstimate class CompletedEstimate Estimate \u0026lt;|-- UnderReviewEstimate Estimate \u0026lt;|-- CompletedEstimate At first glance, that can look object-oriented. It appears to classify objects into more specific forms. But in most business systems, that structure is not actually describing a type hierarchy. It is describing state.\nIf the estimate is still an estimate before review, during review, and after completion, then those differences are usually better expressed as instance state, transition rules, and behavior tied to that state. They are not necessarily separate types.\nI saw similar patterns in other places as well. Sometimes customers were divided into MaleCustomer and FemaleCustomer as though sex itself defined a different structural type. In other cases, staff members were split into several subclasses only because they belonged to different internal classifications at the moment, even though they were all still staff members inside the same operational model.\nI should add an important limitation here. Much of what I saw came from Japanese contract development environments, including projects where I was sent into client sites in an SES-style arrangement. Because of that, I cannot claim with confidence that these examples reflect some global standard pattern. 
It is entirely possible that my experience is somewhat skewed, or that developers in other environments would find these examples much less familiar. I vaguely remember seeing similar modeling in books as well, but I am not confident enough in that memory to treat it as evidence. If this does not match your own experience, it is probably best to take this part as one engineer\u0026rsquo;s partial observation rather than as a universal rule.\nOf course, those distinctions can matter in business processing. They may affect permissions, workflow, display rules, reporting, or legal handling. But that still does not automatically make them good candidates for inheritance. In many cases, they are closer to attributes, classifications, or state carried by an instance than to a durable type hierarchy.\nThat is what made inheritance difficult to understand early on. The code looked object-oriented, but much of it was not expressing stable type meaning. It was expressing runtime state, mutable classification, or business-side labeling through subclasses.\nOnce that kind of modeling becomes common, inheritance starts to feel vague. It no longer looks like a way to represent structural roles. It starts to look like a slightly formal way to draw branches around whatever distinctions happen to exist in the current business rules.\nSeeing that kind of modeling early makes inheritance harder to understand, because the structure looks like object-oriented design while quietly representing something else. What is being drawn as a type hierarchy is often only a runtime status flow or a temporary classification boundary. Once that distinction becomes clear, many earlier inheritance structures start to look misplaced.\nWhy Frameworks Show Inheritance More Clearly Inheritance tends to appear more naturally in framework foundations than in everyday application modeling.\nIn application domains, many relationships are contextual. A screen contains forms. A process uses a service. 
An entity moves through states. Those relationships are often operational rather than structural.\nFramework infrastructure is different. There, stable roles are easier to identify.\nForm is a structural role. A concrete screen form can inherit from that role.\nA base service abstraction is a structural role. A more specific service implementation can inherit from that role.\nA controller boundary is a structural role. Individual controllers can extend it while preserving the same architectural responsibility.\nThese are not incidental similarities. They are stable positions in the architecture.\nAn External Service Example External service boundaries are a practical example of the same idea.\nAn application may first define a broader ExternalService role for integrations that live outside the application boundary. Under that, it may define more specific service roles such as StorageService, EmailService, or KeyValueStoreService. The application should then talk to those stable application-side abstractions rather than to each external API directly.\nConcrete implementations can be built under each role. A storage branch might include AzureBlobStorageService, S3StorageService, and, in some older environments, FtpStorageService. An email branch might later grow into SendGridEmailService or SmtpEmailService. A key-value branch might eventually separate into RedisKeyValueStoreService or another provider-specific implementation. 
Those systems all behave differently, expose different APIs, and carry different operational assumptions, but from the application\u0026rsquo;s point of view they can still belong under stable service roles.\nclassDiagram\n  class ExternalService\n  class StorageService\n  class EmailService\n  class KeyValueStoreService\n  class AzureBlobStorageService\n  class S3StorageService\n  class FtpStorageService\n  ExternalService \u0026lt;|-- StorageService\n  ExternalService \u0026lt;|-- EmailService\n  ExternalService \u0026lt;|-- KeyValueStoreService\n  StorageService \u0026lt;|-- AzureBlobStorageService\n  StorageService \u0026lt;|-- S3StorageService\n  StorageService \u0026lt;|-- FtpStorageService\nIn that structure, inheritance is not being used to collect convenience methods. It is being used to express meaningful roles in the system at more than one level.\nThe application-level types remain stable, while each provider-specific implementation encapsulates the differences in API shape, authentication style, error handling, and operational details.\nThat is useful for several reasons. It isolates service-specific behavior behind consistent application boundaries. It stabilizes the architecture seen by the rest of the application. It lowers the learning cost for other developers working in the codebase, because they can depend on clear service roles instead of learning each provider separately. It also reduces the impact of future changes in external provider APIs.\nThis kind of structure feels natural because each base type already means something on its own. Even before seeing any concrete subclass, ExternalService, StorageService, EmailService, and KeyValueStoreService can all read as meaningful architectural roles. 
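That structure can also be sketched directly in code. The following TypeScript sketch is only an illustration of the idea, not code from any real project: the role names mirror the diagram above, and InMemoryStorageService is a hypothetical stand-in for a provider-specific implementation such as S3StorageService.

```typescript
// Illustrative sketch only; these names mirror the roles discussed above.

// Broad role: anything that lives outside the application boundary.
interface ExternalService {
  readonly name: string;
}

// Narrower role: the stable application-side storage abstraction.
interface StorageService extends ExternalService {
  put(key: string, data: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
}

// Hypothetical provider implementation. A real AzureBlobStorageService or
// S3StorageService would encapsulate SDK calls, authentication, and error
// mapping behind this same role.
class InMemoryStorageService implements StorageService {
  readonly name = "in-memory";
  private store = new Map<string, string>();

  async put(key: string, data: string): Promise<void> {
    this.store.set(key, data);
  }

  async get(key: string): Promise<string | undefined> {
    return this.store.get(key);
  }
}

// Application code depends only on the StorageService role,
// never on a concrete provider.
async function archiveReport(storage: StorageService, id: string, body: string): Promise<void> {
  await storage.put(`reports/${id}`, body);
}
```

Swapping one provider for another then changes only the wiring that selects the concrete class; archiveReport and the rest of the application keep depending on the same role.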
Each of them describes a stable architectural responsibility rather than merely a place where shared methods happen to live.\nMeaning Comes Before Reuse That is the point I now find most useful.\nInheritance works best when the base class represents a meaningful role before any subclasses are added. In other words, the base type should still make sense as a type even if no subclass existed yet.\nFramework boundaries often satisfy that condition. UI structural types often satisfy it. Service abstractions often satisfy it.\nWhat usually causes trouble is the opposite pattern: a base class is introduced only because several classes happen to share code.\nMy own impression is that this difference becomes easier to see if I compare it with older structured-programming habits. In that style of development, many engineers tried very hard to avoid writing the same processing twice. That was understandable. If a function could be defined clearly through its input and output, then extracting common logic was often relatively straightforward. Of course that approach could still break down as systems grew, but the act of classification itself was usually simpler.\nObject-oriented design feels different to me. There, the most important question is not whether two pieces of code look the same at first glance. The more important question is whether the class name is appropriate and whether the type still has independent meaning as a type.\nShared code can justify refactoring, but it does not automatically justify inheritance. Reuse alone does not prove that a meaningful type exists.\nEven when the same code appears in multiple places, I do not think it is safe to conclude immediately that those places are expressing the same thing. Sometimes the logic is truly shared. Sometimes it only happens to look similar because different parts of the system currently pass through a similar step. 
That distinction is easy to lose if reuse becomes the first design goal.\nOnce I started looking at inheritance that way, many design choices became easier to evaluate. The question stopped being whether two classes looked similar enough. The more important question became whether the proposed base type still made structural sense as a type.\nRecognizing Meaningful Types Takes Time This also explains why the previous article could not end with a simple formula.\nIt is easy to say that inheritance should be used only where the abstraction remains meaningful. It is harder to recognize that meaning consistently without experience.\nDevelopers usually build that intuition gradually. They see frameworks where inheritance fits naturally. They also see application code where inheritance was used to represent convenience, temporary similarity, or object state, and where the structure became harder to reason about as a result.\nOver time, the distinction becomes clearer. The problem is not inheritance itself. The problem is whether the design is expressing a stable structural role or only forcing reuse into a hierarchy.\nConclusion Inheritance often becomes confusing when it is used to represent convenience or temporary similarity.\nIt feels much more natural when the base type expresses a stable role in the architecture.\nThat is why meaningful types matter so much. 
They provide the most reliable way I know to judge when inheritance belongs in a design and when composition is the more honest structure.\nDesign Series This article is part of the Cotomy Design Series.\nSeries articles: CotomyElement Boundary , Page Lifecycle Coordination , Form AJAX Standardization , Inheritance and Composition in Business Application Design , API Exception Mapping and Validation Strategy , Why Modern Developers Avoid Inheritance , Inheritance, Composition, and Meaningful Types , and Designing Meaningful Types.\nPrevious article: Inheritance, Composition, and Meaningful Types Next article: Object-Oriented Thinking for Entity Design ","permalink":"https://blog.cotomy.net/posts/design/08-designing-meaningful-types/","summary":"Explains how meaningful types help determine when inheritance is structurally appropriate.","title":"Designing Meaningful Types"},{"content":"This note continues from Inheritance and Composition in Business Application Design , Why Modern Developers Avoid Inheritance , and Form AJAX Standardization .\nIntroduction In the earlier articles, I discussed the modern tendency to avoid inheritance and the reasons that reaction became so strong.\nI do not personally reject inheritance in general. At the same time, there are clearly places where inheritance should not be used, and that judgment still has to be made carefully in every design.\nSo the real question is practical rather than ideological. In application design, when does inheritance fit naturally, and when is composition the better structural choice?\nModern Frontend Culture In modern development, especially on the frontend side, the common guideline \u0026ldquo;prefer composition over inheritance\u0026rdquo; is now very familiar.\nThat recommendation exists for good reasons. Frameworks such as React are designed around composition-oriented architecture.\nHistorically, React also provided class-based components through React.Component. 
Modern React development, however, is largely centered on function components and hooks.\nEven so, inheritance still exists inside many frameworks when they structure their own internal architecture.\nSo the issue is not inheritance as such. The issue is where inheritance is placed, and what kind of responsibility it is being asked to represent.\nCotomy\u0026rsquo;s Position Cotomy takes a fairly simple position on that point.\nAt the framework foundation level, inheritance is used positively. CotomyElement is a base boundary for DOM-oriented UI handling. CotomyForm extends that boundary for form behavior. CotomyPageController is also designed as a page-level structural base and is meant to be extended through CotomyPageController.set(class extends CotomyPageController { \u0026hellip; }).\nThose classes are not utility boxes. They exist as named structural roles inside the framework.\nAt least for now, I also do not see much need to push Cotomy toward something like React-style function components. CotomyElement is fundamentally a wrapper around HTMLElement, and CotomyPageController can already aggregate page-level behavior around those elements and forms. Just as importantly, Cotomy is designed with server-rendered HTML and client-side generated UI living together in the same screen model. In that kind of environment, the current inheritance-oriented structural base still feels like the better fit to me.\nInside an individual screen, however, the structure is usually assembled through composition. A page is made from forms, elements, and other local parts placed together for that screen.\nThat split is intentional. The framework foundation uses inheritance for stable roles. 
Screen content uses composition for local structure.\nWindows Forms as a Useful Comparison I already used Windows Forms briefly in the previous article, but it is worth returning to here from a slightly different angle because it shows the boundary between inheritance and composition very clearly.\nA Form represents a screen. Inside that screen, controls such as Button, TextBox, Label, and grid components are placed together.\nIn that sense, the structure of the screen is compositional from the start. A screen is a collection of controls.\nAt the same time, each concrete screen is normally defined by inheriting from Form.\nclass CustomerForm : Form { }\nclass OrderForm : Form { }\nThis is easy to understand because the base type already has an independent meaning. Form means a screen. CustomerForm and OrderForm are more specific screens built on that role.\nThat kind of inheritance does not feel forced because the type relationship itself is meaningful.\nA Note on Swing For many years I used Windows Forms heavily. Later, after my personal development environment moved to Mac, I switched some client-side work to Java Swing.\nAt the time, NetBeans was available without cost and was quite practical as a development environment.\nWhat mattered more to me was that, in one important structural sense, Swing and Windows Forms were not very different. A screen was still a screen, and the screen was still built from controls and other contained elements.\nThe event model was different. Swing is built around listener-based event handling, while Windows Forms exposes events through the Control hierarchy.\nHowever, that difference does not change the basic structure. 
A screen is still composed from controls and contained UI elements.\nWhere Inheritance Was Actually Used What is interesting is that, even in those desktop UI environments, I almost never used inheritance between screen classes in everyday screen design.\nThe screen layout itself was nearly always built through composition. Buttons, inputs, labels, tables, and small reusable pieces were arranged inside a screen. Even when two screens looked similar, they were often only superficially similar.\nThat is an important distinction. Similar appearance does not automatically imply a meaningful inheritance relationship.\nCustom Controls Are Different Custom controls are a different case.\nWhen extending an existing control, inheritance usually feels natural. If the goal is to keep button behavior and extend it slightly, inheriting from Button is clearer than starting again from a lower-level control type.\nclass StateButton : Button\n{\n    private int _state = 0;\n\n    protected override void OnClick(EventArgs e)\n    {\n        base.OnClick(e);\n        _state = (_state + 1) % 3;\n        Invalidate();\n    }\n}\nThe point here is not the exact implementation. The point is that the type is still a button. It has button behavior, button expectations, and then some additional state or rendering.\nThat is very different from creating an abstract base screen only because several screens happen to share a toolbar or a few fields.\nWhy Screens Rarely Need Inheritance Why, then, do screens so often end up as composition rather than inheritance?\nOne reason is that screens are independent working surfaces. A screen is usually built as a collection of controls with its own layout, data flow, and operational context. Even if two screens share a toolbar, a set of inputs, or a rough arrangement, that often means only that they happen to resemble each other.\nIn other words, the commonality is local, not necessarily typological.\nWhen that is the case, composition is usually the more honest structure. 
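A minimal TypeScript sketch of that difference (the screen and toolbar classes here are invented for illustration): the shared toolbar becomes a part that each screen contains, and no common base screen class is needed.

```typescript
// Hypothetical sketch: shared UI becomes a contained part, not a base screen.

class Toolbar {
  constructor(private buttons: string[]) {}
  render(): string {
    return `[${this.buttons.join(" | ")}]`;
  }
}

// Two independent screens that happen to share a toolbar.
// Neither needs to inherit from a common "screen with toolbar" base class.
class CustomerListScreen {
  private toolbar = new Toolbar(["New", "Export"]);
  render(): string {
    return `${this.toolbar.render()} Customer list`;
  }
}

class OrderListScreen {
  private toolbar = new Toolbar(["New", "Print"]);
  render(): string {
    return `${this.toolbar.render()} Order list`;
  }
}
```

Each screen stays an independent working surface; the only shared type is the contained part.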
A shared toolbar can become a reusable control. A shared form section can become a reusable part. The screen itself does not need to become a subclass merely to reuse those pieces.\nMeaningful Types This is the point that matters most to me.\nWhen designing a class, I have long tried to ask whether that class stands as a meaningful type in its own right.\nMany inheritance failures happen because engineers start from shared code and then try to turn that shared code into a base class. But shared code is not the same thing as a meaningful abstraction.\nIf a base class exists only because several classes coincidentally share some fields, helper methods, or a fragment of layout, the inheritance line is already suspicious. The base may reduce duplication for a while, but it does not necessarily express a stable concept.\nThat is why I think inheritance should begin from meaning, not from convenience.\nA framework base such as CotomyForm has a clear role. A desktop base such as Form has a clear role. A custom button derived from Button still has a clear role. Those types make sense before any concrete subclass is added.\nMany partial screen abstractions do not satisfy that condition. They are often only bundles of convenience.\nEntity Modeling Has the Same Risk The same thing happens in entity design.\nTextbook examples often use simple classification trees such as mammal, human, and dog. Those examples are easy to draw, but actual business domains often do not look like that.\nIn real projects, I once created a structure closer to Item and Product. Materials were handled as Item, and sales targets were represented as Product derived from it.\nAt first that looked reasonable. Later it became clear that the common properties were only accidentally similar. The structure was not expressing a durable type relationship. 
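The Item and Product case can be contrasted in a short TypeScript sketch. The field names below are invented for illustration: the first shape welds sales concerns onto the inventory type through inheritance, while the second keeps them separate and combines them.

```typescript
// Hypothetical sketch contrasting the two structures; field names invented.

// The suspect version: Product inherits inventory identity from Item
// only because some properties looked similar.
class Item {
  constructor(public code: string, public stockQuantity: number) {}
}
class ProductByInheritance extends Item {
  constructor(code: string, stockQuantity: number, public listPrice: number) {
    super(code, stockQuantity);
  }
}

// The composition-based alternative: inventory-like identity and
// sales-specific concerns stay separate and are combined where needed.
class SalesInformation {
  constructor(public listPrice: number, public taxCategory: string) {}
}
class Product {
  constructor(public item: Item, public sales: SalesInformation) {}
}
```

In the second shape, inventory identity and sales information can evolve independently instead of sharing one accidental base.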
It was mixing inventory-like identity with sales-specific concerns.\nA better design would have been closer to separating Item and SalesInformation, then combining them where necessary.\nThat experience made the same lesson visible again. Similar fields do not guarantee a meaningful base type.\nNo Absolute Rule There is no absolute answer here.\nInheritance and composition are not enemies. They express different structural relationships. The problem begins when one of them is used to represent a relationship it does not actually describe.\nIf AI-assisted development keeps accelerating implementation speed, that design judgment becomes even more important. Code generation can produce structure quickly, but it does not automatically produce meaningful types.\nThe value of design is therefore not reduced by faster implementation. If anything, it becomes more visible.\nConclusion I still do not feel much resistance to inheritance itself. Even so, when I think through domain and screen design carefully, I find that I actually do not use inheritance very often.\nIn practice, the question is rarely inheritance or composition by itself. The real question is whether the structure being created represents a meaningful type.\nWhen the base type expresses a real role, such as a framework boundary, a UI control type, or another stable structural component, inheritance can be very natural.\nWhen the commonality is only local reuse, composition is usually the more honest design.\nUnderstanding that difference matters more than choosing sides in the inheritance versus composition debate. It may also help explain why modern technical culture moved in this direction. 
The valid cases for inheritance are real, but they may be fewer than older engineers of my generation once assumed.\nDesign Series This article is part of the Cotomy Design Series.\nSeries articles: CotomyElement Boundary , Page Lifecycle Coordination , Form AJAX Standardization , Inheritance and Composition in Business Application Design , API Exception Mapping and Validation Strategy , Why Modern Developers Avoid Inheritance , and Inheritance, Composition, and Meaningful Types.\nPrevious article: Why Modern Developers Avoid Inheritance Next article: Designing Meaningful Types ","permalink":"https://blog.cotomy.net/posts/design/07-inheritance-composition-and-meaningful-types/","summary":"Explores when inheritance and composition fit naturally, and why meaningful types matter more than convenience.","title":"Inheritance, Composition, and Meaningful Types"},{"content":"This note continues from API Exception Mapping and Validation Strategy , Inheritance and Composition in Business Application Design , and Form AJAX Standardization .\nIntroduction I was trained in a more classical object-oriented style.\nIn my early professional years, inheritance was used very naturally. The distinction between is-a and has-a relationships was treated as one of the basic foundations of design, and that way of thinking still remains part of how I structure systems today.\nThat does not mean I try to force inheritance everywhere. I use composition where it makes more sense, and in many parts of modern application design it clearly does. But I have never felt that inheritance itself should be rejected automatically.\nLimited Use of Inheritance in Models Even so, I am careful about inheritance in application models.\nIn domain models, inheritance can become expensive very quickly. Once many concrete types depend on a shared base, a small modification in that base can propagate through a large area. 
The verification cost rises with it, and the risk is often not visible until the model has already spread through many screens and services.\nDiscovering the Cultural Shift What surprised me over time was not the recommendation to use composition in some areas. What surprised me was how often modern discussions treated inheritance itself as something undesirable.\nWhen I read discussions online, I often found a stronger position than I expected. Some developers did not just say that composition is often better. They said inheritance should never be used at all.\nWhat felt particularly strange to me was that inheritance and composition were sometimes discussed as though they were parallel options from which one should be selected for an entire system. From my perspective, they are not that kind of pair. They describe different relationships and answer different design needs.\nAn inheritance relationship and a composition relationship are not interchangeable in the first place. The question should usually be which requirement is being modeled, and which structural relationship fits that requirement. It does not make much sense to decide at the whole-project level that one of them should replace the other everywhere.\nMy own experience comes primarily from Japanese development environments, so I do not claim this article as a universal conclusion. I have worked in Japan, and much of the information I gather in day-to-day practice is also filtered through Japanese-language discussions and Japanese development culture. So it is entirely possible that the global picture is somewhat different from what I have observed. This article is closer to a personal observation about how design culture appears to have shifted when viewed from that background.\nFramework Influence One obvious factor is the influence of modern frontend frameworks.\nReact and similar frameworks strongly normalized composition-oriented thinking. 
Inheritance is still technically possible in such ecosystems, but it is rarely the recommended way to organize application behavior. The surrounding patterns, examples, and community habits all guide developers toward composition instead.\nThis is not accidental. Those frameworks were designed so that developers could structure complex UI behavior without depending on inheritance hierarchies. Once a generation of developers learns architecture through that style, the cultural default naturally changes with it.\nPersonal Position My own position is still fairly simple. Inheritance should not be feared. It remains useful in some contexts, and Cotomy itself uses inheritance in its foundation classes.\nI have also noticed that many developers of roughly my own generation do not instinctively hate inheritance either, even when they are quite cautious about where to apply it. So I do not think this difference is only about theory. It also seems to reflect when and where a person learned to build software.\nAt the same time, I understand why user-level design in many frameworks prefers composition. There is no contradiction there. A framework can rely on inheritance inside its own structural layers while still giving application developers a way to organize the parts of what they want to build as a set of assembled pieces, and to realize varied behavior without depending on inheritance at the application level.\nRisks of Inheritance I do agree that inheritance creates real risks when used carelessly.\nIn domain models, a base class can create widespread coupling. The more shared assumptions accumulate in that base, the harder it becomes to change anything safely.\nIn UI components, the danger is slightly different. A base class change can alter rendering behavior, event wiring, or layout assumptions across many screens at once. The impact analysis becomes harder because the dependency is structural rather than local.\nSo I do not disagree with the warning. 
Careless inheritance should be avoided.\nIn fact, when the goal is simply future change resistance, separately defining similar properties or behavior can sometimes be safer than forcing them into one inheritance line too early. Shared structure reduces duplication, but it also concentrates impact.\nThe Real Question The real issue is not inheritance itself, but how inheritance is used.\nThat is the question that matters more than whether inheritance is good or bad in the abstract.\nInheritance tends to fail when the base class is not a meaningful abstraction, but only a container for utility methods or a place to collect unrelated shared fields. At that point the inheritance line no longer represents an object-oriented relationship. It becomes only a distribution mechanism for convenience code.\nThat kind of structure is usually where people start to hate inheritance, and understandably so.\nTo me, inheritance makes sense when the base type still has meaning as a type, even if it cannot exist directly as a concrete instance. What matters is that the base is not merely a storage place for common code, but a clearly meaningful thing in its own right, with more specific screens or objects derived from it.\nIn that sense, I sometimes think old textbook examples did not help very much. Explanations built around mammal and human, or employee and clerk, may have been intended to teach classification, but they did not always help people understand inheritance as a design tool. They could make inheritance look like a taxonomy exercise rather than a structural way to express a meaningful abstraction and its more concrete forms.\nA useful comparison is the old Windows Forms model. A Form represented a screen and exposed operations appropriate to a screen, while the detailed behavior belonged to each derived screen class. Inside that screen, controls were then assembled through composition. That division always felt reasonable to me. 
Form had a clear independent meaning as Form, and each individual screen existed as a derived form built on that role. Inheritance defined the structural role of the screen, and composition defined what the screen contained.\nA Real Project Example I remember an older ASP.NET project where many controllers inherited from a common base class. At first glance, that looked ordinary. But the base class had gradually turned into a storage area for unrelated helpers.\nUser information retrieval lived there. Data formatting lived there. Message creation lived there. Several small operational shortcuts were added there as the project expanded.\npublic abstract class BaseController : Controller\n{\n    protected UserContext CurrentUser =\u0026gt; ...;\n    protected string FormatAmount(decimal value) =\u0026gt; ...;\n    protected string CreateMessage(string code) =\u0026gt; ...;\n}\nThis was not really object-oriented abstraction. It was closer to structured programming implemented through classes. The class hierarchy was carrying procedural convenience functions, not a stable conceptual model.\nWhen developers see inheritance in that form repeatedly, it is not surprising that they become suspicious of inheritance itself.\nWhy Teams Sometimes Ban Inheritance Because of that history, banning inheritance can be a rational rule in some environments.\nOn large projects, teams are often assembled under constraints that have little to do with ideal design. Hiring quality is uneven. External engineers rotate in and out. Skill levels vary more than resumes suggest. In some cases, very large systems are built by temporarily gathering many people who have never worked together before.\nI have seen projects in Japan where this was completely normal. A team of more than twenty people could be assembled for a single web system, while only a handful actually belonged to the main vendor and the rest were effectively a temporary collection of outside engineers. 
I was one of those people myself more than once, so I do not say that from a distance.\nIn that kind of environment, the difficult part is not only writing code. The difficult part is predicting how safely each developer will extend existing structures. Sometimes the variation is so wide that architectural freedom itself becomes a source of risk.\nThe unpleasant reality is that resumes and interviews do not tell you enough. People naturally present themselves at their best during hiring, and in practice it was not rare to discover after onboarding that someone could not safely handle the level of design responsibility they had implied. That is not a theoretical staffing problem. On some projects it becomes part of the architecture problem.\nUnder those conditions, a blanket rule such as avoid inheritance can be a practical control measure. It reduces one category of high-impact mistakes, even if it also removes a legitimate tool.\nSeen from that angle, an inheritance ban is not necessarily an intellectual position. It can be an operational safeguard.\nFrontend Language Constraints Another factor may be the historical shape of frontend languages themselves.\nEarly JavaScript did not make classical object-oriented design especially pleasant. Class definitions were awkward compared with languages that treated classes as a first-class syntax and design boundary. Many developers simply avoided object-oriented structure because the language did not reward it.\nI remember very clearly looking at old JavaScript class-style patterns and feeling that trying to preserve a classical OOP style there was more trouble than it was worth. That was one reason I accepted jQuery quite naturally at the time. The language could support object-oriented programming in a broad sense, but it did not feel like a language that wanted to help me do it well.\nLibraries such as jQuery also did not require users to think in object-oriented terms. 
Internally they may have had their own abstractions, but users were not asked to model screens or behavior through inheritance hierarchies. A great deal of frontend work could be done without formal modeling, inheritance design, or explicit class structure. At least to me, that environment seemed to make it easier for frontend practice to develop at some distance from classical OOP habits.\nExpansion of Frontend Development There may also be a broader social reason.\nAs frontend development expanded rapidly, many developers entered the field from design-oriented or markup-oriented backgrounds. Some of them were excellent implementers, but not necessarily trained through the traditional object-oriented curriculum that earlier business application developers often went through.\nMy impression is that this may have reinforced composition-first thinking, because frontend work often centered on assembly, interaction, and presentation rather than modeling. Developers who specialized only in frontend work may simply have had fewer chances to become comfortable with object-oriented modeling, especially inheritance-based modeling. That is only a hypothesis, not a proven explanation, but it seems plausible to me.\nA Balanced Interpretation For that reason, I do not think the modern avoidance of inheritance is necessarily wrong.\nSoftware engineering evolves through repeated experimentation. Teams keep the methods that reduce failure in their own context. If avoiding inheritance gives a team more predictable results, that may be the correct choice for that environment.\nThe goal is not ideological purity. The goal is to build reliable systems.\nCotomy and Inheritance My own conclusion remains practical. Cotomy uses inheritance in its core architecture, because framework-level abstractions often benefit from it.\nCotomyForm extends CotomyElement, and more specialized form classes are built on top of that line. 
CotomyPageController is also designed as an inherited structural base for page-level behavior. Application-level modeling, however, often benefits from more selective composition. Those are different design layers, so I do not expect them to follow exactly the same rules.\nCotomy does not try to eliminate inheritance entirely. It tries to use it where the abstraction is structural and stable.\nFor that reason, my expectation is that Cotomy will probably feel more natural to developers who mainly build server-oriented business applications and want a stronger frontend structure for those systems, rather than to developers whose habits were formed entirely inside modern composition-first frontend ecosystems. If that is true, Cotomy users may not be especially inclined to reject inheritance in the first place.\nConclusion Inheritance is not inherently harmful. What causes trouble is misuse.\nModern development culture may avoid inheritance for understandable reasons: painful historical experience, team structure, framework influence, and language ecosystem constraints. 
Different parts of the industry can develop very different architectural instincts from those conditions.\nI find that difference interesting in itself, because software design culture is shaped not only by theory, but also by tools, teams, and the environments in which people learned to build systems.\nDesign Series This article is part of the Cotomy Design Series, which explores architectural decisions behind the framework.\nSeries articles: CotomyElement Boundary , Page Lifecycle Coordination , Form AJAX Standardization , Inheritance and Composition in Business Application Design , API Exception Mapping and Validation Strategy , Why Modern Developers Avoid Inheritance, and Inheritance, Composition, and Meaningful Types .\nNext article: Inheritance, Composition, and Meaningful Types ","permalink":"https://blog.cotomy.net/posts/design/06-why-modern-developers-avoid-inheritance/","summary":"Explores why many modern developers avoid inheritance, examining cultural, historical, and practical factors in business application development.","title":"Why Modern Developers Avoid Inheritance"},{"content":"This note continues from Inheritance and Composition in Business Application Design , Form AJAX Standardization , and Page Lifecycle Coordination .\nIntroduction API failure handling is one of those areas where small stylistic choices become large operational costs. On small screens, almost any style can work. On long-lived business systems, error handling style directly changes readability, recovery behavior, and maintenance speed.\nThis article explains why I prefer exception-oriented API handling in Cotomy, how validation should be treated as an HTTP contract, and why async/await made this approach more natural in everyday business code.\nJavaScript Error Handling Before async/await In JavaScript, async/await arrived in ECMAScript 2017. 
In Node.js, it became officially supported in Node 7.6 in 2017.\nBefore that, most asynchronous code around HTTP calls relied on Promise chains. Since fetch itself is Promise-based, code often looked like this:\nfetch(\u0026#34;/api/users\u0026#34;) .then(r =\u0026gt; r.json()) .then(data =\u0026gt; render(data)) .catch(e =\u0026gt; handleError(e)); This was not wrong. It was the normal style at the time.\nWhy Promise Chains Felt Unnatural The problem was not syntax preference. The problem was flow structure.\nPromise chains tend to separate steps into fragmented blocks. Success logic and failure logic are connected by conventions, but visually they are split across chained callbacks. In business screens where requests, validation, and side effects are mixed, this fragmentation increases mental load.\nFor many practical cases, Promise chains made error flow feel like a transport detail rather than a first-class control path.\nThe Arrival of async/await With async/await, the same fetch flow became structurally closer to synchronous code:\ntry { const response = await fetch(\u0026#34;/api/users\u0026#34;); const data = await response.json(); render(data); } catch (error) { handleError(error); } This is why the shift mattered in practice. Promise chains can split structure, while try/catch keeps success and failure in one readable block. In business processing, exception-oriented flow is usually easier to follow during implementation and debugging.\nCotomy\u0026rsquo;s Exception-Oriented API Handling By default, the fetch API does not throw exceptions for HTTP errors such as 400 or 500 responses. Cotomy intentionally converts those responses into exceptions so that HTTP failures participate in the same control flow as runtime errors.\nCotomy treats HTTP failures as exceptions on the client side. API failures are converted into CotomyApiException-derived types, and the caller handles them through try/catch.\nThis policy is intentionally simple.\nThe flow stays linear. 
The success path and failure path are explicit. Promise chain branching is avoided. The handler reads like a business transaction: submit, parse, branch by failure type if needed.\nHTTP Status Codes as Validation Contracts Validation is not just a UI concern. It is part of the HTTP response contract.\nI prefer defining status meanings clearly and treating them as stable operational agreements between server and client:\nStatus | Meaning\n200 | Success\n201 | Created\n400 | Invalid request format or missing required input\n409 | Conflict\n422 | Validation error\n500 | Server error\nValidation is not always an exceptional situation from a domain perspective. However, on the client side it is often handled as an exception-like control path in order to keep UI flow consistent.\nCotomyApiException Class Structure The exception hierarchy in Cotomy is designed to mirror HTTP semantics and make typed branching straightforward in UI code.\nclassDiagram Error \u0026lt;|-- CotomyApiException CotomyApiException \u0026lt;|-- CotomyHttpClientError CotomyApiException \u0026lt;|-- CotomyHttpServerError CotomyHttpClientError \u0026lt;|-- CotomyUnauthorizedException CotomyHttpClientError \u0026lt;|-- CotomyForbiddenException CotomyHttpClientError \u0026lt;|-- CotomyNotFoundException CotomyHttpClientError \u0026lt;|-- CotomyConflictException CotomyHttpClientError \u0026lt;|-- CotomyRequestInvalidException CotomyHttpClientError \u0026lt;|-- CotomyTooManyRequestsException By splitting exception classes by HTTP intent, UI code can branch by type without inventing its own parallel error taxonomy. This keeps HTTP specification and application structure aligned.\nThe practical boundary in Cotomy is this: unexpected runtime failures remain generic Error-level failures, while HTTP contract failures are grouped under CotomyApiException as application-level exceptions.\nThis is also where naming history matters. 
CotomyHttpClientError is still part of the CotomyApiException hierarchy and works as a fallback for unmapped 4xx cases. So in behavior it belongs to the API exception flow, even if the class name itself uses Error.\nIn hindsight, some names could have been more explicit. But those names are already part of the public API surface, so I keep them in the current major line for compatibility and revisit naming consistency in the next major version.\nThe current Cotomy code also keeps non-HTTP failures separate, for example response JSON parse failure and invalid body input handling. Those are not status-mapped API failures, and separating them keeps the contract boundary explicit.\nValidation and API Failure Sequences At runtime, form submit behavior can be described as one sequence with clear status-dependent branches:\nsequenceDiagram Browser-\u0026gt;\u0026gt;CotomyForm: submit() CotomyForm-\u0026gt;\u0026gt;CotomyApi: POST /api/entity CotomyApi-\u0026gt;\u0026gt;Server: HTTP Request alt Success Server--\u0026gt;\u0026gt;CotomyApi: 200 OK CotomyApi--\u0026gt;\u0026gt;CotomyForm: Response else Validation Error Server--\u0026gt;\u0026gt;CotomyApi: 422 Validation Error CotomyApi--\u0026gt;\u0026gt;CotomyForm: throw CotomyApiException CotomyForm--\u0026gt;\u0026gt;UI: cotomy:submitfailed event else Server Error Server--\u0026gt;\u0026gt;CotomyApi: 500 CotomyApi--\u0026gt;\u0026gt;CotomyForm: throw CotomyHttpServerError end And a typical client handler looks like this:\ntry { const response = await api.submitAsync({ method: \u0026#34;POST\u0026#34;, action: \u0026#34;/api/users\u0026#34;, body: formData }); const result = await response.objectAsync(); render(result); } catch (error) { if (error instanceof CotomyRequestInvalidException) { showValidation(error.response); } else if (error instanceof CotomyConflictException) { showConflictMessage(); } else { showUnexpectedError(error); } } This reflects Cotomy\u0026rsquo;s broader design: server side defines strict HTTP 
contracts, and client side handles validation failures, API failures, and server errors in one consistent exception flow. The result is simpler UI code and more predictable error handling.\nConclusion Promise chains were used for historical reasons, and they were the practical option before async/await became standard. But async/await made exception structure far more natural for business application flows.\nCotomy intentionally treats HTTP failures as exceptions, maps them into explicit exception classes, and expects validation to be designed as an HTTP contract. When server contracts are strict, client design becomes simpler, clearer, and more maintainable.\nDesign Series This article is part of the Cotomy Design Series, which explores architectural decisions behind the framework.\nSeries articles: CotomyElement Boundary , Page Lifecycle Coordination , Form AJAX Standardization , Inheritance and Composition in Business Application Design , API Exception Mapping and Validation Strategy, and Why Modern Developers Avoid Inheritance .\nNext article: Why Modern Developers Avoid Inheritance ","permalink":"https://blog.cotomy.net/posts/design/05-api-exception-mapping-and-validation-strategy/","summary":"How Cotomy treats API failures and validation through structured HTTP responses and exception mapping.","title":"API Exception Mapping and Validation Strategy"},{"content":"This note continues from Form AJAX Standardization , Page Lifecycle Coordination , and CotomyElement Boundary .\nEarly Programming Experience I first learned programming at technical college. The first language was COBOL, and after that I studied C.\nCOBOL, however, never became my practical language at work. Unfortunately, by the time I graduated COBOL was already considered a declining technology, and there were plenty of experienced engineers around. A brand-new developer like me was not exactly in high demand.\nThe first language I actually used professionally was Visual C++. 
That was also my first real experience with object-oriented programming. In the early 2000s, inheritance was used very naturally when modeling domain structures. Encapsulation, polymorphism through inheritance, and the idea that data and behavior should be packaged together were often described as a major conceptual shift from structured programming.\nTo be honest, I did not strongly feel that shift at the time, probably because my own skill level was still low. I was still busy learning how to build things that simply worked. But for engineers who were already performing at a high level in that period, I believe the transition was much more impactful.\nHow Object-Oriented Thinking Was Drilled Into Us At that time, junior programmers were drilled on two relationships as core foundations: is-a and has-a. I was taught those repeatedly, and I treated them as one of the most important practical anchors when learning object-oriented design.\nLooking back, I sometimes think even other OOP elements were often explained through that lens by the people teaching us. Whether that was their explicit intention or not, at least in my case the message was very consistent: understand is-a and has-a first, then build the rest of your design thinking on top of that.\nAfter more than ten years of mostly solo development, one thing has become clear to me recently. The industry perspective around those concepts has shifted quite a lot.\nCotomy and Classical OOP Cotomy itself is still built in what many people today would call a more classical object-oriented style. Many frameworks, even modern ones, still rely heavily on inheritance internally when they need to structure behavior at scale. CotomyPageController and other base classes in Cotomy follow that approach.\nThis is not a claim that inheritance is universally superior. My personal view is simpler. 
The most important practical value of object-oriented programming is that data and the operations acting on that data are defined together.\nHumans are not very good at connecting data defined in one place with behavior defined somewhere else. When those two are separated too much, mental distance grows. That distance is not an abstract concern for me. It is one of the main causes of fragile design in real projects.\nCotomy itself is designed to reduce those distances wherever possible. One visible example is that CotomyElement accepts HTML and scoped CSS together. That choice was intentional. I did not avoid TSX-style embedded HTML because of technical limitations. I avoided it because I preferred a different architectural boundary.\nThe Modern Frontend Position In modern frontend development, inheritance is often discouraged.\nIdeas like this already existed more than ten years ago, but the context was different. Earlier discussions were often about inheritance in data modeling. In that area, a base class can create strong pressure against change. Once many parts of the model depend on one inheritance line, even small changes can become expensive. Systems can become harder to evolve while they are still under active development.\nI am relatively comfortable with inheritance and tend to use it more than many current frontend discussions would suggest. Even then, I do not use inheritance just to share processing, especially in domain models. That kind of inheritance often creates accidental coupling rather than design clarity.\nAt the same time, if a base type is semantically complete and truly stands as a valid abstraction, I think it should be inherited even when it is an abstract class. 
For me, the key is not whether inheritance is fashionable, but whether the meaning of the base remains structurally sound.\nThe Composition Recommendation Modern guidance often says composition should be preferred over inheritance.\nI understand why that recommendation spread, but the phrase has always felt a little strange to me. Inheritance and composition represent different kinds of relationships in object-oriented design. They solve different problems.\nInheritance and Composition in Practice A very ordinary example is Windows Forms. A screen class inherits from Form. Then controls are placed inside the form. That second part is composition.\nThis pattern is natural and common. Cotomy follows the same pattern. A page controller is inherited as a structural base, and then elements and forms are composed inside the page.\nI do not believe inheritance should solve everything. But composition also does not solve everything.\nCotomy itself intentionally uses both approaches. Page controllers are defined through inheritance, while UI elements and forms are composed inside the page. This separation keeps responsibilities clear.\nReact and Composition-Driven Frameworks Frameworks such as React are designed around composition from the beginning. With that architecture, inheritance is less practical as a primary design mechanism.\nReact itself still uses inheritance internally through React.Component, but component reuse and extension are expected to be implemented through composition rather than subclassing.\nI personally do not prefer that style, but I can understand why it became dominant.\nA Broader View Most modern language features exist for one practical purpose: helping humans understand large and complex systems.\nSoftware systems are fragile logical constructions. Encapsulation, inheritance, composition, and modularization are all tools for reducing cognitive load. 
None of them is perfect, and each can be misused.\nFor that reason, I am cautious about arguments that fully reject one feature only because it can be abused. Misuse is real, but complete rejection is not always productive.\nConclusion Inheritance can certainly be abused, and many teams have painful examples of that. Still, rejecting it entirely may also discard a useful conceptual tool.\nFor engineers who need to deliver reliable systems under time constraints, the goal is not ideological purity. The goal is clarity of design.\nDesign Series This article is part of the Cotomy Design Series, which explores architectural decisions behind the framework.\nSeries articles: CotomyElement Boundary , Page Lifecycle Coordination , Form AJAX Standardization , Inheritance and Composition in Business Application Design, API Exception Mapping and Validation Strategy , and Why Modern Developers Avoid Inheritance .\nNext article: API Exception Mapping and Validation Strategy ","permalink":"https://blog.cotomy.net/posts/design/04-inheritance-and-composition-in-business-application-design/","summary":"A reflection on inheritance and composition from long-term business system development experience.","title":"Inheritance and Composition in Business Application Design"},{"content":"This continues from CotomyPageController in Practice .\nBusiness Screens Often Share the Same Structure As discussed in other articles of this journal, business systems usually define many entities, and those entities are connected through complex relationships. Users still need to perform concrete operations on them every day: search, inspect, create, update, and delete.\nCreate and edit requirements can vary widely by business rule and approval flow. Some screens need simple forms, while others require conditional fields, embedded tables, or staged confirmation. 
Even then, the operational core is often still CRUD with the same user intent: find data, understand data, and change data safely.\nEntity types are also broader than a short list can express. Depending on the system, targets can include users, products, suppliers, orders, production records, inventory snapshots, pricing rules, and many other domain-specific units. Display and search patterns differ, but teams usually keep the same operational screen flow so users can move across modules without relearning the UI.\nThat consistency is not only an implementation convenience. It helps users become productive faster and reduces hesitation during operations. For that reason, screens with completely different interaction styles should be introduced carefully and only when requirements truly demand it.\nExample: A List Page Structure A typical list page starts with a condition form and a table. Each row can navigate to a detail screen when clicked.\n\u0026lt;form id=\u0026#34;condition-form\u0026#34;\u0026gt; \u0026lt;input type=\u0026#34;text\u0026#34; name=\u0026#34;keyword\u0026#34; placeholder=\u0026#34;Search\u0026#34;\u0026gt; \u0026lt;button type=\u0026#34;submit\u0026#34;\u0026gt;Search\u0026lt;/button\u0026gt; \u0026lt;/form\u0026gt; \u0026lt;table id=\u0026#34;entities-table\u0026#34;\u0026gt; \u0026lt;thead\u0026gt; \u0026lt;tr\u0026gt; \u0026lt;th data-sort=\u0026#34;id\u0026#34;\u0026gt;ID\u0026lt;/th\u0026gt; \u0026lt;th data-sort=\u0026#34;name\u0026#34;\u0026gt;Name\u0026lt;/th\u0026gt; \u0026lt;/tr\u0026gt; \u0026lt;/thead\u0026gt; \u0026lt;tbody\u0026gt; \u0026lt;tr data-entity-id=\u0026#34;1\u0026#34;\u0026gt; \u0026lt;td\u0026gt;1\u0026lt;/td\u0026gt; \u0026lt;td\u0026gt;Example\u0026lt;/td\u0026gt; \u0026lt;/tr\u0026gt; \u0026lt;/tbody\u0026gt; \u0026lt;/table\u0026gt; This same pattern appears across many screens: user lists, product lists, and order lists.\nIntroducing CotomyListPageController Because this structure repeats across many screens, it 
is natural to standardize the behavior around it.\nA practical way to formalize this pattern is defining a controller class such as CotomyListPageController on top of CotomyPageController.\nThis is not presented as the only right architecture. It is one concrete pattern for building an application-level foundation: you define behavior categories first, then place each screen in a clear category boundary.\nIn this approach, a list controller is not only about reusing event handlers. It becomes a structural base for a screen category inside the application, which helps both implementation consistency and design decisions.\nClass Structure classDiagram CotomyPageController \u0026lt;|-- CotomyListPageController CotomyListPageController --\u0026gt; CotomyElement : entitiesTable CotomyListPageController --\u0026gt; CotomyQueryForm : conditionForm Lazy Element Access Pattern One useful pattern is storing CotomyElement | null members and resolving them lazily in getters with ??=.\nimport { CotomyPageController, CotomyElement } from \u0026#34;cotomy\u0026#34;; export class CotomyListPageController extends CotomyPageController { private _entitiesTable: CotomyElement | null = null; protected get entitiesTable(): CotomyElement { return this._entitiesTable ??= CotomyElement.byId(\u0026#34;entities-table\u0026#34;)!; } } This keeps DOM lookup logic centralized, makes page structure explicit inside the controller, and avoids repeated lookups when the same element is used across multiple controller methods.\nInitializing the List Behavior List pages usually need the same initialization: condition form setup and row click navigation.\nIn CotomyPageController, initializeAsync is the right place for this screen-level wiring. CotomyQueryForm is a natural fit here because list screens usually express search state in the URL. 
It reads the form values, rebuilds the query string, and navigates to the updated URL, which keeps search and paging behavior explainable.\nimport { CotomyPageController, CotomyElement, CotomyQueryForm } from \u0026#34;cotomy\u0026#34;; export class CotomyListPageController extends CotomyPageController { private _entitiesTable: CotomyElement | null = null; protected get entitiesTable(): CotomyElement { return this._entitiesTable ??= CotomyElement.byId(\u0026#34;entities-table\u0026#34;)!; } protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { await super.initializeAsync(); const conditionForm = this.body.first(\u0026#34;#condition-form\u0026#34;, CotomyQueryForm); conditionForm?.initialize(); this.entitiesTable.onSubTree( \u0026#34;click\u0026#34;, \u0026#34;tr[data-entity-id]\u0026#34;, e =\u0026gt; { const row = (e.target as HTMLElement) .closest(\u0026#34;tr[data-entity-id]\u0026#34;); if (!row) return; const id = row.getAttribute(\u0026#34;data-entity-id\u0026#34;); if (!id) return; location.href = `/entities/${id}`; } ); } } In a real application, this often navigates to a detail page such as /products/{id} or /users/{id}. 
The /entities/{id} path here is only a generic example.\nWith this structure, every list screen can share the same base behavior, while each screen still controls its own endpoint and table schema.\nInitialization Sequence sequenceDiagram participant Script as Page Script participant W as CotomyWindow participant PC as CotomyPageController participant LC as CotomyListPageController participant QF as CotomyQueryForm participant T as entitiesTable Script-\u0026gt;\u0026gt;PC: CotomyPageController.set() W-\u0026gt;\u0026gt;LC: call initializeAsync() on window load LC-\u0026gt;\u0026gt;PC: super.initializeAsync() LC-\u0026gt;\u0026gt;QF: initialize condition form LC-\u0026gt;\u0026gt;T: register onSubTree click handler W-\u0026gt;\u0026gt;W: trigger cotomy:ready Extending the Base Controller In my own systems, the base controller layer is where operational common features are placed, especially for long-lived business screens.\nFor edit screens, one recurring concern is session timeout during AJAX operations. A shared controller can centralize re-login behavior and recovery flow so each screen does not reimplement the same failure handling.\nThe same applies to shared header and menu interactions. Navigation toggles, layout actions, and common shell controls are usually application-level concerns, so keeping them in a base controller keeps feature screens focused on business behavior.\nIn larger systems, I also separate controllers by segment, split by user group or business area. That segmentation can map to separate projects, for example by csproj units, and each segment can have its own page controller lineage.\nThis gives a stable structure such as: App-level base controller, then segment controller, then screen-category controller like list or detail, and finally the page-specific controller.\nWhy Page-Level Structure Matters Common code can also be extracted as utility functions. 
That is valid, but page-level behavior often remains easier to reason about when grouped in page controllers.\nInheritance debates are common, and many teams prefer to avoid it entirely. In my own work, a shallow controller hierarchy has been a readable option for operational business screens, because it makes screen structure and shared behavior boundaries easy to see.\nIf inheritance does not fit your team style, composition with explicit wiring is also a good choice: reusable behaviors such as list navigation or pagination can be implemented as separate objects and injected into a page controller instead of inherited.\nIn practice, maintainability usually gets worse when the decision becomes ideological, either always inherit or never inherit. What matters more is whether boundaries stay clear as the system grows.\nFor my own projects, these checks are practical:\nIf the shared behavior runs on the same page lifecycle, a base controller is often reasonable. If differences are mostly configuration, shallow inheritance usually stays readable. If the hierarchy grows deep, composition is usually safer. If impact analysis becomes hard, refactor boundaries before adding features. More importantly, it gives designers and implementers a concrete base to define application feature categories such as list pages, detail pages, and editor pages. That categorization guides how new screens should be structured from the start.\nHaving said all that, as a side note, I personally still like solving this area with inheritance. I do avoid unnecessary depth, but if each class has a clear name and stands on its own meaning, I do not think deeper layering needs to be rejected automatically.\nAt the same time, I know many people think very differently about this. Especially developers who are used to frontend-heavy workflows often avoid inheritance itself, and that is a valid approach. React, for example, is generally structured around composition rather than class inheritance. 
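As a concrete sketch of that composition alternative, reusable screen behaviors can be modeled as separate objects injected into the page object rather than inherited from a base controller. All names below (PageBehavior, RowNavigation, ComposedListPage, and the urlFor parameter) are hypothetical illustrations, not part of the Cotomy API:

```typescript
// Hypothetical sketch: behaviors are separate objects injected into the
// page object instead of inherited from a base controller.
interface PageBehavior {
  readonly name: string;
  initialize(): void;
}

// A list-navigation behavior. It only owns the row-id to URL mapping here,
// so the sketch stays independent of the DOM.
class RowNavigation implements PageBehavior {
  readonly name = "row-navigation";
  constructor(private urlFor: (id: string) => string) {}
  initialize(): void {
    // A real screen would register a delegated row-click handler here.
  }
  targetUrl(id: string): string {
    return this.urlFor(id);
  }
}

class ComposedListPage {
  private readonly behaviors: PageBehavior[];
  constructor(...behaviors: PageBehavior[]) {
    this.behaviors = behaviors; // injected, not inherited
  }
  initialize(): void {
    for (const b of this.behaviors) b.initialize();
  }
  behavior(name: string): PageBehavior | undefined {
    return this.behaviors.find(b => b.name === name);
  }
}

const page = new ComposedListPage(new RowNavigation(id => `/products/${id}`));
page.initialize();
const nav = page.behavior("row-navigation") as RowNavigation;
console.log(nav.targetUrl("42")); // prints "/products/42"
```

The row-click navigation from the inherited version is still expressible this way, but each behavior now stands alone and can be reused or replaced without touching a class hierarchy. Both shapes can coexist in one codebase, and which one reads better is largely a matter of team convention.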
So this is simply my personal preference from business-screen architecture, not something I want to impose on others. If Cotomy feels less suited to composition-heavy extension, that may simply be my own design preference showing through.\nConclusion CotomyPageController makes it easier to stabilize repeated business screen patterns such as list, detail, and editor flows.\nCotomyListPageController in this article is only one example, but standardizing behavior at the page level can significantly improve consistency and maintainability in long-lived business applications.\nPractical Guide This article is part of the Cotomy Practical Guide, which focuses on hands-on usage patterns for the framework.\nSeries articles: Working with CotomyElement , CotomyPageController in Practice , Standardizing CotomyPageController for Shared Screen Flows, and Building Business Screens with Cotomy .\nNext Next article: Building Business Screens with Cotomy , with a code-first Razor Pages example that wires one screen with CotomyPageController and CotomyEntityFillApiForm.\nLinks Previous: CotomyPageController in Practice . More posts: /posts/ .\n","permalink":"https://blog.cotomy.net/posts/practical-guide-3-standardizing-cotomy-page-controller/","summary":"How to standardize list-style screens with CotomyPageController-based structure.","title":"Standardizing CotomyPageController for Shared Screen Flows"},{"content":"Previous article: Working Inside Japan\u0026rsquo;s Contract Engineering Services Model Opening It has been about twenty-five years since I started working after graduating from school. During many of those years, I also accepted small to medium software development jobs on the side from local companies.\nMost of that work was not glamorous. 
It usually meant building custom internal systems for relatively small organizations, solving very ordinary operational problems that still mattered a great deal to the people using them every day.\nThe Access Years In the early years, most of those systems were built with Microsoft Access.\nThe reason was simple: its cost-to-value ratio was extremely good.\nAccess was not an ideal platform if you were thinking about large-scale growth or long-term architectural evolution. It had clear limits in scalability, and once a system began to grow beyond a certain point, its weaknesses became difficult to ignore.\nBut for the clients I was dealing with at that time, those limits were usually acceptable. The systems were small, the budgets were small, and the business complexity was still within a range that Access could handle without much trouble. In that environment, it was often more than sufficient.\nLarger Work, but Not Yet Deliverable Alone As time went on, some repeat customers started bringing me larger projects with more substantial budgets.\nAt first that sounded encouraging, but many of those opportunities had to be declined. The problem was not lack of interest. The problem was risk.\nIf the project size grew too much, the development time became difficult to predict, and once I could no longer guarantee delivery by myself, accepting the work felt irresponsible. I had to think not only about whether I could write the code, but whether I could actually carry the whole project through without creating trouble for the client.\nAround that period, I became much more interested in system design quality itself. I was thinking more seriously about architecture, consistency, and how to build systems that could keep working without constant friction.\nIronically, this interest grew at a time when my influence over architecture decisions in my primary workplace was still limited. 
In corporate projects, there were many cases where I could not freely pursue the design ideas I wanted to explore. That made my independent work feel even more important, because it was one of the few places where I could test my own thinking.\nMoving Toward Web Development I began working seriously on web development a little more than ten years ago.\nBefore that, when a project grew beyond the level that Access could handle comfortably, I usually chose a two-tier client and server system with MySQL or another OSS database. That was the practical step up I personally used for somewhat larger work, and for quite a while it was the model I relied on most often.\nLater, I gradually moved some of that work toward a three-tier client and server style using Java and GlassFish.\nThat shift mattered for a very practical reason. In the earlier two-tier model, responsibility was always at risk of becoming scattered between the client and the database layer. Once I moved more logic and control to the server side, I could guarantee data more consistently from one place. Validation, processing rules, update order, and transactional consistency all became easier to manage when the server was clearly responsible for them.\nI would not say that this solved every problem, but it gave me a stronger sense that the system could protect itself structurally instead of relying too heavily on each client behaving correctly. Looking back, many of the structural preferences I still have today were already starting to take shape during that period.\n.NET already existed when I started my career. In theory, it was a very attractive option for me from the beginning.\nBut at that time it was not free.\nThat detail matters more than people may remember now. Japan was in the middle of what is often called the employment ice age, the economy was weak, and spending money on development tools as an individual was not a casual decision. 
Buying software for work was a real financial burden.\nI bought Visual C# separately and worked with what I could afford. Visual Studio Professional and MSDN subscriptions often felt out of reach. I do not remember that period with resentment, but I do remember it with a certain heaviness. You wanted to improve your tools and your skills, but every step forward had a cost attached to it.\nStill, I kept going. There was not much romance in it. It was simply the normal determination of trying to keep building with the resources available at the time.\nEarly Architecture Experiments Even in those early projects, I was already experimenting with ideas that were probably unusual for the scale of work I was doing.\nI tried to build OR-mapper-like abstractions. I implemented something close to UnitOfWork. I introduced abstraction layers aggressively, sometimes more aggressively than I should have.\nMany of those attempts did not work very well.\nSome were too complicated for the actual size of the project. Some created extra structure without enough return. Some were simply immature ideas built before I had enough experience to shape them properly.\nBut the motivation behind them was real. I was developing alone, and without some degree of automation and structural consistency, the amount of work would quickly exceed the time I could realistically invest. I could not afford to rebuild every pattern by hand each time.\nEven then, the systems I delivered were still far cheaper than typical corporate development projects. And despite that lower cost, I often continued supporting them after delivery.\nI wrote contracts carefully so that support obligations remained limited, because that was the only responsible way to protect both sides. Still, reality was never as clean as the contract language. 
If a customer had a problem in a system I had built, I usually ended up responding.\nCoding Was Not the Hardest Part Looking back over the years of solo development, including later periods when I built internal systems alone inside my current company, I do not think writing code was ever the biggest challenge.\nThe harder problems were environment setup, deployment, and operational maintenance.\nThose were the places where work became tense.\nThe Weight of Deployment Deployments always carried anxiety.\nEven when I prepared carefully, deployment took time. The work involved manual steps. And because I was working alone, there was no independent reviewer checking the final operation from a separate perspective.\nCode review quality is naturally limited when one person is responsible for everything. Sometimes mistakes happened.\nEach incident taught me something, and after every problem I improved part of the process. But human operations always have limits. You can reduce risk, but if enough work depends on one person performing a sequence of manual steps correctly, you never fully relax.\nDomains, Certificates, and New Kinds of Risk When I started building more web systems, a different category of operational problems appeared.\nSuddenly I had to worry about domain expiration and SSL certificate renewal.\nThose may sound like minor administrative details, but they were not minor at all. They became real operational risks, because they could break a live service for reasons completely unrelated to the application code itself.\nDuring the PHP era, I deliberately tried to avoid that class of failure by using rental hosting services and relying on provider-managed domains and certificates whenever possible. It was not elegant architecture. 
It was defensive pragmatism.\nClient and Server Update Failures In the older two-tier client and server systems, I implemented automatic update mechanisms so that client applications could be kept in sync more safely.\nEven then, failures still happened. There were cases where administrators forgot to update servers properly, and eventually I would receive an unexpected phone call because the system had stopped working.\nThose moments were a reminder that even a good mechanism can fail once it passes through real operating environments.\nWhy Cloud Became the Turning Point In the end, many of these operational problems were resolved only after I started running systems on Azure.\nLooking back, the main reason I moved toward cloud infrastructure was not fashion, and it was not even raw scalability at first. The strongest motivation was to make deployment and everyday operation safer, calmer, and as close to maintenance free as possible.\nThat was the real turning point.\nHow I Operate Now Today, most of the systems I manage run on Azure. A few still remain on rental servers, but the majority have already migrated.\nThe deployment flow is now much more controlled. I push to the main branch, deployment runs automatically to staging, I verify the result, and then I swap to production.\nI do not think this style of operation is especially rare now. Many engineers probably use some variation of the same flow, with automated deployment to staging, verification, and then a controlled release to production.\nThat makes production deployment effectively zero downtime in normal cases, and it reduces operational mistakes substantially. The difference compared with the earlier manual periods is large enough that I no longer think of cloud migration primarily as infrastructure modernization. 
For me, it was an operational safety improvement.\nMigrations Still Need Respect Database migrations still require attention.\nBut because I usually develop alone, branch conflicts are rare, and migrations are generally manageable. One of the checks I still care about is whether the old code path remains safe after the migration has been applied. That point deserves more caution than people sometimes give it.\nThe process is much better than it used to be, but it is not something I treat casually.\nClosing In an ideal world, everything would be automated perfectly.\nBut even if the automation is written as scripts, or someday generated more and more by AI, I still feel uncomfortable letting changes reach production without personally verifying them. I think that instinct is healthy. Responsible engineers confirm critical changes themselves.\nEven now, deployments and migrations can still create a small knot in my stomach. Perhaps that tension is simply part of software development. We improve the process, reduce the risks, and build better systems than we had before, but someone still has to look carefully, make the decision, and accept the result.\nPrevious article: Working Inside Japan\u0026rsquo;s Contract Engineering Services Model Next article: AI in Real Development Work ","permalink":"https://blog.cotomy.net/posts/misc/early-architecture-attempts/","summary":"Early personal development work, architecture experiments, and the operational challenges that eventually led to cloud deployment.","title":"Early Architecture Attempts"},{"content":"Previous article: Reaching Closures to Remove Event Handlers Later ElementWrap was never meant to control an entire page When I first created ElementWrap, the intention was narrow. I wanted to simplify DOM operations so daily screen work would be less repetitive.\nAt that stage, I was not trying to create a page-level orchestration model. 
I only wanted a practical wrapper around common element operations.\nMany business systems, especially CRUD-centered systems, do not need heavy TypeScript behavior on the client side. Most screens can run with minimal interaction logic. That assumption was realistic in many of my earlier projects.\nWhy that assumption started to break Later, one project required more coordination than that assumption could support. It was a small sales management system I developed as an individual contract.\nThe domain itself was not unusual. It handled products, customer-specific quotation prices, orders, shipping destinations, and consignor handling. In Japanese business practice, shipments are often sent under a trading company acting as the consignor, so that data relationship had to be represented correctly.\nThe system generated delivery slips, invoices, shipping data, and order records. A key objective was to import data from purchase orders whenever possible and reduce manual document preparation.\nIn other words, it was a practical business system with modest scale but non-trivial coordination needs.\nWhy the system was built from scratch Before implementation, we evaluated existing SaaS products. None matched the required workflow cleanly. Customization cost was high, and ongoing management fees would accumulate over time.\nBecause there was some technical knowledge inside the company, we judged that Azure hosting plus a custom system was operationally manageable.\nThe scope was intentionally limited. It covered users, products, packaging, quotations, orders, deliveries, and billing. The plan assumed around 20 hours per week and roughly one year for design and implementation.\nThis was not a large digital transformation project. It was a focused attempt to build exactly what operations needed.\nThe limits of everything starting from the load event In older systems built with ElementWrap, I mostly used a simple pattern. 
Register initialization logic in the load event, then let user events drive the rest.\nBecause many interactions were event-driven, this pattern worked better than expected for a long time.\nBut the order entry screen in this project became much more complex. Order creation needed customer selection, quotation selection, product selection, shipping destination selection, and consignor selection. Each choice influenced other choices. Some fields appeared or disappeared depending on earlier decisions.\nTechnically it was possible to place everything inside one entry-point handler. But architecturally that would have produced one very large and fragile script. The entry point itself would become a maintenance risk.\nThe idea of a PageController That was the point where the page controller idea became necessary.\nThe idea itself was not original. It is a common pattern, and I do not remember exactly where I first learned it. What mattered was that the pattern matched the problem at the right time.\nI needed one place that could own page-level coordination.\nThe first idea: run() My first design attempt was straightforward. PageController had a run() method, and entry code would instantiate and run the controller.\nconst controller = new OrderPageController(); controller.run(); This idea actually came from older application architectures I had used in C# desktop systems. In hindsight, this was a clear failure on my side. I carried a desktop pattern into browser runtime design without enough adaptation. I implemented it once, but noticed the mismatch early in development and replaced it before production use. The typical relationship was simple. 
A controller instance is created, run() is invoked, the controller prepares the screen, the screen exposes an interface, and the controller handles callbacks through that interface.\npublic partial class OrderForm : Form { public interface IOrder { void OnSubmit(Order order); } private readonly IOrder _events; public OrderForm(IOrder events) { _events = events; } } public class OrderController : OrderForm.IOrder { public void Run() { var form = new OrderForm(this); form.Show(); } public void OnSubmit(Order order) { // process order } } Conceptually this was clean in desktop architecture. In browser architecture, it was the wrong fit.\nWhy run() failed in the browser In browser code, run() could execute before the DOM was fully ready. That creates timing fragility immediately and turns initialization order into a source of hidden defects.\nThere were obvious workarounds. I could add DOM-ready handlers or move script tags to the bottom of the page. But those fixes felt procedural, not structural. Since jQuery workflows normally add startup logic through ready handling, not applying that discipline consistently from the start was a clear lesson for me.\nVery early in development, I judged this design as architecturally incomplete and dropped it. I did not want page initialization correctness to depend on local script placement habits. This experience became a direct reminder that reusing patterns across different architecture types is risky. A pattern that is valid in desktop UI can fail when lifecycle ownership and execution timing are controlled by the browser.\nThe design that survived: static registration The approach that survived changed ownership of initialization. Instead of creating a controller instance directly in entry code, the page registers a controller type. 
The framework then controls when the instance is created and initialized.\nThis is the direction that became CotomyPageController.\nCotomyPageController.set(class extends CotomyPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { // page initialization logic } }); Internally, set() registers a load-time path and runs initialization only when the page is ready. That moved timing responsibility from each screen script into one framework-level mechanism.\nWhy this structure worked This structure solved several practical problems at once.\nOne controller per page became explicit. Page-level coordination had a stable home. Related UI references could be stored as controller properties. Multiple forms could be managed as one coordinated unit. List pages and detail pages could share the same control style.\nReal screen behavior became easier to reason about once controller subclasses were shared as reusable page types. With common subclasses for list and detail flows, a row click could navigate to detail consistently. A page restored by browser back could reload data through one predictable path. Behavior moved from scattered handlers toward centralized lifecycle control through those shared controller boundaries.\nIn practice it is mostly for shared behavior In day-to-day work, most pages are not extremely complex. So PageController is often used to standardize shared patterns rather than to host elaborate logic.\nThat is why base classes such as ListPageController and DetailPageController became useful in practice. They encapsulate common behavior, clarify responsibility, and reduce repeated setup code.\nThe long-term effect is simple. Development becomes faster, and maintenance becomes calmer, because each page has one obvious coordination boundary. Looking back, that small decision quietly changed how I structured every screen afterward. 
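The registration-then-deferred-initialization idea behind set() can be sketched in a few lines. This is a hypothetical simplification for illustration only, not Cotomy's actual implementation; the names PageController, ControllerCtor, and the readiness check are mine.

```typescript
// Minimal sketch of static controller registration (hypothetical,
// simplified): the page script registers a type, and the framework,
// not the page script, decides when the instance is created.
type ControllerCtor = new () => PageController;

abstract class PageController {
  public static set(ctor: ControllerCtor): void {
    const start = () => {
      // Instantiation and initialization happen here, not in page code.
      void new ctor().initializeAsync();
    };
    // In a browser, defer until the DOM is ready; otherwise start now.
    if (typeof document !== "undefined" && document.readyState === "loading") {
      document.addEventListener("DOMContentLoaded", start, { once: true });
    } else {
      start();
    }
  }

  protected async initializeAsync(): Promise<void> {
    // Subclasses override this with page initialization logic.
  }
}

// Usage mirrors the pattern in the article: register a type, nothing more.
let initialized = false;
PageController.set(class extends PageController {
  protected override async initializeAsync(): Promise<void> {
    initialized = true;
  }
});
```

The point of the sketch is the ownership shift: entry code never calls new or run() itself, so initialization timing cannot depend on local script placement.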
More importantly, this was the point where the design moved from a utility class library toward an actual framework boundary.\nDevelopment Backstory This article is part of the Cotomy Development Backstory, which traces how Cotomy\u0026rsquo;s architecture emerged through real project constraints.\nSeries articles: Building Systems Alone , Early Architecture Attempts , The First DOM Wrapper and Scoped CSS , API Standardization and the Birth of Form Architecture , Page Representation and the Birth of a Controller Structure , The CotomyElement Constructor Is the Core , Dynamic HTML Boundaries in CotomyElement , Reaching Closures to Remove Event Handlers Later , The Birth of the Page Controller, and The Birth of the Form Abstraction .\nNext article: The Birth of the Form Abstraction ","permalink":"https://blog.cotomy.net/posts/development-backstory/09-the-birth-of-page-controller/","summary":"Why page-level control became necessary and how the first PageController design appeared.","title":"The Birth of the Page Controller"},{"content":"Previous article: From Global CSS Chaos to Scoped Isolation In the previous note, I wrote about style boundaries. This time, I want to focus on data boundaries and why my database design process changed.\nThis is not a tutorial. It is an architectural reflection based on how my own development practice evolved.\nA career built around databases Most systems I have built in my career have been database systems. Even when requirements looked like screen features, the real behavior of the system was usually decided by how data was stored, related, and updated.\nFor that reason, data representation was always a core design task for me, not a secondary implementation detail. The format of data and the persistence structure directly affected application structure, service boundaries, and operational safety.\nThe historical separation between schema and code For a long time, schema design and application code lived in different places. 
ER diagrams, table definition documents, and class diagrams were managed as separate artifacts. Implementation then followed those artifacts.\nEarly in my career, I did not question that separation. It was normal industry practice, and experienced teams adapted to it naturally. Developers became used to manually synchronizing schema changes with application code changes.\nIt was not elegant, but it was how many teams delivered systems reliably.\nMy design practice at that time Before starting implementation, I usually prepared table definitions and ER diagrams. Depending on project complexity, I also added class diagrams or simple object-flow sketches.\nThe goal was always the same. I wanted to understand data shape first, then trace how it moved through the system.\nAt that time, I considered this process obvious and never felt strong discomfort with it.\nTurning point from a CosmosDB project My perspective changed during a project where field teams needed to register many kinds of operational reports and consultation records in one system. The requirement was to search across heterogeneous records, not to perform heavy relational aggregation with frequent joins. Because of that, a traditional relational database was not automatically the best fit for the core storage model.\nDuring database evaluation, I encountered CosmosDB, which was still called DocumentDB at that time. I was looking for a storage model that behaved like a registry for diverse records, and CosmosDB matched that intent.\nCosmosDB worked well as a JSON document registry. Each record could be stored with a key, and the flexible schema fit real operational data where structure varied by report type. What I noticed quickly was that stored JSON matched serialized class structures very naturally.\nAt the same time, search requirements exposed a limitation. CosmosDB queries were not ideal for complex cross-document search in that period, and full-text search was not available. 
So I deliberately combined CosmosDB with Azure Search, later renamed Azure Cognitive Search. CosmosDB became the primary document registry, and Azure Search became the indexing and cross-document search engine.\nThat felt different from my previous database work. The data shape was now being expressed directly by program structure. It was the first time I had worked with a database model where stored structure and program structure aligned this directly.\nImpact of JSON document storage In this model, I no longer started by defining schema in an external artifact and then mapping code to it. Data format emerged from application-side classes.\nCompared with traditional RDB workflows, this felt unexpectedly refreshing. It reduced both the psychological and structural distance between what the program was and what the database stored. That realization later prepared the ground for adopting Entity Framework.\nEntity Framework already existed at that time. I simply had not encountered it in my own projects yet.\nRealization: data and code are inseparable The main insight was simple. Data design and program design cannot be separated in practice.\nIf those definitions diverge, the application eventually breaks at runtime or during maintenance. After the CosmosDB experience, the old separation between schema documents and application models started to feel increasingly uncomfortable.\nThat discomfort changed how I evaluated database tooling.\nAdopting Entity Framework with code-first After that, I moved to Entity Framework with a code-first approach. Entity classes became the primary expression of data structure and system behavior assumptions. With code-first modeling, data design and behavior design happened in the same place, instead of being split across separate artifacts. That removed a conceptual gap between how the system should behave and how data should be shaped. 
Entity Framework migrations became the synchronization mechanism between code and database.\nThis approach still requires operational discipline. Migration sequencing, rollout timing, and rollback planning are real concerns.\nEven with those risks, the conceptual clarity was worth it for my projects.\nLimits of code-first Code-first does not solve every database concern. Database-specific features, full-text indexing strategies, and specialized performance tuning often still need manual configuration.\nSo I do not treat EF as total automation. But for core structure definition, keeping the source of truth in code greatly reduces drift.\nPractical impact on development In the last several years, every system I built has used Entity Framework. The operational effect has been consistent. Schema and code stay aligned with less coordination work. Manual SQL writing decreases. Spelling mistakes in handwritten SQL become rare in daily development.\nAnother practical gain is documentation. Schema documents and metadata can now be generated with tools, and increasingly with AI-assisted workflows, from the same model definitions used in implementation.\nWhy this matters most for small teams The biggest advantage appears in solo development or small teams. When one person or a few people handle both application and data concerns, reducing manual synchronization has direct impact on delivery speed.\nLess synchronization overhead means less operational friction. It also means fewer subtle mismatch bugs between schema and code.\nFor small teams, that change is not just convenience. It is a structural productivity multiplier.\nClosing reflection Traditional database-first design is not wrong. 
It remains valid in many contexts.\nBut in modern application-driven development, defining data structures directly in code has become a strong architectural advantage in my work.\nC# Architecture Notes This article is part of the Cotomy C# Architecture Notes, which reflect on backend and project-structure decisions around business systems.\nSeries articles: Why I Chose C# for Business Systems and Still Use It , From Global CSS Chaos to Scoped Isolation , Unifying Data Design and Code with Entity Framework, and How I Split Projects in Razor Pages Systems .\nNext article: How I Split Projects in Razor Pages Systems ","permalink":"https://blog.cotomy.net/posts/csharp-architecture/03-unifying-data-design-and-code-with-entity-framework/","summary":"How code-first modeling in Entity Framework reduced the historical gap between database schema and application logic.","title":"Unifying Data Design and Code with Entity Framework"},{"content":"Previous article: Why I Chose C# for Business Systems and Still Use It In the previous note, I wrote about project boundaries in C#. This time, I want to focus on style boundaries.\nThis is not a tutorial. It is a reflection on how my CSS architecture changed after repeated failures in real business screens.\nThe global style.css era At the beginning, I put almost all styles into one global style.css. The entire application depended on that file. That was not a deliberate architecture decision. It was mainly a result of my limited frontend experience at that time.\nI do not consider that design good. It worked while the system was small, but it broke down as soon as screen count and variation increased.\nAs features accumulated, style.css only expanded. Selectors became harder to trace because many rules were generic and far from the screens they affected. The cascade itself was not the problem. The problem was that ownership of the cascade was unclear. 
When ownership is unclear, a change made for one page can silently alter another page.\nWhen this happened repeatedly, teams started to rely more on inline styles as a local escape hatch. That reduced short-term risk for one ticket, but it increased long-term inconsistency. During layout regressions, root-cause investigation slowed down because rule precedence had to be reconstructed from too many locations. The style system became operationally fragile.\nThe categorization phase The first structural improvement was simple categorization by purpose. I split styles into frame.css, parts.css, list.css, and editor.css.\nThis pattern still exists in my systems today. It was not perfect, but it was a real improvement. Collision frequency dropped because broad layout rules and reusable part rules were no longer mixed without intent. Debugging also became easier because investigation started from a narrower file set.\nMore importantly, this phase changed how I thought about CSS. I stopped treating styles as one shared text asset and started treating them as responsibility domains. Once that perspective appeared, further separation became easier to justify architecturally.\nTransition to Razor Pages and scoped CSS When I moved back to C# and Razor Pages, scoped CSS had a strong impact on my architecture decisions.\nTo be clear, scoped CSS is not unique to C#. I do not claim that C# created the concept. I am not comparing ecosystems here. I first encountered scoped CSS in a C# project by chance, through Razor Pages. That first encounter changed how I structured screen styles.\nMy return to C# itself was based on familiarity, syntax preference, and strict typing. Azure strategy also influenced that decision. Azure fit my operational and organizational context best at the time.\nWhat the Razor Pages mechanism changed In Razor Pages, Main.cshtml pairs naturally with Main.cshtml.css. 
The same pattern applies to layout files, such as _Layout.cshtml and _Layout.cshtml.css.\nThat pairing shifted style ownership. Layout-level concerns that used to live in frame.css moved into _Layout.cshtml.css. Page-specific parts moved into each page-local stylesheet. The path from markup to style became physically short, and that reduced accidental cross-screen coupling.\nThe operational impact was immediate. Unintended style bleed that used to trigger late-night debugging sessions almost disappeared, and day-to-day screen maintenance became much calmer.\nWhy shared list.css and editor.css still remain Even after adopting scoped CSS, I kept some shared files such as list.css and editor.css. This is intentional.\nScoped CSS works through attribute-based isolation, which is excellent for local ownership. Under the hood, the build process rewrites selectors by attaching a generated attribute to the component root and prefixing matching selectors accordingly. But some formatting responsibilities are cross-screen by design, especially table conventions and form-field structures that must stay visually consistent in multiple pages. For those domains, centralized shared CSS still provides a better control point than repeating equivalent rules across many scoped files.\nSo the model is not total replacement. It is selective isolation with explicit shared domains.\nElementWrap and the dynamic CSS boundary problem Before CotomyElement, I used an earlier abstraction called ElementWrap. As dynamic DOM generation increased, a new boundary problem appeared. CSS definitions were often too far from the point where elements were created and used. That distance created architectural discomfort because behavior and ownership were no longer visible in one place.\nThe response was to allow HTML and CSS together at construction time. I intentionally did not use inline styles for this. 
Responsive behavior required media queries, and inline style attributes cannot represent media-query rules.\nTrying to hard-code viewport-based class switching in TypeScript felt fundamentally wrong.\nStyle tag lifecycle management Once HTML and CSS were allowed together, lifecycle discipline became mandatory. In CotomyElement, a style tag is generated automatically for scoped CSS, and cleanup is tied to element lifecycle. When the last element of the same scope disappears, the corresponding style tag is removed.\nThis behavior matters in dynamic grids and frequently recreated UI blocks. Without cleanup, style definitions accumulate over time and obscure the live state of the page. With cleanup, style resources follow the same lifecycle model as UI resources. That keeps runtime state understandable and reduces long-session drift.\nFrom [scope] to [root] in CotomyElement In early versions, the placeholder was [scope], largely because I was influenced by the term scoped CSS itself. But semantically, it was always pointing to the root element of each CotomyElement. That mismatch kept bothering me.\nAt that time, parts of the API were already publicly available even though version 1 had not been released yet, so changing the keyword was not a trivial decision. I prioritized semantic clarity and introduced a transition period where [scope] and [root] worked in parallel. When preparing version 1, I unified the syntax to [root].\nCurrent behavior injects the actual root scope prefix automatically for child selectors. In implementation terms, [root] is rewritten to a data-cotomy-scopeid selector, and if [root] is omitted, it is prefixed automatically. Even so, I still recommend writing [root] explicitly for readability. It makes ownership visible in the selector itself and reduces interpretation cost during reviews.\nThe current separation model At this point, style domains are clearer than before. I can separate three layers with less ambiguity. 
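The [root] rewrite described above can be illustrated with a small sketch. This is a hypothetical simplification, not Cotomy's actual rewriting code: the function name and the regex are mine, and it deliberately ignores at-rules such as media queries and nested braces.

```typescript
// Sketch of the scoped-CSS rewrite: "[root]" becomes the generated
// data-cotomy-scopeid attribute selector, and a selector that omits
// "[root]" is prefixed with it automatically.
function rewriteScopedCss(css: string, scopeId: string): string {
  const root = `[data-cotomy-scopeid="${scopeId}"]`;
  return css.replace(
    /(^|\})(\s*)([^{}]+)\{/g,
    (_match: string, brace: string, space: string, selector: string) => {
      const s = selector.trim();
      const rewritten = s.includes("[root]")
        ? s.split("[root]").join(root) // explicit [root] placeholder
        : `${root} ${s}`;              // omitted: prefix automatically
      return `${brace}${space}${rewritten} {`;
    }
  );
}
```

For example, `rewriteScopedCss("[root] .title { color: red; }", "a1")` yields `[data-cotomy-scopeid="a1"] .title { color: red; }`, and a bare `.note { top: 0; }` gains the same prefix automatically.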
System-wide styles remain in shared files when global consistency is required. Server-rendered component styles are isolated by Razor Pages scoped files. Client-generated component styles are isolated at runtime through CotomyElement scoped CSS.\nSince enabling CSS handling in ElementWrap and later formalizing it in CotomyElement, bugs caused by style-scope confusion have almost disappeared. Not because CSS became simple, but because style ownership became structurally explicit. What changed was not the syntax of CSS, but the structural visibility of style ownership.\nClosing reflection I do not know who first invented scoped CSS as an idea. But as an architectural tool, it deserves appreciation.\nC# Architecture Notes This article is part of the Cotomy C# Architecture Notes, which reflect on backend and project-structure decisions around business systems.\nSeries articles: Why I Chose C# for Business Systems and Still Use It , From Global CSS Chaos to Scoped Isolation, and Unifying Data Design and Code with Entity Framework .\nNext article: Unifying Data Design and Code with Entity Framework ","permalink":"https://blog.cotomy.net/posts/csharp-architecture/02-from-global-css-chaos-to-scoped-isolation/","summary":"How global CSS collapsed under scale, and how scoped CSS in Razor Pages reshaped my architectural thinking.","title":"From Global CSS Chaos to Scoped Isolation"},{"content":"Previous article: Dynamic HTML Boundaries in CotomyElement Why event handling had to exist in CotomyElement CotomyElement is fundamentally a wrapper around HTMLElement. Once that boundary was defined, event handling was not optional. A DOM wrapper that cannot register and control events is incomplete for real screen behavior.\nI also added convenience methods such as click and change. The intent was modest and practical. They were jQuery-style ergonomics to reduce boilerplate at call sites, not an attempt to invent a new event model.\nThe core event API remained on and off. 
Convenience calls existed to make frequent cases shorter and easier to scan in business UI code.\nThe first phase worked because the requirements were small Early on, event handling was mostly add-only.\nThe assumption was simple. Handlers were attached, pages rendered, user actions were processed, and full element removal ended the lifecycle in normal flows. Explicitly removing specific handlers was not a frequent requirement, so that edge stayed quiet.\nAt that stage, no major failure pattern was visible. The API felt good enough, and I moved on.\nWhere the trouble started: function identity under closures In plain on usage, identity is straightforward. A function reference is passed in, and the same reference can be used later for off.\nThe delegated subtree pattern changed that. CotomyElement provides onSubTree so one parent can react to events from matching descendants. In that path, the original handler is wrapped in a closure that checks selector matching first.\npublic onSubTree(event: string | string[], selector: string, handle: (e: Event) =\u0026gt; void | Promise\u0026lt;void\u0026gt;, options?: AddEventListenerOptions): this { const delegate: EventHandler = (e: Event) =\u0026gt; { const target = e.target as HTMLElement | null; if (target \u0026amp;\u0026amp; target.closest(selector)) { return handle(e); } }; const events = Array.isArray(event) ? event : [event]; events.forEach(eventName =\u0026gt; { const entry = new HandlerEntry(handle, delegate, options); EventRegistry.instance.on(eventName, this, entry); }); return this; } That closure is a new function instance created at registration time. JavaScript compares functions by reference identity, not by source similarity or behavior. 
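The identity problem is easy to reproduce outside the framework. A minimal, DOM-free illustration (not Cotomy code; makeDelegate and Handler are illustrative names):

```typescript
// A delegating wrapper is a new function instance on every call,
// so reference equality -- the only equality JavaScript offers for
// functions -- cannot recover it from the original handler alone.
type Handler = (e: unknown) => void;

const makeDelegate = (selector: string, handle: Handler): Handler =>
  (e: unknown) => {
    // (selector matching elided; the point is the fresh closure)
    void selector;
    handle(e);
  };

const handle: Handler = () => {};
const a = makeDelegate(".row", handle);
const b = makeDelegate(".row", handle);

console.log(a === b);      // false: two distinct instances
console.log(a === handle); // false: the wrapper is not the handle
```

This is exactly why removeEventListener, which matches by listener reference, cannot find a delegated listener when only the original public handler is available.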
Two functions that look identical are still different if they are different instances.\nThis matters for removal symmetry and any logic that attempts to resolve handlers by identity.\nTo remove a listener with removeEventListener, the runtime needs the same effective function reference that was attached. In delegated registration, the attached listener is delegate, while the public API naturally passes the original handle. If the system only compares one side, the lookup can fail even when intent is correct.\nWhat the source code reveals The internal design in src/view.ts records both layers of function identity.\nclass HandlerEntry { public constructor(public readonly handle: EventHandler, public readonly wrapper?: EventHandler, public readonly options?: AddEventListenerOptions) { } public get current(): EventHandler { return this.wrapper ?? this.handle; } /** * Comparison mode * \u0026#34;strict\u0026#34;: Exact match (matches including wrapper) * \u0026#34;remove\u0026#34;: For deletion (ignores wrapper = treats as wildcard) */ public equals(entry: HandlerEntry, mode?: \u0026#34;strict\u0026#34; | \u0026#34;remove\u0026#34;): boolean; public equals(handle: EventHandler, options?: AddEventListenerOptions, wrapper?: EventHandler, mode?: \u0026#34;strict\u0026#34; | \u0026#34;remove\u0026#34;): boolean; public equals(entryOrHandle: HandlerEntry | EventHandler, optionsOrMode?: AddEventListenerOptions | \u0026#34;strict\u0026#34; | \u0026#34;remove\u0026#34;, wrapper?: EventHandler, mode?: \u0026#34;strict\u0026#34; | \u0026#34;remove\u0026#34;): boolean { let targetHandle: EventHandler; let targetWrapper: EventHandler | undefined; let targetOptions: AddEventListenerOptions | undefined; let compareMode: \u0026#34;strict\u0026#34; | \u0026#34;remove\u0026#34; = \u0026#34;strict\u0026#34;; if (entryOrHandle instanceof HandlerEntry) { targetHandle = entryOrHandle.handle; targetWrapper = entryOrHandle.wrapper; targetOptions = entryOrHandle.options; compareMode = 
(optionsOrMode as \u0026#34;strict\u0026#34; | \u0026#34;remove\u0026#34;) ?? \u0026#34;strict\u0026#34;; } else { targetHandle = entryOrHandle; if (typeof optionsOrMode === \u0026#34;string\u0026#34;) { compareMode = optionsOrMode; targetWrapper = wrapper; targetOptions = undefined; } else { targetOptions = optionsOrMode; targetWrapper = wrapper; compareMode = mode ?? \u0026#34;strict\u0026#34;; } } if (this.handle !== targetHandle) { return false; } if (compareMode === \u0026#34;strict\u0026#34; \u0026amp;\u0026amp; this.wrapper !== targetWrapper) { return false; } return HandlerEntry.optionsEquals(this.options, targetOptions); } } handle is the original user-facing function. wrapper stores the delegated closure when one exists. current is what actually gets bound to addEventListener.\nThe options comparison is also strict and reference-based for signal.\npublic static optionsEquals(left?: AddEventListenerOptions, right?: AddEventListenerOptions): boolean { const getBoolean = (options: AddEventListenerOptions | undefined, key: \u0026#34;capture\u0026#34; | \u0026#34;once\u0026#34; | \u0026#34;passive\u0026#34;): boolean =\u0026gt; options?.[key] ?? false; const getSignal = (options: AddEventListenerOptions | undefined): AbortSignal | undefined =\u0026gt; options?.signal; const leftSignal = getSignal(left); const rightSignal = getSignal(right); const signalsEqual = leftSignal === rightSignal; return getBoolean(left, \u0026#34;capture\u0026#34;) === getBoolean(right, \u0026#34;capture\u0026#34;) \u0026amp;\u0026amp; getBoolean(left, \u0026#34;once\u0026#34;) === getBoolean(right, \u0026#34;once\u0026#34;) \u0026amp;\u0026amp; getBoolean(left, \u0026#34;passive\u0026#34;) === getBoolean(right, \u0026#34;passive\u0026#34;) \u0026amp;\u0026amp; signalsEqual; } Strict mode requires full identity consistency, including wrapper. Remove mode intentionally relaxes that wrapper check. 
In remove mode, the original public handler is treated as the authoritative identity, even if the internally attached listener is a wrapper. Options are still matched via capture, once, passive, and signal identity.\nThat distinction is the center of the workaround.\nA short confession from the middle of development There was a moment when I seriously considered a simpler rule: delegated subtree handlers would be register-only and effectively non-removable through symmetric identity operations. I postponed the structural fix because usage was internal and the pressure was low. It stayed that way longer than it should have.\nIf you build systems long enough, you eventually discover that postponing a structural problem feels easier than solving it immediately. I am not proud of it, but I suspect I am not alone.\nThe pragmatic solution: keep an internal registry Since closure identity cannot be reconstructed after the fact, the practical direction was to preserve registration entries explicitly.\nThe registry stores handlers per event and per element instance.\nclass HandlerRegistory { private _registory: Map\u0026lt;string, HandlerEntry[]\u0026gt; = new Map(); public add(event: string, entry: HandlerEntry): void { if (entry.options?.once) { this.remove(event, entry); } if (!this.find(event, entry)) { this.ensure(event).push(entry); this.target.element.addEventListener(event, entry.current, entry.options); } } public remove(event: string, entry?: HandlerEntry): void { // ... for (const e of list) { if (e.equals(entry, \u0026#34;remove\u0026#34;)) { this.target.element.removeEventListener(event, e.current, e.options?.capture ?? 
false); } else { remaining.push(e); } } } } And a higher registry maps by instance identity.\nclass EventRegistry { private _registry: Map\u0026lt;string, HandlerRegistory\u0026gt; = new Map(); private map(target: IEventTarget): HandlerRegistory { const instanceId = target.instanceId; let registry = this._registry.get(instanceId); if (!registry) { registry = new HandlerRegistory(target); this._registry.set(instanceId, registry); } return registry; } } The behavior is deliberate.\nIn strict mode, wrapper identity must match, which protects exact duplication checks and precise entry lookup.\nIn remove mode, wrapper comparison is relaxed, so an off call with the original handle can still remove delegated entries whose actual attached listener is an internal closure.\nThis is not about pretending identity problems do not exist. It is about preserving enough registration context so lifecycle operations remain deterministic.\nWhy this matters for lifecycle predictability UI runtime stability is mostly about lifecycle boundaries, not syntax convenience.\nEvent listeners are lifecycle resources. If registration and unregistration use mismatched identity semantics, handlers remain attached longer than intended, or disappear unexpectedly when unrelated comparisons collide. Both cases produce hard-to-trace behavior drift.\nThe registry adds indirection, but it centralizes ownership. 
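The strict versus remove distinction reduces to a few lines. The sketch below is a simplification of the idea, not Cotomy's actual classes (`Entry` and `matches` are illustrative names): an entry remembers both the public handle and the internal wrapper, and removal matches on the handle alone.

```typescript
type Handler = (e: unknown) => void;

// Simplified entry: the public handle plus the optionally wrapped listener.
interface Entry {
    handle: Handler;    // what the caller originally passed in
    wrapper?: Handler;  // what was actually attached, if delegated
}

// "strict" requires wrapper identity too; "remove" treats handle as authoritative.
function matches(entry: Entry, handle: Handler,
                 wrapper: Handler | undefined,
                 mode: "strict" | "remove"): boolean {
    if (entry.handle !== handle) return false;
    if (mode === "strict" && entry.wrapper !== wrapper) return false;
    return true;
}

const handle: Handler = () => {};
const wrapper: Handler = (e) => handle(e); // stand-in for a delegate closure
const entry: Entry = { handle, wrapper };

matches(entry, handle, undefined, "strict"); // false: wrapper identity missing
matches(entry, handle, undefined, "remove"); // true: handle alone is enough
```

Because the registry keeps the wrapper reference itself, an off call only needs the original handle; the registry resolves the rest.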
It gives CotomyElement a predictable place to resolve what was actually attached, under which options, and under which element instance.\nThat predictability becomes especially important when screens are dynamic, subtree delegation is common, and handlers are created through closures as a normal implementation detail.\nNot elegant, but reliable This solution is pragmatic.\nIt introduces indirection.\nIt is not conceptually elegant.\nBut it works reliably.\nUntil a better model emerges, this registry approach is the compromise that keeps event lifecycle behavior predictable.\nThe broader lesson is uncomfortable and useful at the same time. API ergonomics and structural correctness often pull in different directions. A convenient surface can hide identity complexity, and that complexity eventually asks for explicit internal structure.\nThis was not about adding a feature. It was about restoring identity in a system that had lost it.\nDevelopment Backstory This article is part of the Cotomy Development Backstory, which traces how Cotomy\u0026rsquo;s architecture emerged through real project constraints.\nSeries articles: Building Systems Alone , Early Architecture Attempts , The First DOM Wrapper and Scoped CSS , API Standardization and the Birth of Form Architecture , Page Representation and the Birth of a Controller Structure , The CotomyElement Constructor Is the Core , Dynamic HTML Boundaries in CotomyElement , Reaching Closures to Remove Event Handlers Later, and The Birth of the Page Controller .\nNext article: The Birth of the Page Controller ","permalink":"https://blog.cotomy.net/posts/development-backstory/08-reaching-closures-to-remove-event-handlers-later/","summary":"The problem of removing event handlers when closures change function identity, and the pragmatic registry solution inside Cotomy.","title":"Reaching Closures to Remove Event Handlers Later"},{"content":"This is the eighth post in Problems Cotomy Set Out to Solve. 
This continues from UI Intent and Business Authority .\nIn the previous post, I separated intent from authority. The UI declares intent, while business authority stays in business logic and operational contracts. That boundary improves predictability, but one structural problem still remains in day-to-day implementation:\nhow to bind Entity structure to screen controls without creating long-term runtime fragility.\nIntroduction: The Hidden Structural Gap I spent more than five years building business systems mainly with PHP. During that period, Web development repeatedly felt structurally difficult, even when individual tools were productive.\nAt the time, I could describe symptoms but not the root cause. Refactoring felt riskier than expected. Minor schema changes produced wide UI adjustments. The cost of keeping screens aligned with data shape stayed higher than it looked in small examples.\nThis was not a language criticism. The same pattern appears across stacks. The issue is a structural boundary problem in how Web screens connect data and UI.\nDesktop vs Web Binding Models That boundary became clearer when I built a desktop application with Java and Swing. The main realization was not about desktop nostalgia. It was about where the binding contract lived.\nIn many HTML-based systems, Entity structure and screen controls are connected through string-based property names spread across templates, request payloads, and client scripts. The contract is implicit and distributed.\nIn desktop workflows, the contract was often closer to type-aware tooling and IDE-supported mapping. The gap between model structure and control binding was usually narrower. That difference made maintenance behavior feel different over time.\nThe String Matching Problem This was also consistent with my earlier VB and SQL experience. 
A common flow was: run a query, receive a RecordSet, read columns by string name, then assign values to controls.\nThat model works, but the failure mode is structural:\nA typo in a field name becomes a runtime error. Renaming a column or property becomes risky because references are not always discoverable. A schema change propagates manually through screen definitions and mapping code. Each local fix is simple, but system-wide drift grows.\nThe central issue is not syntax convenience. It is that string matching weakens the operational contract between data model and UI. That weak contract reduces predictability and increases long-term maintenance load.\nWhy This Persists in Modern Web Apps Desktop development also had fragile periods, especially in early VB-style binding, but the environment gradually became safer. In VB6 and later .NET, IDE tooling improved, strong typing became easier to sustain, and design-time binding reduced accidental mismatch. In practice, Entity classes became the default representation, and I also used a lightweight ORM-like dynamic mapping approach to keep model-to-screen alignment manageable.\nThat did not remove every risk, but it narrowed the structural gap.\nWeb systems still keep a wider gap for architectural reasons. HTML is a separate language layer from server-side Entity definitions. With AJAX-heavy flows, binding becomes even more manual: input names, JSON keys, and attribute selectors are usually strings. The contract often exists as convention rather than enforceable structure.\nAgain, this is not an attack on Web architecture. It is a characteristic of the layer split. Without discipline, refactoring becomes dangerous because implicit contracts are easy to break silently.\nCotomy’s Mitigation Strategy In Cotomy prototypes, the first mitigation was to reduce raw string ownership wherever possible.\nOn the server side, form-related attributes reference Entity property names via Razor and nameof instead of literal strings. 
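TypeScript offers an analogous compile-time guard. The following is a generic sketch of the keyof technique, not Cotomy's binding API (`CustomerEntity` and `bindValue` are illustrative names): a typed accessor turns a property-name typo into a compile error instead of a runtime undefined.

```typescript
interface CustomerEntity {
    id: number;
    name: string;
}

// keyof constrains the key to real property names of the entity type,
// so misspelled bindings fail at compile time rather than at runtime.
function bindValue<T, K extends keyof T>(entity: T, key: K): T[K] {
    return entity[key];
}

const customer: CustomerEntity = { id: 1, name: "Acme" };

bindValue(customer, "name");   // ok, typed as string
// bindValue(customer, "nmae"); // compile error: not a key of CustomerEntity
```

The same renaming problem also changes character: renaming a property breaks the build immediately instead of drifting silently through screen code.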
On the client side, direct dependency on Entity property names is avoided when possible, and binding flow is centralized through Cotomy Form mechanisms rather than per-screen ad hoc mapping.\nThe design goal is straightforward: move as much binding responsibility as possible closer to compile-time validation and shared runtime paths.\nUsing C# on both Razor and backend layers also helps here. DTOs, validation attributes, and naming rules can be shared as classes instead of being redefined across separate language stacks.\nThis does not eliminate mismatch risk. It narrows the structural boundary where drift can appear and reduces the frequency of fragile manual synchronization.\nWhy Not Blazor (For Now) Blazor can reduce a large part of the binding gap by keeping UI definition and model handling in C#.\nThe non-adoption decision here was contextual and time-specific. The target system needed both:\nSEO-discoverable public pages for company information. Internal customer-facing order screens with heavier client-side interaction. At that stage, this mixed requirement set created architectural friction in the Blazor options I evaluated.\nBlazor Server also depends on persistent SignalR transport (commonly WebSocket), and I wanted to avoid that operational dependency for this system profile.\nSo the decision was not about framework quality. It was about matching runtime and delivery constraints at that time.\nTooling Ideas That Remain Unbuilt Another idea was to build a VSCode extension that generates TypeScript classes from server-side Entities, so structural alignment could be automated at the tooling layer.\nIf this is implemented seriously, there are two realistic paths: extension-side generation in a webpack-based toolchain, or generation from the C# project side as part of the build flow.\nEither way, TypeScript type updates need a reliable intermediate artifact and a consistent notification path on each build or source change. 
Without that, generated types easily fall behind model changes and create false safety.\nSo the question is not whether code generation sounds useful. The real question is whether we can commit to full operational consistency.\nThe idea is still valid, but it has not been implemented. A partial version would likely increase both maintenance overhead and mismatch risk instead of reducing them.\nThat hesitation is important by itself: if the right fix still looks expensive, the problem is structurally non-trivial.\nFor now, I am not planning to implement this alone.\nBut if enough developers want this capability, launching it as a collaborative open source project could be an interesting direction.\nEven in an AI-first era, building tools we can trust in our own delivery contexts is still meaningful. Better structural tooling can also improve the quality of AI-generated code by making contracts clearer and safer.\nConclusion: Toward Safer Structural Contracts What I currently adopt is straightforward: Razor with nameof on the server side, and centralized binding through Cotomy Form on the client side.\nThis current state is not a complete structural guarantee. Some safety still depends on team discipline and naming rules, so fragile code can still appear when those rules are applied inconsistently.\nThat constraint should be stated first, because it defines the real risk.\nEven so, the practical value is large. Reducing typo-driven failures and reducing refactor drift are both high-impact outcomes in day-to-day business screen maintenance.\nThis approach does not solve everything, but it introduces a workable path toward safer contracts. 
In practice, having a path that teams can adopt now is far better than leaving binding quality to scattered screen-level conventions.\nProblem Series This article is part of the Cotomy Problem Series, which examines recurring structural failures in business UI design.\nSeries articles: HTML and CSS as One Unit , Form Submission as Runtime , Screen Lifecycle and DOM Stability , Form State and Long-Lived Interaction , API Protocols for Business Operations , Runtime Boundaries and Operational Safety , UI Intent and Business Authority , and Binding Entity Screens to UI and Database Safely.\nNext Next: Screen State Consistency in Long-Lived UIs ","permalink":"https://blog.cotomy.net/posts/problem-8-binding-entity-screens-safely/","summary":"Entity structure and UI controls are often bound through fragile string matching. This article explores the structural gap and Cotomy’s mitigation approach.","title":"Binding Entity Screens to UI and Database Safely"},{"content":"This is the first article in C# Architecture Notes. I use C# every day to build business systems, and this series is where I want to explain why I still do that, what I think it does especially well, and how I design systems around it.\nThis is not a tutorial. It is an architectural reflection based on what I have seen and what I continue to do in production work.\nBefore web work, there was open systems development I did not start as a web developer. I started in what Japan typically called open systems development.\nIn Japan, open systems usually referred to client/server enterprise systems on Windows or UNIX, in contrast to mainframe systems. In international terms, this would generally be described as distributed client/server enterprise development.\nThose systems were often large desktop applications with a very high number of screens. Direct database access in a 2-tier model was common. In some projects, database credentials were even stored in configuration files without encryption. 
This was not rare, and it was not limited to small vendors.\nWhen I started in open systems development, this 2-tier style was the dominant default. Client applications connected to the database directly, and many responsibility decisions were treated as client-side implementation concerns. Validation existed, but it was often distributed and inconsistent because each screen carried part of the burden.\nAfter SOAP-based web services became common, many systems gradually moved toward a 3-tier architecture. The presentation layer, application service layer, and data layer were no longer collapsed into one direct client-to-database flow. Business logic and validation moved toward the server layer, and that change was not only technical. It changed how teams understood responsibility boundaries across the whole system.\nThe moment server-side validation and integrity guarantees became standard practice changed how I thought about responsibility boundaries. Data consistency no longer depended solely on client discipline.\nThe impact of server-side integrity As validation moved to the server, business rules became more centralized and more enforceable. That improved consistency not only for one screen, but across multiple clients and entry points.\nData guarantees became something the architecture could require, rather than something each screen hoped to preserve. Even if a user bypassed UI constraints, the server could still reject invalid state transitions and protect core invariants. In practical operations, that made systems more resilient to both accidental misuse and uneven implementation quality.\nTo this day, every system I build assumes that data integrity must be guaranteed on the server side.\nI never considered that architecture ideal. At the same time, I did not have the authority or political influence to replace those decisions. 
So my early years were less about ideal design and more about learning how real delivery environments actually behave.\nWhat felt different when .NET appeared Around the time I entered the industry, .NET appeared and spread quickly. Multiple languages could run on the same framework, and one solution could contain multiple projects that cooperated while staying separate.\nThat felt structurally new to me. It was not only about syntax. It was about being able to shape system boundaries physically.\nWeb development was already increasing in the market, but most of my own implementation at that time was desktop work in VB, VC++, and C#. I only began designing web systems on my own after 2010.\nThe core point: solution aggregation and project boundaries For me, the key architectural value was simple. A csproj is a physical boundary. A sln can aggregate many projects, but it does not dissolve those boundaries.\nThat means I can classify system units by business domain, responsibility, and purpose while still keeping them inside one coordinated development space.\nAs far as I know, this kind of project-level physical and conceptual separation is especially natural in the .NET ecosystem. I say this as personal observation, not as a universal claim.\nWhy does this matter in practice? Because each functional unit remains physically independent. Safe modification scope becomes narrower. Cognitive load decreases. In team development, merge conflict probability also goes down because change areas are more clearly partitioned.\nFirst web project in .NET My first web project in .NET was as an implementation member, not as an architecture decision-maker. The team used several vendor-provided components, and my own web knowledge at that point was still limited.\nEven so, the framework and project structure allowed me to keep moving productively. 
That experience reinforced my trust in the environment before I fully understood every web-specific detail.\nThe PHP phase The first web system I developed fully by myself was in PHP. The reason was straightforward: it had to run on rental hosting.\nWhy I returned to C# I returned to C# for two major reasons. First, Azure removed most of the hosting and runtime constraints that had pushed me away before. Second, C# still gave me the project separation model that I considered essential for long-term operation.\nThis was the turning point.\nMy current architecture style Now I usually create one csproj per business segment or subdomain. In many systems, each project maps to the first URL path segment.\nsales and sales-related operations stay in one segment. inventory and stock operations stay in another. management functions stay in their own boundary.\nIn other words, routing boundary and project boundary are intentionally aligned.\nI build large systems alone. Not globally large, but large enough that without strict structural separation, I simply cannot manage them. My cognitive limits are real. If I do not split the system this way, it collapses under its own weight. C# allows me to impose order before chaos appears.\nWhy this is harder to enforce in PHP In PHP, separation is usually folder-based. That can work well for small systems. But there is no equivalent project-boundary enforcement at the same level. Cross-domain dependencies are also not structurally restricted in the same way, so boundaries can blur as the codebase expands.\nAs systems grow and more contributors participate, keeping architectural boundaries stable becomes more fragile. Business systems do not stay small. They evolve with the business, and structure that depends only on discipline eventually gets stressed.\nWhy I still choose C# today My current view is that C# is the best choice for business system development in my context.\nIt is not perfect. 
In practice, AI-generated code can sometimes be less predictable depending on the ecosystem and available examples. When working in C#, I tend to review generated output more carefully and treat it as a draft rather than final code. I also still feel some frustration that anonymous interface implementation is not available in the same way Java allows.\nEven with those limitations, when I balance productivity, structural safety, and long-term maintainability for a single developer, C# remains the best equilibrium I have found.\nWhere Cotomy fits in this boundary model Cotomy is a frontend framework. It intentionally does not implement server-side data integrity mechanisms, and it does not assume a specific backend architecture. Developers can implement server logic in the style that fits their application constraints, domain complexity, and operational requirements.\nCotomy does not enforce data integrity because that responsibility belongs to the server layer. It remains intentionally independent from backend-specific design choices. 
Cotomy assumes that server-side boundaries are already architecturally defined and respected.\nClosing This first note focused on project structure as an architectural boundary.\nC# Architecture Notes This article is part of the Cotomy C# Architecture Notes, which reflect on backend and project-structure decisions around business systems.\nSeries articles: Why I Chose C# for Business Systems and Still Use It, From Global CSS Chaos to Scoped Isolation , and Unifying Data Design and Code with Entity Framework .\nNext article: From Global CSS Chaos to Scoped Isolation ","permalink":"https://blog.cotomy.net/posts/csharp-architecture/01-why-i-use-csharp-for-business-systems/","summary":"Why I continue using C# for daily business system development, and why solution/project boundaries matter more than language syntax.","title":"Why I Chose C# for Business Systems and Still Use It"},{"content":"Previous article: The CotomyElement Constructor Is the Core Why dynamic HTML is not trivial Passing a string into a DOM wrapper sounds simple, but it immediately asks harder questions. Where is the ownership boundary? What counts as valid structure? Which safety checks are strict, and which are best-effort?\nIn the early phase of CotomyElement, this was one of the most fragile areas. Small changes in parsing behavior could silently alter runtime shape, and that kind of instability is hard to debug later in controller code.\nThe single root element rule One question came first: how much freedom should this constructor allow?\nIf multiple root nodes were accepted, one instance would need to hold multiple HTMLElements. That would change the class model, event ownership, and almost every method contract that assumes one underlying element.\nSo I drew a boundary: one instance means one element. If the input resolves to multiple roots, construction fails.\nThis was not a technical limitation. I could have modeled a fragment-like wrapper. 
I rejected that direction because ambiguity at construction time would spread everywhere else.\nconst doc = parser.parseFromString(wrappedHtml, \u0026#34;text/html\u0026#34;); if (doc.body.children.length !== 1) { throw new Error(`CotomyElement requires a single root element, but got ${doc.body.children.length}.`); } The invalid standalone tag problem The first implementation used DOMParser directly, and that works for many tags. But some tags cannot stand alone in valid HTML parsing contexts. td, tr, thead, and option are typical examples.\nThis was not theoretical. Dynamically generating rows and cells was common in actual screens, so ignoring the problem would make the API unreliable exactly where dynamic UI was needed most.\nThe pragmatic fix was to wrap such tags with required structural parents before parsing, then extract the intended element.\nFor example, td is parsed by temporarily placing it inside table, tbody, and tr, then selecting td from the parsed result.\nconst wrapperMap: Record\u0026lt;string, { prefix: string, suffix: string }\u0026gt; = { tr: { prefix: \u0026#34;\u0026lt;table\u0026gt;\u0026lt;tbody\u0026gt;\u0026#34;, suffix: \u0026#34;\u0026lt;/tbody\u0026gt;\u0026lt;/table\u0026gt;\u0026#34; }, td: { prefix: \u0026#34;\u0026lt;table\u0026gt;\u0026lt;tbody\u0026gt;\u0026lt;tr\u0026gt;\u0026#34;, suffix: \u0026#34;\u0026lt;/tr\u0026gt;\u0026lt;/tbody\u0026gt;\u0026lt;/table\u0026gt;\u0026#34; }, thead: { prefix: \u0026#34;\u0026lt;table\u0026gt;\u0026#34;, suffix: \u0026#34;\u0026lt;/table\u0026gt;\u0026#34; }, option: { prefix: \u0026#34;\u0026lt;select\u0026gt;\u0026#34;, suffix: \u0026#34;\u0026lt;/select\u0026gt;\u0026#34; } }; const wrap = wrapperMap[tag]; const wrappedHtml = wrap ? `${wrap.prefix}${html}${wrap.suffix}` : html; I do not consider this beautiful. I consider it stable. 
wrapperMap exists because browser parsers enforce structural context, and CotomyElement needs consistent behavior for dynamic markup that would otherwise fail depending on tag type.\nScoped CSS complications Scoped CSS introduced another boundary question. If CSS is provided, should CotomyElement always apply it?\nFor roots like style, script, meta, or link, scoped CSS is almost certainly a mistake. Throwing exceptions for every such case felt too defensive for day-to-day usage, so I took a softer boundary: apply scoped CSS only when the element is stylable.\nIf not stylable, the CSS is ignored.\nThis is a practical compromise. It avoids turning minor misuse into hard failures while keeping normal usage predictable.\npublic get stylable(): boolean { return ![\u0026#34;script\u0026#34;, \u0026#34;style\u0026#34;, \u0026#34;link\u0026#34;, \u0026#34;meta\u0026#34;].includes(this.tagname); } private useScopedCss(css: string): this { if (css \u0026amp;\u0026amp; this.stylable) { const hasRoot = /\\[root\\]/.test(css); const normalizedCss = hasRoot ? css : `[root] ${css}`; const writeCss = normalizedCss.replace( /\\[root\\]/g, `[data-cotomy-scopeid=\u0026#34;${this.scopeId}\u0026#34;]` ); // style tag injection into head... } return this; } The root selector is normalized to root when missing, then rewritten into a concrete data-cotomy-scopeid selector. That keeps local CSS scoped to the current element identity without requiring manual selector rewriting in every call.\nThe id problem CotomyElement carries two internal identities: instanceId and scopeId.\ninstanceId is data-cotomy-instance and is used for event registry and lifecycle ownership. scopeId is data-cotomy-scopeid and is used for scoped CSS isolation.\nBut HTML uniqueness is defined by id. That is a separate concern.\nI kept a strict boundary here: Cotomy does not modify id by default. 
For controlled elements such as forms managed through CotomyPageController, if id is missing, generateId is called so controller-level lookup remains predictable.\n// CotomyElement public generateId(prefix: string = \u0026#34;__cotomy_elem__\u0026#34;): this { if (!this.id) { this.attribute(\u0026#34;id\u0026#34;, `${prefix}${cuid()}`); } return this; } // CotomyPageController protected setForm\u0026lt;T extends CotomyForm = CotomyForm\u0026gt;(form: T): T { if (!form.id) { form.generateId(); } this._forms[form.id!] = form; return form.initialize(); } This is not a perfect answer. It is a boundary tradeoff. The platform id model stays untouched unless ownership is explicit, and page-level orchestration still gets stable identifiers where it needs them.\nA future major version may revisit this line.\nClosing I have only explained a fraction of CotomyElement so far, but these boundaries were some of the most important decisions during development.\nMy vocabulary may not always be enough to express the full design intent, but the intent itself was real: keep dynamic behavior usable without letting ambiguity spread through the entire API.\nIf you are shaping your own development environment or framework, even partially, I hope these reflections are useful in your own boundary decisions.\nDevelopment Backstory This article is part of the Cotomy Development Backstory, which traces how Cotomy\u0026rsquo;s architecture emerged through real project constraints.\nSeries articles: Building Systems Alone , Early Architecture Attempts , The First DOM Wrapper and Scoped CSS , API Standardization and the Birth of Form Architecture , Page Representation and the Birth of a Controller Structure , The CotomyElement Constructor Is the Core , Dynamic HTML Boundaries in CotomyElement, Reaching Closures to Remove Event Handlers Later , and The Birth of the Page Controller .\nNext article: Reaching Closures to Remove Event Handlers Later 
","permalink":"https://blog.cotomy.net/posts/development-backstory/07-dynamic-html-boundaries-in-cotomy-element/","summary":"How I handled dynamic HTML input in CotomyElement, where parsing breaks, and why the boundaries became strict.","title":"Dynamic HTML Boundaries in CotomyElement"},{"content":"Previous article: Page Representation and the Birth of a Controller Structure Why talk about a constructor? Even though CotomyElement wraps many DOM methods, the real idea is inside the constructor.\nIn the previous articles, I explained how TypeScript stabilized fragile code through type constraints, how ElementWrap improved locality around DOM handling, and how scoped CSS reduced the distance between UI and styling responsibility. The constructor is where those lines finally converge into one boundary I can actually use every day.\nI do not see it as syntax convenience. I see it as a decision about ownership.\nThe original form: HTMLElement in, wrapper out In the ElementWrap era, the constructor started with one simple shape: HTMLElement in, wrapper out.\nThat fit my actual workload. Most HTML was server-rendered, so the dominant flow was to find known nodes and wrap them for typed handling. That pattern still describes most of the systems I build.\nThere are two different searches in that world. One is page-wide search from document scope. The other is nested search under an already selected element. Both are useful, and I never wanted to force every lookup into a parent-threading pattern if a document-level query was clearer in that moment.\nWhy static finders exist and why byId is static only CotomyElement provides static finder methods so I can enter from page root quickly. 
Methods like first, find, and byId work as small entry points from document scope.\nIt also supports nested search from an existing instance, so child-level selection stays local when I already have a component boundary.\nModern Cotomy has CotomyWindow as a broader boundary, but for daily use I still wanted CotomyElement itself to keep a minimal root-level entry style.\nThe byId decision is intentional. An id is page-unique by definition, so it belongs to page scope rather than instance scope. For that reason, byId exists as a static method and there is no equivalent instance method.\nThe four constructor patterns Current CotomyElement accepts four input kinds.\nPattern A: HTMLElement When I already have an element from existing markup, I pass it directly.\nconst button = document.querySelector(\u0026#34;#save-btn\u0026#34;) as HTMLElement; const el = new CotomyElement(button); Pattern B: html + optional css object This is the constructor shape that binds creation and scoped styling in one local point.\nconst panel = new CotomyElement({ html: `\u0026lt;section\u0026gt;\u0026lt;h2\u0026gt;Summary\u0026lt;/h2\u0026gt;\u0026lt;p\u0026gt;Ready\u0026lt;/p\u0026gt;\u0026lt;/section\u0026gt;`, css: `[root] { padding: 8px; border: 1px solid #ccc; }`, }); Pattern C: tagname + optional text + optional css object This was added later mostly for small convenience cases.\nconst message = new CotomyElement({ tagname: \u0026#34;p\u0026#34;, text: \u0026#34;No records found.\u0026#34;, css: `[root] { color: #666; margin: 0; }`, }); Pattern D: html string only If scoped CSS is unnecessary, a plain html string is often the cleanest call.\nconst row = new CotomyElement(`\u0026lt;li class=\u0026#34;item\u0026#34;\u0026gt;Alpha\u0026lt;/li\u0026gt;`); Why the html+css pattern is the biggest feature, but not always used If I had to pick one constructor pattern as CotomyElement\u0026rsquo;s largest characteristic, it is the html plus css object pattern. 
In a practical sense, I built this boundary mainly for that. It compresses structure and styling into a single local decision.\nAt the same time, I have to be honest about real usage volume. In my systems, this pattern is not everywhere.\nMost large structure is still server-rendered. Frame-level styling is usually centralized in a large shared stylesheet. Page or shared-part styling also lives in Razor scoped CSS. CotomyElement scoped CSS is mostly used for TypeScript-generated pieces where local ownership matters more than global reuse.\nThat is why I describe the core gain as searchability and locality, not universal replacement.\nThe benefit: scoped CSS that cannot leak When I build a UI piece from API data, I can attach only the CSS needed for that generated component and keep it isolated from the rest of the page.\nThat matters under two pressures: parent styles I did not author and app-wide rules that can shift over time. In this constructor model, those pressures do not need to dictate local component styling behavior.\nHow I decided to treat scoped CSS At first, I considered relying on inline style as the simplest path. It looks straightforward when the target is one element and one visual tweak.\nThat approach broke down quickly for normal UI work. Child selectors, pseudo-classes, and responsive rules all become awkward or impossible when style has to stay inline, so I needed real CSS instead of style attributes pretending to be a full styling system.\nThe current CotomyElement approach is to inject a scoped style tag and bind it to a dedicated CSS scope identifier. In useScopedCss, it creates a style element and normalizes selectors with [root]. Then it rewrites [root] into [data-cotomy-scopeid=\u0026quot;\u0026quot;] and appends the CSS to head using the id format css-. I know this sounds like I went deep on one small detail, and I did.\nI keep scopeId separate from instanceId on purpose. 
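The selector normalization described earlier can be reduced to a small string transform. The following is a minimal sketch under assumptions, not Cotomy's actual implementation: the names makeScopeId and scopeCss are hypothetical, and the real useScopedCss additionally creates the style element and appends it to head.

```typescript
// Hypothetical sketch of the [root] rewriting step described above.
// makeScopeId and scopeCss are illustrative names, not Cotomy's real API;
// the real useScopedCss also builds a style element and appends it to head.

let counter = 0;

// Generate a per-element scope identifier (the real library derives its own format).
function makeScopeId(): string {
  counter += 1;
  return `scope${counter.toString(36)}`;
}

// Rewrite every [root] marker into an attribute selector bound to one
// concrete scope id, so the rules can only match the element tree that
// carries data-cotomy-scopeid with that value.
function scopeCss(css: string, scopeId: string): string {
  return css.replace(/\[root\]/g, `[data-cotomy-scopeid="${scopeId}"]`);
}

const id = makeScopeId();
const scoped = scopeCss(`[root] { padding: 8px; } [root] h2 { margin: 0; }`, id);
// Every rule is now anchored to this one element's scope attribute.
```

Because the rewrite produces ordinary CSS rules rather than inline style, child selectors, pseudo-classes, and media queries remain fully available inside the scope.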
scopeId is the CSS boundary marker, while instanceId is wrapper identity used by event-related ownership such as EventRegistry. instanceId is about behavior ownership; scopeId is about styling ownership. They solve different problems even when they live on the same element, and when scoped CSS is used, scopeId is generated per element.\nCleanup also matters. On removed, CotomyElement defensively checks whether an element with that scopeId is still present in the DOM, and removes the corresponding style tag only when it is no longer present. I do not claim measured performance wins here, but this avoids style-node accumulation risk on long-lived screens with repeated dynamic generation.\nIt also helps extensibility. A base class can define a default structure, and subclasses can apply small diffs without turning style ownership into a cross-file hunt.\nI know this sounds like I am praising my own idea a little too much, and yes, I probably am.\nStill, the useful part is not pride. The useful part is fewer unintended effects when real screens evolve.\nEvolution and removed patterns Two constructor-era patterns were removed on purpose.\nFirst, I used to pass html and css as separate parameters. I dropped that because unstable arity grew quickly and the signature became too easy to misuse as variants increased.\nSecond, I used to allow constructing from another wrapper instance. I removed that because multiple wrapper identities around the same DOM node were structurally wrong for the boundary I wanted. One DOM identity should have one clear wrapper ownership path. Multiple wrappers around the same identity blur ownership.\nThese were boundary decisions, not simple refactors.\nThe lifecycle problem: creation is not enough Creation is only half of a DOM wrapper story. The other half is deletion.\nA wrapper has a specific failure mode: the DOM node can disappear while the wrapper instance still exists in memory. 
When that happens, event registrations and internal state can turn into ghost state, and later operations can accidentally run against a detached element. The bug is often quiet until a second action path touches it.\nGarbage collection does not solve that ownership problem by itself. If controller state, caches, or closures still reference the wrapper, the object stays alive even after the DOM node is gone.\nI considered patterns like strict manual dispose calls, explicit parent-managed lifetimes, and checking attachment state everywhere before each operation. Those patterns can work, but in my daily flow they were too easy to forget or apply inconsistently.\nIn the current Cotomy implementation, lifecycle detection is handled through MutationObserver in CotomyWindow.initialize(). It watches body subtree removals, wraps each removed HTMLElement, and triggers a removed event when the node is truly detached and not in a moving state. Here, moving state means temporary transit relocation, where data-cotomy-moving is set to avoid false removal handling during movement. CotomyElement already registers a removed hook during construction, and that hook swaps the internal element to a data-cotomy-invalidated placeholder and clears EventRegistry entries for that instance.\nThis does not make lifetime perfect in every possible usage pattern, but it does turn the common removal path into automatic invalidation instead of manual cleanup discipline. I did not want my future self to debug another invisible lifetime bug at 2 AM.\nCotomyWindow as a boundary CotomyWindow is the broader page-level boundary for window events, layout-change propagation, and mutation observation setup.\nEven with that boundary, CotomyElement still needs lifecycle correctness on its own side, because wrappers can be created from many places in application code. 
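The "truly detached and not in a moving state" rule described above can be isolated as a tiny decision function. This is a hedged sketch: the interface and function names are hypothetical, and in the real implementation the check runs inside the MutationObserver callback set up by CotomyWindow.initialize().

```typescript
// Hypothetical sketch of the removal-detection rule described above.
// In the real implementation a MutationObserver in CotomyWindow.initialize()
// watches body subtree removals; this isolates only the decision step:
// a node counts as removed when it is truly detached AND not in the
// temporary "moving" state used for transit relocation.

interface NodeState {
  connected: boolean;   // corresponds to element.isConnected after the mutation
  moving: boolean;      // corresponds to data-cotomy-moving being set
}

function shouldFireRemoved(state: NodeState): boolean {
  return !state.connected && !state.moving;
}

// A node relocated with data-cotomy-moving set must NOT trigger removal.
shouldFireRemoved({ connected: false, moving: true });  // → false
// A node genuinely detached from the document does trigger removal.
shouldFireRemoved({ connected: false, moving: false }); // → true
```

Separating the decision from the observer wiring is also what makes the moving-state exception easy to reason about: false removals during relocation are prevented by one flag check rather than by observer bookkeeping.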
CotomyWindow helps detect removal at page scope, while CotomyElement keeps per-instance ownership behavior consistent after removal.\nClosing Compared with modern frontend ecosystems, Cotomy is very small.\nBut this constructor boundary is the grain of sand that contains what I actually wanted to build: local DOM ownership, practical search entry points, and scoped styling where it matters.\nIt is not a grand framework claim. It is a precise tool shaped by my constraints and repeated screen implementation work.\nDevelopment Backstory This article is part of the Cotomy Development Backstory, which traces how Cotomy\u0026rsquo;s architecture emerged through real project constraints.\nSeries articles: Building Systems Alone , Early Architecture Attempts , The First DOM Wrapper and Scoped CSS , API Standardization and the Birth of Form Architecture , Page Representation and the Birth of a Controller Structure , The CotomyElement Constructor Is the Core, Dynamic HTML Boundaries in CotomyElement , Reaching Closures to Remove Event Handlers Later , and The Birth of the Page Controller .\nPrevious article: Page Representation and the Birth of a Controller Structure Next article: Dynamic HTML Boundaries in CotomyElement ","permalink":"https://blog.cotomy.net/posts/development-backstory/06-cotomy-element-constructor-is-the-core/","summary":"Why the constructor overloads capture my real design intent: locality, scoped CSS, and practical DOM ownership.","title":"The CotomyElement Constructor Is the Core"},{"content":"Previous article: API Standardization and the Birth of Form Architecture Opening Context In the previous articles, I described three major stabilizations in sequence: TypeScript reduced fragile handling through type safety, ElementWrap improved DOM locality, and form classification gave API interactions a consistent boundary.\nAfter that, a different question remained unresolved.\nHow should one page itself be represented?\nEarly Confusion: How to Bind Logic to a 
Page At first, I treated page initialization as a wiring problem.\nWhen a screen loaded, where should its logic begin? What should own the first event binding? What should hold references to key elements after that?\nThe most natural idea was one JavaScript file per page. I gave each endpoint its own script target and tried to keep boundaries that way. Conceptually, it sounded clean.\nIn practice, bundling and entry configuration were not clean at all for me at that stage. My Webpack understanding was still shallow, and each structural adjustment had side effects on build rules, path mapping, and output organization. I spent a long time in that unclear zone, where I could make things work but could not explain the structure with confidence.\nI do not feel embarrassed about that period. It was a normal result of learning while shipping.\nTrue clarity around bundling only arrived much later, when generative AI tools and autonomous agents made it easier to explore configuration patterns quickly and compare alternatives without burning whole days on trial-and-error setup.\nEndpoint-Based Initialization Before structure matured, I used the simplest reliable trigger.\nIf each screen had its own endpoint, then page-specific logic could run on DOMContentLoaded. That solved startup in a technical sense. Code executed at the right moment, and users could use the screen.\nAlso, because basic HTML was assembled on the server side, it was reasonable to treat target tags as existing when TypeScript searched and bound them. That assumption was stable in my environment and did not create irrational risk by itself.\nI did generate some elements on the TypeScript side, but the scope was limited. Typical examples were message displays and panels for selecting specific entities. 
Those dynamic parts were useful to standardize, yet they were still bounded utilities rather than the main screen skeleton.\nBut complex screens exposed a different limitation immediately.\nThose screens needed state ownership, reusable interactions, and predictable update paths. A procedural file with startup handlers was enough for small behavior, but it became brittle once a screen started accumulating modal coordination, list updates, and edit transitions.\nThe initialization problem was solved. The representation problem was not.\nThe Controller Realization The turning point was practical, not theoretical.\nIn the early phase, I first tried to avoid a dedicated controller and rely on shared helpers plus a load-event registration method implemented as an ElementWrap static method. On the surface, that looked simple.\nThe cost appeared later. Instances that composed each screen were managed in scattered locations, ownership became ambiguous, and state changes were harder to follow during revisions.\nFor complex screens, it became more natural to define one controller class and let that instance own page references, mutable state, and behavior methods. Instead of scattering handlers across free functions and static helpers, I could keep interactions in one class boundary and initialize the screen by creating that class.\nMost screens did not need anything more complicated than that.\nI prioritized consistency over minimal code volume. Even when a very small screen could have worked with a few direct handlers, I often still used the same controller pattern so that moving between screens required less context switching.\nThat choice paid off repeatedly. List screens reused common structure more easily. CRUD flows became more uniform in naming and execution shape. Boilerplate shrank because repeated flow moved to shared methods. 
Refactoring became safer because responsibilities were visible in one place instead of hidden across many top-level callbacks.\nThis was not an attempt to follow textbook MVC purity.\nIt emerged from repeated implementation pressure: every time I avoided structure, maintenance cost returned. Every time I introduced a controller boundary, revisions became calmer.\nThe Birth of CotomyPageController As this pattern repeated across projects, a base controller class gradually emerged.\nShared initialization steps moved into that base. Project-level page controllers began inheriting from a common parent by default. The page layer stopped being an informal collection of scripts and started behaving as a predictable inheritance chain.\nDuring one period, I also experimented with a switchable-controller mechanism for complex SPA-style flows. It produced some measurable benefits, but it also pushed architecture in a direction where a large component framework would be the more rational choice.\nBecause Cotomy aimed at a narrower boundary, I removed controller switching before publishing and kept the model centered on one page controller structure per screen.\nEven in legacy systems, migration followed the same direction over time. Earlier wrappers and custom setup code were not replaced all at once, but many screens moved incrementally toward Cotomy-style page control as they were touched for feature work.\nIn current systems, almost every new or refactored screen is built either directly on Cotomy or on transitional controller classes that are designed to converge into it.\nFor CRUD-heavy applications, it is often useful to define specialized subclasses of CotomyPageController by screen category and keep each category\u0026rsquo;s default behavior there.\nFramework-Like Emergence At that point, I started noticing a larger pattern.\nWhat began as local utilities was turning into a set of boundaries.\nThe page layer had structure. The form layer had classification. 
The DOM layer had locality.\nThat combination made the TypeScript library feel framework-like in day-to-day use.\nIt was never intended to compete with large ecosystems, and I still do not frame it that way. The goal was narrower: provide exactly the boundaries I needed for business UI delivery, no more and no less.\nOngoing Uncertainty Inside my own environment, I consider this evolution a success.\nAt the same time, I am still uncertain about how far this structure should be generalized. What works across my projects may not map directly to every team context, and I do not want to pretend the direction is fully closed.\nThe architecture is stable enough to trust, but still open enough to evolve.\nTransition Forward So far in this series, I have mostly described which recurring problems Cotomy solved for me and why those solutions became durable.\nFrom the next articles onward, I will focus more on granular technical decisions inside those boundaries, where tradeoffs become sharper and implementation details matter more.\nDevelopment Backstory This article is part of the Cotomy Development Backstory, which traces how Cotomy\u0026rsquo;s architecture emerged through real project constraints.\nSeries articles: Building Systems Alone , Early Architecture Attempts , The First DOM Wrapper and Scoped CSS , API Standardization and the Birth of Form Architecture , Page Representation and the Birth of a Controller Structure, The CotomyElement Constructor Is the Core , Dynamic HTML Boundaries in CotomyElement , Reaching Closures to Remove Event Handlers Later , and The Birth of the Page Controller .\nPrevious article: API Standardization and the Birth of Form Architecture Next article: The CotomyElement Constructor Is the Core ","permalink":"https://blog.cotomy.net/posts/development-backstory/05-page-representation-and-controller-structure/","summary":"How I struggled to bind logic to individual pages, and how controller classes gradually gave my TypeScript library 
framework-like properties.","title":"Page Representation and the Birth of a Controller Structure"},{"content":"Previous article: The First DOM Wrapper and Scoped CSS Opening Context In the previous article, I described how ElementWrap stabilized my daily DOM work and how scoped CSS finally reduced the endless stylesheet collapse.\nThat phase worked far better than I had expected in real projects. A layered styling approach emerged and kept screens manageable: global design for forms, tables, and base controls was loaded once; page-level or shared partial styling was handled with Razor scoped CSS; and dynamically generated elements carried their HTML and styling together inside ElementWrap.\nI also struggled for a while with one practical decision: where scoped CSS should physically live in runtime flow. The solution I settled on was to generate a style tag from ElementWrap and append it to head. It was not perfect, but it was clean enough for my constraints and effective enough to stay in production use.\nBy then, most DOM-level instability was under control. The next bottleneck was API calls.\nThe API Chaos Phase During the jQuery era, I depended on jQuery.ajax and similar helper patterns. When I moved to pure TypeScript, my first step was simple: wrap fetch behind a small Api class. In the beginning, that class was basically a thin fetch wrapper.\nEven that tiny abstraction helped. Repeated option definitions were reduced, and request handling became more consistent across screens.\nBut fetch was never the real problem.\nThe real problem was that the full procedure was still unstructured: collect form data, transform it, call an API, handle success or failure, then reflect state changes back into the UI. Different screens implemented the same CRUD flow in slightly different ways, and each small divergence created maintenance cost later.\nAnother factor made this worse. 
I only started doing end-to-end web development in a serious way after I had already become a solo builder, so I had to discover almost every workflow by myself while still shipping production screens. That naturally amplified complexity and slowed standardization. Calling it a skill gap is fair, but the larger issue was that both learning and delivery had to happen at the same time, in the same codebase.\nThe timing also mattered. As smartphones spread quickly, mobile support became unavoidable, and that pressure landed directly on the same fragile architecture. I tried jQuery Mobile early, and it helped me move quickly at first, but in practice it pushed me toward maintaining separate screen structures for PC and mobile contexts. That solved one problem and created another, because divergence between two UI structures increased maintenance overhead and made behavior consistency harder to keep.\nResponsive design ideas probably already existed in some form, but at least in the practical range I could easily reach through everyday web searches and books at that time, I did not have a reliable path to adopt them with confidence.\nThe Ugly Phase: Button-Driven JSON Assembly Before I introduced proper form interception, many screens were still driven by button click handlers. Those handlers collected input values manually, assembled JSON objects manually, and sent them through API calls.\nThere were reasons for this. Some screens changed fields dynamically by configuration. I was optimizing for delivery speed. I was also still searching for repeatable patterns under pressure, while teaching myself how to structure a full web stack in parallel and juggling PC and mobile behavior expectations.\nStill, this was one of my weaker patterns. I abandoned many experiments, threw away a lot of code, and rewrote screens with different approaches. 
I do not remember every attempt, because there were too many trial branches that never deserved to survive.\nThe Turning Point: Classifying Forms by Access Type The turning point came when I stopped treating all forms as the same thing.\nForms differ by access type and intent.\nSearch forms and edit forms may look similar on screen, but their responsibilities are different. Once I accepted that distinction, architecture decisions became much clearer. Search forms should use query strings, avoid API dependence, and preserve URL state. Edit forms should use API calls, run fully through AJAX, and avoid server-side re-render dependence for data expansion.\nBefore this classification, I often rendered data on the server for edit screens and still implemented API-side logic for the same screen behavior. That duplicated transformation logic in two places.\nThis was a hidden defect generator, and it was unacceptable.\nInheritance-Based Form Architecture To resolve this, I introduced inherited form classes instead of leaving each screen to improvise.\nI established a base form class, a query-string-oriented form class, an API-oriented form class, and mode-aware variants for new and edit handling. I am intentionally describing the structure conceptually rather than as exact class names from that period, because naming and details evolved over time.\nAfter some additional revisions, another practical requirement became explicit. In most screens, it was natural to define method and action directly in HTML form markup. But some screens needed TypeScript-side control because method or target URL had to switch dynamically by runtime state. I needed both styles to coexist without creating separate form ecosystems.\nSo the architecture evolved toward a dual boundary: declarative defaults in HTML, plus controllable override points in TypeScript when dynamic behavior was truly required. 
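The dual boundary described above can be sketched in miniature. The class names here are purely illustrative, since the article intentionally avoids exact historical names; the shape shown is only the contract: a declarative default sourced from markup, plus a TypeScript-side override point for screens that must switch the target by runtime state.

```typescript
// Hypothetical sketch of the dual boundary described above.
// FormBoundary and DynamicForm are illustrative names, not the real classes.

class FormBoundary {
  // markupAction stands in for the action attribute declared in HTML form markup.
  constructor(protected readonly markupAction: string | null) {}

  // Declarative default: use whatever the markup declared.
  get actionUrl(): string {
    return this.markupAction ?? "/";
  }
}

class DynamicForm extends FormBoundary {
  constructor(markupAction: string | null, private readonly draft: boolean) {
    super(markupAction);
  }

  // TypeScript-side override point: switch the endpoint by runtime state,
  // while non-draft screens fall through to the declarative default.
  override get actionUrl(): string {
    return this.draft ? "/api/drafts" : super.actionUrl;
  }
}
```

Both styles stay inside one contract: a screen that never overrides actionUrl behaves declaratively, and a screen that does override it still exposes the same property to the rest of the form machinery.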
As usage areas expanded, the form layer became more flexible, but the contract stayed consistent.\nWhat matters is the boundary that was created.\nThis was the phase when my TypeScript common classes finally started to feel cohesive rather than accidental.\nDramatic Impact The impact was immediate.\nCRUD implementation time dropped. Consistency improved. Server and client responsibilities stopped duplicating as often. More importantly, I could choose the access style, method, and endpoint control point that fit each screen\u0026rsquo;s purpose without breaking implementation consistency. Some screens stayed mostly declarative through form markup, while others used TypeScript-side switching for dynamic transitions, and both approaches remained inside one architectural model.\nThe key benefit was flexibility without fragmentation.\nThe Moment I Considered Publishing Around this point, I noticed something had changed.\nMy TypeScript foundation had structure. The DOM layer had coherence. The form layer had classification. API access had a standard boundary.\nIt no longer felt like a pile of utilities. It felt like a system.\nThat was the first time I seriously thought, Should I publish this?\nThe Reality: Overload and Frustration At the same time, my reality was harsher than the architecture progress. I was not only developing systems. I was also managing internal operations, handling recruitment, supporting business process redesign, sometimes supporting field operations, and even taking on sales activities when needed. The work volume had already crossed a reasonable one-person boundary. There was no universe in which I had the time to polish and publish anything.\nThe publication idea was postponed, not abandoned.\nClosing Reflection The form architecture phase was where scattered techniques finally converged.\nIt was not Cotomy yet.\nBut it was no longer a sequence of isolated experiments. 
It was a direction.\nDevelopment Backstory This article is part of the Cotomy Development Backstory, which traces how Cotomy\u0026rsquo;s architecture emerged through real project constraints.\nSeries articles: Building Systems Alone , Early Architecture Attempts , The First DOM Wrapper and Scoped CSS , API Standardization and the Birth of Form Architecture, Page Representation and the Birth of a Controller Structure , The CotomyElement Constructor Is the Core , Dynamic HTML Boundaries in CotomyElement , Reaching Closures to Remove Event Handlers Later , and The Birth of the Page Controller .\nPrevious article: The First DOM Wrapper and Scoped CSS Next article: Page Representation and the Birth of a Controller Structure ","permalink":"https://blog.cotomy.net/posts/development-backstory/04-api-standardization-and-form-architecture/","summary":"How unstructured API calls became the next bottleneck, and how form classification reshaped my TypeScript foundation.","title":"API Standardization and the Birth of Form Architecture"},{"content":"Previous article: Early Architecture Attempts From jQuery Removal to Pure TypeScript In the previous article, I described the phase where I still depended on a server template plus TypeScript plus jQuery clone flow. I had already felt that this combination was fragile, but at that point I still treated it as a necessary compromise.\nElementWrap started as a very small utility. It was not born as a dynamic UI engine. It was just a thin wrapper around HTMLElement. The constructor took an HTMLElement instance, and the early methods only existed to reduce repetitive direct DOM handling that kept appearing across screens. Looking back, this small class was the direct ancestor of what later became CotomyElement.\nThe initial objective was very practical: at first, I was not trying to discard the server-rendered find and clone flow. 
I wanted to do that same flow in a simpler and safer way than jQuery, with type constraints helping me avoid fragile handling. I was not trying to define a grand front-end philosophy. I simply wanted pure TypeScript to own more of the dynamic behavior so that the control flow stayed in one language and one mental model.\nThat single shift made my work sessions less fragmented. Instead of tracing jQuery selections across templates, I could hold the element reference directly and move through logic with types. It did not eliminate selector-related risks overnight, but it reduced the amount of guesswork during routine changes. The first version of ElementWrap was almost boring, and that was exactly why it helped.\nCSS Collapse and Design Obsession The harder problem appeared in CSS.\nAs my internal systems grew, stylesheets expanded in a way that was no longer an aesthetic inconvenience. It became an operational problem. I had to spend too much time checking whether one screen tweak damaged another, and naming conventions alone were not containing that risk.\nTo be fair, part of this was my own CSS management quality at the time. I tried several patterns to improve it, and some of them did help for a while. But as revisions accumulated across years of real operations, those local improvements were not enough to prevent eventual breakpoints.\nLong before this period, I had built a custom control set in Swing. That experience stayed with me. I had learned that UI control is not vanity. It directly changes how calmly and efficiently people can do daily work. In business systems, users repeat the same screens for years. Minor visual friction accumulates into fatigue.\nI have always believed that even slight improvements in visual clarity can lift mood, and mood changes productivity more than many teams want to admit. 
People think this is a soft topic, but in repetitive operations, the emotional baseline matters.\nAnd if I am being completely honest, a slightly polished interface seems to buy you a small amount of goodwill when something breaks.\nThat observation was not theoretical. I learned it through repetition, and it convinced me that interface control — visual structure and interaction behavior together — was operational, not cosmetic.\nWhy Existing Templates Did Not Work I tried using off-the-shelf design templates in multiple projects. They were not useless, and I did get measurable improvements by reorganizing naming, splitting style concerns, and tightening conventions. But once repeated feature revisions piled up, the control quality degraded again, and I could not tolerate that cycle.\nThe core problem was not that styling was difficult. The core problem was distance. The element that I needed to style and the CSS definition that affected it were often too far apart in both file structure and ownership. By the time I touched a rule, several other screens might already depend on it in ways that were not obvious.\nI also tried splitting CSS files more aggressively. That improved readability for a while, but it did not solve growth. The same collision patterns returned when projects crossed a certain size and revision density.\nI considered SCSS too. It offered useful organization tools, but at that time it felt like additional structural layering around the same underlying problem. I could nest, import, and reuse, yet the design-to-target distance still remained. The syntax was better, but the responsibility model was unchanged.\nRealization: The Distance Problem The turning point was simple to state and difficult to ignore.\nThe real issue was that the applied design and the target element were too far apart in structure and responsibility.\nOnce I recognized that, many earlier frustrations became easier to classify. 
Selector conflicts, naming inflation, and cautious refactors were symptoms, not root causes. I had been trying to stabilize behavior while design intent and DOM ownership lived in separate places.\nThat insight changed everything, because it gave me a criterion for architecture decisions: reduce distance between what I build and how it is styled.\nInspiration from Razor Pages Scoped CSS Around that time, I encountered Razor Pages with the cshtml and cshtml.css pairing model.\nWhat struck me was not novelty for its own sake. It was the practical feeling of locality. CSS stayed near the page it targeted. The stylesheet stayed small enough to reason about. Side effects felt limited by default, not by discipline alone.\nFor someone drowning in shared stylesheet sprawl, that experience felt almost magical. Not magical in a technical sense, but in the sense that a long-standing maintenance pressure suddenly had a comprehensible shape.\nI do not mean it solved every concern automatically. But it showed me a direction where blast radius could be reduced by structure, not only by naming effort.\nExtending ElementWrap ElementWrap then expanded beyond the original HTMLElement-only constructor.\nAt first, it only accepted HTMLElement. Later I added another pattern that accepted html and css together. There was no elegant unified overload design at that stage. I was building for immediate internal needs, not for publication quality.\nSo for a while, two constructor patterns coexisted:\nHTMLElement html + css In the initial implementation, scoped CSS could also be attached later depending on the situation. I had implemented it that way because I thought it would be useful to change how state was expressed while the UI was running. 
As I explain later, that turned out to be a bad pattern, but this period was full of countless bad design choices and countless incremental improvements made through daily development work.\nNote: in the final CotomyElement design, I restricted scope ID setup to constructor time only. The reason was structural. If scope styling could be attached later, it would also allow scoped CSS to be applied to server-generated HTML after the fact, and that would reintroduce the same distance problem from a different direction. I concluded that CSS should not be attachable later if I wanted to preserve the element-style locality that this approach was meant to enforce.\nThe scope identifier in that era was [scope]. In current CotomyElement, the marker is root-based, but back then [scope] was the practical anchor I could implement quickly.\nThe scoping approach itself was straightforward. I prefixed selectors with the scope attribute so rules only applied to the intended element tree. It was not elegant in a textbook sense, and it could produce heavy-looking rules. But in a private system, that tradeoff was acceptable. Predictability and speed mattered more than stylistic purity.\nI did not yet have a formalized abstraction boundary or polished naming conventions. Still, for daily delivery, this was the first time CSS behavior felt governable instead of constantly defensive.\nDramatic Effect The effect was immediate enough that I abandoned the old clone-template flow soon after.\nMost dynamic elements started being defined directly in TypeScript, with their CSS kept close to the same construction context. That shift reduced the number of files I had to traverse for routine screen changes and made impact estimation much faster.\nThis was still not a framework. It was still far from public quality. 
If I showed that code to a broad audience, I would have had many things to explain and even more to rewrite.\nBut productivity improved dramatically, and that was the metric I cared about at the time. I could iterate UI behavior faster, isolate style changes better, and recover from mistakes with less collateral damage.\nIn hindsight, this was less about inventing something new and more about finally aligning the structure with the kind of work I was actually doing every day.\nWhy Not React? People may ask why I did not adopt React or similar tools at that stage.\nThe short answer is scale and scope.\nFrameworks like React can address many of the same classes of problems, especially when teams, screens, and state interactions grow beyond what ad hoc structure can safely handle. I do not deny that. Honestly, some systems where I used ElementWrap would likely have benefited more from React, but in the daily pressure of delivery I could not justify paying that learning cost for only one part of the workload.\nBut I did not need that scale in my immediate context. My dominant pain was not ecosystem breadth. It was the gap between element ownership and style ownership. I needed to narrow scope first and regain local control. ElementWrap with scoped CSS was enough to do that for my environment.\nSo this was not a rejection of React. It was a decision to solve the smallest structural problem that was actively slowing my delivery.\nThe Endless Shift of Problems ElementWrap worked much better than I expected.\nAt that time, I had no intention to publish it or shape it into a general-purpose tool. It was a private response to practical pressure. Yet once this structural problem became manageable, another set of problems became visible, especially around boundaries, consistency, and what should be considered reusable versus screen-specific.\nThat pattern has repeated throughout my system development life. 
When one irrationality is reduced, another appears from a different angle. The work is not a straight path to completion. It is continuous confrontation with new irrationalities, each one asking for a better boundary than the previous one.\nAI will likely influence part of this process over time. It can accelerate prototyping, verification, and even some architectural exploration. But I do not think this confrontation disappears soon, because the hardest problems are usually about responsibility and tradeoff, not only code generation.\nDevelopment Backstory This article is part of the Cotomy Development Backstory, which traces how Cotomy\u0026rsquo;s architecture emerged through real project constraints.\nSeries articles: Building Systems Alone , Early Architecture Attempts , The First DOM Wrapper and Scoped CSS, API Standardization and the Birth of Form Architecture , Page Representation and the Birth of a Controller Structure , The CotomyElement Constructor Is the Core , Dynamic HTML Boundaries in CotomyElement , Reaching Closures to Remove Event Handlers Later , and The Birth of the Page Controller .\nPrevious article: Early Architecture Attempts Next article: API Standardization and the Birth of Form Architecture ","permalink":"https://blog.cotomy.net/posts/development-backstory/03-first-dom-wrapper-and-scoped-css/","summary":"How ElementWrap emerged from jQuery migration and CSS collapse, and how scoped styling reshaped my UI architecture.","title":"The First DOM Wrapper and Scoped CSS"},{"content":"Previous article: Alone, Building Systems From Contract Engineering to Solo Internal Systems Before Cotomy existed, I left a contract engineering services company and moved to a non-IT company. In that role, I became responsible for executing and coordinating the company\u0026rsquo;s IT adoption and internal systemization, which meant I had to design, deliver, and keep improving multiple systems in a practical way. 
I had also been taking small personal development jobs before that transition, so I already had experience building simple systems under tight constraints.\nWhy PHP Was the Practical Choice Around the same period, smartphones were just starting to spread widely, and that made web delivery a hard requirement because desktop-only applications could not cover the use case. I had already been developing with C#, and I wanted to keep using it for the next systems, but practical deployment constraints were stronger. I could set up on-premises servers at a rough test level, yet building and operating an internet-exposed environment with reliable security was beyond what I could trust my skills to handle at that time, so the risk was too high. At the same time, I had several systems that needed to run on cheap rental servers, and C# hosting on managed or low-cost rental environments was still rare or expensive. For those practical reasons, I consciously gave up C# and moved to PHP.\nWhy Smarty, and Why It Became Friction My first PHP project was an EC site, and the OSS I adopted in that project happened to use Smarty. I started using Smarty mostly because of that starting point, not because I had a strong template strategy.\nAs I remember it, Smarty was often presented as a way to separate programmer and designer responsibilities. In real templates, though, loops and conditions inevitably appear, so that separation never became clean in practice. I also had to learn a separate syntax on top of PHP itself, and in that environment and at my skill level, the learning cost felt high.\nI still used Smarty across about 3 to 4 systems over a few years. But when I began making my own PHP framework, I dropped Smarty and switched to standard PHP embedded HTML.\nThe First Self-Made PHP Framework The first framework I made was intentionally simple. 
It had routing, a common base HTML layout through inheritance, and standardized database access so each screen did not reinvent basic data operations.\nI remember two core base concepts, though the exact names are fuzzy now, something like Worker and HtmlPage. Subclasses plus routing definitions in mapping.json determined what would be called. It was lightweight and not something I would call public-quality, but it improved my day-to-day delivery speed dramatically.\nEarlier, I had made a Swing control set in Java, but that was just a library. This PHP framework was the first one that directly improved real ongoing work.\nBuilding the framework myself also deepened my understanding of web development far more than I expected. It fit my personality to be able to fix or reshape anything I disliked, because the framework was mine to adjust, not a black box I had to accept. I do not mean I had a complaint about other people\u0026rsquo;s design sense, but I did want to control as much as possible. Also, most of my systems shared very similar mechanics, so the required feature set was limited, and that made a large framework feel unnecessary.\nThe Project That Forced a SPA-Like Direction This happened around the PHP7 era. I received a project that combined an organization-chart-like interface with internal chat behavior, and I wanted a smoother screen flow that felt closer to what we now call SPA, even though I did not know that term then.\nAs complexity grew, jQuery usage became heavy and harder to reason about. I started looking into AltJS options and found TypeScript.\nTypeScript: The One-Reason Decision I chose TypeScript almost immediately for one reason: type constraints. At the time, that was the clearest way to reduce breakage while scaling client-side code.\nBy then, I had already spent enough time in plain JavaScript to know how hard I could get trapped without type constraints. 
As the code grew, mismatched assumptions about object shapes and parameter usage kept slipping through until runtime, and debugging those failures took too much time.\nI was able to set up the TypeScript build and tooling quickly, but my rendering assumptions stayed server-first. HTML rendering was still server-side, I was not yet thinking in terms of HTML and CSS as a paired unit, and I continued using jQuery heavily, so the benefits of TypeScript were only partially realized.\nThe “Clone and Append” Era For complex UI elements, I first built DOM structures line by line with jQuery, and that approach soon hit limits. Then I shifted to a pattern where the server output template parts, and TypeScript located those parts, cloned them, and appended them where needed.\nLooking back, the code was unbelievably irrational and ugly!!\nEven so, this approach helped me ship several systems. Most of those systems are still running today, and I plan to replace or retire them within this year.\nI also started feeling that generating simple HTML on the client side would be useful. At the same time, generating everything on the client did not feel realistic, so I was aiming for a hybrid: SPA-like interaction plus screen-per-feature practicality. I believe Cotomy aims to achieve that balance today, though I am still critically examining whether it truly succeeds.\nMoving Away from jQuery Eventually, jQuery started to feel redundant because I was already implementing SPA-like behavior around it. I had no real webpack understanding at the time, no AI assistance, and I struggled to pair screens and endpoints cleanly, so I decided to stop depending on jQuery.\nOne concrete pain point was box measurement. I needed viewport-relative box metrics such as those returned by getBoundingClientRect(), and accessing them required stepping outside jQuery\u0026rsquo;s abstraction layer. 
That repeated friction pushed me toward a wrapper boundary: if I wrapped HTMLElement directly, I could build a class around the exact behavior I needed.\nClosing This part covered what I did and why before Cotomy existed. I still feel I am discovering what I really wanted structurally. In the next article, I will move closer to the first true DOM wrapper, early boundary decisions, and how those eventually became Cotomy.\nDevelopment Backstory This article is part of the Cotomy Development Backstory, which traces how Cotomy\u0026rsquo;s architecture emerged through real project constraints.\nSeries articles: Building Systems Alone , Early Architecture Attempts, The First DOM Wrapper and Scoped CSS , API Standardization and the Birth of Form Architecture , Page Representation and the Birth of a Controller Structure , The CotomyElement Constructor Is the Core , Dynamic HTML Boundaries in CotomyElement , Reaching Closures to Remove Event Handlers Later , and The Birth of the Page Controller .\nNext article: The First DOM Wrapper and Scoped CSS ","permalink":"https://blog.cotomy.net/posts/development-backstory/02-early-architecture-attempts/","summary":"The pragmatic early design that predated Cotomy: PHP routing, template friction, and the first DOM-wrapper impulse.","title":"Early Architecture Attempts"},{"content":"Opening I spent nearly ten years as an internal system engineer in a non-IT company. I occasionally hired people for specific phases, but in most periods I was the primary engineer responsible for architecture, implementation, and operation.\nThat responsibility changed how I evaluated design decisions. I was not selecting architecture for an ideal team setup. I was selecting architecture that I could sustain over time with limited coordination bandwidth.\nPrevious Career Model Before that period, I had experience in contract engineering services and large-team projects. 
In those environments, a common model was to split screens across engineers and let each part progress in parallel.\nThis model works well when a project has enough people. It scales horizontally because many engineers can develop and review different screens at the same time.\nThis was not inherently flawed architecture. It was a coordination model optimized for scale.\nI am not presenting that as criticism. It is a structural observation about team size and architecture fit.\nThe Scaling Problem for One Person When one person builds the whole system, the same distributed screen model becomes inefficient. Even without a team, the architecture still carries coordination overhead between screens, flows, and states.\nContext switching becomes a direct cost. Each screen has slightly different assumptions, and reconciling those assumptions repeatedly slows implementation. Over time, flow consistency also drifts because each local decision is made in isolation.\nIn practice, each screen tended to carry isolated logic, and there was no strong shared lifecycle boundary. Without that boundary, architectural consistency depended entirely on discipline rather than structure. As screens accumulated, cross-screen behavior drift became harder to control.\nEarly Independent Experiments Outside company work, I also took small system contracts. In those projects I started from PHP, Smarty, and jQuery, with a practical goal: unify screen flow and behavior across all pages as much as possible.\nI was not trying to invent a new framework. I was trying to keep workload survivable.\nAt that time there were already excellent PHP frameworks, and I did try some of them. I do not remember the exact reasons I rejected each option. 
What I clearly remember is that, with my knowledge at that time, they felt difficult to adopt when I needed to design with confidence and deliver the required scale by myself, including the learning cost.\nThat conclusion was about my requirements and constraints at that time. It was not a claim that those frameworks were inadequate for business applications in general.\nSkill Limitations at That Time My web application experience was limited in that period, not only on the frontend side but across web application design itself. Most of my earlier work was open-system development with VB, C#, C++, and Oracle-based databases.\nI had also worked on server-side programs, and I had participated in ASP.NET projects, but mainly on backend responsibilities. I did not yet have broad experience designing and operating web applications end to end.\nMy frontend skill level was close to beginner, and I also had very little awareness of broader tooling ecosystems.\nWebpack may already have existed in practical use, but I was not aware of it when I made those decisions. That gap mattered, and I do not want to rewrite that history as if I had clearer technical visibility than I actually did.\nThe Improvised Architecture I ended up creating modular components, connecting them through PHP, and manually standardizing behavior patterns. In retrospect, this was primitive.\nStill, productivity improved. Systems reached a reasonable size for one person to build and maintain, and implementation speed became better than my earlier attempts. It was a practical approach under constraint, not a polished architecture strategy.\nPHP 7 Impact PHP 7 improved day-to-day development for me, especially through type declarations. Stronger typing reduced ambiguity and made code review easier.\nHowever, frontend pressure remained. The jQuery-based side hit limits quickly as state combinations increased. 
Event handling became scattered, state mutation stayed implicit in many paths, and keeping behavior consistent across screens became progressively harder.\nDiscovering TypeScript When TypeScript started to gain visibility, static typing immediately made sense to me. At the same time, adopting it required transpilation and a build step, which was a practical change from the PHP and jQuery workflow I had been using.\nThe friction was not philosophical. It was operational. A new toolchain changed how I worked every day.\n.NET Core 3 Decision Around the time .NET Core 3 appeared, I decided to move to that stack. That decision became a turning point and the beginning of what later evolved into Cotomy.\nGiven my language background, returning to a more explicit type system fit how I thought. I had felt ongoing discomfort with dynamic typing in PHP for larger internal systems.\nThe build process also had a hidden benefit for me. It forced pauses. It forced reflection. It caught mistakes before runtime and added structural discipline to development flow.\nThat shift was less about language preference and more about changing how decisions were validated while building alone.\nStill, C# is my favorite. 
I admit this may be personal taste, but the code just feels more elegant to me.\nClosing The early architecture that led to Cotomy was very different from the public version today.\nIn future articles, I will cover the earliest design attempts, the mistakes, the architectural pivots, and why certain structural boundaries exist in its current design.\nDevelopment Backstory This article is part of the Cotomy Development Backstory, which traces how Cotomy\u0026rsquo;s architecture emerged through real project constraints.\nSeries articles: Building Systems Alone, Early Architecture Attempts , The First DOM Wrapper and Scoped CSS , API Standardization and the Birth of Form Architecture , Page Representation and the Birth of a Controller Structure , The CotomyElement Constructor Is the Core , Dynamic HTML Boundaries in CotomyElement , Reaching Closures to Remove Event Handlers Later , and The Birth of the Page Controller .\nNext article: Early Architecture Attempts ","permalink":"https://blog.cotomy.net/posts/development-backstory/01-alone-building-systems/","summary":"How working as a solo internal engineer reshaped my architectural thinking.","title":"Building Systems Alone"},{"content":"Overview Cotomy v1.0.5 is a patch release focused on simplifying default bind generator state behavior and refining release-note related references.\nChanges This version simplifies default bind generator state handling in CotomyViewRenderer, refines release note timestamp and summary wording, and updates reference-site download links for v1.0.4.\nInstall npm install cotomy@1.0.5 Links https://github.com/yshr1920/cotomy/releases/tag/v1.0.5 https://cotomy.net ","permalink":"https://blog.cotomy.net/posts/releases/cotomy-1-0-5-release/","summary":"Patch release that simplifies default bind generator state handling and refreshes release-note references.","title":"Cotomy v1.0.5"},{"content":"I spent many years working in projects organized under Japan\u0026rsquo;s contract engineering 
services model (in Japan this is often called System Engineering Services, abbreviated as SES). In practical terms, engineers join a client project and work inside the client\u0026rsquo;s command structure. This arrangement has variations, but the operating pattern is usually clear: direction, prioritization, and daily execution are largely controlled by the client side.\nI am not writing this as a legal discussion. This is a practical reflection on how that structure shapes development behavior over time.\nLong Experience in This Model In long-running programs, this model can feel operationally straightforward. Teams are assembled quickly, roles are assigned, and work starts moving. For organizations with large delivery pipelines, this has obvious execution value.\nAt the same time, control and responsibility are distributed in a specific way. Engineers may be accountable for delivery outcomes in daily practice, while decision authority over architecture or investment timing often sits elsewhere. That gap influences how technical decisions are made.\nResponsibility and Compensation One pattern I saw repeatedly was this: responsibility expanded faster than compensation. An engineer might coordinate other members, stabilize delivery risks, or act as a de facto lead, while billing conditions changed only marginally.\nWhen this becomes common, incentives shift. If structural improvements are not rewarded, teams prioritize immediate output over deep redesign. Refactoring, boundary cleanup, and long-horizon architecture work become harder to justify, even when everyone understands they are necessary.\nResulting Development Style The development style that emerges is often screen-first and flow-first. Object-oriented modeling may exist in parts, but it is frequently secondary to short-term screen delivery. Each screen becomes a linear implementation unit, and the full system becomes an accumulation of procedural paths.\nThis does not always fail immediately. 
Many systems continue operating for years. But structurally, they can become fragile: abstraction boundaries are thin, cross-screen consistency is difficult, and maintenance cost rises as behavior spreads across many local flows.\nI do not claim this describes every current team. I also do not know the full present state of the industry. Still, based on long observation, it is difficult to assume these structural issues have been fully resolved.\nOne Practical Strength The same model has a real scaling advantage. It can absorb a wide range of skill levels, including junior engineers and less-experienced contributors. When manpower is tight, this is not a theoretical benefit. It is an operational one.\nThe proverb Neko no te mo karitai captures the situation well: when capacity is insufficient, every available hand matters. From a scaling perspective, Japanese-style contract engineering services staffing is a practical response to large demand.\nA Hybrid Team Hypothesis A possible model for large systems is a hybrid structure. A small architecture-focused team defines boundaries, contracts, and long-term technical direction. A larger implementation team executes features within those boundaries.\nI treat this as a hypothesis, not a conclusion. But it may explain how organizations can combine scale with architectural consistency when delivery pressure is high.\nTesting Reality Testing strategy is another place where structure matters. Automation is essential and should be expanded wherever stable coverage is possible. At the same time, not all verification can be automated. Complex business systems still require manual testing effort at the edges, especially where workflows, exceptions, and operational context interact.\nThe practical question is not automation versus manual testing. 
The practical question is where each method gives the highest reliability per unit of effort.\nPersonal Position My own preference is to operate systems with coherent control across architecture, implementation, and operation. I value structures where design intent can be maintained through delivery and into long-term maintenance.\nI am still exploring which team and system structures make that level of control sustainable at scale. Structure shapes behavior, and behavior shapes systems.\nNext article: Early Architecture Attempts ","permalink":"https://blog.cotomy.net/posts/misc/working-inside-japans-ses-model/","summary":"A reflective analysis of how responsibility, incentives, team structure, and testing realities shape large development work in the Japanese-style contract engineering services model.","title":"Working Inside Japan's Contract Engineering Services Model"},{"content":"Overview Cotomy v1.0.4 is a patch release that extends binding behavior at the page level.\nChanges This version adds page-level default bind name generator support and includes a documentation update that adds v1.0.3 download links to the reference site.\nInstall npm install cotomy@1.0.4 Links https://github.com/yshr1920/cotomy/releases/tag/v1.0.4 https://cotomy.net ","permalink":"https://blog.cotomy.net/posts/releases/cotomy-1-0-4-release/","summary":"Patch release adding page-level default bind name generator support.","title":"Cotomy v1.0.4"},{"content":"This note continues from Page Lifecycle Coordination and CotomyElement Boundary . Those two articles explained boundary and lifecycle control. This article narrows the focus to form flow, because form flow consistency is one of the strongest predictors of long-term cost on business screens.\nWhy Form Flow Standardization Matters Search screens and edit screens look similar in UI, but their contracts are different. Search belongs to URL state that can be shared and reproduced. 
Edit belongs to a submit contract that controls validation, persistence, and post-save state.\nWhen these are mixed casually, each screen becomes locally reasonable but globally inconsistent. In small systems that inconsistency is annoying. In large systems with dozens or hundreds of forms, it becomes operational cost: maintenance slows down, behavior becomes harder to predict, and regression scope grows.\nForm handling is not a local implementation detail. It is a system-level stability decision.\nSearch State Through Query String Search state should be represented by query string, not hidden runtime memory. The reason is architectural, not stylistic. URL state is observable, bookmarkable, and reproducible by another user or by future debugging sessions.\nCotomy keeps this contract explicit with query-based form handling. Internally, asynchronous calls can still be used for partial updates, but query string remains the canonical public state for search conditions.\nIf the URL cannot explain the current search result, the screen has already lost traceability.\nSearch defines public, reproducible state. Submit defines transactional mutation. Mixing those two contracts blurs responsibility and increases ambiguity.\nSearch state must survive reload, sharing, and debugging. Submit state must survive validation, persistence, and post-save transition. Treating them as interchangeable contracts creates long-term ambiguity in both code and operational behavior.\nWhy AJAX Submit Becomes the Default A common failure pattern in business systems is incremental divergence. In my own projects over the last decade, and in many business systems I have seen, particularly domestic enterprise projects, screens were rendered on the server and submitted with a normal non-AJAX POST and a full reload. Later, partial AJAX behavior was introduced only where users requested faster interaction.\nThis usually creates two models on one screen. 
Display logic follows one path, submit logic follows another, and validation timing no longer matches. Then teams start adding patch logic to synchronize states that were never designed to be shared.\nCotomy’s form direction is to keep submit on an AJAX contract and keep fill behavior on the same structure used by load. AJAX is not chosen for fashion, but because it keeps the submit path structurally compatible with load-based fill logic. In practical terms, CotomyEntityFillApiForm loads with GET and also applies successful submit responses through the same fill path. That pushes teams toward one response shape for load and save flows instead of parallel contracts.\nWhen display and submit run on separate models, drift is not accidental. It is structural.\nShared Structure for View and Edit Modes Business screens are not always edit-only. Many screens switch between view-only, edit, and new-entry states based on permission, workflow phase, or approval status.\nThe critical point is that mode changes should not require a different data contract. If mode switching changes structure, every mode split introduces more conditions, more branches, and more hidden coupling.\nIn Cotomy-oriented projects, form behavior is layered to avoid that drift. The base form layer provides common lifecycle and submit interception. API form layers standardize asynchronous submit and entity-aware routing. Fill-capable layers unify load and submit response application. On top of those, application-layer form classes can manage view, edit, and new modes without breaking the underlying retrieval and submit contract.\nIn practice, this means GET for load and POST or PUT for submit are expected to return compatible payload shapes so that the same fill logic can apply to both.\nThat compatibility requirement is intentional. It forces teams to think about response shape as a shared contract rather than per-endpoint convenience.\nThis hierarchy is not abstraction for elegance. 
It is architecture for cost control.\nC# + EF + nameof as a Stability Mechanism Many form bugs are simple name mismatches. A field name differs by one character, casing drifts between template and API, and the bug appears only at runtime.\nI mainly build business systems with C# and EF. In that stack, one practical defense is using entity property names directly in cshtml through nameof, including name attributes.

<input name="@nameof(Order.CustomerId)" />

This is simple, but effective. Compile-time name safety removes a large class of repetitive, expensive mistakes in form-heavy systems. In practice, this alone has saved a meaningful amount of implementation and debugging time across many screens.\nAI-Assisted Development Needs More Structure, Not Less AI coding tools accelerate implementation, but they frequently produce near-correct field bindings: non-existent properties, slightly wrong names, or inconsistent casing. These errors are cheap to generate and expensive to debug when form contracts are loose.\nThat is why structural constraints matter more as AI usage increases. Standardized submit/fill contracts and compile-safe naming patterns reduce the surface where AI-generated drift can survive.\nFaster generation without structural safety only scales inconsistency faster.\nLanguage Neutral, C# Friendly Cotomy is not limited to C#. Its UI boundary and form architecture are server-language neutral.\nAt the same time, C# + EF aligns naturally with this model because strongly typed entities and nameof-based templates fit contract-driven form design. I do recommend this direction from personal development experience, and that alignment reflects the environment in which this architecture was refined. This is practical alignment, not language evangelism.\nConclusion Form standardization is not cosmetic. It is a structural investment that compounds over time. 
It defines operational stability, reduces drift across screens, and gives teams a durable base for long-term growth including AI-assisted development.\nDesign Series This article is part of the Cotomy Design Series, which explores architectural decisions behind the framework.\nSeries articles: CotomyElement Boundary , Page Lifecycle Coordination , Form AJAX Standardization, Inheritance and Composition in Business Application Design , API Exception Mapping and Validation Strategy , and Why Modern Developers Avoid Inheritance .\nPrevious article: Page Lifecycle Coordination Earlier context: CotomyElement Boundary Next article: Inheritance and Composition in Business Application Design ","permalink":"https://blog.cotomy.net/posts/design/03-form-ajax-standardization/","summary":"Why Cotomy standardizes query-string search, AJAX submit, and shared form contracts to control long-term cost in business systems.","title":"Form AJAX Standardization"},{"content":"Overview Cotomy v1.0.3 is a patch release focused on attached semantics in runtime behavior.\nChanges This version restores attached semantics and adds isConnected support, with a related documentation update for v1.0.2 download links in the reference site.\nInstall npm install cotomy@1.0.3 Links https://github.com/yshr1920/cotomy/releases/tag/v1.0.3 https://cotomy.net ","permalink":"https://blog.cotomy.net/posts/releases/cotomy-1-0-3-release/","summary":"Patch release restoring attached semantics and adding isConnected support in runtime behavior.","title":"Cotomy v1.0.3"},{"content":"Overview Cotomy v1.0.2 is a release centered on runtime consistency around attached-state behavior, together with documentation and reference-site alignment.\nChanges This version moves attached-state checks to Node.isConnected, adds Cotomy-specific AI agent prompt rules, and clarifies the entity fill multiple-select boundary. 
It also includes reference-site structure and prose cleanup.\nInstall npm install cotomy@1.0.2 Links https://github.com/yshr1920/cotomy/releases/tag/v1.0.2 https://cotomy.net ","permalink":"https://blog.cotomy.net/posts/releases/cotomy-1-0-2-release/","summary":"Release focused on attached-state consistency updates and reference/documentation alignment.","title":"Cotomy v1.0.2"},{"content":"This is a continuation of CotomyForm in Practice . The previous article focused on the form classes and their runtime roles. This article focuses on how CotomyApi transports submitted data and how the screen should handle success and failure explicitly.\nWhy This Boundary Matters In business screens, the API call is not a single line of transport code. It is one operation lifecycle from user intent to UI reflection. If this boundary is inconsistent, validation, conflict handling, and retry behavior drift screen by screen.\nCotomyApi keeps transport behavior consistent, but it does not decide business meaning for you. The screen still owns UI state updates.\nFull Request Lifecycle sequenceDiagram participant User participant UI as CotomyElement participant Form as CotomyForm participant API as CotomyApi participant Server participant Response User-\u0026gt;\u0026gt;UI: Edit fields UI-\u0026gt;\u0026gt;Form: Submit intent Form-\u0026gt;\u0026gt;API: Build payload API-\u0026gt;\u0026gt;Server: HTTP request Server--\u0026gt;\u0026gt;API: HTTP response API--\u0026gt;\u0026gt;Form: Normalized response Form--\u0026gt;\u0026gt;UI: Update screen state This is the practical flow to preserve. CotomyApi returns a response object, but the final state transition is still an explicit UI decision.\nMethod Behavior in Real Screens CotomyApi provides getAsync, postAsync, putAsync, patchAsync, deleteAsync, and submitAsync. The first five are direct method calls. 
submitAsync dispatches by form.method, and when the method is GET it routes to getAsync so parameters become a query string.\nimport { CotomyApi } from \u0026#34;cotomy\u0026#34;; type User = { id: string; name: string }; const api = new CotomyApi({ baseUrl: \u0026#34;/api\u0026#34; }); export async function loadUser(id: string): Promise\u0026lt;User\u0026gt; { const response = await api.getAsync(`/users/${id}`); return await response.objectAsync\u0026lt;User\u0026gt;(); } export async function createUser(name: string): Promise\u0026lt;User\u0026gt; { const response = await api.postAsync(\u0026#34;/users\u0026#34;, { name }); return await response.objectAsync\u0026lt;User\u0026gt;(); } export async function updateUser(id: string, name: string): Promise\u0026lt;User\u0026gt; { const response = await api.putAsync(`/users/${id}`, { name }); return await response.objectAsync\u0026lt;User\u0026gt;(); } export async function patchUser(id: string, name: string): Promise\u0026lt;User\u0026gt; { const response = await api.patchAsync(`/users/${id}`, { name }); return await response.objectAsync\u0026lt;User\u0026gt;(); } export async function removeUser(id: string): Promise\u0026lt;void\u0026gt; { await api.deleteAsync(`/users/${id}`); } Options and Default Payload Behavior CotomyApi options include baseUrl, headers, credentials, redirect, cache, referrerPolicy, mode, keepalive, and integrity. Defaults are same-origin credentials, follow redirect, no-cache, no-referrer, cors mode, and keepalive true.\nFor request bodies, the behavior is content-type driven. When Content-Type is application/json, the body is JSON stringified. When Content-Type is application/x-www-form-urlencoded, the body is converted with URLSearchParams. Otherwise the internal default is multipart/form-data. In that default path, a plain object is converted to FormData and sent as multipart. 
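These conversion rules can be sketched with plain web APIs. The `encodeBody` helper below is a hypothetical illustration of the documented dispatch, not CotomyApi's internal code:

```typescript
// Illustrative sketch of the documented body-conversion rules.
// encodeBody is a hypothetical helper, not part of CotomyApi.
function encodeBody(
  contentType: string,
  body: Record<string, string>
): string | FormData {
  if (contentType === "application/json") {
    // JSON path: the body is JSON stringified
    return JSON.stringify(body);
  }
  if (contentType === "application/x-www-form-urlencoded") {
    // urlencoded path: the body is converted with URLSearchParams
    return new URLSearchParams(body).toString();
  }
  // default path: a plain object is converted to FormData (multipart)
  const fd = new FormData();
  for (const [key, value] of Object.entries(body)) {
    fd.append(key, value);
  }
  return fd;
}
```

The point of the sketch is that the Content-Type header, not the call site, decides the wire format of an otherwise identical plain object.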
If multipart/form-data is used and the body is neither FormData nor a plain object, CotomyInvalidFormDataBodyException is thrown before fetch.\nException Mapping and Screen-Level Branching HTTP 4xx and 5xx responses are mapped to Cotomy exceptions. 400 and 422 map to CotomyRequestInvalidException. 401 maps to CotomyUnauthorizedException. 403 maps to CotomyForbiddenException. 404 maps to CotomyNotFoundException. 409 and 410 map to CotomyConflictException. 429 maps to CotomyTooManyRequestsException. Other 4xx map to CotomyHttpClientError, and 5xx map to CotomyHttpServerError.\nflowchart TD A[Submit Intent] --\u0026gt; B[CotomyApi Request] B --\u0026gt; C{HTTP Status} C --\u0026gt;|2xx| D[Parse objectAsync / arrayAsync] D --\u0026gt; E[Update UI Explicitly] C --\u0026gt;|400 / 422| F[CotomyRequestInvalidException] F --\u0026gt; G[Show Validation Guidance] C --\u0026gt;|401| H[CotomyUnauthorizedException] H --\u0026gt; I[Trigger Auth Flow] C --\u0026gt;|409 / 410| J[CotomyConflictException] J --\u0026gt; K[Show Conflict Message] C --\u0026gt;|5xx| L[CotomyHttpServerError] L --\u0026gt; M[Show Retry Guidance] C --\u0026gt;|Network Failure| N[Native Runtime Error] N --\u0026gt; O[Show Generic Failure] The network failure branch is separate. If fetch fails before an HTTP response exists, CotomyApi does not wrap it as CotomyApiException. 
You receive the native runtime error.\nimport { CotomyApi, CotomyConflictException, CotomyHttpServerError, CotomyRequestInvalidException, CotomyResponseJsonParseException, CotomyUnauthorizedException, } from \u0026#34;cotomy\u0026#34;; const api = new CotomyApi(); export async function submitOrder(body: FormData): Promise\u0026lt;void\u0026gt; { try { const response = await api.postAsync(\u0026#34;/api/orders\u0026#34;, body); const saved = await response.objectAsync\u0026lt;{ id: string; code: string }\u0026gt;(); renderSuccess(saved.code); } catch (error) { if (error instanceof CotomyRequestInvalidException) { renderValidation(); return; } if (error instanceof CotomyUnauthorizedException) { renderUnauthorized(); return; } if (error instanceof CotomyConflictException) { renderConflict(); return; } if (error instanceof CotomyHttpServerError) { renderRetry(); return; } if (error instanceof CotomyResponseJsonParseException) { renderContractError(); return; } renderGenericFailure(); } } JSON Parse Failures objectAsync and arrayAsync parse response text once and cache the parsed value. If JSON parsing fails, they throw CotomyResponseJsonParseException. This is a response contract failure, not a validation failure.\narrayAsync also has a guard behavior. If the parsed payload is not an array, it returns the provided default array value.\nKey Collision Handling There are two collision points to explain clearly.\nOn the request side, plain object keys are unique by JavaScript rules, so the last assignment wins before CotomyApi sends anything. 
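This uniqueness rule, and the append-based alternative, can be observed with standard web APIs alone. A small sketch using URLSearchParams, whose append semantics match FormData:

```typescript
// Plain object keys collapse: the second assignment replaces the first
// before any request is built.
const obj: Record<string, string> = { tag: "a" };
obj.tag = "b"; // one key remains, holding only the final value

// Append-style containers preserve duplicate keys.
const params = new URLSearchParams();
params.append("tag", "a");
params.append("tag", "b");

// Object.values(obj) → ["b"]
// params.getAll("tag") → ["a", "b"]
```

The same contrast drives the choice between a plain object payload and FormData when duplicate keys matter.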
If repeated keys are required, use FormData append so duplicate query or form keys are preserved.\nconst api = new CotomyApi(); const payload: any = { status: \u0026#34;draft\u0026#34; }; payload.status = \u0026#34;approved\u0026#34;; await api.postAsync(\u0026#34;/api/orders\u0026#34;, payload); const tags = new FormData(); tags.append(\u0026#34;tag\u0026#34;, \u0026#34;a\u0026#34;); tags.append(\u0026#34;tag\u0026#34;, \u0026#34;b\u0026#34;); await api.getAsync(\u0026#34;/api/search\u0026#34;, tags); On the response side, collisions happen when the screen merges server data into local state without a mapping boundary. CotomyApi returns parsed data, but it does not merge UI state for you. That merge policy belongs to the application layer.\ntype OrderViewModel = { status: string; amount: number; }; const viewModel: OrderViewModel = { status: \u0026#34;\u0026#34;, amount: 0 }; const localUiState = { expanded: false }; const response = await api.getAsync(\u0026#34;/api/orders/42\u0026#34;); const server = await response.objectAsync\u0026lt;any\u0026gt;(); viewModel.status = String(server.status ?? \u0026#34;\u0026#34;); viewModel.amount = Number(server.amount ?? 0); // localUiState stays separate and cannot be overwritten by API payload Architectural Boundary to Keep CotomyElement and CotomyForm handle UI and submit timing. CotomyApi handles HTTP transport and exception mapping. 
Domain decisions, conflict resolution policy, and final UI state remain in the application layer.\nThat boundary is what keeps the behavior explainable when the screen grows.\nUsage Series This article is part of the Cotomy Usage Series, which focuses on concrete runtime behavior and day-to-day API usage.\nSeries articles: CotomyElement in Practice , CotomyElement Value and Form Behavior , CotomyForm in Practice , CotomyApi in Practice, and Debugging Features and Runtime Inspection in Cotomy .\nConclusion CotomyApi is practical because it standardizes request conversion, exception mapping, and response parsing while leaving business decisions outside the transport layer. When the screen keeps explicit branching for success, validation, auth, conflict, server error, network error, and parse error, operational behavior stays stable across pages.\nPrevious article: CotomyForm in Practice Next article: Debugging Features and Runtime Inspection in Cotomy ","permalink":"https://blog.cotomy.net/posts/usage/cotomy-api-in-practice/","summary":"How CotomyApi is used in real screens: HTTP methods, options, exception mapping, key collisions, and explicit UI updates.","title":"CotomyApi in Practice"},{"content":"This is a continuation of CotomyElement in Practice . The previous article focused on lookup, layout control, size, and scroll behavior. This time the focus is value and text handling, plus what is actually sent when a form is submitted.\nThe main point is simple: not theory, but the exact payload the browser sends.\ntext vs value element.text rewrites textContent, while input.value is the form value used for submission. In practice, span and div are display targets and input is a submit target.\nconst nameInput = CotomyElement.byId(\u0026#34;name\u0026#34;)!; const label = CotomyElement.byId(\u0026#34;name-label\u0026#34;)!; nameInput.value = \u0026#34;John\u0026#34;; label.text = \u0026#34;Display: John\u0026#34;; text is for display. 
value is for submission.\nValue handling by input type Each type below is shown as one set:\nHTML example, TypeScript get/set, and POST body on submit.\n1) type=\u0026ldquo;text\u0026rdquo; HTML:\n\u0026lt;input type=\u0026#34;text\u0026#34; name=\u0026#34;username\u0026#34; value=\u0026#34;alice\u0026#34;\u0026gt; TypeScript:\nconst form = CotomyElement.byId(\u0026#34;profile-form\u0026#34;)!; const usernameInput = form.first(\u0026#39;input[name=\u0026#34;username\u0026#34;]\u0026#39;)!; const username = usernameInput.value; usernameInput.value = \u0026#34;bob\u0026#34;; POST body:\nusername=alice Notes:\nThe value is sent as is, and if it is empty an empty string is sent.\n2) type=\u0026ldquo;number\u0026rdquo; HTML:\n\u0026lt;input type=\u0026#34;number\u0026#34; name=\u0026#34;age\u0026#34; value=\u0026#34;30\u0026#34;\u0026gt; TypeScript:\nconst form = CotomyElement.byId(\u0026#34;profile-form\u0026#34;)!; const ageInput = form.first(\u0026#39;input[name=\u0026#34;age\u0026#34;]\u0026#39;)!; const ageText = ageInput.value; ageInput.value = \u0026#34;31\u0026#34;; POST body:\nage=30 Notes:\nEven when it looks numeric, submission is still string data, and if it is empty an empty string is sent.\n3) type=\u0026ldquo;checkbox\u0026rdquo; HTML (checked):\n\u0026lt;input type=\u0026#34;checkbox\u0026#34; name=\u0026#34;active\u0026#34; value=\u0026#34;true\u0026#34; checked\u0026gt; TypeScript:\nconst form = CotomyElement.byId(\u0026#34;account-form\u0026#34;)!; const activeInput = form.first(\u0026#39;input[name=\u0026#34;active\u0026#34;]\u0026#39;)!; const isActive = activeInput.match(\u0026#34;:checked\u0026#34;); const activePayload = activeInput.value; activeInput.attribute(\u0026#34;checked\u0026#34;, \u0026#34;checked\u0026#34;); activeInput.attribute(\u0026#34;checked\u0026#34;, null); POST body (checked):\nactive=true POST body (unchecked):\n(not sent) Notes:\nIf unchecked, the key itself is missing. 
The submitted payload uses the value attribute, and if value is omitted the browser sends \u0026#34;on\u0026#34;. In Cotomy code, checked toggling is handled through checked attribute operations.\n4) type=\u0026ldquo;radio\u0026rdquo; HTML:\n\u0026lt;input type=\u0026#34;radio\u0026#34; name=\u0026#34;role\u0026#34; value=\u0026#34;admin\u0026#34; checked\u0026gt; \u0026lt;input type=\u0026#34;radio\u0026#34; name=\u0026#34;role\u0026#34; value=\u0026#34;user\u0026#34;\u0026gt; TypeScript:\nconst form = CotomyElement.byId(\u0026#34;account-form\u0026#34;)!; const selectedRole = form.first(\u0026#39;input[name=\u0026#34;role\u0026#34;]:checked\u0026#39;)!.value; form.find(\u0026#39;input[name=\u0026#34;role\u0026#34;]\u0026#39;).forEach(r =\u0026gt; { r.attribute(\u0026#34;checked\u0026#34;, null); }); form.first(\u0026#39;input[name=\u0026#34;role\u0026#34;][value=\u0026#34;user\u0026#34;]\u0026#39;)!.attribute(\u0026#34;checked\u0026#34;, \u0026#34;checked\u0026#34;); POST body:\nrole=admin Notes:\nInputs with the same name form one group, and only the selected item is sent. 
In Cotomy code, radio selection is also toggled through checked attribute operations.\n5) select (single) HTML:\n\u0026lt;select name=\u0026#34;country\u0026#34;\u0026gt; \u0026lt;option value=\u0026#34;jp\u0026#34; selected\u0026gt;Japan\u0026lt;/option\u0026gt; \u0026lt;option value=\u0026#34;us\u0026#34;\u0026gt;USA\u0026lt;/option\u0026gt; \u0026lt;/select\u0026gt; TypeScript:\nconst form = CotomyElement.byId(\u0026#34;profile-form\u0026#34;)!; const countrySelect = form.first(\u0026#39;select[name=\u0026#34;country\u0026#34;]\u0026#39;)!; const country = countrySelect.value; countrySelect.value = \u0026#34;us\u0026#34;; POST body:\ncountry=jp 6) select multiple HTML:\n\u0026lt;select name=\u0026#34;tags\u0026#34; multiple\u0026gt; \u0026lt;option value=\u0026#34;a\u0026#34; selected\u0026gt;A\u0026lt;/option\u0026gt; \u0026lt;option value=\u0026#34;b\u0026#34; selected\u0026gt;B\u0026lt;/option\u0026gt; \u0026lt;/select\u0026gt; TypeScript:\nconst form = CotomyElement.byId(\u0026#34;search-form\u0026#34;)!; const tagsSelect = form.first(\u0026#39;select[name=\u0026#34;tags\u0026#34;]\u0026#39;)!; const selectedTags = form .find(\u0026#39;select[name=\u0026#34;tags\u0026#34;] option:checked\u0026#39;) .map(option =\u0026gt; option.value); tagsSelect.find(\u0026#39;option[value=\u0026#34;a\u0026#34;]\u0026#39;)!.attribute(\u0026#34;selected\u0026#34;, \u0026#34;selected\u0026#34;); tagsSelect.find(\u0026#39;option[value=\u0026#34;b\u0026#34;]\u0026#39;)!.attribute(\u0026#34;selected\u0026#34;, \u0026#34;selected\u0026#34;); POST body:\ntags=a tags=b Notes:\nThe same key is sent multiple times, and the server side usually treats this as an array.\nImportant:\nMultiple select is intentionally excluded from automatic fill in CotomyEntityFillApiForm. 
This is not a missing feature but a boundary decision.\nReason:\nArray binding patterns differ across projects, and real business UIs often replace native multiple select with token UI, checkbox groups, or custom selector components. Enforcing one automatic array synchronization model in core would reduce architectural flexibility.\nCotomy core keeps multiple select behavior strictly native.\nIf array-based synchronization is required, it should be implemented explicitly at application level.\nIn the upcoming project template distribution, standardized multi-select synchronization may be introduced through a separate class designed for consistent array binding patterns.\n7) textarea HTML:\n\u0026lt;textarea name=\u0026#34;note\u0026#34;\u0026gt;hello\u0026lt;/textarea\u0026gt; TypeScript:\nconst form = CotomyElement.byId(\u0026#34;profile-form\u0026#34;)!; const noteArea = form.first(\u0026#39;textarea[name=\u0026#34;note\u0026#34;]\u0026#39;)!; const note = noteArea.value; noteArea.value = \u0026#34;updated note\u0026#34;; POST body:\nnote=hello Notes:\nSubmission behavior is the same as value input handling.\n8) disabled field HTML:\n\u0026lt;input type=\u0026#34;text\u0026#34; name=\u0026#34;internalId\u0026#34; value=\u0026#34;123\u0026#34; disabled\u0026gt; TypeScript:\nconst form = CotomyElement.byId(\u0026#34;profile-form\u0026#34;)!; const internalIdInput = form.first(\u0026#39;input[name=\u0026#34;internalId\u0026#34;]\u0026#39;)!; const internalId = internalIdInput.value; internalIdInput.disabled = true; POST body:\n(not sent) Important:\ndisabled fields are not submitted.\n9) readonly field HTML:\n\u0026lt;input type=\u0026#34;text\u0026#34; name=\u0026#34;code\u0026#34; value=\u0026#34;ABC\u0026#34; readonly\u0026gt; TypeScript:\nconst form = CotomyElement.byId(\u0026#34;profile-form\u0026#34;)!; const codeInput = form.first(\u0026#39;input[name=\u0026#34;code\u0026#34;]\u0026#39;)!; const code = codeInput.value; codeInput.readonly = true; 
codeInput.value = \u0026#34;DEF\u0026#34;; POST body:\ncode=ABC Important:\nreadonly fields are submitted, and this is the key difference from disabled.\nUpdating value with CotomyElement const form = CotomyElement.byId(\u0026#34;order-form\u0026#34;)!; form.find(\u0026#34;input, select, textarea\u0026#34;).forEach(el =\u0026gt; { if (el.attribute(\u0026#34;data-auto-fill\u0026#34;) === \u0026#34;true\u0026#34;) { el.value = \u0026#34;auto\u0026#34;; } }); CotomyElement updates the real DOM directly. There is no separate virtual state. The value written to the element becomes the submit target as is.\nConfirm what will be sent with FormData const formEl = document.querySelector(\u0026#34;form\u0026#34;)!; const fd = new FormData(formEl); for (const [key, value] of fd.entries()) { console.log(key, value); } This lets you verify the real payload before sending. CotomyElement does not add special conversion here. It follows native browser form behavior.\nSummary CotomyElement updates values directly, but submit rules stay browser-native. Disabled fields are not sent, unchecked checkboxes are not sent, multiple select sends repeated keys, and non-file form values are handled as string payloads.\nLast time was DOM lookup and control. This time was DOM values and submission.\nUsage Series This article is part of the Cotomy Usage Series, which focuses on concrete runtime behavior and day-to-day API usage.\nSeries articles: CotomyElement in Practice , CotomyElement Value and Form Behavior, CotomyForm in Practice , CotomyApi in Practice , and Debugging Features and Runtime Inspection in Cotomy .\nPrevious article: CotomyElement in Practice Next article: CotomyForm in Practice ","permalink":"https://blog.cotomy.net/posts/usage/cotomy-element-value-and-form-behavior/","summary":"How CotomyElement handles value, text, and what actually gets sent in POST requests.","title":"CotomyElement Value and Form Behavior"},{"content":"The focus here is implementation. 
In real business screens, CotomyElement helps keep DOM handling consistent across tables, forms, buttons, and scroll-driven UI.\nCotomyElement Retrieval in Real Screens byId, first, last, find, contains, containsById, and empty are the static entry points you use first in view.ts. In practice, you combine them to build one screen setup flow: find root, detect optional blocks, then apply bulk operations.\nExample: admin list screen with row actions and bulk controls.\nimport { CotomyElement } from \u0026#34;cotomy\u0026#34;; const table = CotomyElement.byId(\u0026#34;user-table\u0026#34;); if (!table) return; const toolbar = CotomyElement.byId(\u0026#34;user-toolbar\u0026#34;) ?? CotomyElement.empty(); const saveButton = CotomyElement.first(\u0026#34;#save-users\u0026#34;); const latestRow = CotomyElement.last(\u0026#34;.user-row\u0026#34;); const rows = CotomyElement.find(\u0026#34;.user-row\u0026#34;); const hasPager = CotomyElement.contains(\u0026#34;[data-role=\u0026#39;pager\u0026#39;]\u0026#34;); const hasBulkAction = CotomyElement.containsById(\u0026#34;bulk-actions\u0026#34;); if (!hasPager) { toolbar.attribute(\u0026#34;data-layout\u0026#34;, \u0026#34;single\u0026#34;); } if (!hasBulkAction) { toolbar.attribute(\u0026#34;data-mode\u0026#34;, \u0026#34;simple\u0026#34;); } saveButton?.attribute(\u0026#34;data-state\u0026#34;, rows.length \u0026gt; 0 ? \u0026#34;ready\u0026#34; : \u0026#34;disabled\u0026#34;); latestRow?.scrollIn(); What this buys you:\nbyId gives stable page contracts, first and last fit single-target operations, find works for batch control, contains works as a cheap guard before heavier setup, and empty gives a safe fallback boundary.\nChild Retrieval Variations You usually combine screen root + child selection. 
This keeps operations scoped and predictable.\nconst form = CotomyElement.byId(\u0026#34;order-form\u0026#34;); if (!form) return; // selector-based child retrieval const requiredFields = form.find(\u0026#34;[data-required=\u0026#39;true\u0026#39;]\u0026#34;); // data-attribute filtered retrieval const warningFields = form.find(\u0026#34;[data-state=\u0026#39;warning\u0026#39;]\u0026#34;); requiredFields.forEach(field =\u0026gt; field.addClass(\u0026#34;required-highlight\u0026#34;)); warningFields.forEach(field =\u0026gt; field.attribute(\u0026#34;data-visible\u0026#34;, \u0026#34;true\u0026#34;)); Common cases:\nyou can apply classes to all child nodes that match a condition, enable only elements with data-state=\u0026lsquo;editable\u0026rsquo;, and keep one form isolated even when the page has multiple forms.\nLoop Patterns with find().forEach() Batch operations are where CotomyElement becomes practical in day-to-day work.\nconst table = CotomyElement.byId(\u0026#34;invoice-table\u0026#34;); if (!table) return; // highlight only checked rows table.find(\u0026#34;tr[data-checked=\u0026#39;true\u0026#39;]\u0026#34;).forEach(row =\u0026gt; { row.addClass(\u0026#34;is-selected\u0026#34;); }); // toggle disabled in bulk const lock = true; table.find(\u0026#34;input, select, button\u0026#34;).forEach(el =\u0026gt; { el.attribute(\u0026#34;disabled\u0026#34;, lock ? 
\u0026#34;true\u0026#34; : null); }); // emphasize error fields table.find(\u0026#34;[data-error=\u0026#39;true\u0026#39;]\u0026#34;).forEach(field =\u0026gt; { field.addClass(\u0026#34;has-error\u0026#34;); field.attribute(\u0026#34;aria-invalid\u0026#34;, \u0026#34;true\u0026#34;); }); Parent and Ancestor Retrieval with closest() closest() is useful when events happen deep in a row or modal.\nconst deleteButtons = CotomyElement.find(\u0026#34;[data-action=\u0026#39;delete-row\u0026#39;]\u0026#34;); deleteButtons.forEach(btn =\u0026gt; { btn.click(() =\u0026gt; { const row = btn.closest(\u0026#34;[data-row-id]\u0026#34;); if (!row) return; row.remove(); }); }); const modalSubmit = CotomyElement.first(\u0026#34;#modal-save\u0026#34;); modalSubmit?.click(() =\u0026gt; { const form = modalSubmit.closest(\u0026#34;form\u0026#34;); form?.trigger(\u0026#34;submit\u0026#34;); }); Typical operations:\nyou find a row container from an inline action button, find a form root from a modal footer button, and keep event handlers small without global selector re-query.\nSize and Scroll Metrics for Layout Control Use size values for behavior, not for static styling. 
In operational screens, this is common for sticky headers and dynamic panels.\nimport { CotomyWindow } from \u0026#34;cotomy\u0026#34;; const header = CotomyElement.byId(\u0026#34;page-header\u0026#34;); const body = CotomyElement.byId(\u0026#34;content-scroll\u0026#34;); const summary = CotomyElement.byId(\u0026#34;summary-panel\u0026#34;); if (!header || !body || !summary) return; const win = CotomyWindow.instance; const headerHeight = header.outerHeight; const viewportHeight = body.height; const panelWidth = summary.width; const panelOuterWidth = summary.outerWidth; const y = body.scrollTop; const x = win.scrollLeft; const top = summary.absolutePosition.top; const left = summary.absolutePosition.left; // dynamic height under fixed header body.style(\u0026#34;height\u0026#34;, \u0026#34;calc(100vh - \u0026#34; + headerHeight + \u0026#34;px)\u0026#34;); // sync floating summary position with current scroll/offset summary.style(\u0026#34;top\u0026#34;, Math.max(y + 8, top) + \u0026#34;px\u0026#34;); summary.style(\u0026#34;left\u0026#34;, Math.max(x + 12, left) + \u0026#34;px\u0026#34;); summary.attribute(\u0026#34;data-width\u0026#34;, panelWidth + \u0026#34;/\u0026#34; + panelOuterWidth); summary.attribute(\u0026#34;data-viewport-height\u0026#34;, String(viewportHeight)); Metrics to check in one place:\nwidth and height, outerWidth and outerHeight, offset-like values such as position/absolutePosition/rect, and scrollTop on elements with scrollLeft on CotomyWindow.\nVisibility and Operability Checks Before applying bulk changes, check whether the target is active in the current screen state.\nconst tabs = CotomyElement.find(\u0026#34;[data-tab-panel]\u0026#34;); tabs.forEach(panel =\u0026gt; { const measurable = panel.width \u0026gt; 0 \u0026amp;\u0026amp; panel.height \u0026gt; 0; const active = panel.visible \u0026amp;\u0026amp; panel.enabled \u0026amp;\u0026amp; panel.attached \u0026amp;\u0026amp; measurable; if (!active) return; panel.find(\u0026#34;input, 
select, textarea\u0026#34;).forEach(input =\u0026gt; { input.attribute(\u0026#34;data-checked-at\u0026#34;, Date.now().toString()); }); }); This pattern avoids touching hidden tab content or detached nodes during tab switching and partial updates.\ninview / outview for Infinite Scroll inview() and outview() are practical for SPA-like paging surfaces. Typical pattern: watch a sentinel row at the bottom and load next data chunk.\nimport { CotomyElement } from \u0026#34;cotomy\u0026#34;; const list = CotomyElement.byId(\u0026#34;activity-list\u0026#34;); const sentinel = CotomyElement.byId(\u0026#34;pager-sentinel\u0026#34;); let loading = false; let page = 1; async function loadNext(): Promise\u0026lt;void\u0026gt; { if (!list || loading) return; loading = true; try { const res = await fetch(\u0026#34;/api/activity?page=\u0026#34; + (page + 1)); const html = await res.text(); list.append(new CotomyElement(html)); page += 1; } finally { loading = false; } } sentinel?.inview(async () =\u0026gt; { await loadNext(); }); sentinel?.outview(() =\u0026gt; { // can be used for telemetry or cancel logic }); sequenceDiagram participant User participant Window participant IO as IntersectionObserver participant CE as CotomyElement participant Page as Page Logic User-\u0026gt;\u0026gt;Window: scroll Window-\u0026gt;\u0026gt;IO: detect intersection IO-\u0026gt;\u0026gt;CE: inview event CE-\u0026gt;\u0026gt;Page: load next page DOM Boundary Map flowchart TD DOM[Real DOM] CE[CotomyElement] Page[Page Controller] CE --\u0026gt; DOM Page --\u0026gt; CE Wrap-up CotomyElement is most effective when you keep a single handling style for lookup, traversal, state checks, and batch updates. That consistency is what keeps operational screens readable as they grow.\nNext article: CotomyElement Value and Form Behavior If you need design background, read CotomyElement Boundary . 
If you need first-step basics, read Working with CotomyElement .\nUsage Series This article is part of the Cotomy Usage Series, which focuses on concrete runtime behavior and day-to-day API usage.\nSeries articles: CotomyElement in Practice, CotomyElement Value and Form Behavior , CotomyForm in Practice , CotomyApi in Practice , and Debugging Features and Runtime Inspection in Cotomy .\nLinks Previous: Page Lifecycle Coordination .\nRelated: Working with CotomyElement .\n","permalink":"https://blog.cotomy.net/posts/usage/cotomy-element-in-practice/","summary":"Practical DOM patterns with CotomyElement: selection, traversal, state checks, sizing, and scroll-driven behavior.","title":"CotomyElement in Practice"},{"content":"This note continues from CotomyElement Boundary . In the previous article, I explained why Cotomy starts from a DOM boundary through CotomyElement.\nIn this article, I want to move one layer up and explain why page lifecycle and coordination are centralized in CotomyPageController. I am not trying to show one more class pattern for its own sake. The real topic is what actually breaks on business screens when initialization and control flow are handled differently on every page.\nWhy Page Lifecycle Matters Cotomy starts as a small reference module. You import what you need and use it directly.\nimport { CotomyElement, CotomyWindow } from \u0026#34;cotomy\u0026#34;; It is intentionally not a giant global framework that tries to own every runtime concern. Even so, business screens still have one unavoidable phase after page load: initialization.\nMost screens need that phase because the UI is not truly operable at first paint. Inputs are not fully prepared, handlers are not connected yet, and remote data is still loading. 
In practice, initialization usually includes form setup for validation and submit contracts, page event binding for buttons and modals, startup loading for master and transaction data, restore logic for draft values or URL context, and failure behavior for unauthorized or expired sessions.\nIn Cotomy, CotomyWindow.ready is tied to the cotomy:ready custom event, so this timing is part of the lifecycle contract, not just an arbitrary callback point. Even on simple pages, lifecycle order matters in a very concrete way: the DOM becomes ready, form state is initialized, shared listeners are connected, required data arrives, and only then does the UI become safely operable. If each screen improvises this order, behavior starts to diverge almost immediately.\nThe Failure Pattern of Scattered Initialization You can absolutely write page startup directly with DOMContentLoaded, like this:\ndocument.addEventListener(\u0026#34;DOMContentLoaded\u0026#34;, () =\u0026gt; { const forms = document.querySelectorAll(\u0026#34;.form\u0026#34;); forms.forEach(form =\u0026gt; { form.addEventListener(\u0026#34;submit\u0026#34;, onSubmitEachForm); }); loadInitialDataIndividually(); }); Technically, that works. The problem appears later, when each screen evolves in a slightly different way. One page uses DOMContentLoaded, another uses CotomyWindow.ready, one initializes a single form, another initializes multiple forms in a different order, and error fallback rules vary depending on who edited the screen last.\nThen the familiar failures start to show up. Ready logic becomes page-specific and hard to compare, API exception handling is duplicated with inconsistent behavior, unauthorized handling becomes fragmented, and cross-form dependencies break because startup order is no longer reliable. 
You also see concrete regressions like summary totals updating before detail forms are ready, or click handlers binding before selector services exist.\nThe design point is simple: when page initialization is written outside the framework boundary on every screen, architecture drifts. You adopt a runtime model, but bypass its most important timing boundary.\nWhat CotomyPageController Centralizes CotomyPageController gives each page one clear control boundary. In other words, one page has one endpoint, one page has one control boundary, and that boundary is the page controller.\nThat single boundary keeps lifecycle responsibilities from fragmenting. It centralizes initializeAsync flow, form registration, screen event orchestration, and coordination points for shared failure policy.\nimport { CotomyEntityFillApiForm, CotomyPageController } from \u0026#34;cotomy\u0026#34;; CotomyPageController.set(class extends CotomyPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { await super.initializeAsync(); this.setForm( CotomyEntityFillApiForm.byId\u0026lt;CotomyEntityFillApiForm\u0026gt;( \u0026#34;order-form\u0026#34;, class extends CotomyEntityFillApiForm {} )! ); this.setForm( CotomyEntityFillApiForm.byId\u0026lt;CotomyEntityFillApiForm\u0026gt;( \u0026#34;detail-form\u0026#34;, class extends CotomyEntityFillApiForm {} )! ); } }); This sample is not about syntax tricks. It shows where lifecycle responsibility should live. Startup, form registration, and shared failure behavior stay inside one page boundary instead of being scattered across local handlers.\nThe same idea applies to unauthorized and session-expired flows. Re-auth can be implemented in different ways, but the decision point should remain in the page boundary, not per button and not per local ready block.\nThis gives you one predictable extension surface for page behavior. 
Screens no longer invent their own lifecycle style, and they follow one controller contract.\nYou can still use CotomyWindow.ready inside local components such as forms, but it should not replace page-level lifecycle control. In the current implementation, ready listens to the cotomy:ready event, and that event is fired only after the page load flow runs through window initialization and initializeAsync via CotomyPageController.set. So ready callbacks are local timing hooks under the same page boundary, not a separate lifecycle model.\nReal Business Scenarios That Demand Coordination Business pages are rarely single-form, single-action screens. Coordination work appears right away. Typical examples include related-entity search modals for customer or vendor selection, product dialogs with pricing and stock hints, re-auth during long editing sessions, initial load of master plus transaction data, rerender after save, cross-form value reflection, and multiple API calls that should share one failure policy.\nCotomyPageController.set(class extends CotomyPageController { protected override async initializeAsync() { await super.initializeAsync(); const orderForm = this.setForm( CotomyEntityFillApiForm.byId\u0026lt;CotomyEntityFillApiForm\u0026gt;( \u0026#34;order-form\u0026#34;, class extends CotomyEntityFillApiForm {} )! ); this.body.first(\u0026#34;#select-customer\u0026#34;)?.click(async () =\u0026gt; { const selected = await this.app.customerSelector.open(); orderForm.find(\u0026#34;[name=\u0026#39;customerId\u0026#39;]\u0026#34;).forEach(e =\u0026gt; { e.value = selected.id; }); }); } }); Another common case is a screen that depends on multiple startup loads in strict order. That order is part of the screen contract, so it needs to be expressed and preserved consistently across pages. 
This is not just data fetching; it is lifecycle control for operable UI state.\nprotected async initializeAsync() { await this.initializeScreenControllers(); await this.applyInitialData(); this.enableUserActions(); } If you split this across ad-hoc ready handlers, race conditions become likely. Data can be applied before controllers or forms exist, handlers can run before dependencies are bound, and cross-form synchronization can start before all participants are registered. These are coordination failures, not component failures, so form-local logic alone cannot solve them.\nNot SPA-Centric, But SPA-Compatible Cotomy is not built as a giant SPA-first runtime, and that is still intentional. At the same time, page-level responsibility boundaries are not anti-SPA.\nIf each route or screen has one controller boundary and one lifecycle contract, the same design can scale into large SPA-style systems without immediate structural conflict. My current view is practical rather than absolute: this model is not tied only to traditional MPA, and the same coordination rules can work in SPA routing contexts as well.\nI am already applying this model across multiple features in real applications, and I also plan to use it in a larger SPA project. So this direction is grounded in ongoing implementation work, not only in theory.\nSeparation Between UI Boundary and Application Logic One more boundary matters here: Cotomy should not absorb everything.\nBusiness screens need entity selection, auth control, and shared error policy, but those are not all core Cotomy responsibilities. 
A cleaner layering is to keep CotomyPageController as the UI boundary, keep screen use-case orchestration in the application service layer, and keep domain rules plus data authority in business logic and API layers.\nCotomyPageController ↓ Application Service Layer ↓ Business Logic / API At the page level, this looks like:\nawait this.app.auth.ensureAuthenticated(); const selected = await this.app.customerSelector.open(); With this split, roles stay clear. Cotomy handles UI boundary and lifecycle coordination, the application layer orchestrates screen use cases, and business/API layers own domain rules and authoritative data.\nWithout that separation, two problems show up quickly: UI controllers become pseudo-domain services, and domain calls leak directly into click handlers without a policy boundary.\nIn internal systems, this shared layer already covers more than entity fill behavior. It includes screen mode switching between view and edit, processing overlays, and side panels for related-entity selection, all implemented as application-layer features instead of Cotomy core features. Some of these patterns may be generalized in the future if they can be standardized without breaking boundary clarity.\nConclusion Cotomy can begin as a small imported module, but business screens rapidly create lifecycle and coordination pressure. You can always write manual ready handlers, but long-term coherence breaks when every page does it differently.\nCotomyPageController is used to keep initialization timing, form registration, shared failure policy, and cross-form coordination inside one control boundary. The goal is not abstraction for abstraction\u0026rsquo;s sake. 
The goal is to prevent operational drift in real business UI development.\nDesign Series This article is part of the Cotomy Design Series, which explores architectural decisions behind the framework.\nSeries articles: CotomyElement Boundary , Page Lifecycle Coordination, Form AJAX Standardization , Inheritance and Composition in Business Application Design , API Exception Mapping and Validation Strategy , and Why Modern Developers Avoid Inheritance .\nLinks Previous article: CotomyElement Boundary Next article: Form AJAX Standardization ","permalink":"https://blog.cotomy.net/posts/design/02-page-lifecycle-coordination/","summary":"Why Cotomy keeps page lifecycle and cross-form coordination in CotomyPageController instead of scattering ready logic across screens.","title":"Page Lifecycle Coordination"},{"content":"This article starts a new design-notes series for Cotomy. The purpose here is not step-by-step usage, but design intent.\nIn practical guides, we discuss \u0026ldquo;how to use\u0026rdquo; APIs. Here, the focus is different:\nI want to explain why this design exists, which failure modes it was meant to prevent, and which trade-offs were accepted on purpose.\nThis first note focuses on the class that sits at the start of that model: CotomyElement.\nWhy This Series As projects grow, implementation details change often. But architectural intent changes more slowly.\nIf that intent is not documented, teams eventually keep only surface patterns, not reasons. Then the same arguments repeat in every project:\nWhy not use direct DOM APIs everywhere, let CSS live independently, and patch conflicts later? 
Why not just pick a familiar utility layer and move on?\nThis series records those questions explicitly, not as doctrine but as design history.\nWhy Start from a DOM Wrapper Boundary CotomyElement started from one practical need: I wanted to handle DOM elements with the same workflow every time.\nWhen using raw HTMLElement directly at scale, recurring problems appear:\nAPIs get scattered across native methods and ad-hoc helpers, null checks become repetitive and inconsistent, and type safety depends on local discipline instead of a shared entry point.\nThe initial requirement was simple: \u0026ldquo;Even a thin wrapper is fine, as long as the entry point is unified.\u0026rdquo;\nBut in practice, this became more than convenience. In Cotomy projects, DOM operations are intentionally routed through one boundary so that design leverage can appear:\nfuture behavior can be added in one place instead of scattered handlers, logging and tracing can stay on one operational surface, and debugging can start from a predictable boundary instead of random selectors.\nIn this note, \u0026ldquo;boundary\u0026rdquo; means a responsibility line: where element handling enters a shared runtime model instead of ad hoc local code.\nSo yes, at first glance this can look jQuery-like. But the objective is different.\njQuery historically optimized cross-browser convenience and chaining, while CotomyElement is focused on runtime boundary discipline in TypeScript-first projects.\nIt is not \u0026ldquo;DOM sugar\u0026rdquo; as a goal. It is a structural boundary for predictable behavior.\nHTML and CSS Distance as a Failure Pattern A common breakdown pattern is distance. 
HTML structure and CSS responsibility are separated so far that nobody can explain \u0026ldquo;which style is authoritative for this element\u0026rdquo;.\nTypical failure shape:\n\u0026lt;section id=\u0026#34;user-card\u0026#34;\u0026gt; \u0026lt;h3 class=\u0026#34;title\u0026#34;\u0026gt;User\u0026lt;/h3\u0026gt; \u0026lt;/section\u0026gt; /* file A */ .title { font-size: 14px; } /* file B */ #dashboard .title { font-size: 16px; } /* file C */ .card .title { letter-spacing: .04em; } Each rule can be locally reasonable. Together, the actual screen becomes context-dependent and fragile.\nCotomy\u0026rsquo;s design stance is to start from DOM boundary first, so that:\nstructure, style, and behavior can stay in one operational context.\nThis does not mean one file for everything. It means responsibility should stay traceable from an element outward. When that line is explicit, style ownership is easier to follow.\nRaw DOM Failure Pattern in Real Screens Another frequent failure is scattered direct selector logic.\ndocument.querySelector(\u0026#34;.title\u0026#34;)!.classList.add(\u0026#34;active\u0026#34;); document.querySelector(\u0026#34;.title\u0026#34;)!.setAttribute(\u0026#34;data-state\u0026#34;, \u0026#34;open\u0026#34;); Each line can look harmless. But once this pattern spreads across files and handlers:\nownership of state transitions becomes unclear, local DOM changes bypass screen-level intent, and refactor impact is hard to estimate before regressions appear.\nSo the problem is not only syntax, but responsibility drift around element state.\nWhy Not Just Use jQuery At the time of choosing a baseline, jQuery was a rational option. It still has proven history and broad familiarity.\nThe hesitation was not \u0026ldquo;jQuery is bad\u0026rdquo;. 
The hesitation was long-term direction:\ntrend and ecosystem fit for current TypeScript-heavy workflows, type safety and intent expression, and readability under large codebases where boundaries matter more than shortcuts.\nTo be explicit and fair:\nCotomy has only just been published, so adoption is still minimal. Its track record is currently limited to the author\u0026rsquo;s own production use, although it is already used via npm in several projects and is being actively maintained.\nThe difference is architectural role:\njQuery is primarily a convenience-oriented function layer, while CotomyElement is a responsibility boundary for DOM operations.\nWhat Cotomy does have in that boundary model:\nthe author understands the full design and implementation surface, fixes can be made immediately when the model needs adjustment, and architectural intent can stay aligned with implementation through one accountable owner.\nFor internal or responsibility-heavy systems, that alignment matters.\nWhy CotomyElement Became More Than a Thin Wrapper If it only wrapped querySelector, it would not solve enough. 
The class intentionally supports multiple entry patterns.\nEntry variants byId(\u0026hellip;) is for stable identity contracts, selector-based retrieval through first(\u0026hellip;) and find(\u0026hellip;) is for screen lookup, and creation from HTML string is for explicit runtime generation.\nThese variants exist not for convenience, but to prevent divergence in how elements enter the runtime model.\nExamples:\nimport { CotomyElement } from \u0026#34;cotomy\u0026#34;; const byId = CotomyElement.byId(\u0026#34;order-form\u0026#34;); const firstError = CotomyElement.first(\u0026#34;.field-error\u0026#34;); const rows = CotomyElement.find(\u0026#34;.result-row\u0026#34;); const created = new CotomyElement(\u0026#39;\u0026lt;div class=\u0026#34;notice\u0026#34;\u0026gt;Ready\u0026lt;/div\u0026gt;\u0026#39;); import { CotomyElement, CotomyWindow } from \u0026#34;cotomy\u0026#34;; const notice = new CotomyElement(\u0026#39;\u0026lt;div class=\u0026#34;notice\u0026#34;\u0026gt;Saved\u0026lt;/div\u0026gt;\u0026#39;); CotomyWindow.instance.initialize(); CotomyWindow.instance.body.append(notice); CotomyElement.find(\u0026#34;.field-error\u0026#34;).forEach(e =\u0026gt; e.attribute(\u0026#34;data-visible\u0026#34;, \u0026#34;false\u0026#34;)); Problems this solved It reduced initialization-order friction by keeping creation and binding in one model, improved null-safety discipline through explicit return behavior, and unified handling for existing DOM and generated DOM.\nWhy not rely on direct native DOM element handling Directly constructing HTMLElement is not the practical center of browser UI. Actual screens mix server-rendered nodes and runtime-generated nodes.\nDesigning around CotomyElement gives one operational surface for both, without splitting the mental model by origin.\nNot SPA-Centric, But Not Anti-SPA Officially, Cotomy is not a SPA-specialized framework. 
That is intentional, and it is a design premise rather than a limitation.\nHowever, this is not a rejection of SPA architecture. My view is that sufficiently large SPA-style systems are still possible, provided the design keeps UI framework concerns separate from runtime boundaries. The boundary model remains valid regardless of navigation style.\nOutside a dedicated component framework layer, many concerns can still be handled coherently in TypeScript:\nscreen control, state transitions, and operational behavior.\nThis is a design direction, not a universal guarantee. The key is to preserve explicit boundaries, whichever navigation style is used.\nConclusion CotomyElement was not introduced to add another utility API. It was introduced to establish a stable boundary where DOM handling becomes repeatable, explainable, and accountable.\nFrom that boundary, other Cotomy layers become extensions of the same idea:\nclear responsibility, explicit contracts, and consistent runtime behavior.\nThat is why this series starts here.\nDesign Series This article is part of the Cotomy Design Series, which explores architectural decisions behind the framework.\nSeries articles: CotomyElement Boundary, Page Lifecycle Coordination , Form AJAX Standardization , Inheritance and Composition in Business Application Design , API Exception Mapping and Validation Strategy , and Why Modern Developers Avoid Inheritance .\nNext Next: Page Lifecycle Coordination ","permalink":"https://blog.cotomy.net/posts/design/01-cotomy-element/","summary":"A design-focused note on why Cotomy starts from CotomyElement and treats DOM handling as an architectural boundary, not a UI convenience API.","title":"CotomyElement Boundary"},{"content":"In Cotomy, a single endpoint, especially in CRUD-heavy business systems, is treated as one operational screen boundary. This continues from Working with CotomyElement . 
For those screens, Cotomy expects the endpoint-level behavior to be coordinated by CotomyPageController.\nEach class involved in CRUD operations (forms, API forms, entity-aware forms) requires explicit initialization. In Cotomy’s design, that initialization kick and lifecycle coordination are also responsibilities of the page controller.\nBy centralizing screen entry, initialization, and CRUD orchestration in the page controller, Cotomy standardizes how these screens behave. That consistency keeps large systems maintainable as they grow.\nThis guide shows how that screen-level orchestration model works with Cotomy forms in a practical CRUD example under the principle:\none screen = one endpoint boundary.\nFor reference details, see: CotomyPageController , CotomyApiForm , and CotomyEntityFillApiForm .\nBefore PageController: Choosing the Right Form Type Before discussing why the page controller comes first, it helps to see how Cotomy form classes are typically used at the screen level.\nCase A: Search Conditions (Query Screen) Use CotomyQueryForm when the goal is URL navigation via query parameters.\nimport { CotomyElement, CotomyQueryForm } from \u0026#34;cotomy\u0026#34;; const form = CotomyElement.byId\u0026lt;CotomyQueryForm\u0026gt;( \u0026#34;user-search-form\u0026#34;, class extends CotomyQueryForm {})!.initialize(); This form always uses GET, serializes input values into the query string, and navigates through location.href. 
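The merge-and-navigate idea behind that GET submit can be illustrated with a small, self-contained sketch. This is a hypothetical helper, not Cotomy's actual implementation; buildQueryUrl is an invented name:

```typescript
// Hypothetical sketch of query-form navigation: merge current form values
// into the existing query string, drop empty values, and rebuild the URL.
// Not Cotomy's real code — buildQueryUrl is an illustrative helper only.
function buildQueryUrl(actionUrl: string, values: Record<string, string>): string {
  const [path, query = ""] = actionUrl.split("?");
  const params = new URLSearchParams(query);
  for (const [name, value] of Object.entries(values)) {
    if (value === "") {
      params.delete(name);      // empty inputs remove the parameter
    } else {
      params.set(name, value);  // current form values override existing ones
    }
  }
  const qs = params.toString();
  return qs ? `${path}?${qs}` : path;
}

// A CotomyQueryForm-style submit would then assign the result to location.href.
console.log(buildQueryUrl("/users?page=2", { name: "tanaka", page: "" }));
// → "/users?name=tanaka"
```

Because the result is a plain URL string, this kind of navigation stays bookmarkable and back-button friendly, which is exactly why it suits list filters.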
It does not use an entity lifecycle.\nThis is appropriate for list filters and search condition panels.\nCase B: Simple API Submit (Non-Entity Form) Use CotomyApiForm when submitting data to an API without entity identity switching.\nimport { CotomyElement, CotomyApiForm } from \u0026#34;cotomy\u0026#34;; const form = CotomyElement.byId\u0026lt;CotomyApiForm\u0026gt;( \u0026#34;feedback-form\u0026#34;, class extends CotomyApiForm {})!.initialize(); This form submits FormData through fetch, exposes apiFailed and submitFailed events, and does not auto-switch POST and PUT. It is not bound to entity identity or CRUD lifecycle.\nThis form type is useful for smaller operational units inside a screen:\nfor example, a search panel that should not trigger full GET navigation, a modal or side panel that selects options and posts results, or a feedback form that is not part of the main CRUD contract. The same applies to any POST operation that is operational but not part of the primary endpoint CRUD contract.\nIn other words, use CotomyApiForm when the submission is operational, but not identity-bound and not the primary endpoint contract of the screen.\nCase C: Entity CRUD Form (Create + Edit in One Screen) Use CotomyEntityFillApiForm for endpoint-bound CRUD screens.\nimport { CotomyElement, CotomyEntityFillApiForm } from \u0026#34;cotomy\u0026#34;; const form = CotomyElement.byId\u0026lt;CotomyEntityFillApiForm\u0026gt;( \u0026#34;user-edit-form\u0026#34;, class extends CotomyEntityFillApiForm {})!.initialize(); It automatically switches POST to PUT based on entityKey, performs a GET on load when entityKey exists, and calls fillAsync() to project data into inputs. 
That keeps create and edit under one endpoint contract.\nThis is the typical choice for business CRUD screens.\nAll of these form types share one structural requirement: they must be initialized.\nWhether you bind to an existing form already present in the DOM or generate a new one dynamically, each form instance needs lifecycle wiring (initialize()), API binding, and sometimes entity loading coordination.\nWhen multiple forms, panels, and behaviors coexist on the same screen, something must coordinate them as a single operational unit.\nThat coordination role is what CotomyPageController provides.\nIt does not replace forms. It aggregates them under one screen boundary, ensuring initialization order, lifecycle restoration, and endpoint-level consistency.\nWhy Start with PageController Most business UI does not run as isolated form widgets. It runs as screens with operational context:\nyou load initial data, show current state, accept edits, submit and reflect server results, and handle failures without losing user intent.\nThat lifecycle belongs to the screen, not to a single submit button.\nThe page controller is useful here because it gives one place to coordinate screen behavior without forcing a component tree mental model. You still use the real DOM, but with a clear entry boundary.\nThree important points:\nOverride initializeAsync() only for screen-level orchestration. Do not load or expand entity data inside initialize(). Use fillAsync for data projection. Treat a screen as a URL-bound operational unit with a defined load and submit contract. Practical rule:\ninitializeAsync() wires forms, registers screen-level behavior, and restores state. fillAsync() projects API response data into the DOM. 
actionUrl at the form level defines the endpoint contract clearly instead of scattering it across handlers.\nKeep lifecycle wiring separate from data projection.\nCore Concept: One Screen = One Endpoint Think in URL-addressable screens first:\nsuch as /users/edit/123, /orders/new, or /products/list.\nEach URL is an independent UI boundary. Each boundary has its own lifecycle, data loading path, submit semantics, and error recovery strategy.\nOne-screen-one-endpoint keeps responsibilities structurally separated:\nthe URL and server endpoint define the boundary, the page controller defines lifecycle and orchestration, and form helpers define submit protocol.\nPractical Example: Build a CRUD User Edit Screen Target screen endpoint:\n/users/edit/{id}\nWe will build this as a single endpoint-bound surface that can:\nLoad one user. Edit fields. Save changes. Delete user. Recover from failures.\nHTML Structure Start with explicit HTML owned by the page.\n\u0026lt;main id=\u0026#34;user-edit-screen\u0026#34;\u0026gt; \u0026lt;h1\u0026gt;User Edit\u0026lt;/h1\u0026gt; \u0026lt;div id=\u0026#34;message\u0026#34;\u0026gt;\u0026lt;/div\u0026gt; \u0026lt;form id=\u0026#34;user-edit-form\u0026#34; action=\u0026#34;/api/users\u0026#34; data-cotomy-entity-key=\u0026#34;123\u0026#34;\u0026gt; \u0026lt;label\u0026gt; Name \u0026lt;input name=\u0026#34;name\u0026#34; autocomplete=\u0026#34;name\u0026#34;\u0026gt; \u0026lt;/label\u0026gt; \u0026lt;label\u0026gt; Email \u0026lt;input name=\u0026#34;email\u0026#34; type=\u0026#34;email\u0026#34; autocomplete=\u0026#34;email\u0026#34;\u0026gt; \u0026lt;/label\u0026gt; \u0026lt;div class=\u0026#34;actions\u0026#34;\u0026gt; \u0026lt;button type=\u0026#34;submit\u0026#34;\u0026gt;Save\u0026lt;/button\u0026gt; \u0026lt;button type=\u0026#34;button\u0026#34; id=\u0026#34;delete-button\u0026#34;\u0026gt;Delete\u0026lt;/button\u0026gt; \u0026lt;/div\u0026gt; \u0026lt;/form\u0026gt; \u0026lt;/main\u0026gt; This markup is intentionally plain. 
The focus is a stable screen root (#user-edit-screen) with explicit inputs.\nIn Cotomy, a screen controller is typically defined as an anonymous class inside the page entry file. This keeps the endpoint boundary structurally isolated and prevents unnecessary cross-page coupling.\nCreate the PageController Use initializeAsync() as the screen entry point.\nimport { CotomyPageController, CotomyEntityFillApiForm } from \u0026#34;cotomy\u0026#34;; CotomyPageController.set(class extends CotomyPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { await super.initializeAsync(); // Bind and initialize the form this.setForm( CotomyEntityFillApiForm.byId\u0026lt;CotomyEntityFillApiForm\u0026gt;( \u0026#34;user-edit-form\u0026#34;, CotomyEntityFillApiForm )! ); // actionUrl is defined at the form level (HTML or attribute) // Do not hardcode endpoint URLs inside controller logic. } }); Key points:\ninitializeAsync() wires the screen, setForm() registers the form under controller lifecycle, and CotomyEntityFillApiForm handles data loading through loadAsync() and fillAsync(). The controller should not manually fetch and push values into inputs.\nSeparate New vs Edit Using Endpoint Context Mode (create vs edit) is determined by endpoint structure and form configuration, not by conditional branching inside the controller. The controller wires the screen, and the form plus endpoint contract express the mode.\nExample HTML:\n\u0026lt;form id=\u0026#34;user-edit-form\u0026#34; action=\u0026#34;/api/users\u0026#34; data-cotomy-entity-key=\u0026#34;123\u0026#34;\u0026gt; Rules:\ndata-cotomy-identify defaults to true, so it usually does not need to be written explicitly. If data-cotomy-entity-key exists, CotomyEntityApiForm uses PUT, and if it does not exist, it uses POST. 
CotomyEntityFillApiForm schedules loadAsync() on window ready after initialize(), and when data-cotomy-entity-key is present it issues GET and calls fillAsync() to reflect the response into inputs.\nfillAsync() is executed not only after GET, but also after successful POST and PUT operations. Therefore, your endpoint must return a consistent entity object for all three operations.\nExpected contract: GET /api/users/{id} returns the entity object, POST /api/users returns the created entity object, and PUT /api/users/{id} returns the updated entity object.\nAll responses should have the same structure if you rely on automatic form reflection.\nNote: When the server responds with 201 Created and entity identification is enabled (data-cotomy-identify !== \u0026ldquo;false\u0026rdquo;), Cotomy expects a Location header containing the new resource path. The entity key is extracted from that Location path relative to the form action. If Location is missing, the entity key is not updated automatically. If Location does not match the action prefix (or does not contain exactly one additional key segment), Cotomy throws an error during submit processing.\nImportant: The Location header requirement applies only when you use CotomyEntityApiForm (or CotomyEntityFillApiForm), which implements the POST → PUT transition automatically. If you are integrating with an existing API that cannot provide a Location header or does not follow this contract, inherit from CotomyApiForm instead and implement your own submit handling logic. 
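To make the contract above concrete, here is a minimal sketch of the two rules: method selection by entity key, and extraction of a new key from a 201 Location header relative to the form action. These are hypothetical helpers written for illustration, not Cotomy's actual implementation:

```typescript
// Rule 1 (sketch): PUT when an entity key already exists, otherwise POST.
function selectMethod(entityKey: string | undefined): "POST" | "PUT" {
  return entityKey ? "PUT" : "POST";
}

// Rule 2 (sketch): extract the new entity key from a 201 Location header,
// relative to the form action. Throws when the Location does not match the
// action prefix or does not add exactly one key segment.
function extractEntityKey(action: string, location: string): string {
  const base = action.endsWith("/") ? action : action + "/";
  if (!location.startsWith(base)) {
    throw new Error(`Location "${location}" does not match action "${action}"`);
  }
  const segments = location.slice(base.length).split("/").filter(s => s.length > 0);
  if (segments.length !== 1) {
    throw new Error(`Location must add exactly one key segment: "${location}"`);
  }
  return segments[0]!;
}

console.log(selectMethod(undefined));                         // → POST (create)
console.log(selectMethod("123"));                             // → PUT (update)
console.log(extractEntityKey("/api/users", "/api/users/42")); // → 42
```

Once the key is extracted after a successful POST, subsequent submits from the same screen can flow through the PUT branch, which is the POST → PUT transition described above.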
Cotomy does not force a server contract — entity-aware behavior is opt-in through the Entity form classes.\nIf your API returns only a status flag or a different response shape, you must override the form behavior and handle the response explicitly.\nThis keeps create and edit under one endpoint contract without duplicating controllers.\nAdd Failure Handling at Screen Level If you need special submit behavior, override submitAsync() and use try–catch explicitly.\nIn practice, this is often done by passing an anonymous class to setForm():\nimport { CotomyPageController, CotomyEntityFillApiForm, CotomyConflictException } from \u0026#34;cotomy\u0026#34;; CotomyPageController.set(class extends CotomyPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { await super.initializeAsync(); this.setForm( CotomyEntityFillApiForm.byId\u0026lt;CotomyEntityFillApiForm\u0026gt;( \u0026#34;user-edit-form\u0026#34;, class extends CotomyEntityFillApiForm { public override async submitAsync(): Promise\u0026lt;void\u0026gt; { try { await super.submitAsync(); console.log(\u0026#34;Saved successfully\u0026#34;); } catch (error) { if (error instanceof CotomyConflictException) { console.warn(\u0026#34;Duplicate identifier.\u0026#34;); } throw error; } } } )! ); } }); Use this pattern only when the screen requires additional behavior beyond the standard form contract. Most CRUD screens should rely on the default entity form behavior and avoid duplicating submit mechanics.\nKeep Submit Behavior as a Form Concern The page controller orchestrates. The form helper standardizes submission.\nThis means your screen stays readable:\nPage entry and lifecycle coordination belong to the controller. Mode (create vs edit) is determined by the form’s entity key and endpoint configuration — not by conditional branching in the controller. The controller defines when the screen starts. 
The form and endpoint define how it behaves.\nDo not push full submit mechanics into page lifecycle code unless you need a custom operation path.\nCRUD Contract Table for One Endpoint Surface When teams say \u0026ldquo;CRUD screen,\u0026rdquo; they often mean different things. Write the contract explicitly for one endpoint-bound surface:\nRead: load existing user by id on entry. Create: same screen in new mode with defaults. Update: submit edited fields with standardized form protocol. Delete: explicit action with redirect and failure handling.\nYou can encode this in controller structure:\nimport { CotomyApi, CotomyPageController } from \u0026#34;cotomy\u0026#34;; // Note: // wireActions() and showMessage() are application-level methods. // They are not part of CotomyPageController. // Implement them in your own controller as needed. CotomyPageController.set(class extends CotomyPageController { protected override async initializeAsync(): Promise\u0026lt;void\u0026gt; { await super.initializeAsync(); this.wireActions(); } // Application-level example: wire your own UI events here. private wireActions(): void { // e.g. bind delete button click -\u0026gt; this.deleteUser() } // Application-level example: render feedback to your screen. private showMessage(message: string): void { console.warn(message); } private get currentId(): string | undefined { const segments = this.url.path.split(\u0026#34;/\u0026#34;).filter(Boolean); return segments[segments.length - 1]; } private async deleteUser(): Promise\u0026lt;void\u0026gt; { const id = this.currentId; if (!id) return; try { const api = new CotomyApi(); await api.deleteAsync(`/api/users/${id}`); location.href = \u0026#34;/users/list\u0026#34;; } catch { this.showMessage(\u0026#34;Delete failed. 
Please try again.\u0026#34;); } } }); Now each CRUD operation has a visible home, which improves long-term maintainability:\nonboarding is faster because behavior is discoverable, endpoint-level logs map to endpoint-level code, and tests can assert operation outcomes without bootstrapping a large app shell.\nRole Separation from Forms Keep this boundary explicit:\nthe page controller handles screen-level orchestration, CotomyApiForm provides the standardized submit path, and CotomyEntityFillApiForm adds automatic entity reflection on top.\nPractical workflow:\nDecide screen endpoint boundary Implement controller behavior for load/mode/failure/navigation Attach form protocol for submit consistency Add entity-fill extension only when reflection requirements justify it If you invert this and start from form classes, page logic drifts into submit hooks, and CRUD screens become hard to reason about.\nWhy This Matters in Real Systems Business systems are operated by screens, not by abstract component fragments. A user opens a URL, performs work, corrects errors, and continues. That unit is the screen boundary.\nOne-screen-one-endpoint gives concrete benefits:\nit gives responsibility closure per endpoint, easier testing for load/submit/failure behavior by screen, a stable refactoring surface for long-lived systems, compatibility with both SPA-like navigation and classic MPA transitions, and a better fit for AI-assisted generation because prompts can target one endpoint contract at a time.\nOn testing specifically:\nintegration tests can target /users/edit/{id} as one contract, failure tests can assert page-level message and fallback behavior, and submit tests can focus on form protocol separately.\nThis is cleaner than mixing all behavior into one large handler graph.\nNext Next: Standardizing CotomyPageController for Shared Screen Flows If you adopt only one rule from this guide, use this:\nDefine the screen endpoint boundary first. 
Then place page behavior in CotomyPageController. Then add form protocol helpers on top.\nThat sequence keeps Cotomy usage practical, testable, and stable.\nPractical Guide This article is part of the Cotomy Practical Guide, which focuses on hands-on usage patterns for the framework.\nSeries articles: Working with CotomyElement , CotomyPageController in Practice, and Standardizing CotomyPageController for Shared Screen Flows .\nLinks Previous: Working with CotomyElement . More posts: /posts/ .\n","permalink":"https://blog.cotomy.net/posts/practical-guide-2-one-screen-one-endpoint-cotomy-page-controller/","summary":"Design screen entry first with CotomyPageController, then structure CRUD flow around one endpoint boundary.","title":"CotomyPageController in Practice"},{"content":"This guide focuses on practical CotomyElement usage. The goal is to move from API familiarity to coding patterns you can apply immediately in production screens.\nIntroduction: Why CotomyElement Comes First If you start Cotomy from forms or API helpers, you can still build features. But you will miss the actual center of the model.\nCotomyElement is the smallest structural unit in Cotomy. It is not a virtual component instance. It is a wrapper around a real DOM element, with runtime behavior for events, scoped CSS, lifecycle hooks, and DOM-safe manipulation. For API details, see CotomyElement reference .\nA quick comparison with React helps set expectations:\nA React component is a declarative render unit over a virtual tree, while CotomyElement is a direct DOM structural unit over the real tree.\nCotomy does not ask you to treat the DOM as a render artifact. It asks you to treat the DOM as a live operational surface.\nThat is why CotomyElement comes first. 
If you understand this class, the rest of Cotomy (CotomyForm, CotomyApi, CotomyPageController) becomes an extension of the same structural boundary.\nBinding to an Existing DOM Element In business systems, you often start with existing server-rendered HTML. Cotomy supports this directly: bind to existing DOM and add behavior.\nimport { CotomyElement } from \u0026#34;cotomy\u0026#34;; const element = CotomyElement.byId(\u0026#34;user-panel\u0026#34;); You can do the same with a derived class:\nimport { CotomyElement } from \u0026#34;cotomy\u0026#34;; class UserPanel extends CotomyElement { public activate(): void { this.attribute(\u0026#34;data-state\u0026#34;, \u0026#34;active\u0026#34;); } } const panel = CotomyElement.byId\u0026lt;UserPanel\u0026gt;(\u0026#34;user-panel\u0026#34;, UserPanel); panel!.activate(); Why this matters:\nThe bound element becomes your structural root boundary, scoped behavior and event registration are attached to that runtime instance, and HTML ownership stays where it already belongs while you add behavior safely.\nThis is the practical meaning of \u0026ldquo;DOM = state\u0026rdquo;. You are not mirroring screen state into another hidden object graph. You are operating on the actual structure the user sees.\nCreating a New DOM Element Sometimes you do not have existing markup. You need to generate a new UI block intentionally. If you use VS Code, es6-string-html is recommended so /* html */ and /* css */ template literals are syntax-highlighted.\nPattern A (Base): HTML string input import { CotomyElement } from \u0026#34;cotomy\u0026#34;; const card = new CotomyElement(/* html */ ` \u0026lt;section class=\u0026#34;card\u0026#34;\u0026gt; \u0026lt;h3\u0026gt;Profile\u0026lt;/h3\u0026gt; \u0026lt;p\u0026gt;DOM-first structure\u0026lt;/p\u0026gt; \u0026lt;/section\u0026gt; `); This is the default creation style in many screens. 
If you do not need scoped CSS for that block, this is usually enough.\nPattern B: html + css input (scoped style) import { CotomyElement } from \u0026#34;cotomy\u0026#34;; const panel = new CotomyElement({ html: /* html */ ` \u0026lt;section class=\u0026#34;panel\u0026#34;\u0026gt; \u0026lt;h2 class=\u0026#34;title\u0026#34;\u0026gt;Users\u0026lt;/h2\u0026gt; \u0026lt;p class=\u0026#34;desc\u0026#34;\u0026gt;Scoped style example\u0026lt;/p\u0026gt; \u0026lt;/section\u0026gt; `, css: /* css */ ` [root] { border: 1px solid #d8e0e6; border-radius: 8px; padding: 12px; background: #fff; } [root] .title { margin: 0 0 6px; font-size: 1rem; } ` }); In this form, [root] explicitly marks the style boundary. Use this when you want to clearly show root-level and child-level selectors.\nYou can also omit [root] for simple single-rule CSS:\nconst compact = new CotomyElement({ html: /* html */ ` \u0026lt;section class=\u0026#34;compact\u0026#34;\u0026gt; \u0026lt;h3 class=\u0026#34;title\u0026#34;\u0026gt;Compact\u0026lt;/h3\u0026gt; \u0026lt;p class=\u0026#34;desc\u0026#34;\u0026gt;Single-rule scoped style\u0026lt;/p\u0026gt; \u0026lt;/section\u0026gt; `, css: /* css */ ` .compact { padding: 8px; border: 1px solid #d8e0e6; } ` }); In this mode, Cotomy prepends the root scope once. 
For multi-rule CSS, write [root] explicitly on each selector:\nconst detailed = new CotomyElement({ html: /* html */ ` \u0026lt;section class=\u0026#34;detailed\u0026#34;\u0026gt; \u0026lt;h3 class=\u0026#34;title\u0026#34;\u0026gt;Detailed\u0026lt;/h3\u0026gt; \u0026lt;p class=\u0026#34;desc\u0026#34;\u0026gt;Explicit root selectors\u0026lt;/p\u0026gt; \u0026lt;/section\u0026gt; `, css: /* css */ ` [root] .detailed { padding: 8px; border: 1px solid #d8e0e6; } [root] .title { margin: 0 0 4px; } [root] .desc { color: #52606d; } ` }); Pattern C: tagname + text + css input (lightweight) This pattern is less central, but useful for simple text or message output where writing full HTML tags is unnecessary.\nimport { CotomyElement } from \u0026#34;cotomy\u0026#34;; const message = new CotomyElement({ tagname: \u0026#34;p\u0026#34;, text: \u0026#34;Saved successfully.\u0026#34;, css: /* css */ ` [root] { color: #0b6b57; font-weight: 600; } ` }); This is explicit structure generation, not a render cycle. You decide when and where the element exists, then attach behavior.\nAdding an Element Through CotomyWindow CotomyWindow is the runtime surface for page-level behavior. In current Cotomy implementation, you use CotomyWindow.instance and append through append(...). Related references: CotomyWindow and Forms Basics .\nimport { CotomyElement, CotomyWindow } from \u0026#34;cotomy\u0026#34;; const win = CotomyWindow.instance; win.initialize(); const notice = new CotomyElement(\u0026#39;\u0026lt;div class=\u0026#34;notice\u0026#34;\u0026gt;Ready\u0026lt;/div\u0026gt;\u0026#39;); win.append(notice); CotomyWindow.initialize() will be covered in detail in a future guide. 
If you are not using CotomyPageController, call CotomyWindow.instance.initialize() explicitly during startup.\nWhy use window-level append instead of appending to the body from scattered locations:\nYou operate through a shared lifecycle boundary, removal observation and runtime events are initialized in one place, and page-level behavior stays consistent as screens grow.\nThis does not forbid direct DOM usage. It keeps structural operations coordinated through runtime boundaries in long-lived UI.\nA common integration pattern is:\nCotomyWindow.instance.initialize() once at startup create or bind elements append entry elements through CotomyWindow.instance.append(...) Retrieving Existing Elements — Static Methods Use CotomyElement static retrieval methods when you need predictable element lookup without external helper libraries.\nCotomyElement.byId(id, type?) Returns one typed element or undefined. Best when the page has a stable ID contract.\nimport { CotomyElement } from \u0026#34;cotomy\u0026#34;; const profile = CotomyElement.byId(\u0026#34;profile\u0026#34;); if (profile) { profile.attribute(\u0026#34;data-state\u0026#34;, \u0026#34;active\u0026#34;); } CotomyElement.first(selector, type?) Returns the first match or undefined. Use for single-entry blocks selected by CSS.\nconst firstError = CotomyElement.first(\u0026#34;.field-error\u0026#34;); firstError?.setFocus(); CotomyElement.last(selector, type?) Returns the last match or undefined. Useful for append-like UI where newest/last element matters.\nconst latestRow = CotomyElement.last(\u0026#34;.result-row\u0026#34;); latestRow?.scrollIn(); CotomyElement.find(selector, type?) Returns all matches as an array. 
Use when applying the same operation to multiple elements.\nCotomyElement.find(\u0026#34;[data-status=\u0026#39;pending\u0026#39;]\u0026#34;).forEach(el =\u0026gt; { el.attribute(\u0026#34;data-highlight\u0026#34;, \u0026#34;true\u0026#34;); }); CotomyElement.contains(selector) / CotomyElement.containsById(id) Boolean existence checks. Use for guard conditions before costly operations.\nif (CotomyElement.containsById(\u0026#34;approval-panel\u0026#34;)) { // proceed with panel setup } CotomyElement.empty(type?) Creates a hidden placeholder element. Useful as a safe no-op fallback when you want chain-style handling.\nconst meta = CotomyElement.byId(\u0026#34;meta\u0026#34;) ?? CotomyElement.empty(); meta.attribute(\u0026#34;data-ready\u0026#34;, \u0026#34;1\u0026#34;); Compared with jQuery-like usage:\nReturn types are explicit (CotomyElement | undefined, arrays, booleans), there is no implicit collection wrapper with mixed semantics, and typed constructor override supports class-based structure.\nDesign Guidelines for Using CotomyElement The following rules keep Cotomy usage stable in production code.\nTreat one feature as one element boundary, write scoped CSS relative to the root boundary, avoid moving too much UI state into detached JS variables, and operate on DOM state directly when the user-facing state is already in the DOM.\nPractical guideline:\nKeep domain decisions out of element classes, keep element classes focused on structure, interaction, and presentation state, and treat API results as operational input before updating the DOM intentionally.\nThis keeps separation of concerns clear while preserving debuggability in browser tools.\nWhat CotomyElement Is Not Misunderstanding this class leads to most early integration mistakes.\nCotomyElement is not:\na virtual DOM implementation, a re-render engine, or a centralized state management framework.\nIt is a runtime-oriented DOM abstraction with lifecycle and structural safety behaviors.\nIf you expect 
auto re-render from object mutation, you are using the wrong mental model. If you expect hidden state synchronization, same issue.\nCotomy\u0026rsquo;s design expects explicit structural updates on real DOM nodes.\nConclusion: From Element to Structured UI CotomyElement is the practical starting point for Cotomy. You can bind existing HTML, generate new structure, append through CotomyWindow.instance, and retrieve elements through static methods with predictable return behavior.\nThis is enough to build robust screen structure before introducing higher-level helpers.\nOnce this layer is clear, the next practical step is forms. That is where intent declaration, submission flow, and failure channels become operationally meaningful.\nPractical Guide This article is part of the Cotomy Practical Guide, which focuses on hands-on usage patterns for the framework.\nSeries articles: Working with CotomyElement, CotomyPageController in Practice , and Standardizing CotomyPageController for Shared Screen Flows .\nNext Next: CotomyPageController in Practice ","permalink":"https://blog.cotomy.net/posts/practical-guide-1-working-with-cotomy-element/","summary":"Binding, creating, and managing UI structure with CotomyElement in real DOM-centric workflows.","title":"Working with CotomyElement"},{"content":"This is the seventh post in Problems Cotomy Set Out to Solve. This continues from Runtime Boundaries and Operational Safety . In the first six posts, I focused on how to keep UI behavior stable as systems grow: HTML and CSS boundaries, submission boundaries, screen lifecycle boundaries, state continuity boundaries, API protocol boundaries, and runtime boundaries. The goal there was predictability. Here, I want to move to a different question: who should decide business outcomes.\nIn many systems, this question is never stated directly. Decisions grow in the UI through convenience. Conditions spread through handlers. Authority moves without being named. 
At first, this feels practical. Later, the system cannot explain why two screens make different decisions for the same operation.\nThis is where separation of concerns stops being a naming preference and becomes an operational rule.\nIntroduction: Why UI Intent Matters Most long-lived business UI breakdowns are not caused by missing features. They are caused by unclear responsibility. When screens own too much decision logic, each local optimization creates another private rule.\nThe symptoms are familiar:\nYou see error message policy differ by screen for the same domain failure, and validation decisions drift between create and edit pages. Over time, event handlers become the place where business decisions quietly accumulate.\nThat drift is not random. It appears when the role of the UI is undefined. So the core question is simple:\nWhat should the UI actually do?\nThe answer in this series is simple: UI should declare intent for an operation, but not become the source of business authority.\nCommon Patterns Where UI Becomes Authority Responsibility leakage usually looks reasonable in isolation. A developer adds one branch to improve user experience. Another branch is added for role-based visibility. Another one controls a transition based on a status. Soon, the screen is making decisions that belong to business logic.\nCommon patterns include:\nIt often starts with event handlers deciding whether an operation is allowed, then grows into UI branches that encode approval transitions. Role conditions end up implemented only in rendering logic, and success or failure policies quietly diverge by page.\nTypical examples:\n\u0026ldquo;Disable the submit button when stock is zero\u0026rdquo;. \u0026ldquo;Show this operation only for administrators\u0026rdquo;. \u0026ldquo;If approval stage is X, then do Y in this click handler\u0026rdquo;.\nNone of these are wrong as presentation concerns by themselves. 
The problem appears when the UI becomes the final authority for those rules. At that point, business decisions are no longer governed by a shared operational contract. They are governed by whichever screen the user happened to use.\nIntent vs Authority — Defining Responsibility This distinction needs explicit terms.\nIntent means the UI declares that a user wants to perform an operation. Authority means the business layer decides whether that operation is valid and how it should be handled.\nUI owns intent. Business logic owns authority.\nIntent says, \u0026ldquo;attempt this operation with these inputs.\u0026rdquo; Authority says, \u0026ldquo;this operation is valid under current business rules.\u0026rdquo;\nIf intent and authority are merged into one layer, the system loses a stable structural boundary. It becomes difficult to keep every entry point aligned to the same rule set.\nA practical framing is this: authority lives in backend business logic and its operation contracts. The UI communicates with that layer through explicit contracts, but should not replace it.\nConcrete Failures of Blurred Boundaries When UI holds authority, failure modes become systemic.\nBoundary ambiguity spreads first. Business conditions appear in many screens, and rule changes require screen-by-screen patching. A single domain update becomes a UI migration problem.\nAt the same time, duplicated rule systems emerge. The UI applies one condition while the API applies another. Both look correct locally, yet they diverge over time. This creates hidden conflict:\nUI says operation is invalid, API would accept. UI says operation is valid, API rejects with domain failure.\nTesting quality also drops around decision logic. Business decisions encoded in handlers are hard to isolate. You can test click outcomes, but not authoritative rule behavior at the right abstraction level. 
Regression risk increases with each UI variant.\nThese are operational failures that appear when authority is assigned to the wrong layer.\nCotomy’s Position: UI as Intent Layer Cotomy\u0026rsquo;s position is not that UI should be passive. It is that responsibility must be explicit.\nThe model is:\nUI declares operational intent Business authority is decided through business protocol and backend logic Runtime boundary preserves consistent execution semantics at the UI edge This preserves intent handling in the screen while keeping authority in a single operational domain.\nCotomy\u0026rsquo;s architectural stance treats UI as intent and backend as authority (see Cotomy Reference – Overview ).\nThat model is what keeps the UI predictable without turning it into a decision engine.\nImplementation Reality — How Cotomy Supports This The implementation shape in Cotomy aligns with this boundary. No hidden authority transfer is introduced.\nAt a high level:\nIn the current implementation, form.ts centralizes submission entry through CotomyForm and related API form types. api.ts provides structured exceptions and response objects for failure handling. view.ts provides DOM-state handling, event registration, and lifecycle-related behavior.\nWhat this means in practice:\nUI collects values and declares an operation attempt. Operation outcomes are evaluated from API status and domain error responses. Runtime provides consistent structure for execution, not business judgment.\nThis is why the series has repeatedly emphasized boundaries. Cotomy does not position the UI as a business rule engine. It presents UI as an intent layer operating against explicit contracts.\nIn other words, the UI can validate usability and guide users, but final business acceptance should stay outside the UI layer.\nMisunderstandings to Avoid This boundary model is often misread. The following clarifications are essential:\nUI-side validation still matters for usability and early feedback. 
This is not a backend architecture argument, and it also works with server-light or server-optional implementation patterns.\nThe point is responsibility separation, not feature prohibition.\nUI validation remains useful for user guidance and early feedback. But authoritative acceptance criteria should remain in the business logic enclave so all operation paths remain coherent.\nThis is also not a claim that every app must adopt one technical stack. The model is architectural: intent boundary in UI, authority boundary in business logic, and runtime structure between them.\nConclusion: Operational Safety Through Responsibility Separation Operational safety is not achieved by adding more checks to every screen. It is achieved by assigning the right layer to the right responsibility.\nThe UI should express intent clearly. The business layer should decide authority consistently. The runtime should preserve a predictable execution boundary between them.\nThat is the practical meaning of Intent vs Authority. It is also the reason this series moved from local UI structure to operation protocol and runtime boundaries.\nWhen intent and authority are separated, change remains tractable. 
When they are merged, behavior drifts and trust erodes.\nCotomy\u0026rsquo;s contribution is not \u0026ldquo;automatic business logic.\u0026rdquo; It is architectural discipline: clear Separation of concerns, a stable Structural boundary, and a runtime model that keeps UI behavior aligned with operational contracts.\nProblem Series This article is part of the Cotomy Problem Series, which examines recurring structural failures in business UI design.\nSeries articles: HTML and CSS as One Unit , Form Submission as Runtime , Screen Lifecycle and DOM Stability , Form State and Long-Lived Interaction , API Protocols for Business Operations , Runtime Boundaries and Operational Safety , UI Intent and Business Authority, and Binding Entity Screens to UI and Database Safely .\nNext Next: Binding Entity Screens to UI and Database Safely ","permalink":"https://blog.cotomy.net/posts/problem-7-ui-is-intent-not-business-authority/","summary":"UI should declare operational intent, while business authority must remain in business logic and operational contracts.","title":"UI Intent and Business Authority"},{"content":"This is the sixth post in Problems Cotomy Set Out to Solve. This continues from API Protocols for Business Operations . It closes the first structural arc of the series. The earlier problems defined separate boundaries. This one explains why those boundaries must be unified into a runtime boundary if a business UI is expected to remain operationally safe over time.\nThe question is simple: what does safety mean in business UI? Not \u0026ldquo;nothing ever fails.\u0026rdquo; Safety means failure is constrained, predictable, and recoverable under normal operations.\nWhere Most UI Systems Lose Safety Many systems still run without a clear runtime boundary between screen code and business operations. The result is not immediate collapse. 
The result is slow drift into unpredictable behavior.\nUI code and operation logic get mixed in the same layer, API calls are scattered per screen and per button, DOM operations are executed from arbitrary points, and error handling differs from one screen to another.\nEach local decision looks reasonable. At system scale, the outcome is inconsistent behavior that nobody can fully trace.\nWhat This Drift Produces When runtime boundaries are implicit, the system accumulates hidden failure paths:\nyou get operational mismatches between screens that should behave the same way, implicit dependencies that exist only in maintainers\u0026rsquo; memory, side effects outside the intended interaction scope, and more fragility during staff turnover and maintenance handoff.\nThis is why operational safety is a structural property, not a QA checklist. Safety is not the absence of incidents. Safety is the presence of bounded and predictable failure behavior.\nHow Problems 1 to 5 Connect to Runtime Safety To make the references explicit, here are the five earlier posts by title and link: Problem 1 is HTML and CSS as One Unit , Problem 2 is Form Submission as Runtime , Problem 3 is Screen Lifecycle and DOM Stability , Problem 4 is Form State and Long-Lived Interaction , and Problem 5 is API Protocols for Business Operations . This post treats those five boundaries as one connected runtime-safety model.\nIn short, the HTML/CSS boundary protects local UI structure, the submit boundary defines operation flow as runtime protocol, the screen lifecycle boundary stabilizes the working surface, the form continuity boundary preserves long-lived interaction state, and the business operation protocol boundary aligns forms with APIs.\nWithout integration, these remain isolated rules. 
With integration, they become a single execution contract: the runtime boundary.\nCotomy\u0026rsquo;s Position Cotomy\u0026rsquo;s model can be summarized as a four-layer operational stance:\nDOM is treated as a state-bearing working surface, forms are treated as protocol instead of page-specific glue, APIs are treated as business operation channels, and runtime is treated as the boundary that structures operational safety.\nIn this model, screens do not provide safety by themselves. Screens declare intent. Runtime defines consistent execution boundaries.\nThis same stance is visible across Cotomy\u0026rsquo;s public model and documentation: Cotomy , Comparison , CotomyElement reference , and Forms Basics .\nWhat It Means to Have a Runtime Boundary A runtime boundary is concrete even when implementation is abstracted away. It means critical operational concerns are not left to per-screen reinvention:\nform submission flow is runtime-structured, event registry lifecycle is runtime-structured, scoped CSS disposal rules are runtime-structured, and API error handling is normalized through events and exception types.\nDevelopers still write behavior. But behavior is executed through a boundary that keeps consistency under operation.\nThat distinction is the final defensive layer. If every screen owns execution semantics, safety diverges. If runtime owns execution semantics, safety can be system-wide.\nWhy This Is Not Another Architecture Debate This is not a DDD, Clean Architecture, backend design, or anti-SPA argument. The focus is UI operational safety: whether business operations execute predictably at scale.\nFinal Position Cotomy does not center on abstracting rendering. It centers on constructing an operational boundary.\nThat is the shift from implementation convenience to operational safety. A stable business UI is not produced by smart components alone. 
It is produced by explicit runtime boundaries that constrain how operations are executed.\nProblem Series This article is part of the Cotomy Problem Series, which examines recurring structural failures in business UI design.\nSeries articles: HTML and CSS as One Unit , Form Submission as Runtime , Screen Lifecycle and DOM Stability , Form State and Long-Lived Interaction , API Protocols for Business Operations , Runtime Boundaries and Operational Safety, UI Intent and Business Authority , and Binding Entity Screens to UI and Database Safely .\nNext Next: UI Intent and Business Authority ","permalink":"https://blog.cotomy.net/posts/problem-6-runtime-boundaries-operational-safety/","summary":"Operational safety in business UI depends on a clear runtime boundary between screen intent and execution structure.","title":"Runtime Boundaries and Operational Safety"},{"content":"1. Operator Information Operator name: Y. Arakawa\nLocation: Nagoya, Aichi, Japan\nContact email: yshr1920@gmail.com 2. Information We Collect We collect the following information.\n(1) Contact Form Name Email address Message content (2) Log Information IP address Access time Browser/OS (3) Cookies / Tracking Browsing data via Google Analytics Advertising-related identifiers (only if advertising features are enabled in the future) 3. Purpose of Use We use collected information for the following purposes.\nResponding to inquiries Analysis to improve services Site maintenance and operations Advertising delivery optimization (only if advertising services are enabled) 4. Third-Party Sharing We may share collected data with the following service providers.\nGoogle LLC (Analytics, Ads, etc.) Formspree (form submission processing) Cloudflare (hosting, CDN, and security) We will not disclose personal data to third parties without consent, except as required by law.\n5. 
Use of Cookies This site uses cookies.\nInformation collected via cookies is used for behavioral analysis and, if advertising features are enabled, ad optimization.\nUsers can disable cookies in their browser settings.\n6. Data Retention Form submissions are retained for up to 1 year.\nOther logs are deleted after approximately 6 months.\n7. Security Measures We implement access control, encryption in transit, and access log reviews to prevent data leakage.\n8. User Rights Users may request access to, correction of, or deletion of their personal information.\nRequests can be made via the contact form listed at the end of this policy.\n9. Legal Basis and Compliance This Privacy Policy is governed by the Act on the Protection of Personal Information (APPI) of Japan.\nWhere applicable, we also consider international data protection principles.\n10. International Data Transfers This site is operated by a Japan-based operator, but it is available globally. Some third-party services used by this site (such as analytics or form processing providers) may process data outside Japan. We rely on reputable providers and their publicly documented data protection measures and safeguards.\n11. Children’s Privacy This site is not specifically intended for children. If we become aware that we have collected personal information from a minor without appropriate consent, we will take steps to delete such information.\n12. Policy Updates We may update this Privacy Policy as needed. The latest version will always be available on this page.\n13. Contact For privacy-related inquiries, please contact us via the contact form:\nContact ","permalink":"https://blog.cotomy.net/privacy-policy/","summary":"\u003ch2\u003e1. Operator Information\u003c/h2\u003e\n\u003cp\u003e\u003cstrong\u003eOperator name:\u003c/strong\u003e Y. 
Arakawa\u003cbr\u003e\n\u003cstrong\u003eLocation:\u003c/strong\u003e Nagoya, Aichi, Japan\u003cbr\u003e\n\u003cstrong\u003eContact email:\u003c/strong\u003e \u003ca href=\"mailto:yshr1920@gmail.com\"\u003eyshr1920@gmail.com\u003c/a\u003e\n\u003c/p\u003e\n\u003ch2\u003e2. Information We Collect\u003c/h2\u003e\n\u003cp\u003eWe collect the following information.\u003c/p\u003e\n\u003ch3\u003e(1) Contact Form\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eName\u003c/li\u003e\n\u003cli\u003eEmail address\u003c/li\u003e\n\u003cli\u003eMessage content\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003e(2) Log Information\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eIP address\u003c/li\u003e\n\u003cli\u003eAccess time\u003c/li\u003e\n\u003cli\u003eBrowser/OS\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch3\u003e(3) Cookies / Tracking\u003c/h3\u003e\n\u003cul\u003e\n\u003cli\u003eBrowsing data via Google Analytics\u003c/li\u003e\n\u003cli\u003eAdvertising-related identifiers (only if advertising features are enabled in the future)\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2\u003e3. Purpose of Use\u003c/h2\u003e\n\u003cp\u003eWe use collected information for the following purposes.\u003c/p\u003e","title":"Privacy Policy"},{"content":"This is the fifth post in Problems Cotomy Set Out to Solve. This continues from Form State and Long-Lived Interaction . The previous article argued that form state should be treated as a long-lived working context rather than a temporary payload.\nThat continuity still needs a server boundary.\nHere the series moves from internal UI boundaries to the UI-server boundary. The issue is not HTTP itself, but that business operations are split into incompatible patterns. An operation protocol defines how UI intent crosses into server work, how failure is handled, and how results return to the same screen context. 
A protocol, in this sense, is the contract that defines how UI intent becomes server operation and returns to the same context.\nThe Real Breakpoint in Business UI Many systems treat form submission as one thing and API calls as another. The same business action is allowed to cross the UI-server boundary through multiple operational paths.\nA form may use native submit behavior, a save button may use fetch(), a batch update may use a custom request, and a partial save may use yet another handler.\nThat inconsistency becomes much more visible in large business applications. One screen may still load through server rendering, another may save through Ajax, another may use an ordinary POST for updates, and another may fetch select options or dependent data through separate calls. Each choice can be defended locally. But once the number of CRUD-style screens keeps growing, those local choices accumulate into several operational cultures inside the same product.\nThis is one reason large business systems become difficult to keep coherent. Different subteams, delivery pressure, and uneven screen histories make it hard to keep one request model everywhere. Trying to restore that consistency later often requires much more effort than teams expect, because the inconsistency is no longer in one function. It is in the operational habits of the screen layer itself.\nYet from the user’s perspective, these are all the same operation: a business action that needs validation, locking, error feedback, and reflection.\nThat is the breakpoint. 
The transport may differ, but the operational meaning is often the same.\nA Pattern I Kept Seeing In the systems I kept seeing, and in the stories I kept hearing through industry conversations, each screen re-implemented its own API handling:\nHTTP errors and business errors are handled differently, form data conversion is repeated on every screen, loading and lock state are inconsistent, error notifications are ad hoc, and post-success UI reflection varies by page.\nThis is not just code duplication. It is a protocol failure. A protocol failure means the same operation no longer follows the same entry, failure, and reflection rules.\nIf the architecture does not force one recognizable shape, local development drifts toward opportunistic fixes. The developer under immediate pressure naturally patches the current screen in the fastest way available. That is not a moral failure. It is what usually happens when the system leaves the protocol optional.\nIn theory, a strong reviewer could catch every local deviation. In practice, that is rarely realistic once the number of screens and contributors grows. Guidance also degrades as it moves through layers of team leads or middle managers, and the operational rule becomes less precise each time it is passed down. The same pressure appears with AI-generated code as well. If the architecture does not make the intended protocol obvious, generated code also tends to reproduce the nearest local pattern instead of restoring a shared one. Even if AI-assisted development becomes much stronger from here, I expect this pressure to remain in some form, because the problem is not only code generation quality. It is the absence of an enforced operational shape.\nOnce that happens, the same business action starts behaving differently depending on which local request path happened to be chosen during implementation. These differences are not accidental. 
They come from allowing multiple operational paths for the same business action.\nComponent-oriented frameworks can improve a different part of the situation. They can give the frontend a clearer internal model for state, rendering, and composition.\nBut that does not automatically solve the problem discussed here. This article is about whether one business action crosses the UI-server boundary through a stable operational protocol. A frontend can be internally well-structured and still let load, save, validation, and reflection drift apart at the screen boundary.\nThe Core Misassumption The API is treated as a data transport tool rather than an operational protocol. fetch() is excellent for HTTP, but business UI needs more than transport: it needs a consistent operational contract.\nThis is why the real problem is not whether HTTP access is technically centralized. Many teams already wrap fetch(), jQuery.ajax(), or another request function. That still leaves the screen without a clear operational unit. Sharing one request helper can standardize transport. It does not yet standardize how one screen enters an operation, handles failure, and reflects the result. What matters is that the form is treated as one recognizable class of screen behavior, not just as a place where requests happen.\nSeen from that angle, the issue is less about choosing one request helper and more about defining one operational object. A form is not only a collection of fields. It is the point where input, submit intent, error handling, and response reflection have to stay coherent. 
Without that level of definition, request access may be shared while the screen-level protocol is still fragmented.\nWhat look like different actions are actually the same protocol:\nvalidate input, lock the working surface, submit, handle errors in a uniform way, and reflect results back to the UI.\nWhen this protocol is fragmented, the UI becomes inconsistent and fragile. An API call is treated as a transport event, not as part of a continuous operation.\nPart of my own path into this problem came from trying to standardize it with jQuery-era approaches. I kept trying to make the request side more reusable, but the behavior still ended up attached to individual events and local handlers, which made the screen contract difficult to unify. Because my earlier background was stronger in object-oriented design than in Web frontend practice, I cannot rule out the possibility that a cleaner answer existed somewhere in that ecosystem and I simply did not see it at the time. Even so, what became clear to me was that transport reuse alone did not solve the real problem. The missing part was the form as an operational boundary.\nThis is also where the previous article connects. If form state is a long-lived working context, then server interaction cannot be treated as an unrelated network detail. It has to preserve the same context through submit, failure, retry, and reflection. The server boundary must preserve that context through a single operational path.\nWhy This Matters in Business Systems Business UI is long-lived and operationally sensitive:\nthe number of screens grows over time, the data model is complex, and the UI must remain stable across staff and versions.\nThis becomes critical because the same business action is no longer executed through a single, predictable path.\nIf the operation protocol is not shared, every screen drifts. 
Small differences turn into operational risk.\nThis matters even more as a business system grows over time.\nThe problem is not only that one screen becomes harder to maintain. It is that the same business action starts behaving differently across dozens of screens, maintained by different people at different times.\nAt that point, the cost appears in several places at once. Users have to relearn small differences between screens. Developers cannot easily predict how a change in one API contract will affect other pages. Reviewers can no longer judge quality by one shared operational rule. When failures happen in production, the system is harder to diagnose because entry, failure, and reflection no longer follow one recognizable shape.\nSeen more concretely, an operation protocol needs at least three guarantees.\nThe entry guarantee defines where a business operation begins from the screen and from which context it is issued. The failure guarantee defines how errors are normalized and returned to that same context in a predictable shape. The reflection guarantee defines how success or failure is applied back to the same working context without inventing a different local pattern on every page. One screen may reload the whole page, another may patch one fragment, and another may only show a notification while leaving the visible state partly unchanged. All of these must follow the same operational path to preserve consistency.\nCotomy’s Position (Without Implementation Detail) Cotomy treats form submission and API interaction as one operational model at the design level. It assumes the UI should not invent a new request pattern per screen.\nThe exact execution path depends on which API surface you use. CotomyApiForm emits standardized failure events for form-driven flows. Direct CotomyApi usage throws structured exceptions. Submission still flows through shared protocol entry points, so lock behavior can be centralized as an application-level strategy when needed. 
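Stated as code, the three guarantees above form one template. The sketch below is framework-neutral and deliberately minimal; every name in it (OperationForm, normalizeFailure, reflect, reflectFailure) is an illustrative assumption, not Cotomy's actual API, which this article intentionally leaves out:

```typescript
// Framework-neutral sketch of the entry, failure, and reflection guarantees.
// All class and method names here are hypothetical, not Cotomy's API.

type FormValues = Record<string, string>;

interface OperationFailure {
  kind: "http" | "validation";
  messages: string[];
}

abstract class OperationForm {
  // Entry guarantee: every business operation starts here,
  // from the same screen context, through the same path.
  async submitAsync(values: FormValues): Promise<void> {
    this.lock();
    try {
      const result = await this.send(values);
      // Reflection guarantee: success flows back through one defined point.
      this.reflect(result);
    } catch (e) {
      // Failure guarantee: every error is normalized to one shape
      // before it reaches the screen.
      this.reflectFailure(this.normalizeFailure(e));
    } finally {
      this.unlock();
    }
  }

  protected normalizeFailure(e: unknown): OperationFailure {
    return {
      kind: "http",
      messages: [e instanceof Error ? e.message : String(e)],
    };
  }

  protected abstract send(values: FormValues): Promise<unknown>;
  protected abstract reflect(result: unknown): void;
  protected abstract reflectFailure(failure: OperationFailure): void;
  protected lock(): void {}
  protected unlock(): void {}
}
```

The point of the template-method shape is that a screen can only vary what it sends and how it reflects; it cannot invent its own entry, failure, or unlock behavior.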
UI reflection can follow a shared contract when built on CotomyApiForm or CotomyEntityFillApiForm.\nThis matches Cotomy’s model where UI operations are built on a stable protocol layer rather than ad hoc request handling (see Cotomy Reference – API Integration Basics ).\nThe runtime provides structure, not automatic global notification.\nThat boundary matters. Cotomy guarantees explicit protocol surfaces, not hidden application policy. CotomyApi gives a direct API client boundary. CotomyApiForm and CotomyEntityFillApiForm add form-oriented protocol structure on top of that boundary. Notification policy, locking strategy, and domain decisions still belong to the application layer.\nCommon Misreadings This is not a type-safety discussion, a backend architecture debate, or a GraphQL-vs-REST argument. It is about keeping the operational protocol between UI and server stable.\nProblem Series This article is part of the Cotomy Problem Series, which examines recurring structural failures in business UI design.\nSeries articles: HTML and CSS as One Unit , Form Submission as Runtime , Screen Lifecycle and DOM Stability , Form State and Long-Lived Interaction , API Protocols for Business Operations, Runtime Boundaries and Operational Safety , UI Intent and Business Authority , and Binding Entity Screens to UI and Database Safely .\nNext Next: Runtime Boundaries and Operational Safety ","permalink":"https://blog.cotomy.net/posts/problem-5-api-driven-entities-ui-contract-stability/","summary":"In business UI, form submission and API calls are the same operational protocol, not separate patterns.","title":"API Protocols for Business Operations"},{"content":"This is the fourth post in Problems Cotomy Set Out to Solve. This continues from Screen Lifecycle and DOM Stability . 
The previous article argued that a business screen needs an explicit lifecycle and a stable working surface.\nThat raises the next question immediately: if the screen stays stable, what exactly is being preserved on it?\nMost business UI is a loop of input, revision, validation, and submission. It is not a single event. It is sustained work. Yet form state is still treated as a temporary payload, even though it behaves as a continuous working context in practice.\nCotomy treats form state as a long-lived working context, because that is how business systems actually operate. Form state continuity means that user progress remains meaningful across input, validation, retry, reload, and return through the intended lifecycle path.\nThe Usual Pattern Falls Short Typical implementations assume:\nyou load the screen, fill the form, submit once, and exit.\nThat model breaks down in real operations. A business form often needs more than plain input. Before work can even start, the screen often has to load reference data, current values, or options required for correct entry. During editing, one input can change which fields are visible, required, or meaningful. After submission, server-side validation can return problems that must be shown without destroying the user\u0026rsquo;s current working context.\nBusiness forms are edited, paused, revisited, and corrected. Partial input and iterative validation are the norm.\nIn other words, the operational shape of the screen is not submit-and-leave. It is load, interpret, enter, revise, validate, reconfigure, and continue.\nThe Failure Modes Are Familiar When form state is treated as transient, these failures appear:\npartial input is lost, back navigation destroys work in progress, validation errors reset unrelated fields, revisions after errors become fragile, and long edits accumulate inconsistencies.\nThe form state is not just a set of values. 
It is a working context that changes over time.\nSome of these failures are more concrete than they first sound.\nA field can be hidden because another input changed, while the old hidden value still survives underneath and continues to affect submission. A screen-level selection can also break the relationship between fields that were meaningful only in the previous configuration. In other cases, the user does nothing wrong, yet a value or display block is still damaged because expanded data is rewritten through a rare validation path, a partial update path, or another special control path that exists outside the main interaction flow.\nThe more state is split across several local owners, the more likely these problems become. One part of the screen still reflects the current input. Another still reflects older loaded data. Another follows a special patch that only runs after one specific response. That is how a form can look usable while already no longer holding one coherent state.\nOne validation case is enough to make this visible.\nA user fills out a form, submits it, and gets a validation error. A corrected amount may still be visible in the input, while a calculated label or related display value has already fallen back to older server data. The user now sees a mix of current input, stale derived output, and partial corrections on the same screen. At that point, the screen no longer represents a single coherent working context. This is not a rendering issue. It is a state coherence issue.\nState Is Not Just Values In that validation example, what is the state?\nIt is not just the current input values. 
It also includes what the user typed before submission, what the server returned after validation, which fields are still invalid, which values are still being edited, and how the user currently interprets the screen.\nThat is why form state is better understood as a working context than as a plain set of values.\nThis matters because failures do not happen only when one text box loses its value. They happen when this whole working context stops being continuous.\nIf a validation response keeps the typed values but silently changes a related display field, the state is already damaged. If a partial reload restores one select box from older data while leaving the rest of the form untouched, the state is already damaged. The issue is not only value loss. It is continuity loss. This is why state cannot be treated as a snapshot. It is an evolving context.\nEvent vs. Continuity The difference is not technical. It is structural.\nAn event-based model would treat that same case as a sequence: input, submit, response, render.\nAn event model describes transitions. A continuity model must preserve meaning across them.\nBut that does not match what the user is actually doing in that moment. The user is not starting a new task after the response. The user is correcting the same input based on the validation result.\nThe form is not a one-time event. It is a continuous working context that spans input, submission, validation, correction, and retry.\nBusiness systems require the second model.\nThis follows directly from the previous article. A stable screen lifecycle is the condition that makes continuity possible. Form state is the content that continuity is preserving.\nWhere Continuity Actually Breaks The next step is to say more precisely where the damage comes from.\nContinuity does not break abstractly. It breaks when multiple mutation paths redefine the same working context.\nForm state does not break only because it exists in several places. 
It breaks because it can be updated through several unrelated paths.\nIn real business screens, the same form state is commonly affected by initial load from SSR or API, direct user input in the DOM, validation feedback returned from the server, partial re-render or fragment replacement, manual JavaScript updates, and reload or restore after navigation.\nEach path may look reasonable by itself. The problem appears when they are not governed as one structure.\nAt that point, one path preserves current input, another path redefines the baseline, another path rewrites display-only fields, and another path restores older assumptions from a cached screen. The visible form still exists, but the working context no longer has one continuous meaning.\nA second state surface does not create every mutation path by itself, but it makes scattered mutation paths much easier to accumulate.\nThis is why the problem is hard to avoid accidentally. These mutation paths are not exotic. They come from normal business requirements such as asynchronous validation, return navigation, partial updates, and server-driven refill. Once a screen lives longer than one submit, those paths appear naturally. Without design rules, continuity fails naturally too.\nWhy This Matters in Business UI Inputs can take minutes or longer. Users pause, switch tasks, and resume. Fields depend on each other. Corrections are common. The UI must preserve continuity over re-render.\nForm state is a sustained working context, not a transient event payload. Because that context belongs to an ongoing business operation, continuity must be preserved across interaction, validation, and retry.\nCotomy treats form state as a sustained working context bound to DOM rather than a transient payload. 
This aligns with Cotomy’s design where the DOM is the source of truth and runtime behavior provides lifecycle structure such as scoped style handling and removal observation, while keeping form state on the DOM side (see Cotomy Reference – Forms Basics ).\nThat design choice matters because it avoids introducing a second hidden source of truth for ordinary form work. Inputs stay where the user already sees them, while the runtime standardizes submit flow around that state instead of moving the state into a separate store by default.\nWhy DOM-As-State Is Not the Real Problem At this point, one common objection usually appears: keeping state on the DOM sounds like the problem, not the solution.\nI do not think that is the real structural issue.\nThe deeper problem is not that the DOM exists as state. The deeper problem is that the same form is often split into two competing state surfaces: the visible DOM and a separate JavaScript-side store that tries to represent the same thing.\nThe DOM is already where the user types, edits, focuses, and reads. For ordinary form work, it is the only state surface the user can directly observe. Duplicating that into another default store often increases the number of places that must stay synchronized.\nThat synchronization cost is not a small implementation detail. Once the screen has both a JS-side state model and a DOM-side state model, the system has to keep them aligned during input, validation feedback, partial updates, derived display changes, and restore behavior. Many of the familiar bugs come from that maintenance burden itself.\nCotomy\u0026rsquo;s choice is to reduce that burden at the source. It does not solve the problem by adding another synchronization layer. 
It narrows the problem by not creating a second default working state surface in the first place.\nThis does not mean direct DOM management is always easier in every kind of UI.\nIn a frontend that exists as an independent system in its own right, the UI is not only editing one Entity or reflecting one server-rendered business screen. It has to manage its own state model, screen transitions, interaction rules, and API-driven behavior across a broader application boundary.\nIn that kind of architecture, separating state from the DOM and introducing a more elaborate render model can be a reasonable tradeoff. That complexity exists for a reason. It allows the frontend to scale as its own system.\nBut many business screens are not that kind of system. They mainly need to load data, accept edits, reflect validation, and remain coherent over time.\nFor that class of screen, the cost of keeping DOM state and JS state aligned often becomes more expensive than the cost of managing the DOM-side working context directly.\nThe server still remains the authority for persisted business truth. Cotomy does not turn the DOM into business authority. It keeps the working form state on the DOM side and lets the runtime standardize submit, load, and restore behavior around that visible surface.\nThat is not a claim that all DOM mutation is automatically safe. It is the opposite. Once the DOM is the working surface, mutation paths have to be even more explicit, because any unofficial rewrite is directly changing the user\u0026rsquo;s current context.\nWhy This Is a Design Problem This goes beyond UX polish, validation logic, or convenience layers. It is about structural consistency for long-lived UI state.\nSeen more precisely, it is a question of guarantees.\nThe input continuity guarantee means a user\u0026rsquo;s in-progress values remain part of the same working context. The validation continuity guarantee means error handling should not silently redefine unrelated state. 
The return continuity guarantee means that when a page is restored or reloaded through the intended lifecycle path, the form can be reconstructed into a coherent operational state instead of becoming a detached snapshot.\nDesign Rules for Long-Lived Forms Once the problem is stated structurally, the design rules become clearer.\nEach form should have one visible working state surface. Each mutation should have a defined entry point. Validation should not redefine unrelated state. Reload and restore should reconstruct the same working context, not a nearby approximation.\nThe negative version is equally important. Do not let ad hoc handlers, partial patches, and recovery logic each redefine the form in their own way. That is how continuity disappears even when every individual update looked locally correct.\nCotomy’s Position (Without Implementation Detail) Cotomy treats form state as DOM state. State must survive user delay and navigation. The runtime provides structural safety and lifecycle consistency. Screens declare state. The runtime manages continuity around that state.\nIn practical terms, CotomyForm standardizes submit handling without replacing the DOM as the place where field state lives. CotomyEntityFillApiForm can load and refill form inputs on ready timing, and registered forms can participate in page restore flow through CotomyPageController. 
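The second rule, a defined entry point for every mutation, can be sketched in a few lines. This is an illustration of the design rule only, not Cotomy's implementation; all names here (WorkingContext, MutationSource, apply) are hypothetical:

```typescript
// Sketch of "each mutation should have a defined entry point".
// Every path that rewrites the working context must declare which path it is.

type MutationSource = "initial-load" | "user-input" | "validation" | "restore";

interface Mutation {
  source: MutationSource;
  field: string;
  value: string;
}

class WorkingContext {
  private values = new Map<string, string>();
  private history: Mutation[] = [];

  // The single mutation entry point: no ad hoc handler or recovery
  // patch may rewrite the context without naming its source path.
  apply(m: Mutation): void {
    this.history.push(m);
    this.values.set(m.field, m.value);
  }

  get(field: string): string | undefined {
    return this.values.get(field);
  }

  // Makes drift observable: how many rewrites came from each path?
  countBy(source: MutationSource): number {
    return this.history.filter((m) => m.source === source).length;
  }
}
```

Because every path has to name itself, a restore or validation path that overwrites user input leaves a visible trace instead of silently redefining the context.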
That is enough to give form state a lifecycle-aware structure without turning Cotomy into a global state store.\nProblem Series This article is part of the Cotomy Problem Series, which examines recurring structural failures in business UI design.\nSeries articles: HTML and CSS as One Unit , Form Submission as Runtime , Screen Lifecycle and DOM Stability , Form State and Long-Lived Interaction, API Protocols for Business Operations , Runtime Boundaries and Operational Safety , UI Intent and Business Authority , and Binding Entity Screens to UI and Database Safely .\nNext Next: API Protocols for Business Operations ","permalink":"https://blog.cotomy.net/posts/problem-4-form-state-long-lived-interaction/","summary":"Form state in business UI is not just input values. It is a long-lived working context that breaks when several mutation paths redefine it without one design rule.","title":"Form State and Long-Lived Interaction"},{"content":"This is the third post in Problems Cotomy Set Out to Solve. This continues from Form Submission as Runtime . The first article argued that a screen boundary has to stay local. The second argued that form submission should be treated as a shared runtime protocol instead of page-specific glue code.\nBut neither idea can hold if the screen itself is unstable.\nIn business systems, screens are not momentary views. They stay open for hours. They hold live input, partial progress, and operational state. Yet many UI models still treat screens as render events.\nCotomy started from a different premise: a screen should have a lifecycle and a stable working surface. 
A screen lifecycle defines when a screen becomes operable, remains stable as a working surface, and is explicitly terminated.\nThe Usual Model and Its Assumptions Common UI models assume that the screen is a renderable state:\nSPA patterns assume the screen is a redrawable view, virtual DOM assumes the DOM is a regenerable representation, and component-driven UI assumes continuous re-rendering.\nThese approaches are not wrong. They are optimized for different lifetimes.\nThey work well when redraw is cheap, when the application shell owns the full interaction model, or when local state can be reconstructed without much operational cost.\nBut that is not the pressure I kept seeing in business systems.\nThe Reality of Business Screens Business UI screens stay open for long stretches. Users pause, switch tasks, and return. Form state must survive. The DOM is not a temporary output.\nAn order-entry screen, a maintenance form, or an approval page is not just showing data. It is holding work in progress. The screen may contain temporary choices, field dependencies, validation messages, selected rows, modal context, and data loaded in a specific order.\nThat makes the DOM different from a disposable render target.\nThe DOM is not a rendering artifact. It is the user\u0026rsquo;s working surface. This perspective matches Cotomy\u0026rsquo;s model where a screen is treated as a lifecycle-bound unit with structural continuity (see Cotomy Reference ).\nThat stability matters because the previous problem, form submission as runtime, already assumes a durable screen boundary. If submit is one shared protocol, then the screen has to remain structurally coherent before submit, during submit, after validation failure, and after response reflection. 
Otherwise the runtime may be shared, but the actual working surface still behaves like a collection of unstable fragments.\nWhen that surface is unstable, the user loses continuity.\nWhat Unstable DOM Creates When the DOM is treated as disposable, these failures appear:\nfocus disappears during re-render, in-progress input is lost or reset, event bindings drift or get duplicated, and UI fragments reinitialize unexpectedly.\nIn business screens, these failures rarely appear as dramatic crashes. They appear as hesitation, inconsistent retry behavior, modal selections that no longer match the current form state, handlers that fire twice, or values that quietly return to an older state after part of the screen updates.\nThis is not a UX polish issue. It is a continuity failure.\nWhy Boundary and Runtime Still Need Lifecycle The first two problems in this series already point toward this conclusion.\nIf HTML and CSS are supposed to form one local unit, then that unit has to stay present long enough for local ownership to matter. If the screen is repeatedly rebuilt as a disposable surface, local styling and local structure may still exist, but the operational continuity of that unit becomes weaker.\nLikewise, if form submission is a shared runtime protocol, then the runtime needs a stable surface on which that protocol operates. Submit, lock, reflect, and retry only remain coherent when the screen that owns those actions is not silently redefined underneath them.\nIn other words, local boundary and shared protocol both depend on lifecycle stability.\nThe Core Boundary: Screen Lifecycle A screen has a lifecycle with three boundaries:\nexistence begins when the screen is mounted, stable operation continues while the working surface persists, and explicit end happens when the screen is torn down.\nIf these boundaries are not explicit, the system cannot guarantee stability.\nThis is where the idea of screen lifecycle becomes practical instead of abstract. 
The question is not whether something can be rendered. The question is whether the system knows when the screen becomes operable, what belongs to that screen while it is active, and what should happen when the user returns to it or leaves it.\nSeen this way, the three boundaries are also three guarantees.\nThe existence boundary guarantees when the screen becomes valid as an operable unit. The operation boundary guarantees what remains stable while the user is working. The teardown boundary guarantees what must be released so that the screen does not leave stale handlers, broken references, or half-detached behavior behind.\nCotomy\u0026rsquo;s runtime makes those guarantees explicit at the page level. CotomyPageController defines the page lifecycle boundary, form registration binds runtime behavior to that boundary, cotomy:ready guarantees that the page has completed initialization before the screen is treated as operable, and restore behavior ties continuity to registered forms when the page returns from browser history.\nWhy Re-Rendering-Centric Design Doesn’t Fit Re-rendering optimizes expression. Business UI optimizes continuity. The center of design should be DOM stability, not redraw efficiency. Long-lived UI state requires structural continuity.\nThis does not mean that no DOM updates should happen. Business screens still need dynamic behavior. Rows are added, dialogs open, search results refresh, and field values are reflected from API responses. The issue is not that the DOM changes. The issue is whether those changes preserve the screen as one continuous working surface or repeatedly treat it as replaceable output.\nOnce the second model dominates, lifecycle responsibility becomes harder to reason about. 
Event cleanup becomes fragile, form state continuity weakens, and the meaning of \u0026ldquo;the current screen\u0026rdquo; starts to dissolve into many local update paths.\nCotomy’s Position (Without Implementation Detail) Cotomy treats a screen as a unit with a lifecycle. The DOM is a stable working surface. The runtime manages existence, not just rendering. The emphasis is on presence, boundaries, and continuity.\nThat position is closely related to the previous two articles. Cotomy does not want screen structure, style, submit flow, and continuity to be governed by four different models. It treats them as parts of one screen-level runtime boundary.\nCommon Misreadings This is not an anti-SPA or anti-Virtual-DOM argument, and it is not a performance note. The point is structural continuity in business UI.\nProblem Series This article is part of the Cotomy Problem Series, which examines recurring structural failures in business UI design.\nSeries articles: HTML and CSS as One Unit , Form Submission as Runtime , Screen Lifecycle and DOM Stability, Form State and Long-Lived Interaction , API Protocols for Business Operations , Runtime Boundaries and Operational Safety , UI Intent and Business Authority , and Binding Entity Screens to UI and Database Safely .\nNext Form State and Long-Lived Interaction ","permalink":"https://blog.cotomy.net/posts/problem-3-screen-lifecycle-dom-stability/","summary":"In long-lived business UI, the screen is a working surface with a lifecycle, not a transient render.","title":"Screen Lifecycle and DOM Stability"},{"content":"This is the second post in Problems Cotomy Set Out to Solve. This continues from HTML and CSS as One Unit . Business systems are form-first by nature. Most screens follow the same rhythm: input → submit → reflect. 
Yet the implementation of that rhythm is still handled on a per-screen basis, as if it were a local feature.\nCotomy started by questioning that assumption.\nThe Reality of Form-Centered Systems In business UI, almost every screen is a form. The differences between screens are in what they send, not in how they should be sent. Despite that, submit logic is often rewritten on every page. The result is a fragmented system protocol.\nMost business systems also have many entities because the target domain itself is broad. Once that happens, CRUD operations multiply quickly. The same entity often appears in different screens depending on actor, use case, or operation context. A single master may end up with separate create, edit, approval, search, and maintenance screens, each shaped a little differently, but all still based on the same form-centered interaction model.\nWhat Traditional Form Submit Gets Wrong Classic form submit patterns repeatedly create the same failures:\naccidental re-submits after refresh or back navigation, UI state loss tied to page transitions, inconsistent user feedback across screens, and duplicated logic that differs slightly from page to page.\nMore importantly, most CRUD submit flows are already shaped by the form definition itself. The program logic around them should mostly be the same: collect values, submit, lock appropriately, handle failure, and reflect the result. This flow is not application-specific. It is a system-level behavior.\nBut once teams move toward jQuery-style AJAX, the place where this logic lives often shifts into event handlers. The structure is then recognized through click callbacks and per-page scripts rather than through the form or page as a clear unit. Perhaps part of why this felt wrong to me came from my background with Windows Forms, where the screen unit remained easier to see. 
Even so, I think the mismatch is real: event-driven submit code tends to obscure the form as the actual operational boundary.\nI remember struggling with this directly when I was developing with jQuery. I was already used to object-oriented development, so I wanted to bring that same kind of structural clarity into the screen model. But browser-side JavaScript at the time made that awkward too. The language and browser environment did not yet make this kind of code comfortable to write in a clear object-oriented style.\nAJAX Did Not Fix the Structure AJAX became the default, but the structure stayed the same. Each screen still has its own submission code, and the behavior diverges:\nloading states stay inconsistent, error handling diverges, double-submit protection varies, and post-success updates remain ad hoc.\nAJAX undeniably made richer UI easier because it created room to insert more processing before and after the request. Partial refresh, local validation feedback, and screen updates all became more practical. In that sense, AJAX was better for building interactive business screens.\nBut that did not solve the previous problem. The transport changed, yet the form-centered behavior was still not treated as one shared structure. AJAX changed the transport, not the protocol.\nThe Core Problem Form submission is not a feature. It is a shared protocol. In business systems, this protocol is closer to an operation model than a simple HTTP exchange, which is why treating it as per-screen fetch logic leads to structural drift. Here, runtime means the controlled entry point where UI intent becomes a server-side operation and where state transitions are allowed to happen. One part of that drift is conceptual. The main function of the screen is often an entity update, and that intent is clearly visible in the UI. Yet the implementation is reduced to one event handler somewhere in the page script, so the screen becomes harder to follow as a screen. 
The operation is central in meaning, but peripheral in structure.\nThe usual reaction is to write the flow explicitly for each screen so the logic becomes easier to see again. But then the same submission behavior starts spreading across many pages with small variations. That is a different form of failure, and I think it comes from the same root cause: the implementation flow has drifted away from the semantic structure of the feature itself.\nThe boundary should be understood clearly: the UI expresses intent, the form submission path carries that intent through a defined protocol, and the server owns the authority to perform the operation.\nThat means the responsibilities should be split like this:\nvalidation should stay a screen responsibility, submission flow should be runtime-structured, UI lock during submit should be handled through runtime hook points, error handling should follow a runtime-defined structure, and response reflection should use a defined extension point.\nThe runtime should own the flow so the system remains coherent. Cotomy formalizes this as a runtime-level submission protocol rather than per-screen logic (see Cotomy Reference – Forms Basics ).\nWhy It Matters More in Business Systems Business systems scale by adding screens, not replacing them. The risk of inconsistent submission behavior grows with every new form. When the UI is operated by humans at scale, small inconsistencies turn into costly mistakes.\nThe deeper issue is not only duplication. It is that the operational meaning of the screen becomes harder to see. A user sees \u0026ldquo;register,\u0026rdquo; \u0026ldquo;update,\u0026rdquo; or \u0026ldquo;approve\u0026rdquo; as one business action. But the implementation may be scattered across page-specific callbacks, AJAX branches, and small local exceptions. 
Once that happens repeatedly across many entity screens, the system stops feeling like a coherent set of business operations and starts feeling like a collection of unrelated scripts.\nThat disconnect matters more in business systems because the same operational shape appears again and again. If the structure for submit is not shared, developers must keep reconstructing the same intent from local code. The cost is not just more lines of code. It is slower understanding, weaker confidence during change, and more opportunities for one screen to quietly behave differently from another.\nConsistency in submission flow is not a convenience. It is a precondition for operational safety.\nCotomy’s Position (Without Implementation Detail) Cotomy centralizes form submission flow in the runtime layer. Screens define what to send. The runtime defines a unified submission flow. That structure allows consistent lock strategies and response reflection patterns to be implemented at the form level. The protocol is shared, not reinvented.\nThis matters because it restores the structure to the same level where the user and the developer both recognize the screen. The form remains the unit of operation. Screen-specific intent stays in the screen. The repeated mechanics of submit, lock, error handling, and reflection move into the runtime where they can stay consistent across forms.\nThat does not remove local behavior. It puts local behavior in the right place. The screen still owns its fields, validation, and meaning. The runtime owns the submission protocol that keeps those screens coherent as a system. Cotomy treats form handling as one runtime path rather than as submit code scattered across individual screens.\nCommon Misreadings This is not an AJAX debate or a UX polish topic. It is a structural safety discussion for business UI.\nAJAX can still be the right transport for richer interaction. The question here is not whether requests should be asynchronous. 
The question is whether form submission is treated as a shared operational model or left as page-level glue code.\nProblem Series This article is part of the Cotomy Problem Series, which examines recurring structural failures in business UI design.\nSeries articles: HTML and CSS as One Unit , Form Submission as Runtime, Screen Lifecycle and DOM Stability , Form State and Long-Lived Interaction , API Protocols for Business Operations , Runtime Boundaries and Operational Safety , UI Intent and Business Authority , and Binding Entity Screens to UI and Database Safely .\nNext Screen Lifecycle and DOM Stability ","permalink":"https://blog.cotomy.net/posts/problem-2-form-submission-runtime/","summary":"Form submission is a system-level protocol, not per-screen glue code.","title":"Form Submission as Runtime"},{"content":"This post opens Problems Cotomy Set Out to Solve. This first looked like a styling problem. It turned out to be a structural one.\nFor the web applications I saw around me when I started working, this was never really about styling. It was about whether a system could survive change.\nCotomy is not a UI framework in the usual sense. It is a response to structural failures that make long-term business UI maintenance brittle. The earliest and most persistent problem was not JavaScript. It was CSS.\nHow the Web Got Here When I started working, the web systems around me were still primitive compared with desktop applications. Before Ajax became common, most screens were request and response flows with full page reloads. JavaScript existed, but it was not yet a comfortable foundation for building rich business interaction.\nAt that time, web development itself was still far less common than it is now, so I did not have many chances to work on it directly. I was mostly watching it from nearby while doing more C++ and VB work.\nAs CSS and JavaScript became more capable, web systems gradually became more interactive. 
Screens could add elements, replace sections, and react to user input in more fluid ways. That progress solved one problem, but it also exposed another: structure, appearance, and behavior had evolved through different historical layers. HTML described the document, CSS lived elsewhere, and JavaScript started mutating the result afterward.\nWhy This Matters In business systems, screens live for years. Sometimes decades. They do not scale like consumer products where redesigns reset the surface. In this world, CSS is not a cosmetic layer. It is a boundary definition. If that boundary is vague, the entire system becomes fragile.\nHere, boundary means the place where change is allowed to stay local: where ownership is clear, scope is explicit, and one screen can change without silently affecting others.\nRich screens inevitably need dynamic behavior. They add rows, open panels, replace fragments, and change the visible state in response to user operations. But for a long time, the web made this awkward compared with desktop software. Even after dynamic manipulation became more practical, the visual definition of the added elements often remained somewhere else as CSS, detached from the place where the structure was created or changed.\nThat separation is not just inconvenient. It makes the screen harder to reason about. If the structure is produced in one place, the behavior lives in another, and the appearance is defined somewhere else again, the cost of safe change rises immediately.\nThe question was never “how do we make pages look better.” It was “how do we make pages safe to touch after hundreds of screens exist.”\nThe Reality I Saw Large business applications inevitably need a large number of screens because the breadth of the target domain turns into tables, forms, search views, and maintenance pages. In that kind of development, the highest priority is often not elegance. 
It is how to build that large set of tables and screens without collapsing the design, without losing control of the schedule, and without creating a system that becomes unmaintainable before delivery.\nOnce that becomes the real objective, UI decisions change. The more screens you have, the more dangerous CSS becomes. The impact radius is unclear, and as a result teams avoid touching UI at all. The safest option becomes a conservative, unambitious interface because ambition increases risk.\nIn systems originally built for the early 2000s browser world, especially with Internet Explorer as the operational baseline, even CSS-based visual design was often minimal. Many screens were built as plain white pages with primitive table-based layouts and double-line cell borders. Those screens were not visually impressive, but that was often a rational tradeoff. Instead of wasting limited resources on fragile visual refinement, teams prioritized something that worked, shipped on time, and created obvious business value.\nThis is one of the quiet reasons many business UIs in Japan look plain. It is not a lack of taste. It is fear of unintended cross-screen breakage.\nRazor Pages and Scoped CSS After years of building systems in PHP, Razor Pages felt immediately right to me when I encountered it. One of the biggest reasons was scoped CSS. It was probably the moment that gave me the clearest relief in web UI development. Before that, working with globally applied CSS meant constantly managing selectors, classes, and document structure with extreme caution just to avoid breakage. You could keep things barely stable, but only by spending mental energy on areas that were not the actual screen problem you were trying to solve.\nWhen CSS is scoped to a page, that burden changes immediately. The applicable range is explicit. The area you have to think about becomes dramatically smaller. 
You can improve a screen without worrying about a chain reaction across unrelated areas.\nThat experience made the root cause obvious to me. CSS was becoming dangerous not because styling itself was difficult, but because it introduced a cross-cutting layer that cut across the page\u0026rsquo;s real structure and consumed my limited mental resources. Once I saw that, I became convinced that CSS safety is architectural, not stylistic.\nThis principle aligns with Cotomy\u0026rsquo;s design model, where HTML is the primary structure and CSS is scoped at the same boundary (see Cotomy Reference – First UI (CotomyElement) ).\nAt the same time, I also felt that page-level scoped CSS was still not enough. If the frontend behavior layer kept operating outside the same boundary, the structural problem would return in another form. That is why I eventually came to think that the frontend side also needed the same kind of explicit unit: HTML, CSS, and behavior had to stay aligned around one screen boundary.\nWhat Existing Approaches Missed HTML is structure. CSS is a structural boundary. Treating CSS as an external, loosely attached asset breaks the unity of the screen.\nIf you want rich UI on the frontend, then adding, changing, and removing elements is unavoidable. Screens need to open sections, append rows, replace blocks, and respond to user interaction with real structural changes. Modern component-oriented frameworks are strong in exactly that area. Tools such as React support this style of implementation very well by hiding direct DOM handling behind component structure and TSX-like syntax.\nIn a SPA-style application, that can be a reasonable tradeoff. The whole application is often treated as one continuous screen model, so the pain caused by CSS being separated is not always felt in the same way as in server-rendered business systems with a large number of independent CRUD screens.\nBut that difference in pressure matters. 
In large business systems, the real problem is not only how to build one rich screen. It is how to build and maintain hundreds of screens without letting change radius expand everywhere. In that context, once the visual rules live outside the boundary of the screen unit, mismatches become hard to avoid completely. Of course some shared rules are necessary. That is normal. But styles begin applying where they are inconvenient, local changes require broader caution, and the clean mental model of the UI starts weakening again.\nMany approaches address this pain, but their goals are different. Global CSS optimizes for reuse more than safety. CSS Modules reduce collisions but can still feel detached from structure. Inline styles are too limited for long-term maintenance. TSX is powerful but heavy for simple stable screens. Styled components bring structure and style closer, but also move more control into JavaScript than I wanted.\nThese are not bad tools. They simply optimize for different problems. If a screen unit is supposed to own structure, appearance, and behavior together, then CSS cannot remain a loosely attached sidecar. It has to belong to the same boundary that HTML defines, while TypeScript manages behavior within that boundary.\nWhat I Needed to Solve The requirement was simple and strict, but it was not about styling convenience.\nI needed structure, appearance, and behavior to stay aligned around the same screen boundary. I needed a screen to remain self-contained even when it became more interactive. I needed dynamic UI changes to be possible without forcing me to think about unrelated screens. And I needed that model to keep working when the system grew into hundreds of CRUD pages.\nThat is why CSS could not be treated as a sidecar file. HTML and CSS had to form a single unit, and the impact radius of change had to stay local to that unit.\nThis is what makes UI sustainable in business systems. The independence of a screen is not a visual preference. 
It is operational survival.\nThis Is Not a Styling Debate This topic is easy to misread as a discussion about design taste or how rich a screen should look. It is not.\nThe real issue is whether a business system can evolve without breaking itself. The question is not whether UI should be plain or visually ambitious. The question is whether rich behavior can be introduced without destroying maintainability. In that sense, CSS is not an aesthetic concern here. It is a boundary concern.\nThe first visible symptom happened to be CSS, but the deeper problem was that the structure of the screen and the boundaries of change did not line up.\nCotomy’s Approach (Briefly) Cotomy starts from HTML as the primary screen structure. CSS is scoped at that same unit, so the visible boundary of the screen and the styling boundary match each other. TypeScript then handles behavior inside that same screen boundary instead of redefining the whole UI through a rendering abstraction.\nIn practical terms, the screen unit is the HTML-defined unit itself. Style and behavior close around that same unit instead of being owned somewhere else. The important part is not the file layout. It is that ownership, scope, and change radius are aligned around one screen boundary.\nIn other words, the screen is defined as an HTML unit, style is scoped to that structure, and behavior attaches to that same unit. That is the concrete shape of \u0026ldquo;closing\u0026rdquo; the boundary in Cotomy.\nThat does not make Cotomy a replacement for SPA frameworks. It reflects a different priority. The goal is to keep large numbers of business screens stable, local, and understandable while still allowing richer interaction where it is needed.\nDetails will come later. The important point in this first article is the boundary problem. 
I came to see HTML and CSS as one unit because, in the business applications I build, their separation was one of the earliest causes of UI fragility.\nProblem Series This article is part of the Cotomy Problem Series, which examines recurring structural failures in business UI design.\nSeries articles: HTML and CSS as One Unit, Form Submission as Runtime , Screen Lifecycle and DOM Stability , Form State and Long-Lived Interaction , API Protocols for Business Operations , Runtime Boundaries and Operational Safety , UI Intent and Business Authority , and Binding Entity Screens to UI and Database Safely .\nNext Form Submission as Runtime ","permalink":"https://blog.cotomy.net/posts/problem-1-html-css-single-unit/","summary":"The first structural problem Cotomy set out to solve: CSS as a boundary issue, not a styling preference.","title":"HTML and CSS as One Unit"},{"content":"Overview Cotomy v1.0.1 is a patch release focused on one behavioral fix. It ensures Cotomy respects explicit developer intent, which is a core runtime principle.\nFix The keepalive option now correctly respects false, allowing you to disable request keepalive on page unload. The default remains true.\nThis resolves the previous behavior where false was ignored.\nInstall npm install cotomy@1.0.1 Links https://github.com/yshr1920/cotomy/releases/tag/v1.0.1 https://cotomy.net ","permalink":"https://blog.cotomy.net/posts/cotomy-1-0-1-release/","summary":"Patch release that fixes request keepalive handling on page unload.","title":"Cotomy v1.0.1"},{"content":"Overview Cotomy v1.0.0 is the first stable major release. It marks the transition from experimental work to a production-ready foundation for business web applications.\nCotomy focuses on practical, server-oriented development where HTML, CSS, and TypeScript are treated as a single cohesive unit. 
It is designed for internal tools, management systems, and line-of-business applications rather than UI demos.\nVersion 1.0.0 does not mean feature completeness. It marks design model stabilization. The stabilization refers specifically to Cotomy\u0026rsquo;s core runtime and structural model as described in the official reference (Cotomy Reference ).\nWhat v1.0.0 Means The API surface is now considered stable, core design principles are finalized, breaking changes will be minimized going forward, and the release is suitable for long-term adoption in production systems.\nThis release establishes Cotomy\u0026rsquo;s core architecture as the baseline for future evolution.\nCore Philosophy The core philosophy is simple: one screen equals one endpoint, HTML is the primary structure rather than a virtual DOM, CSS is scoped at the component level, TypeScript controls behavior rather than rendering, and the model is designed for server-backed business systems.\nCotomy is not a SPA-first framework. It is optimized for structured, maintainable, data-driven enterprise UI.\nKey Capabilities Key capabilities include component-scoped CSS, strong TypeScript integration, declarative DOM handling, event delegation utilities, form and screen controller patterns, and coexistence with Razor Pages and server rendering.\nWho Should Use Cotomy Cotomy is ideal for:\ninternal business systems, ERP and management tools, admin dashboards, form-heavy applications, and systems where HTML structure matters.\nNot intended for:\nanimation-heavy consumer apps, Canvas or WebGL UIs, and highly interactive design-first products.\nStability Notice Version 1.0.0 establishes Cotomy\u0026rsquo;s stable baseline. 
Future versions will focus on ecosystem expansion, tooling, and developer experience rather than core architectural shifts.\nInstall npm install cotomy@1.0.0 Links https://github.com/yshr1920/cotomy/releases/tag/v1.0.0 https://cotomy.net ","permalink":"https://blog.cotomy.net/posts/cotomy-1-0-0-release/","summary":"First stable major release. Establishes Cotomy as a production-ready foundation for business web applications.","title":"Cotomy v1.0.0"},{"content":"This journal launched with the first Cotomy release and serves as the starting point for understanding the ideas behind the framework. Rather than a feature log, it is a public record of design decisions: how we build systems and websites with Cotomy, and why we choose each approach.\nWhat Cotomy Is Cotomy is not a general frontend framework and not a SPA replacement. Its primary domain is business applications, where form handling and data integrity define the real cost of change.\nIts core idea is a DOM-first UI runtime for long-lived business applications.\nWhy It Exists Modern UI frameworks evolved around rendering efficiency and state management, but the hardest problems in business systems live elsewhere: input, validation, contracts, and consistency over time.\nCotomy centers that reality. UI is the surface. The structure below it is a form runtime and a data contract that remain stable under long-term operation.\nBusiness UI systems live for years, sometimes decades. Their hardest problems are architectural, not visual. This journal treats those decisions as first-class knowledge.\nThe Vision The goal is not to create another framework. 
The goal is to make it possible for small teams to build and maintain large business systems, and to prepare a design model that AI-assisted development can reason about directly.\nThe vision is to help small teams build large systems, favor design-first development over implementation-first delivery, provide AI-ready design units, and keep foundations sustainable even as framework trends shift.\nWhat This Journal Will Cover This journal covers Cotomy design philosophy, the UI runtime model, structural problems in business UI, and architecture boundaries across the system.\nWhere to Start If you are new to Cotomy, the following reading order provides the best overview of the ideas behind the framework.\n1. Problem Series — Why these problems exist\nThese articles explain the structural problems in long-lived business UI systems and why Cotomy focuses on form runtime design.\nExample topics include:\nHTML and CSS boundaries, form submission as a runtime protocol, screen lifecycle and DOM stability, and UI intent and business authority separation.\nStart here if you want to understand the motivations behind the framework.\n2. Design Series — Core architecture decisions\nThe design series explains the architectural model behind Cotomy, including:\nCotomyElement boundaries, page lifecycle coordination, form runtime design, and inheritance and composition choices.\nThese articles describe how the framework is structured internally.\n3. Practical Series and Usage Series These articles show how the runtime concepts appear in actual code.\nThe Practical series focuses on screen-level construction patterns, while the Usage series explains individual Cotomy APIs and runtime behavior.\nThey focus on:\nCotomyElement usage, CotomyPageController patterns, API integration, and runtime behavior in real screens.\nThey are useful once you want to start experimenting with Cotomy.\n4. 
Development Backstory Some articles document how the ideas behind Cotomy evolved through real development work.\nThese posts describe the practical experiences that shaped the framework.\nLearn More Official site: https://cotomy.net npm package: https://www.npmjs.com/package/cotomy ","permalink":"https://blog.cotomy.net/posts/introducing-cotomy/","summary":"An introduction to Cotomy, why it exists, and the vision behind it.","title":"Introducing Cotomy"},{"content":" Name Email Message\nSend\nPrivacy Policy\n","permalink":"https://blog.cotomy.net/contact/","summary":"\u003cform class=\"contact-form\" action=\"https://formspree.io/f/mbdadved\" method=\"POST\"\u003e\n  \u003clabel\u003eName\u003c/label\u003e\n  \u003cinput type=\"text\" name=\"name\" required\u003e\n\u003cp\u003e\u003clabel\u003eEmail\u003c/label\u003e\n\u003cinput type=\"email\" name=\"email\" required\u003e\u003c/p\u003e\n\u003cp\u003e\u003clabel\u003eMessage\u003c/label\u003e\u003c/p\u003e\n  \u003ctextarea name=\"message\" required\u003e\u003c/textarea\u003e\n  \u003cinput type=\"hidden\" name=\"page_url\" id=\"page_url\"\u003e\n  \u003cinput type=\"hidden\" name=\"_next\" value=\"https://blog.cotomy.net/thanks/\"\u003e\n  \u003cinput type=\"hidden\" name=\"_subject\" value=\"Blog Contact\"\u003e\n  \u003cinput type=\"hidden\" name=\"_redirect\" value=\"/thanks/\"\u003e\n\u003cp\u003e\u003cbutton type=\"submit\"\u003eSend\u003c/button\u003e\u003c/p\u003e\n\u003c/form\u003e\n\u003cp class=\"contact-privacy\"\u003e\u003ca href=\"/privacy-policy/\"\u003ePrivacy Policy\u003c/a\u003e\u003c/p\u003e\n\u003cscript\u003e\n  document.getElementById(\"page_url\").value = location.href;\n\u003c/script\u003e","title":"Contact"},{"content":"Thank you for your message. We will reply if necessary.\n","permalink":"https://blog.cotomy.net/thanks/","summary":"\u003cp\u003eThank you for your message. We will reply if necessary.\u003c/p\u003e","title":"Thank You"}]