{"id":19705,"date":"2025-10-22T19:28:17","date_gmt":"2025-10-22T19:28:17","guid":{"rendered":"https:\/\/ectgt.com\/nobankrunWallet\/?p=19705"},"modified":"2025-10-24T17:00:27","modified_gmt":"2025-10-24T17:00:27","slug":"the-narratives-you-should-practice-every-moment-before-your-five-senses-reach-your-own-seven-sense-ego-so-you-may-align-your-make-sense-judgement-with-infinite-intellegence-mentors-like-elon-mus","status":"publish","type":"post","link":"https:\/\/ectgt.com\/nobankrunWallet\/the-narratives-you-should-practice-every-moment-before-your-five-senses-reach-your-own-seven-sense-ego-so-you-may-align-your-make-sense-judgement-with-infinite-intellegence-mentors-like-elon-mus\/","title":{"rendered":"The narratives you should practice every moment &#8211; before your five senses reach your own seven sense (ego), so you may align your make sense judgement with Infinite Intellegence. Mentors like Elon Musk, Jensen Huang, etc. AI assistants &#8211; Quantum Entanglement. AI tools talk to one another, you should join the game."},"content":{"rendered":"\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<iframe loading=\"lazy\" width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/y2ab3A_pBiY?si=E_Vahkxi8x36eQZr\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n\n\n\n<p>Here\u2019s a detailed breakdown of how the vision from Jensen\u202fHuang (CEO of <a href=\"https:\/\/chatgpt.com\/g\/g-6IXqeWU7q-creative-visualizer\">NVIDIA<\/a>) ties into the \u201capp + LLM\u201d paradigm \u2014 using the example of Cursor (for enterprise\/code engineers) and more broadly for general industries\/users \u2014 and how the pieces fit together, where value is generated, and what to watch out for.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 
class=\"wp-block-heading\">1. Jensen Huang\u2019s vision: key themes<\/h2>\n\n\n\n<p>From Huang\u2019s recent keynotes and remarks, some of the consistent themes:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>He frames the current AI moment as <em>industrial-scale intelligence<\/em> and \u201cfactories of tokens\u201d \u2014 i.e., massive data+models generating tokens that transform images, text, sound, data, etc. <a href=\"https:\/\/www.rev.com\/transcripts\/gtc-keynote-with-nvidia-ceo-jensen-huang?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Rev+2NVIDIA+2<\/a><\/li>\n\n\n\n<li>He emphasizes one architecture that goes from cloud \u2192 enterprise \u2192 edge \u2192 personal, i.e., \u201cone architecture \u2013 from cloud AI, enterprise AI, personal AI, to edge AI.\u201d <a href=\"https:\/\/www.youtube.com\/watch?v=X9cHONwKkn4&amp;utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">youtube.com+1<\/a><\/li>\n\n\n\n<li>He emphasizes that the interface to computing is shifting: programming languages may give way to \u201chuman language\u201d, i.e., prompt-based or natural\u2010language interaction becomes the interface to systems rather than low-level code. <a href=\"https:\/\/www.reddit.com\/r\/Futurology\/comments\/1b4hgrt\/nvidia_ceo_jensen_huang_says_kids_shouldnt_learn\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Reddit+1<\/a><\/li>\n\n\n\n<li>He repeatedly talks about agents, autonomous loops, model inference, context windows, and the compute infrastructure (GPUs, large models, chips) as foundational enabler. 
<a href=\"https:\/\/www.rev.com\/transcripts\/gtc-keynote-with-nvidia-ceo-jensen-huang?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Rev+1<\/a><\/li>\n<\/ul>\n\n\n\n<p>From all of that, the \u201capp + LLM\u201d combination fits exactly into this: You build applications that embed or are driven by large language models (LLMs) + model infrastructure + domain context, so that the user no longer just uses static software but uses dynamic AI-driven apps.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">2. Cursor: a concrete \u201capp + LLM\u201d example for enterprise \/ code engineers<\/h2>\n\n\n\n<p>Let\u2019s use Cursor as a reference point for how \u201capp + LLM\u201d plays out in practice in the coding\/engineering domain.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What Cursor does<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Cursor is an AI-powered code editor built for Windows, Mac and Linux. <a href=\"https:\/\/www.datacamp.com\/tutorial\/cursor-ai-code-editor?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">DataCamp+1<\/a><\/li>\n\n\n\n<li>Features include: powerful autocompletion (predicting your next edits across lines), smart rewrites (type naturally, get code), an \u201cagent\u201d mode (complete tasks end-to-end) and context retrieval (understand your entire codebase). <a href=\"https:\/\/www.cursor.com\/features?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Cursor+2Cursor+2<\/a><\/li>\n\n\n\n<li>It supports multiple frontier LLM models such as OpenAI\u2019s GPT-4.1, Claude variants, etc. <a href=\"https:\/\/cursor.com\/pricing?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Cursor+1<\/a><\/li>\n\n\n\n<li>It includes enterprise features: large context windows, privacy modes (data not stored), codebase indexing, model hosting\/hosting options. 
<a href=\"https:\/\/docs.cursor.com\/models?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Cursor Documentation+1<\/a><\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">The \u201capp + LLM\u201d bond in Cursor<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>App layer<\/strong>: Cursor provides the user interface, the code editor, integration with file system, terminals, project management, developer workflows, context retrieval, README and docs, and so on.<\/li>\n\n\n\n<li><strong>LLM layer<\/strong>: It provides the intelligence \u2014 e.g., natural-language instructions (\u201cRefactor all tests to use async\/await\u201d), the assistant mode (\u201cFind lint errors and fix them\u201d), code generation, multi-line edits, retrieval from your codebase context, etc.<\/li>\n\n\n\n<li><strong>Bridge \/ synergy<\/strong>: The value emerges when the LLM knows the context of your codebase (via retrieval, indexing) and the app injects that into the model\u2019s prompt\/context window. For example: \u201c@File MyModule.py @Docs MyAPI.md\u201d etc. 
<a href=\"https:\/\/www.vipshek.com\/blog\/cursor?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Vipul Shekhawat+1<\/a><\/li>\n\n\n\n<li><strong>Enterprise dimension<\/strong>: For large orgs, this means the app + LLM combo can support thousands of engineers, integrate with internal codebases, enforce compliance\/security\/privacy, scale up model usage, allow agent workflows, etc.<\/li>\n\n\n\n<li><strong>Performance\/infrastructure<\/strong>: The underlying infrastructure (context windows, model size, efficiency) matters a lot for enterprise-scale code generation\/refactoring.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Why that matters<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>It accelerates productivity: engineers can write, refactor, debug significantly faster.<\/li>\n\n\n\n<li>It raises the abstraction level: engineers give natural-language instructions, and the system handles boilerplate, context, multiple files, testing.<\/li>\n\n\n\n<li>It can reduce errors, improve consistency across large codebases, allow embedded domain logic\/training.<\/li>\n\n\n\n<li>It also enables new workflows: e.g., code review bots, automatic lint\/test loops, guided code generation for new features, etc. (Cursor mentions \u201cAgent mode: Runs commands, loops on errors\u201d etc. <a href=\"https:\/\/forum.cursor.com\/t\/guide-a-simpler-more-autonomous-ai-workflow-for-cursor-new-update\/70688?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Cursor &#8211; Community Forum+1<\/a> )<\/li>\n\n\n\n<li>It aligns with Huang\u2019s vision: As programming evolves toward human-language instructions and AI assistance, this kind of app + LLM is a building block.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Key considerations \/ limitations<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Context length \/ window size: With large codebases, you still have to manage what context you feed the model. 
Cursor mentions optimizing\/pruning non-essential content. <a href=\"https:\/\/docs.cursor.com\/models?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Cursor Documentation<\/a><\/li>\n\n\n\n<li>Data privacy \/ internal code: Enterprises must ensure the data used by the model is secure, accessible only to authorized actors, and that model outputs are trustworthy. Cursor offers \u201cPrivacy Mode\u201d. <a href=\"https:\/\/docs.cursor.com\/models?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Cursor Documentation<\/a><\/li>\n\n\n\n<li>Model hallucination \/ correctness: Code generation still needs human oversight; the app + LLM must include verifications (tests, reviews) rather than blind automation.<\/li>\n\n\n\n<li>Integration and adoption: Tools must fit into existing workflows. If the app doesn\u2019t integrate, or if the model outputs are not reliable, adoption is limited.<\/li>\n\n\n\n<li>Cost \/ compute: Large models, long context windows, and scaled usage drive up infrastructure cost. Enterprises must justify the ROI.<\/li>\n\n\n\n<li>Versioning \/ maintenance: The model and the application must evolve: as the codebase changes and domain knowledge drifts, models need fine-tuning, prompt engineering, and context management.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">3. General users + industries: \u201capp + LLM\u201d beyond coding<\/h2>\n\n\n\n<p>Now let\u2019s expand the idea to general users across industries and how \u201capp + LLM\u201d plays out in multiple sectors. 
The same bond applies but with domain-specific apps and workflows.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Patterns<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Vertical apps<\/strong>: For example, in legal, finance, healthcare, marketing, manufacturing \u2014 you have an app tailored to that domain (say a contract editor, a trading-desk dashboard, a diagnostic assistant, a creative content tool). Then you embed an LLM as the intelligence layer: natural-language query, summarization, generation, retrieval over domain-specific documents, etc.<\/li>\n\n\n\n<li><strong>Context integration<\/strong>: The app brings in the domain context (client files, legal docs, patient records, CAD drawings, sensor data). The LLM uses that to interpret, generate, or assist.<\/li>\n\n\n\n<li><strong>Workflow enhancement<\/strong>: The employee or user interacts via the app naturally (\u201cSummarize this contract, highlight the risks, rewrite in simpler language\u201d; \u201cGenerate a marketing email sequence for Product X given this data\u201d).<\/li>\n\n\n\n<li><strong>Scale + enterprise concerns<\/strong>: For enterprises, you\u2019re dealing with many users, many workflows, model governance, data governance, domain compliance, integration with back-end systems (ERP, CRM, manufacturing execution systems).<\/li>\n\n\n\n<li><strong>End-user diffusion<\/strong>: For general users, you might see simpler apps: writing assistants (text editors with embedded LLM), presentation creation tools, personal productivity apps, domain tools (architecture design, music composition, graphic design). 
These pair LLMs + UI for the user and reduce friction: you don\u2019t have to explicitly talk to ChatGPT; you have the AI embedded in the interface.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Why it matters (and why now)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Huang\u2019s vision frames the compute\/AI infrastructure as becoming ubiquitous, so the opportunity for \u201capps driven by LLMs\u201d is enormous across sectors.<\/li>\n\n\n\n<li>We\u2019re seeing large context windows, cheaper compute, and broad model availability (open-source and cloud), which means embedding LLMs into apps is more feasible.<\/li>\n\n\n\n<li>Productivity gains: Many industries have lots of unstructured data (documents, images, sensor logs) and large cognitive\/manual loads. App + LLM can automate substantial parts of that.<\/li>\n\n\n\n<li>Competitive differentiation: For enterprises, building domain-specific knowledge + models within apps becomes a competitive moat (because the combination of domain context, model tuning, and workflow embedding is harder to replicate).<\/li>\n\n\n\n<li>User-friendly interface: The \u201chuman language as interface\u201d shift lowers the barrier to using powerful models \u2014 the user doesn\u2019t need to be a coder or AI expert; they just use the app.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Use-cases \/ industries<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Legal\/Contracts<\/strong>: An app for contract review + LLM that ingests contract text, identifies risk clauses, suggests revisions, and compares against a precedent library.<\/li>\n\n\n\n<li><strong>Finance\/Trading<\/strong>: Dashboard app + LLM that reads news, internal memos, and filings, synthesises insights, and helps traders or analysts by generating summaries and detecting trends.<\/li>\n\n\n\n<li><strong>Healthcare<\/strong>: Diagnostic support app + LLM over patient data + medical literature to propose potential diagnoses, flag risks, and assist in report writing.<\/li>\n\n\n\n<li><strong>Manufacturing 
\/ Industrial IoT<\/strong>: App that integrates sensor data, CAD drawings, maintenance logs + LLM that suggests maintenance schedule, root-cause analysis, optimises workflows.<\/li>\n\n\n\n<li><strong>Marketing\/Content<\/strong>: Content-creation app + LLM that takes brand guidelines, audience data, product information and generates copy, designs, motion graphics.<\/li>\n\n\n\n<li><strong>Software engineering\/DevOps (like Cursor)<\/strong>: Code editor or DevOps app + LLM that automates boilerplate, suggests architecture, improves existing code, automates tests.<\/li>\n\n\n\n<li><strong>Personal productivity \/ knowledge work<\/strong>: Email\/meeting app + LLM that summarises, drafts replies, integrates calendar\/context, helps plan tasks.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Industry-bond: how enterprises &amp; general users connect<\/h3>\n\n\n\n<ol class=\"wp-block-list\">\n<li><a href=\"https:\/\/developer.nvidia.com\/login\">Enterprises build<\/a> or adopt \u201capp + LLM\u201d tools for their domain; as these tools mature, they often trickle down (or spin out) into general-use versions for broader audiences.<\/li>\n\n\n\n<li>General-user apps often start simpler (less domain specificity, more general tasks) but as users demand more power or domain context, enterprises adopt or build heavier versions (with more governance, integration).<\/li>\n\n\n\n<li>The infrastructure investments (compute, models, data pipelines) made for enterprise tools also lower the cost and risk for general-user tools.<\/li>\n\n\n\n<li>The proliferation of \u201capp + LLM\u201d thus creates a virtuous cycle: more domain-specific enterprise adoption \u2192 more model\/data investment \u2192 more general-user spin-offs \u2192 more innovation in models and workflows \u2192 feed back into enterprise.<\/li>\n\n\n\n<li>Also: enterprises often have unique data\/contexts; general-user toolmakers may adopt similar app patterns (UI\/UX, embeddings+retrieval, fine-tuned models) 
but with less domain-risk and smaller scale. So general-user apps become \u201clighter\u201d analogues of enterprise ones.<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">4. Key takeaways &amp; strategic pointers<\/h2>\n\n\n\n<p>From all of the above, here are some actionable takeaways and strategic thoughts when thinking about app + LLM for enterprise and for general users:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">For <a href=\"https:\/\/developer.nvidia.com\/login\">enterprises<\/a><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Choose domain-specific workflows<\/strong>: Identify the parts of your operation with heavy cognitive\/manual cost and lots of context or documents, where an app + LLM can reduce friction or time.<\/li>\n\n\n\n<li><strong>Build context pipelines<\/strong>: The domain context (documents, past data, codebase, logs) is critical. Without relevant context, LLMs will underperform.<\/li>\n\n\n\n<li><strong>Governance, privacy, security<\/strong>: You\u2019ll need to handle data governance (which data the model may see, private vs cloud hosting), auditability, explainability (why did the model suggest X?), and integration with existing systems.<\/li>\n\n\n\n<li><strong>Model + compute infrastructure<\/strong>: Decide whether you\u2019ll use cloud models, fine-tuned models, self-hosted models, or a hybrid; monitor cost vs benefit (token usage, inference latency, context window size).<\/li>\n\n\n\n<li><strong>Human-in-the-loop<\/strong>: Especially early, keep humans in supervisory roles. Use the app + LLM to augment, not entirely replace, domain experts.<\/li>\n\n\n\n<li><strong>Measuring ROI<\/strong>: Track metrics like time saved, error reduction, throughput increase, user adoption. 
Model cost and app-development cost are both non-trivial, so the gains must be measurable.<\/li>\n\n\n\n<li><strong>Iterate on workflows<\/strong>: The best value often comes when you embed LLMs into workflows, not just as a standalone \u201cchat with LLM\u201d tool. That means the app needs to orchestrate user interface + data + model + action.<\/li>\n\n\n\n<li><strong>Scalability &amp; versioning<\/strong>: As the domain context evolves (new regulations, new products, new codebase), the app + LLM system must evolve (re-index, retrain, update prompts).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">For <a href=\"https:\/\/chatgpt.com\/g\/g-6IXqeWU7q-creative-visualizer\">general users \/ consumers \/ smaller orgs<\/a><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Use smaller-scope apps<\/strong>: You don\u2019t need enterprise-scale context. Smaller apps that embed LLMs can deliver value (e.g., writing assistants, small-team code editors, marketing content tools).<\/li>\n\n\n\n<li><strong>Leverage embedded LLMs<\/strong>: Instead of switching to a separate chat interface, use tools where the intelligence is embedded directly into the app you already use. (This aligns with the \u201chuman language interface\u201d shift Huang describes.)<\/li>\n\n\n\n<li><strong>Mind the cost\/usage trade-off<\/strong>: Even for individuals, LLM usage can add cost (token usage, subscription models). Pick tools where the value gained is clearly greater than the cost.<\/li>\n\n\n\n<li><strong>Understand limitations<\/strong>: The model may still hallucinate, lack domain context, misinterpret user requests. 
Use the LLM as a helper, not as the sole decision-maker.<\/li>\n\n\n\n<li><strong>Explore domain-specific extensions<\/strong>: If you have niche needs (e.g., design, data science, law, healthcare), look for apps embedding LLMs tuned for that domain \u2014 the \u201capp + LLM\u201d approach is increasingly available across verticals.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">What to watch out for<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Model drift \/ outdated context<\/strong>: Domain knowledge changes (law, regulations, codebase, product specs), so the context needs regular refreshing.<\/li>\n\n\n\n<li><strong>Over-reliance on \u201cmagic\u201d:<\/strong> Too much faith in the LLM can lead to errors, compliance risk, and model bias.<\/li>\n\n\n\n<li><strong>Data leakage \/ security<\/strong>: Running LLMs with sensitive data has risks; ensure secure data flows.<\/li>\n\n\n\n<li><strong>Compute &amp; cost explosion<\/strong>: If you feed huge context windows, many users, and autoregressive agent loops, costs escalate.<\/li>\n\n\n\n<li><strong>User adoption \/ change management<\/strong>: Even the best app + LLM will fail if users don\u2019t adopt or trust it. Proper onboarding, interface design, and oversight are essential.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">5. How it ties back to Jensen Huang\u2019s \u201cwaves\u201d of AI<\/h2>\n\n\n\n<p>Putting it all together and aligning back to Huang\u2019s framing:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Huang describes \u201cwaves\u201d of AI (agentic AI, physical AI, enterprise AI, personal AI). 
<a href=\"https:\/\/www.rev.com\/transcripts\/gtc-keynote-with-nvidia-ceo-jensen-huang?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noreferrer noopener\">Rev+1<\/a> The \u201capp + LLM\u201d model is a direct operationalization of the enterprise &amp; personal AI waves.<\/li>\n\n\n\n<li>\u201cOne architecture\u201d means the same compute\/model stack can serve cloud + enterprise + edge + personal. So whether you\u2019re building an enterprise code editor (Cursor) or a personal productivity app, the underlying architecture is unified.<\/li>\n\n\n\n<li>Huang\u2019s assertion that \u201cprogramming becomes human language\u201d is realized in apps that embed LLMs: users write natural language instructions to drive the system, rather than hand-coding low-level details.<\/li>\n\n\n\n<li>The factory of tokens: In enterprise apps you generate tokens (code, text, commands) at scale; the app ensures the workflow, context, and user interface.<\/li>\n\n\n\n<li>Infrastructure matters: Without the compute (GPUs, large models) and data (context indexing, retrieval), app + LLM can\u2019t scale. Huang emphasises this infrastructure piece strongly.<\/li>\n\n\n\n<li>So, in sum: enterprise + general-user \u201capp + LLM\u201d is the realization of the vision Huang sets: AI embedded, natural-language interface, domain context, scale.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Here\u2019s a detailed breakdown of how the vision from Jensen\u202fHuang (CEO of NVIDIA) ties into the \u201capp + LLM\u201d paradigm \u2014 using the example of Cursor (for enterprise\/code engineers) and more broadly for general industries\/users \u2014 and how the pieces fit together, where value is generated, and what to watch out for. 1. 
Jensen Huang\u2019s [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9672,9661,9664,1],"tags":[],"class_list":["post-19705","post","type-post","status-publish","format-standard","hentry","category-mentorbeyondbaseline","category-sensoryacuritymentor","category-inspiration-on-ai","category-electronic-channel-transaction-gravity-trend"],"_links":{"self":[{"href":"https:\/\/ectgt.com\/nobankrunWallet\/wp-json\/wp\/v2\/posts\/19705","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ectgt.com\/nobankrunWallet\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ectgt.com\/nobankrunWallet\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ectgt.com\/nobankrunWallet\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ectgt.com\/nobankrunWallet\/wp-json\/wp\/v2\/comments?post=19705"}],"version-history":[{"count":5,"href":"https:\/\/ectgt.com\/nobankrunWallet\/wp-json\/wp\/v2\/posts\/19705\/revisions"}],"predecessor-version":[{"id":19723,"href":"https:\/\/ectgt.com\/nobankrunWallet\/wp-json\/wp\/v2\/posts\/19705\/revisions\/19723"}],"wp:attachment":[{"href":"https:\/\/ectgt.com\/nobankrunWallet\/wp-json\/wp\/v2\/media?parent=19705"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ectgt.com\/nobankrunWallet\/wp-json\/wp\/v2\/categories?post=19705"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ectgt.com\/nobankrunWallet\/wp-json\/wp\/v2\/tags?post=19705"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}