November 13, 2025

Working with Bicep – Part 2: Azure Function App Deployment

Hi Folks, 

In Part 1, we explored how to use Azure Bicep to deploy Logic Apps and integrate with Dynamics 365 for Finance and Operations (D365FO). Now, in Part 2, we’ll extend that architecture by deploying Azure Function Apps using Bicep—enabling custom logic, token handling, and deeper orchestration capabilities.

Just for a recap, let's understand why Azure Functions are so awesome. Azure Functions is a serverless compute service ideal for lightweight processing, transformations, and integrations. When paired with Logic Apps, Functions offer:
  • Custom logic execution (e.g., token parsing, data shaping)
  • Event-driven triggers (e.g., D365FO business events)
  • Scalable backend processing without managing infrastructure
In today's post, let's take a real-world scenario: to authenticate Logic Apps with D365FO, you often need to retrieve an OAuth token. Here's how an Azure Function can help. Below is sample code that can be called from Logic Apps to retrieve and inject the token dynamically.
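What follows is a minimal sketch of such a function, assuming the Azure Functions Python programming model (v2, decorator-based) and a client-credentials flow against the Azure AD token endpoint. The app-setting names (TENANT_ID, CLIENT_ID, CLIENT_SECRET, D365FO_RESOURCE) are placeholders of my own, and the requests package must be listed in requirements.txt.

import os

import requests
import azure.functions as func

app = func.FunctionApp()

@app.route(route="get-token", auth_level=func.AuthLevel.FUNCTION)
def get_token(req: func.HttpRequest) -> func.HttpResponse:
    """Return an OAuth bearer token for D365FO via the client-credentials flow."""
    # Placeholder settings: configure these in the Function App's application settings.
    tenant_id = os.environ["TENANT_ID"]
    client_id = os.environ["CLIENT_ID"]
    client_secret = os.environ["CLIENT_SECRET"]
    # The D365FO environment URL, e.g. https://contoso.operations.dynamics.com
    resource = os.environ["D365FO_RESOURCE"]

    token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    response = requests.post(
        token_url,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "resource": resource,
        },
    )
    response.raise_for_status()

    # The response JSON contains access_token, token_type, and expires_in;
    # pass it straight back to the caller.
    return func.HttpResponse(
        response.text, status_code=200, mimetype="application/json"
    )

From the Logic App, call this endpoint with an HTTP action, parse the JSON response, and inject access_token as "Bearer <token>" in the Authorization header of subsequent D365FO requests.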

Here are some known issues and their possible fixes (at least, they worked for me ;))

-Harry Follow us on Facebook to keep in rhythm with us. https://fb.com/theaxapta

October 29, 2025

WorthKnowing: Measuring Code Quality: Beyond Subjective Judgment

Hi Folks, 

There are five main measurements for any code review:

1. Reliability

Reliability reflects the likelihood that software will operate without failure over a defined period. It hinges on two factors:
  • Defect count: Fewer bugs mean higher reliability. Static analysis tools can help identify defects early.
  • Availability: Measured using metrics like Mean Time Between Failures (MTBF), which indicates how often the system fails (see the sketch below)
A reliable codebase is foundational to building robust software systems.
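As a quick illustration of the availability bullet (a sketch of the arithmetic only, not any particular tool's output), MTBF is simply observed operating time divided by the number of failures in that window:

def mtbf(total_uptime_hours: float, failure_count: int) -> float:
    """Mean Time Between Failures: uptime divided by failure count."""
    if failure_count == 0:
        raise ValueError("MTBF is undefined with zero observed failures")
    return total_uptime_hours / failure_count

# Example: 720 hours of operation with 3 failures gives an MTBF of 240 hours.
print(mtbf(720, 3))  # 240.0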

2. Maintainability

Maintainability assesses how easily code can be updated, fixed, or extended. It depends on:
  • Codebase size and structure
  • Consistency and complexity
  • Testability and understandability
No single metric can capture maintainability, but useful indicators include:
  • Stylistic warnings from linters
  • Halstead complexity measures, which quantify code readability and effort (see the sketch after this list)
Both automated tools and human reviewers play vital roles in maintaining clean, adaptable code.
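To make the Halstead measures concrete, here is a minimal sketch of the core formulas; the operator and operand counts are assumed inputs that a static analysis tool would extract from the source:

import math

def halstead_metrics(n1: int, n2: int, N1: int, N2: int) -> dict:
    """Core Halstead measures.

    n1, n2: distinct operators and operands; N1, N2: total occurrences.
    """
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)  # size of the implementation
    difficulty = (n1 / 2) * (N2 / n2)        # how error-prone it is to write
    effort = difficulty * volume             # mental effort to develop or read
    return {"volume": volume, "difficulty": difficulty, "effort": effort}

# Hypothetical counts for a small function:
print(halstead_metrics(n1=10, n2=7, N1=24, N2=18))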

3. Testability

Testability measures how effectively software can be tested. It’s influenced by:
  • Control and observability of components
  • Ability to isolate and automate tests
One way to assess testability is by evaluating how many test cases are needed to uncover faults. Tools like cyclomatic complexity analysis can help identify overly complex code that's harder to test; a rough version of that count is sketched below.
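This is a simplified sketch of how such an analysis counts complexity, using Python's standard ast module; real analyzers such as radon are far more thorough, so treat it as an approximation:

import ast

# Node types that open an extra decision path, approximating the branch
# points McCabe's metric counts.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(sample))  # 3: two branches plus the base path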

4. Portability

Portability gauges how well software performs across different environments. While there’s no universal metric, best practices include:
  • Testing on multiple platforms throughout development—not just at the end (see the sketch after this list)
  • Using multiple compilers with strict warning levels
  • Enforcing consistent coding standards
These steps help ensure your code isn’t locked into a single ecosystem.
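As one small, hypothetical illustration of the first point, platform differences can be encoded directly into the test suite so they surface on every run rather than at release time. This assumes pytest; the function and tests are made up for the example:

import sys

import pytest

def user_config_dir() -> str:
    """Return the per-user config directory in a platform-aware way."""
    if sys.platform == "win32":
        return r"%APPDATA%\myapp"
    return "~/.config/myapp"

def test_config_dir_matches_platform():
    path = user_config_dir()
    if sys.platform == "win32":
        assert "APPDATA" in path
    else:
        assert path.startswith("~/")

@pytest.mark.skipif(sys.platform == "win32",
                    reason="symlink semantics differ on Windows")
def test_symlinks_round_trip(tmp_path):
    target = tmp_path / "target.txt"
    target.write_text("data")
    link = tmp_path / "link.txt"
    link.symlink_to(target)
    assert link.read_text() == "data"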

5. Reusability

Reusability determines whether existing code assets can be repurposed. Reusable code typically exhibits:
  • Modularity: Components are self-contained
  • Loose coupling: Minimal dependencies between modules
Static analysis tools can identify interdependencies that hinder reuse, helping teams refactor for better modularity.
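Here is a toy sketch of that kind of dependency scan, assuming a flat package of .py files; it maps which modules import which siblings, so tightly coupled clusters stand out (real tools such as pydeps go much further):

import ast
from pathlib import Path

def internal_import_graph(package_dir: str) -> dict:
    """Map each module in a flat package to the sibling modules it imports."""
    modules = {p.stem: p for p in Path(package_dir).glob("*.py")}
    graph = {name: set() for name in modules}
    for name, path in modules.items():
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                targets = [alias.name.split(".")[0] for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                targets = [node.module.split(".")[0]]
            else:
                continue
            graph[name].update(t for t in targets if t in modules)
    return graph

# Modules with many outbound edges are tightly coupled to the rest of the
# package; modules with many inbound edges are hard to change in isolation.
for module, deps in internal_import_graph("src").items():
    print(f"{module} -> {sorted(deps)}")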

Code quality isn’t a one-size-fits-all concept. But by focusing on measurable traits like reliability, maintainability, testability, portability, and reusability, teams can build software that’s not only functional but also resilient, scalable, and future-proof.


-Harry Follow us on Facebook to keep in rhythm with us. https://fb.com/theaxapta