Friday, November 16, 2007

Algebra I FTW!

Never let it be said that Algebra I isn't useful in real life. Mr. Wog would be so proud.

So here's the problem:

I need to construct a complete, detailed payroll history for an employee over a particular 6-month period.

During the period, the employee is paid from 4 different budgets, which we'll call A, B, C and D.

Budget A is active all 6 months, and I have complete payroll detail for it during that time.

Budget B is active all 6 months, but I only have its payroll detail for the first 3 months.

Budget C is active all 6 months, but I have no payroll detail for it at all. I know that it was not paid during the first 3 months, and that a lump sum was transferred from Budget A to Budget C to cover the first 3 months of my arbitrary period and the 3 months prior to the period I'm studying. Unfortunately, it's applied to earn dates by academic quarter rather than calendar month, which means it's offset by 2 weeks in addition to being partial. I don't know for sure, but I assume, that Budget C was paid normally for the last 3 months.

Budget D is active only the last month, and I have its complete payroll detail.

I have monthly averages for each of the 4 budgets as calculated before the transfer and after the transfer. I also have the total monthly average paid to the employee.

I have the total monthly amount paid to the employee (A+B+C+D) for the first 3 months only, during which time it is constant, but I can tell from the monthly average that the employee received a pay increase sometime during the last 3 months. I don't know exactly when or how much.

I need to reconstruct the detail in such a way that the monthly averages for each budget, both before and after the transfer, and the total monthly average, come out the same. I'm allowed to make brute-force assumptions about anything that doesn't affect that outcome, e.g., I actually have to construct it by 2-week pay period, but since the averages are monthly, it's OK for me to arbitrarily divide the monthly total by 2 to come up with a pay period amount.

My tools for this exercise are Excel and a whiteboard. In Excel, I create some PivotTables to group and average the data so I can track the effect of each change to each budget.

The first thing I see is that the monthly total of Budget B for each of the first 3 months is exactly the same as its 6-month average. I decide to fill in that it was paid at the same rate for all 6 months. Budget B is now solved.

I have Budget A data for the last 3 months, and now that Budgets B and D are solved as well, I can get Budget C with simple subtraction... except now I need to figure out the pay increase.

It makes sense to me to assume that the pay increase occurred in month 6 only, because that's when Budget D became active and the pattern of Budget A seems to support this idea.

On the whiteboard, I figure out the amount of the pay increase with this equation:

(5(18208)+x)/6 == 18261.84

x == 18531.04

I decide to assume this, which gives me a grand total for months 4 (18208), 5 (18208) and 6 (18531.04). Now that I know Budgets A, B and D during those months, I can fill in Budget C by subtracting. If this isn't precisely right, I figure I can make up the difference in the first 3 months and make the averages come out as desired.
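The whiteboard solve is trivial to sanity-check in a few lines (values are the ones from the post; variable names are mine):

```python
# Months 1-5 are paid at the old rate; the raise lands in month 6 only.
old_rate = 18208.00   # monthly grand total before the increase (from the post)
avg_6mo = 18261.84    # given 6-month monthly average

# (5 * old_rate + x) / 6 == avg_6mo  =>  x = 6 * avg_6mo - 5 * old_rate
month6 = 6 * avg_6mo - 5 * old_rate
print(round(month6, 2))  # 18531.04
```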

Now I try to guess how the lump-sum transfer for the first 3 months should be pro-rated. Its effective date is supposed to be only partially overlapping with the period I'm working on, so I try to calculate the per-pay-period amount on that basis, but the adjusted monthly average I come up with doesn't match the adjusted monthly average I've been given. On a hunch that this math was too tortured for the poor soul who made the adjustment in the first place, I decide to see what happens if I pro-rate the lump sum over only the partial period that I'm investigating and not the total transfer effective period. Bingo! Everything falls into line.

To a degree of accuracy that I care about (and no more), I've successfully filled in all the missing payroll detail for my employee, and I have a complete picture of all 4 budgets for the entire 6 month period.

Math is cool.

Friday, November 09, 2007

Windows Technology Frameworks (p&p, day 5)

The wrap-up.

When our speaker asks, "how many of you think this stuff is so complicated that you'll never be able to learn it all?" and all the hands go up, well, that's encouraging.

He likens the .NET Framework to shopping at Home Depot. We wander around from aisle to aisle, with some kind of broken home widget in our hands, desperately searching for something that resembles the widget, so we can zero in on a replacement for the widget, and if/when we get to the right item, we turn it over and the installation instructions make no sense.

"How many of you have looked up the documentation for the DoSomething() method and found that its sole contents are: 'Does something'?"

"ASP.NET: the most impressive kludge in the history of software development." A pretty reasonable attempt to layer a real programming language over browser-based development.

"If the root problem is complexity, EntLib is not the solution."

Agile, TDD, software factories, etc., touted as panaceas... really an attempt to spackle over the real problem, complexity. And now, we kick around a Peter-P-who-shall-not-be-named, for his pairing and TDD presentations at p&p last year (which, wow, I remember!)...

Pair programming had better produce twice the value of one programmer programming alone. We've talked about how hard it is to find good developers. If you put two bad programmers together, do they become > 2 good ones?

If TDD means you write 2 lines of test for every line of production code, then that prod code had better be three times better. (If not, you're just feeding a substance abuse problem.)

If everybody pairs, and everybody writes 2 lines of test per line of code, and every team has 2 testers per dev, the only possible conclusion is that we suck. (Our ratio of testers to developers causes #DIV/0!, so don't assume the converse.)

Fueled by iPods, "people are waking up to the idea that simplicity has value and complexity has cost."

Simplicity Manifesto v1.0
  • Stop adding features
  • Make help helpful
  • Fix the bugs
  • CRUD for free
  • Hide the plumbing
  • Get better names
"HTML: the COBOL of the internet"... we've pushed it way, way beyond what it was envisioned to do.

"Treat simplicity as a feature" and demand simplicity in tools, too.

Boldly go, y'all.

< 10 of these things are not like the others (p&p, day 5)

The speaker this morning told a funny story about women.

He didn't say anything offensive about women, particularly.

I'm not going to belabor this. But it made me feel several inches tall, and I didn't hear another damn word he said during his hour.

Update: on the bright side...

5 of the aforementioned < 10 are from UW. That's both a good thing, for me, and an even more disproportionate thing for the rest of the world.

It occurs to me that UW is a spectacularly good place for women in IT to work, as evidenced by the fact that we just don't do that. That's why hearing it here was like being ambushed by a grotesque. This fact could be a powerful recruiting tool if I could figure out how to wield it.

Change the world and/or go home (p&p, day 5)

First half hour of keynote: more Scott H hilarity, lots of LOLcats, a nice wrap-up, like an Episcopal benediction and dismissal, but for geeks. "Be well, write good code, and stay in touch." Thanks be to [insert your Microsoft joke here]!

Second half hour of keynote: more MVC Framework demos by popular demand.

Yesterday was exceptionally difficult and depressing for me (capped off with a miserable 90-minute commute home), so this is just the right mood enhancer at just the right time. Not just the comedy and the great stock photo usage, but also that Scott H explains complicated things pretty accessibly.

More musings on learning styles... I tend to tune out when I can't "contextualize", i.e., figure out how the demoed tech relates to anything I'm doing now (or have done before). The problem is, if I haven't paid attention, how'm I going to recognize an opportunity to apply new tech in the future? We already know the answer to that.

Push learning is going to work better than pull. If I load up on stuff I don't understand now, I'll have a better chance of wiring it together someday when a context presents itself. This is a lot better than flailing around for a solution when I get stuck later, especially because most of the time I won't be "stuck", I'll be writing code that works, but is crap, because I don't know any better.

The benefits of showing up and paying attention: John Lam has just shipped LOLCODE on the DLR and is coming down to the podium to demo it. Now that's something that might really come in handy someday.

Thursday, November 08, 2007

WCSF (p&p, day 4)

Web development paradigms
  • Content sites (e.g., news)
  • Transactional sites (e.g., stores, banks)
  • Collaboration sites (e.g., wikis, workflows)
Demo, helpfully, to focus on transactional sites

WCSF facilitates...

Separation of UI development & UI design!

Richness, experience, navigation

Security, manageability, testing

Separation of responsibilities very easily, very clearly... deploy without affecting other subteams (ahem).

I guess the thing I find most challenging about "Factories" day is that demos like these, for me, are just teasers. They either get me thinking, "hey, that'd be fun to play around with" and/or "hey, that sample problem resembles a problem I recognize", or they don't. I think to do this, they have to show a variety of things in modest depth, but the depth itself ends up being lost on me, and that makes for long hours of struggling to pay meaningful attention.

I suspect some "homework" on my part would help with this... if I were to keep current on p&p, or at least brush up on the latest initiatives before heading off to the summit, I'd probably have enough of a basis in the technologies to have formed specific interests and questions. To that end, I've subscribed to about 15 different blogs today. Will the momentum carry through to 2008? Tune in!

MVC (p&p, day 4)

Scott H reaffirms that stand-up comedy skillz are at least as valuable as a liberal arts education, in any industry [see also: Mark Driscoll].

MVC Goodness!!1!
  • Separation of concerns
  • Red/green testability
  • Extensible/pluggable
  • Clean URLs, clean HTML
  • Great integration within ASP.NET
Don't Panic (or perhaps the Kool-Aid)
  • Not Web Forms 4.0; about more choices
  • As simple or as complex as you wish
  • Fundamental in System.Web
  • Plays well with others
"Where does the file live?" It doesn't really exist. The URL/I doesn't point to a physical location.

Everybody wants a friendly, hackable URL/I. It's a user-interface point now.
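The ASP.NET MVC routing code itself wasn't something I could reproduce here, but the core idea, URLs matched by pattern to handlers rather than to files on disk, sketches out in a few generic lines (the route patterns and handlers below are invented for illustration):

```python
import re

# Hypothetical route table: each URL pattern maps to a handler, not a file.
routes = [
    (r"^/products/(?P<id>\d+)$", lambda id: f"product {id}"),
    (r"^/about$", lambda: "about page"),
]

def dispatch(path):
    # Try each pattern in order; named groups become handler arguments.
    for pattern, handler in routes:
        m = re.match(pattern, path)
        if m:
            return handler(**m.groupdict())
    return "404"

print(dispatch("/products/42"))  # product 42
```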

And now, code.


Just like yesterday, I think watching and listening to a code presentation that I barely understand, or don't understand at all, is improving my skills by osmosis. Better than a book. Every once in a while, I see a problem I recognize from having solved it (poorly) once before...

He keeps coming back to the clean/intuitive URL/I thing, but that's a problem I can totally relate to!

"The problem with XML is not that it's human-readable, but that it's manager-readable."

Holy cow, it's an MVC gradient graphic generator. My head might explode.

This was awesome! What a great start to a probably otherwise challenging day!

Wednesday, November 07, 2007

Can I choose 'none of the above'? (p&p, day 3)

Face-off on the future of patterns, featuring, for reasons which never really became clear, suction-cup-dart guns.

Should patterns be disseminated as a new online share/wiki/repository and in books? I.e., descriptively?


Should patterns be built into tools and libraries and provided? I.e., some other adverb?

In some ways it took on the characteristics of a negative political campaign, with both sides explaining each other's shortcomings, and had the same effect on me as it does on voters, which was to depress turnout. :P

For me to be able to use patterns, I need to learn them as a vocabulary and understand them intuitively. Patterns aren't imposed, anyway, they emerge from good coding practices and become named as a means of communicating about them. (I learned that several trainings ago, along with lots of other things I can't actually implement.)

Back on day 1, one of the speakers captured the answer here already. Patterns aren't magic, they're just a language that allows architects to talk to each other and to coders about what should be built and how.

I don't learn well by reading. I wouldn't use books. I don't have anything to contribute to a wiki. They are right that nobody would want to sit down and document the pattern anyway, we have work to do.

But, tools without understanding are like... FrontPage. Pretty much exactly like FrontPage.

I learn by doing, and I learn by surrounding myself with people smarter than me (about the topic in question, eh!). Any pattern I've ever heard of, I learned by someone near me talking about it whilst I smiled, nodded, and covertly Googled. It works!

Aha! And the reason it works is that I learn the pattern in context. I'm not just figuring out what the pattern is, I'm trying to figure out WTF my friend and/or colleague is talking about, which is probably an application of a pattern, and when I figure it out, I have learnt both. I will remember it, and I'll be better equipped to apply it somewhere else later (in theory).

OK, Google usually points me to Wikipedia, and somebody had to have written that, and I do then assimilate it by reading it. But I'd be very unlikely to go there, or anywhere else, and browse patterns for the sake of patterns, so discussing the future without a context just isn't that helpful to me (see also: years of unapplied trainings).

The mechanism doesn't matter. It's about the understanding. The right community doesn't even have to try, they just have to be smart about patterns in proximity to each other.

Get me my suction-cup-dart gun.

Moar Ted Plz! (p&p, day 3)

Started this out in the parking lot, but wow! it merits its own post after all.

Workflow workflow workflow workflow. Workflow patterns!

Interestingly enough, Ted uses a hypothetical higher ed approval process as his hypothetical test example.

W(t)F: look for "processes" in application code that users will want to control directly, or that will change over time. Long-running processes that happen in human space/time.

Instead of running around/telephone game for coders to implement and re-implement changing requirements, esp. around changing processes, give the "knowledge workers"/domain experts the tools to DIY. Not just the content, but the process/steps/sequence.

Activities: not branching and flow control, but domain-specific steps

Programmers build domain-specific activities (we focus on what we understand); domain experts string them together into processes (ditto)

We have to build activities so they're potentially useful across multiple workflows... sounds familiar

Decide early who's going to write the workflow; when in doubt, assume non-techs will do it.

Sequence workflow, state-machine workflow, open-ended processes, parallelism

Workflows with non-processing elements may never proceed to the next step (human actor hit by bus); need a timeout/recovery plan, don't leave a bunch of stuff locked & waiting, etc.

Workflows can persist themselves off anywhere and rehydrate anywhere, any time. Activities need to be decoupled accordingly. If done right, super-scalable, because can punt any of them to a new box any time.

Also "temporally decoupled"... take your order, capture it off, store it, apply its workflow to it somewhere else. Avoids lots of remote tripping, slashdotting, etc., because it doesn't matter what happens to the site. The workflow gets done eventually. If something went wrong, you get an email later.
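A toy version of that temporal decoupling, using an in-process queue as a stand-in for whatever durable store a real workflow engine would use (function names invented):

```python
import queue

# Captured orders wait here; the workflow catches up whenever a worker runs.
orders = queue.Queue()

def take_order(order):
    orders.put(order)  # capture it and return to the user immediately
    return "order accepted"

def process_pending():
    # Runs later, possibly on a different box in a real system.
    done = []
    while not orders.empty():
        done.append(f"shipped {orders.get()}")
    return done

take_order("widget")
take_order("gadget")
print(process_pending())  # ['shipped widget', 'shipped gadget']
```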

Kudos to ps for catching Ted at the same thing I caught him at: "the IT guy" writes activities to enable the non-technical domain expert, "the secretary", to write processes "herself". Grrr. (This reminds me that in Peter's Agile Tragicomedy yesterday, the cast of characters consisted of a female PM, a female customer, three male developers and a male tester.)

Now with more Provost! (p&p, day 3)

There is a distant possibility that I might, utterly unlike last year, "get" Dependency Injection Frameworks.

"Don't call us, we'll call you."

The responsibility for making decisions about how to resolve the dependencies lies outside the object itself.

Service Locator: not really DI, just a pattern. It knows how to find its dependencies.

Interface Injection: the framework can find the inject method, then you pass in an interface.

Setter Injection: call "setFoo" and pass it a Foo.

Constructor Injection: this component requires a Foo, or I can't even "new it up". It doesn't have any limbo state while it's being instantiated/set up.


Method Call/Method Injection: calls methods on your behalf at predefined times.

Getter Injection (AOP): all usage of the Foo inside of the object will call the get(), whose implementation is empty, and the get() gets replaced with an implementation via AOP.
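The session was all C#, but two of those styles translate to a few lines of Python (class names invented; just a sketch of the shapes, not any particular framework):

```python
class SmtpNotifier:
    def send(self, msg):
        return f"smtp: {msg}"

class OrderService:
    # Constructor injection: the dependency is required up front,
    # so there is no half-initialized "limbo" state.
    def __init__(self, notifier):
        self.notifier = notifier

    def place(self, item):
        return self.notifier.send(f"ordered {item}")

class ReportJob:
    # Setter injection: a setFoo-style method assigns the dependency later.
    def set_notifier(self, notifier):
        self.notifier = notifier

svc = OrderService(SmtpNotifier())
print(svc.place("book"))  # smtp: ordered book
```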

So... why?

Makes you think about highly cohesive, loosely coupled design. Testable without fancy mocking frameworks. Don't Repeat Yourself.

  • Lots of little objects (pitter-patter of little objects?)
  • Interface explosion
  • Runtime wire-up complicated, difficult to visualize (but he blames the tools more than the technique)
  • When building reusable libraries, consider wrapping facades around systems created this way
  • Late binding affects performance, as do all strategies for maintainability; "we all want responsiveness, most of us don't need performance."
Scott says, less talk, more code.

Factories are a very specific type of DI.

And now, pair bantering, I mean programming.

"And thus you can destroy, or not destroy, planets."

In conclusion, I get it, but only kinda. Contextualization!

Heavy lifting (p&p, day 3)

The Rocky-Ted-Peter-Brad-Keith banter dynamic is great. It'd be cool to be part of a community at that level.

Does SOA supplant client/server? Forever? Are we in a post-OOD/P world? No, it is about selecting the right tools for the right job.

What we think of as n-tier is often just a layered architecture, multiple DLLs all on the same machine; this is distinct from physical "tier" boundaries (servers, etc.). [So that's why our interviewees draw server diagrams when we ask them this.]

When you don't properly separate layers, and it's way too easy not to, then "C# is VB3 with semicolons." Lack of separation is short-term productive, but awful.

Old-school layers: Presentation, Business, Data
Newer layers: Presentation, UI, Business, Data Access, Data Storage... hey, we came up with that on our own, kinda.

The corollary to the newer layers is that you can't use the DataSet, at least not as designed, because it tends to flow all the way from DA through B to UI and would break the entire app if replaced. You have to isolate the DataSet in the DA layer and convert it to something else in the others.
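A minimal sketch of that isolation, with a dict standing in for a DataSet row (names are invented; the point is just that the raw row format never leaves the data-access layer):

```python
class Customer:
    # Plain domain object: this is what the business and UI layers see.
    def __init__(self, id, name):
        self.id = id
        self.name = name

def fetch_customers(raw_rows):
    # Data-access boundary: convert rows to domain objects before returning,
    # so swapping the storage technology can't break the layers above.
    return [Customer(r["id"], r["name"]) for r in raw_rows]

rows = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]
customers = fetch_customers(rows)
print([c.name for c in customers])  # ['Ada', 'Grace']
```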


Tiers and service-orientation don't go together. Everywhere you had a tier, you now have a separate application with services to get them to talk to each other. How do you structure these little apps? In a kind of layered way. A service-oriented "system" is made up of lots of little applications that talk to each other, at least one of which talks to the user. Each individual app is internally layered, too, e.g., its "UI" is the part that talks to the other services.

Workflow? Free in .NET 3. The new hotness? (Workflow is not new. It's older than OO.)

Workflows are not apps, they are "orchestrations of activities". But, each activity is kind of a mini-app.

OO is dead! O rly?

It's naïve to think you can arbitrarily make any method a service and put it on another server somewhere; if you are calling it from inside a loop, or it is otherwise chatty, the network cost kills performance. This was figured out in OO a long time ago: you could do the same thing with OO pre-services, it just frequently wasn't a good idea.

"OO: Key Concepts"! (Hi cjm!)

A service is an encapsulation and an abstraction, because it's supposed to be a black box. Same for an activity in a workflow. This is building on top of OO ideas. SOA is all about eliminating coupling, but the drawback is increasing complexity/overhead; that's why OO compromises and accepts some coupling in some places (for simplicity, maintainability, performance).

Is OO hard to use in distributed environments? It requires forethought and planning. I think this means it's hard. :)

"SOA should be spelled $OA"

"Almost everybody does services. Almost nobody is 'service-oriented'." They use them the exact same way as MTS/COM+ for data access (ew!), just looks better on a resume.

SO: Key Concepts
  • Autonomy of computing entities
  • Message-based communication
  • Asynchronous communication (?)
  • Loose coupling
    • Behavior negotiation
    • Explicit boundaries
    • Contract exchange (metadata)
  • No tiers
[Services are paranoid little critters. They can't assume anything about the thingies they interact with, except that the thingies will be clueless and constantly changing, so they need to spell everything out and agree on everything up-front. Services need prenups. Services are cynics. As every cynic knows, this means they are realists.]

  • Potentially better way to model real world
    • Non-deterministic technology for a non-deterministic world
  • Leverage existing application behaviors
  • Promote re-use of behaviors, not code
  • Cross-platform, cross-language
  • Immature concepts and tools; glorified MTS
  • Complicated and expensive to code around non-determinism
  • Distributed parallelism is hard... easier if you drop the async goal
Workflow: Key Concepts
  • Define process in terms of inputs, outputs, tasks
  • Organize tasks into ordered steps... what if order needs to change?
  • Define resources required for each task
  • Isolation between workflow and outside world... how?
    • Clear lines of communication
      • Dependency properties
      • Workflow events
The "code activity" is "like FORTRAN in a GUI."

  • Very mature concepts
    • Data flow and flowchart concepts from decades-past
    • Maybe it really is new, if all the practitioners have retired or are about to :)
  • Major learning curve for OOers
    • Everything looks like an object, must un-learn a lot about objects & events
Bringing it home!

New stuff doesn't obsolete everything that came before. We come up with new concepts and new metaphors to solve specific problems that weren't solved by the previous thing(s).

It's a hybrid world.

E.g., a layered client app that talks to layered services and/or layered workflows that in turn call other layered services and/or layered workflows.

Or, a layered workflow and a layered client app which share data storage (e.g., a corporate dB).

"Control messages": "I wrote the data over here and I want you to go get it and do this with it" rather than the data being in the message.


The problem with the hybrid is every little thingy has a separate business layer. How do you re-use common business stuff like auth, rules, etc.? The problem is that re-use is the flip side of coupling. If you have re-use, you have coupling. Uh-oh. So re-use is, at best, overrated, at worst, harmful.

Aren't you universally coupled to the database schema? Well, yes. Unless you have separate little databases for every little thingy. If you can pull this off, more power to you.

"If you want to achieve decoupling, you can't allow more than one piece of code to talk to any table." Reality intrudes.

Also, Keith intrudes. Time's up! Break! Rocky is cool.

Tuesday, November 06, 2007

Agile SDL (p&p, day 2)

"Why they're called a 'buddy', I don't know. It's kind of like calling my IRS auditor my 'buddy'."

(Non securitur: "If you give me the right hardware, I can probably get Excel to bring me more iced tea.")

[The immediate problem I'm seeing with this (presumably canonical) SDL model is that waterfall doesn't work for security, because there are always new threats, so, e.g., the threat model is out of date before it's even been written. "Final security review" especially doesn't seem meaningful. I might be jumping ahead as far as this process really needing to be agile instead.]

"How many of you wait until the end to get performance right? Do you do a 'performance push'?"

Agile! Here we go!

How do we get the same gains with a less-heavy process and less up-front design?

Security is just as much a part of every developer's job as, e.g., reliability

Appoint a Security Owner, preferably one who cares whether the app is secure or not, whose responsibility it will be to ensure that the team meets security goals day-to-day

Need to train every developer to know what a security bug looks like

Agile Threat Modeling

Lightweight, rapid
Sketch DFD on whiteboard
Include threat mitigations in the feature backlog

Does management want to learn how insecure your app is from you... or from a hacker?

Use code-scanning tools daily or weekly
Peer code review; anything that increases quality, increases security
Check for banned APIs at code check-in time?

Crypto: don't try this at home. Hire a pro to review.

Unit-level, object-level security testing; write at same time as functional tests
"Throw some evil inputs at it" when running tests

Security Push is good to get everybody's head in the game, and/or for legacy cleanup, but all the rest of the time security needs to be every day, every sprint, whatever

We could make Agile the more secure development approach... it lends itself, we just don't capitalize (yet)

xUnit, schmUnit (p&p, day 2)

Unit testing --> "programmer testing"

Not about TDD, though TDD is a good (the best?) way to do unit testing

"I understood too much about how the system is supposed to work to test it effectively. But if I do the test first...," no preconceptions about how system works.

Non-functional requirements: the "-ilities" (some, like usability, I've previously called qualitative; others are more technical, like scalability, maintainability)

Lesson 1: just do it; there are good reasons, pros, cons, whatever, just do it

Lesson 2: write tests using the 3A Pattern
  • Arrange: set up the test harness (instantiate the objects, fill in pseudovalues, etc.; the other stuff you can't test without)
  • Act: do the thing (the one whose workingness you want to test)
  • Assert: verify the results (test one thing per test, regardless of how many Asserts are needed)
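The talk used NUnit, but 3A looks the same in any xUnit-family framework; here's the shape in Python's unittest (the Stack class is a made-up example):

```python
import unittest

class Stack:
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.pop()

class StackTests(unittest.TestCase):
    def test_pop_returns_last_pushed(self):
        # Arrange: set up the object under test
        s = Stack()
        s.push(42)
        # Act: do the one thing being tested
        result = s.pop()
        # Assert: verify the outcome
        self.assertEqual(result, 42)
```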
Lesson 3: keep tests close to production code; don't make private stuff public just to make it testable to an artificial test architecture (e.g., separate assembly)

Lesson 4: use alternatives to ExpectedException
  • it violates 3A (NUnit syntax places the test out of order)... use Assert.Throws() with delegate{} (imperfect, but better) or .NET 3.5 lambda thingies
  • not always enough info about where the exception threw from, or should have thrown from, in processing order
  • Assert.Throws() returns the exception itself for further inspection
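Assert.Throws() is the xUnit.net spelling; the same in-order, inspectable pattern exists elsewhere, e.g. Python's assertRaises context manager, which also hands back the exception afterward (withdraw() is an invented example):

```python
import unittest

def withdraw(balance, amount):
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawTests(unittest.TestCase):
    def test_overdraw_raises(self):
        # Arrange
        balance = 10
        # Act + Assert stay in 3A order, and the caught exception
        # remains available for further inspection
        with self.assertRaises(ValueError) as ctx:
            withdraw(balance, 25)
        self.assertIn("insufficient", str(ctx.exception))
```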
Lesson 5: small fixtures; separate fixtures for all the tests associated with a particular method, setups will be similar

Lesson 6: don't use SetUp and TearDown, even at the expense of some code repetition in tests; although small fixtures should help with this, they won't always

Lesson 7: improve testability with Inversion of Control (a pattern; Dependency Injection Frameworks use this but are themselves overkill for "most" applications)
  • constructor injection
  • setter injection (create an interface for the thing that needs to be mocked in the test; swap out the set method for test purposes)
  • cascading failures are unhelpful; isolate the code such that if change one part of code, one test should fail, and the failure should point you to the actual place where the problem is
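The "no fancy mocking frameworks needed" claim from earlier follows directly: with constructor injection you can swap in a hand-rolled fake. A small sketch (all names invented):

```python
import datetime

class SystemClock:
    # the "real" dependency
    def now(self):
        return datetime.datetime.now()

class FixedClock:
    # hand-rolled fake for tests: no mocking framework required
    def __init__(self, t):
        self.t = t
    def now(self):
        return self.t

class Greeter:
    def __init__(self, clock):  # constructor injection
        self.clock = clock
    def greeting(self):
        return "good morning" if self.clock.now().hour < 12 else "good afternoon"

# In production: Greeter(SystemClock()). In a test, time stands still:
g = Greeter(FixedClock(datetime.datetime(2007, 11, 6, 9, 0)))
print(g.greeting())  # good morning
```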
Lesson 8: doesn't like mock object frameworks; they violate 3A

Just seeing these examples is one more step toward "contextualizing" this stuff for me. Cool.

Agile 2008 (p&p, day 2)

Agile 2008 conference seems like something we should get into.

Particularly, we might have interesting things to say about organizational culture...

Post-agilism (p&p, day 2)

The Agile forest for the Agile trees: too much attention to individual practices, too little to understanding the broad principles; form over substance

Sounds like the difference between "Agile Tragedy" and "Agile Comedy" is communication, practically an encounter session. Telling truths, even when difficult. Bi-directional.

Aha: Agile is about trust, building it, maintaining it, specifically earning it.

Zen agile, neo-agile, postagile... can jettison all 12+ practices and still be little-a agile

Shared understanding of what "done-done" looks like
Empowered, own the problem, it is the team's to suck or shine
Incremental understanding; learn as you go about all aspects of what's going on (including each other, the business area, the customer, etc.)... must accept that you don't know up-front... must seek to learn continuously

Warning signs
  • Silent team room
  • Afraid to change existing code
  • Customer unavailable
  • Don't ship frequently
When off-track
  • Talk talk talk (yay!!)
  • Ask how we'll solve the problem(s) together
  • Guide, don't bully (oops)
  • Gotta listen, too (hmm)
Pair programming reduces context-switching costs: much less likely to interrupt and distract two people who are engaged in an activity than one person when you don't know what they're doing. Context cop(s) important, take (kick) side chatter out of the room... then can be a noisy vigorous team room but will be focused and happy

"Let's say you have an agile team, and you have team members who don't want to be agile. You can't fire them, and you can't promote them. Where can you put them to minimize the damage?" (Ted rules!) It's better to bring them into the team, but if you can't, then you've either got to figure out how to be a team without them, or just don't do the hyper-team thing at all, which is a valid option.

Peter Provost rules!

OMG Steve McConnell (p&p, day 2)

Basic motivations for agile:

Developers: want less overhead, want to focus on "real" work
Customers: want flexibility

Value delivery keeps up with cost

"Especially useful when schedule and resources are fixed, and mission is to provide maximum business value within those constraints"

Agile Manifesto (ca. 2001)

"Reduced emphasis on long-range predictability of features--especially of the combination of cost, schedule and features"

Variation of methodologies is really about the balance of up-front work vs. in-iteration

In-phase defect removal goal is always 100%, at least in theory

Aha: in addition to code defects, can also find requirements defects, architecture/design defects... leaving major testing to end allows build-up of latent defects (all kinds), unexpected fix time

("Evolutionary Delivery" illustrated with arbitrary 25% balance, looks good for us)

Packaged methodologies

XP: developer focused; abandoned more often than retained
Scrum: workflow/management focused; retained more often than abandoned

Few projects use all the practices of a given method; claim of "synergy" among practices is invalid

Useful practices
  • Short release cycles (1-4 weeks): does not have to be an external release to the customer; virtually always valuable
  • Rolling-wave planning: long-term (2-12 months); detailed (30-60 days)
  • Timebox development: commit to delivering fixed set of functionality in given timeframe; high morale, visible progress; need a rest period or different activity between sprints
  • Empowered, small, cross-functional teams: include all stakeholders needed to make binding decisions
  • Active management (coach, Scrum Master): "Theory Y" style; remove barriers to good work
  • Coding standards
  • Frequent integration & test: daily build and smoke test (no point building if you don't also test that the build is OK); note that you don't have to deliver new functionality every day, nor even check code in every day
  • Automated regression test (TDD): virtually always a good practice with new development, problematic for existing/legacy systems
  • Postmortem for each release
Hit or miss
  • Customer-provided acceptance tests: ongoing participation is hard to sustain; customer testers don't always know how to write meaningful, comprehensive tests
  • Daily stand-up meetings: a good idea that can go too far; meeting less often, or for less time, can be OK
  • Simple design: don't oversimplify; must fully satisfy known requirements
  • Test-first development: culture shift is difficult, high discipline required, worth the effort
  • 40-hour work week: we really mean a sustainable pace of work
  • System metaphor
  • Onsite customer
  • Collective code ownership
  • Pair programming: good if used selectively
  • Refactoring: prone to abuse
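
The "daily build and smoke test" item above stuck with me, so here's the idea as a toy sketch (everything in it is a placeholder of mine, not something from the talk; a real project would invoke msbuild or make and actual tests). The point is that the build step alone isn't enough: a quick smoke test has to follow, because there's no point building if you don't also test that the build is OK.

```python
def run_build():
    # Stand-in for invoking the real build tool (msbuild, make, etc.).
    return True  # pretend the compile succeeded

def run_smoke_tests():
    # A handful of fast end-to-end sanity checks, NOT the full regression suite.
    checks = [2 + 2 == 4, "smoke".upper() == "SMOKE"]
    return all(checks)

def daily_build():
    # Build first, then smoke-test the result; fail loudly at either step.
    if not run_build():
        return "BUILD BROKEN: compile failed"
    if not run_smoke_tests():
        return "BUILD BROKEN: smoke test failed"
    return "GOOD BUILD"

print(daily_build())  # → GOOD BUILD
```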

Overhyped, buzzwordy, has led to cynicism

Not the best solution for every company, just for many; there are still cases where sequential dev is the best fit

Best outcomes

Importance of short, iterative and/or incremental cycles
Die Waterfall die!
Check-and-balance against overly bureaucratic CMM
Reality check against "omniscience" of requirements, planning, design

In conclusion:

Steve McConnell is a great communicator. I feel like this stuff is really accessible to folks at all tech levels, which makes it helpful for me in taking the Gospel back to our stakeholders.

Monday, November 05, 2007

p&p Parking Lot

Day 1: Architecture

Enterprise Service Bus: complicated. I couldn't figure out how it'd be useful for us.

Scalability: seemed really geared toward high-availability, high-traffic apps, which mine definitely isn't. The advice about degrading services gracefully is helpful, though.

SecPAL: a "new standard" is an oxymoron; it's all game theory, figuring out who's going to adopt it and how many peers would need to in order to make it useful.

Day 2: Agile

Does the fact that day 1 was "architecture" make this a "waterfall" conference?

Empirical research on Agile adoption: zzz... I mean, it is awesome, I'll take the deck to work for show-and-tell, seriously. Why's it called a "deck", anyway?

Longitudinal study shows correlation between introduction/adoption of Scrum (specifically) and less overtime (fewer hours and less often). This makes agility interesting for us because the sustainable pace is less negotiable in our environment, while productivity is what flexes...!

Day 3: Development

Totally unable to overcome skepticism of IronRuby. Also, speaker less than riveting. He's talking about IronRuby while presenting on a MacBook. Bwah?

Rocky: "Sharepoint is the new Access!" LOL.

Models: zzz. I suck for not paying attention to this.

Day 4: Software Factories

Microsoft's ill-fitting latte lids, thoughtfully supporting me in my mission to spill coffee on important technology persons. Starting, today, with myself.

In the half hour before the keynote, Scott H popped up a Notepad window over his lead slide, and hacked out a short story about someone at the conference this week actually playing Quake and CounterStrike during the sessions, headphones and all, and finished with the question: how many of you are planning to play CounterStrike during my keynote? About five minutes before his talk began, he ^A-deleted it. Never said a word about it. It was like an Easter egg for presentations.

The entire rest of day 4, so far: my brain hurts. My laptop's brain hurts. My chair's brain hurts.

Swag update: Code Complete 2, Test-Driven Development in Microsoft .NET

Architecture for... me (p&p summit, day 1)

"Pragmatic Architecture"

Demystifying architecture. What is an architect? That person who gets paid more than everybody else and has management fooled? "Architect" == Latin for "cannot code anymore"?

Infrastructure architect, enterprise architect, systems architect.

An architect's areas of concern/decision-making: communication, presentation, state management, processing, resource management, tools.

A higher-level perspective than implementers typically think at.

Addresses the high-level decisions that are really, really difficult to refactor (e.g., changing from WinForms to a web app is not a "refactor").

This stuff seems less frightening this way.

My problem is a lack of vocabulary: patterns, libraries, the specifics. I learn the abstract concepts, the "rules", the grammar a lot more easily. (Linguistics vs. language... the same reason I can't speak more than a few words of French, Russian, Spanish or German today but can describe broad commonalities and differences among them...)

Swag update: cozy p&p summit polar fleece vest.

Day 2 update: "Architecture is that stuff that if you don't do it right, it costs too much to fix."

A good target for stories is not that you can't think of any more to add, but that there's nothing you can take away.

"What do you do with troublesome senior devs who won't cooperate with agile methodologies and can't get along and get in the way of the team?" "Make 'em architects!" [audience LOL. pause.] "Hmm, seriously, that might work."

Star struck (still at p&p, still day 1)

I'm pretty excited about the fact that Steve McConnell is scheduled to deliver tomorrow's keynote address.

I really need to see if I can arrange to spill coffee on him.

p&p, day 1

It's cool to hear a keynote speech in this field and basically understand what it's about.

This morning, LINQ, which seems like a way of replacing inline SQL with inline SQL. OK, not really. LINQ code in C# seems a lot better for maintenance and code clarity, but our speaker mentioned that it encapsulates the "how" away from the "what" and I'm not sure I see how it does that.
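
Chewing on the "how" vs. the "what" afterward, here's the smallest example I could come up with, sketched in Python rather than C# (the data and names are mine, not the speaker's; in C# the declarative version would be a LINQ query expression like `from o in orders where o.Total > 50 select o.Customer`):

```python
orders = [
    {"customer": "A", "total": 250},
    {"customer": "B", "total": 40},
    {"customer": "A", "total": 90},
]

# The "how": spell out the looping, testing, and accumulating machinery.
big_spenders_imperative = []
for o in orders:
    if o["total"] > 50:
        big_spenders_imperative.append(o["customer"])

# The "what": just state the result we want ("customers of orders over 50").
big_spenders_declarative = [o["customer"] for o in orders if o["total"] > 50]

assert big_spenders_imperative == big_spenders_declarative == ["A", "A"]
```

Same result either way, but the second form reads as a description of the output rather than a procedure, which I think is the encapsulation he was getting at.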

I learned a new verb, "to new up" (to instantiate).

Now we're talking about patterns, and our (next) speaker has pointed out that patterns aren't widely used because they are "difficult to contextualize". I like that; it's a highfalutin' way of saying what I've struggled with all along. His analogy, however, that it's like searching for a recipe and getting only a list of ingredients, is funny considering my friend in culinary school has been trained to handle that exact situation and convert the ingredients into a delicious dish. In other words, she's a better patterns architect than I am.

More to come...