There's a new initiative being pushed down by the Big Heads, but first some background:
In our development shop, we each have our own development box. All the developers have local admin so they can install software and do the other things developers need to do in order to develop. The development machines are on the Intranet, as are our test servers, both web and database. Some of the applications we develop depend on connecting to existing applications for their data sources.
We develop, test, and deploy sandbox installs on our development servers for customers to review, repeating this process as best we are able until we're done with whatever it is we're building.
The completed (and developer-tested) application then goes to Test and Integration, where the eventual users perform their test cases (which the developers wrote, BTW) and the network guys make sure it will integrate into "production", i.e., that it won't screw up anything already out there.
Once they rubber-stamp it, it goes to Configuration Management, who versions it and releases it to the production group, which installs it on the "live" Intranet.
Barring the typical development stupidity and ignorance one runs into almost anywhere, this process (eventually) works well enough for us to complete applications and get them deployed and in use.
Well, now someone has decided it's a "risk" to do development on machines that are connected to the "live" Intranet. The proposed solution is an isolated Development Lab, where there's a single development "server" upon which each developer has his or her own virtual machine to develop on. Each developer will have admin permissions on his or her VM to install/uninstall software and change settings, just like now. There's a single development database server we are all supposed to use.
So far, no adequate answer to my questions:
How are we supposed to connect to existing web services on the Intranet if the app we're building requires it? This is the biggest concern, as many of the apps we build connect to existing services for data. The floated solution so far is to buy additional servers and stand up copies of the production systems in the lab environment for us to develop against. Brother, I'm here to tell you I can't get $3 for a ream of paper, and now they're going to buy servers for standing up production copies? I wonder what the vendors of these systems will think of us running additional instances of their software?
How are geographically separated customers supposed to review milestone installs of work in progress? Right now, customers can log into the Intranet from anywhere and view apps in progress. In the Big Heads' Dev Lab, they won't be able to do that, so customers will have to physically travel to the Dev Lab for every review.
Lastly, and this sort of ties in with the whole $3-ream-of-paper thing, we've been given a VM to do a "proof of concept" against. We're supposed to set it up with a baseline install of all the stuff a developer needs. The VM they gave us has 18 gigs of disk space. Visual Studio 2005 needs 6 gigs right from the get-go. VS 2008 needs even more. Plus whatever apps we're working on. Plus helper apps like Infragistics. And SQL Server Developer Edition. And the OS. And, and, and... We need 80 gigs, I told them. Sorry, don't have the resources. So you're going to stand up dupe servers of all the apps we develop against, but you can't buy some more disks so the developers have enough space to set up their dev environments? Yeah, that'll fly.
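For what it's worth, here's the rough back-of-envelope behind my 80: the only figure I know cold is VS 2005's 6 gigs, so treat the rest as my ballpark estimates. Figure roughly 10 for the OS and patches, 6 for VS 2005, 8 or so for VS 2008, a few for SQL Server Developer Edition plus working databases, a couple for Infragistics and the other helper tools, and 20-odd for source trees, build output, and sandbox installs of the apps themselves. That's pushing 50 before you leave any headroom for temp files, snapshots, and whatever comes next, which is how you get to 80.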
Anyway, just needed to vent. Processes, over time, are supposed to get simpler and more streamlined. Why is it that software development always seems to go in the other direction and become more complicated?