BurgerThought: Softshoring, the new offshoring!

Offshoring is overused nowadays by managers who tend to understand only that, based on head count, it costs less, and that much is of course true. The problems with simple offshoring are inefficiency and failure to deliver; hopefully you can use the brand new Softshoring concept to replace most of your offshore work!

Training

To ensure your offshore team delivers at the required quality, you spend time, effort and money training people who are not even directly part of your company. This is time you could have spent training some cool new kid next to you, who would buy you a drink, tell jokes and feel involved. Most of the time, you will be investing in people who do not really care and who will move on at the next opportunity without looking back. It’s not a good investment of your company’s knowledge; it’s just dispersing knowledge and training people for free.

Delivery

In India and China, delivering at the expected target is a challenge that can only be met with close management of the teams to ensure progress. In a project environment that can work with tight management, but for BAU tasks you still necessarily need a project manager to handle these people. This has a cost: someone on site will have to spend part of his time following the offshore team’s progress, identifying problems to the best of his ability and providing remote answers to what are, more often than not, simple but delivery-blocking questions. It will cost even more if your offshore team is not top notch, because then you need a very good project manager with international and remote project management experience, and that can’t be cheap when all you want is some chaps to produce damn reports and just follow a very precise and detailed procedure.

Activity

Because it is supposed to be cheap, offshoring is a good excuse to throw any labour-intensive, some would say “stupid”, work at offshore teams. This is actually quite revealing of a common trend among today’s managers: it’s cheap, and at laaast you get the much-fought-for increase in head count! But the work sent offshore is often forgotten there; it is never optimized, and offshore teams do not seem very interested in simplifying their own lives. In their place… doing shitty work… would you do any better?

Most of your offshore team members just want to make you happy by doing EXACTLY the job you ask them to do, no more (and sometimes even a little less if they can get away with it, but hey, we can’t blame them ^^). That is not a good mindset for process optimization and continuous improvement, so you get an efficiency problem: your remote teams will not have the nerve to tell you how to make the process better, often because that would somehow imply they can do better than what you propose, and in their eyes that could hurt their relationship with you. Being located several thousand miles away does not help, but I have found it is also a cultural difference in the way relationships are handled, a very common Asian view of them.

At the end of the day, none of this helps the process improve itself; it wastes a lot of money in the long run, and it costs you turnover too, as the staff gets bored with the uninteresting tasks and the lack of flexibility to change anything (even if that barrier exists only in their heads).

Economics

Offshoring sends knowledge and funds out to another country, and that is not good business for your local economy: it crushes local jobs, and the funds sent away do not come back much, either directly (buying your products) or indirectly (local taxes, VAT, consumption in the same country, etc.). I may be wrong, but offshoring looks interesting only for the offshore economy. The added competitiveness of your local company may not hold up in the long run, as turnover, knowledge control and the cultural gap raise new issues that cost money and pile up; in the end, I am not quite sure the added competitiveness is that clear.

Solution? Softshoring!

My workmate and I have found that most of the work people want to offshore can often be done more efficiently, and with far fewer errors, by a good script or piece of software: that is the definition of Softshoring! 🙂 Indeed, most offshored tasks can be automated to such an extent that all the people you actually need is one guy who knows his stuff, instead of a pile of AngryBirds applying a procedure they despise and have little interest in improving.

This strongly relates to automation in the workplace of course, but managers today only look at the cost; they don’t know how to measure efficiency and have no interest in identifying improvements, so much so that people are sometimes discouraged from proposing improvements even when they know exactly what should be done.

Softshoring requires a good IT developer who understands the processes and has a working brain.

With Softshoring, rediscover what IT was originally made for: doing the crappy jobs itself, and letting human beings drink their beer and use their brains!

MPlayer 5.1 AC3 DTS in Ubuntu 10.04

I wanted to play some mkv/avi files containing 5.1 AC3/DTS sound with mplayer through an SPDIF optical out on my Ubuntu box.

With default settings, at first I was getting:

Selected audio codec: [hwac3] afm: hwac3 (AC3 through S/PDIF)
==========================================================================
[format] Sample format big-endian AC3 not yet supported
Error at audio filter chain pre-init!

I have the default configuration, so PulseAudio is running. I had simply chosen “Alsa” as the default audio device in SMPlayer (yes, SMPlayer, a GREAT GUI for MPlayer; Qt rocks, get over it 😉).

I tried deactivating PulseAudio temporarily to reach the ALSA device directly with:

pasuspender -- smplayer test.avi

but it still did not work. pasuspender suspends PulseAudio, frees all the audio hardware and launches the provided command… could come in handy another day 🙂

After some googling and testing, I finally figured out you need the following things for it to work:

  • Select the correct hardware ALSA device! (for me it was 0.1, the SPDIF output). If you use the mplayer CLI, that is -ao alsa:device=hw=0.1 (see the full command sketch after this list).
  • Disable “Enable the audio equalizer”. Activating it always triggers the error above.
  • Enable “AC3/DTS pass-through S/PDIF”. This one is obvious…
  • Disable “Use software volume control”. Activating it always triggers the error above.

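Putting it all together for the plain mplayer CLI, here is a minimal sketch (the file name is made up; hw=0.1 is my SPDIF device, yours may differ; -ac hwac3,hwdts, requests AC3/DTS pass-through, the trailing comma allowing fallback to other codecs):

    # List your ALSA devices with: aplay -l
    mplayer -ao alsa:device=hw=0.1 -ac hwac3,hwdts, movie.mkv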
You should get something like this:

[Screenshot: the SMPlayer audio preferences with the settings above]

Now you can play your favorite DVD/BR/files with full SPDIF AC3/DTS surround power, decoded by the kick-ass surround amplifier at the other end of your SPDIF cable 🙂 All this on your favorite OS. Yeah!!!

Volume configuration

If you can’t hear anything, you need to mess with your PulseAudio settings first, and if that is not enough, with your ALSA settings.

For PulseAudio, you need to select the “Digital Stereo Duplex (IEC958)” hardware profile.

For alsamixer, I have mostly the following (a scriptable amixer equivalent follows this list):

  • “S/PDIF Default PCM” on Mute.
  • “Channel mode” at 6, although I am not sure it is needed.
  • “S/PDIF” on Unmute (lol).
  • Everything else on Mute, except for “PCM” and “Master” of course 🙂
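If you prefer to set this from a script, here is a rough amixer equivalent of the above; a sketch only, since control names (“IEC958”, “Channel Mode”, etc.) vary from card to card:

    # List the simple controls your card actually exposes
    amixer -c 0 scontrols
    # Unmute the SPDIF output, mute its default PCM, request 6 channels
    amixer -c 0 set 'IEC958' unmute
    amixer -c 0 set 'IEC958 Default PCM' mute
    amixer -c 0 set 'Channel Mode' '6ch'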

VirtualBox 3.2.8 EULA nuclear FAIL

VirtualBox is a good virtualization platform for both Windows and Linux environments.

It sometimes happens that I read the EULA (yes, yes) of the non-free software I install. To my surprise, the VirtualBox 3.2.8 PUEL says:

§ 3 Restrictions and Reservation of Rights.
[…]
(3) The Product is not designed, licensed or intended for use in the design, construction, operation or maintenance of any nuclear facility and Oracle and its licensors disclaim any express or implied warranty of fitness for such uses.
[…]

Cheers for the big O!! We knew you guys had nightmares about free software licenses (see the OpenSolaris murder) and dreams of fruitful lawsuits (Javaaa), but this one actually got me laughing for once 🙂

For them to actually put such a statement before the “§ 5 Disclaimer of Warranty” probably means that this case REALLY happened. YES. Otherwise, why would they even bother? With the warranty already disclaimed “AS-IS”, they should be correctly covered from a legal perspective.

That’s a taste of what IT is like in the real world…

Luckily, in “§ 6 Limitation of Liability”, big O states that they will not be liable “FOR SPECIAL, INDIRECT, CONSEQUENTIAL, INCIDENTAL OR PUNITIVE DAMAGES” if you use VirtualBox in a nuclear facility. So don’t come after them if half of your country is gone.

Who is really responsible these days? Let me see the manager!


Firefox applies styling to script blocks…

Here is the question I asked on Stack Overflow.

This applies to Firefox and Chrome, but not to IE.

I still do not understand under which circumstances anyone would want script blocks to be valid targets for CSS rules. If anyone knows of a reference on this, I’d be glad to hear about it! 🙂
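To make the issue concrete, here is a minimal page (made up for the demo) showing the behaviour: in Firefox and Chrome, the rule below matches the script element and its source code is rendered as visible text.

    <!DOCTYPE html>
    <html>
      <head>
        <style>
          /* Browsers normally hide scripts via a default rule like
             script { display: none; }; overriding it makes the source visible. */
          script { display: block; font-family: monospace; }
        </style>
      </head>
      <body>
        <p>Some regular content.</p>
        <script type="text/javascript">
          var answer = 42; // rendered as page text in Firefox/Chrome
        </script>
      </body>
    </html>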


Converting a solution from Visual Studio 2010 back to 2008

Today, as the Visual Studio 2010 beta license is nearing expiration and, as small partners, we still haven’t received the new ISOs, I need to convert my solution and projects back to Visual Studio 2008.

A quick Google check gave me a link to this 2009 post: http://blogs.msdn.com/b/rextang/archive/2009/07/06/9819189.aspx

I didn’t need to apply this to all files; doing it on my solution file alone was enough, and everything then loaded neatly. One thing that was not mentioned is the conversion of unit tests: you need to remove the reference to version 10 of Microsoft.VisualStudio.QualityTools.UnitTestFramework and add back the reference to version 9. After that, all went nicely for the 14 projects in my solution.
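For reference, the edit from the linked post boils down to changing the version header at the top of the .sln file (headers quoted from memory, check your own files), from:

    Microsoft Visual Studio Solution File, Format Version 11.00
    # Visual Studio 2010

back to:

    Microsoft Visual Studio Solution File, Format Version 10.00
    # Visual Studio 2008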


IIS7 is failing Debug.Assert

Assert is one of the (several) best friends of every developer. Asserts are widely useful, from checking pre-conditions to post-conditions and every other possible assumption/invariant in between.

When an assert condition fails, it usually indicates a serious error; it is the program’s way of screaming back at the developer: “There’s a massive breach of contract here and the result isn’t likely to be pretty”… Historically, assert failures tend to manifest themselves quite catastrophically, such as aborting the program mid-execution or, for graphical software, displaying a severe-looking pop-up.

In ASP.NET with IIS versions prior to 7, it used to be the same: you would get an annoying, ugly, massive pop-up on the server (usually a developer’s machine, given that .NET’s Debug.Assert only runs in debug builds). Now, I can see how this caused problems for people running debug builds on servers (probably not the smartest move) or sharing one testing server between several developers, in which case the server hangs waiting for someone to click the “OK” button on the pop-up!

In IIS 7, there is no ugly annoying pop-up anymore: the assertion kindly displays in the Output window of Visual Studio if you are attached to the process, or you can redirect it to a trace file if you prefer. This is all fine and clearly has its uses for some people, but to me it kills the first and most important property of a failed assert: it has to be in your face, and annoying enough that the developer cannot a) miss it or b) ignore it.
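For reference, the trace-file redirection mentioned above is the standard .NET trace-listener mechanism; here is a minimal web.config sketch (the listener name and log path are made up):

    <configuration>
      <system.diagnostics>
        <!-- Debug.Assert failures are written to the shared trace listeners. -->
        <trace autoflush="true">
          <listeners>
            <add name="assertLog"
                 type="System.Diagnostics.TextWriterTraceListener"
                 initializeData="C:\logs\asserts.log" />
          </listeners>
        </trace>
      </system.diagnostics>
    </configuration>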

The IIS7 behaviour misses the point on both counts: not everyone happens to look at the Output window all the time while debugging, and it is way too easy to ignore (just don’t look at it). It also means that unless you are actually attached to the process, you will not see the assert at all. And even if you do see it after the fact, it is of much less use, since you no longer have the process at hand, with its call stack and frames, to do anything useful if the problem is not utterly trivial.

I had already wrapped all my calls to Debug.Assert in a conditional method so that I could easily intercept those calls and add behaviour to them, such as sending emails to the development mailing list. So it was rather easy to restore the original catastrophic behaviour of the assert with the following:

    [System.Diagnostics.Conditional( "DEBUG" )]
    public static void Assert(Boolean condition, String message)
    {
        if ( !condition )
        {
            // Keep the standard Debug.Assert behaviour (Output window / trace listeners)...
            System.Diagnostics.Debug.Assert( condition, message );
            // ...then abort so the failure cannot be missed.
            System.Threading.Thread.CurrentThread.Abort();
        }
    }

This nicely aborts the thread, which should catch any developer’s attention 🙂 The Conditional attribute on DEBUG ensures that calls to the method are removed from release builds, where DEBUG is not defined, in the same fashion as Debug.Assert itself.
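A call site then looks like any other assert; a quick sketch (WebDebug and order are made-up names):

    // In debug builds, aborts the request thread if the condition does not hold.
    WebDebug.Assert( order != null, "Order must be loaded before billing." );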

I have searched for a good while and I have not seen any way of configuring the Debug.Assert behaviour to be either the old way or the new one, which would be very useful. If anyone knows how to do this, I’d be very interested in hearing about it.


IIS and UrlScan – denying request

Today I was developing a little Silverlight client app, and everything was working fine in the Visual Studio web server. But then I deployed the app to IIS, and every page request came back with a 404 error. It took me a while to check all the usual culprits (permissions, authentication, etc.), but I could not find what was wrong. So I decided to check the IIS logs (under <WINDOWS_ROOT>/system32/LogFiles/W3SVC1 for me), and here is what I found:

<time> localhost GET /Rejected-By-UrlScan 404

oO

OK, so that was a new one for me. I went and checked the UrlScan logs (under <WINDOWS_ROOT>/system32/inetsrv/urlscan/logs for me), and here is what I found:

<date> <time> localhost GET /Fotoz.Web/ Rejected URL+contains+dot+in+path URL – –

So here was the culprit denying me access: my folder path contained a dot. I had had the bright idea of calling my web app Fotoz.Web, which made it fail.

I then checked the UrlScan.ini file (under <WINDOWS_ROOT>/system32/inetsrv/urlscan for me) and found the setting:

AllowDotInPath=0

which I changed to:

AllowDotInPath=1

This should not impact security too much, since I still had the following in the [DenyUrlSequence] section:

..  ; Don’t allow directory traversals

which I think is the only rule that would matter if my server were ill-configured.
