Thursday, March 12, 2015

Enhanced object properties in the PHP and Ruby libraries

The newest versions of the Google Calendar API and Google Tasks API use JSON as their data format. In languages like PHP and Ruby, it’s simple to turn a JSON object into something that can be easily read and modified, like an associative array or hash.

While creating and modifying hashes is straightforward, sometimes you want a true object and the benefits that come with using one, such as type checking or introspection. To enable this, the PHP and Ruby client libraries can now provide objects as the results of API calls, in addition to supporting hash responses.

Ruby gets this for free with the latest version of the gem. For PHP, you have to enable support in the client instance:
$apiClient = new apiClient();
$apiClient->setUseObjects(true);
The following examples for PHP and Ruby retrieve an event via the Calendar API, and use data from the resulting object:

PHP:
$event = $service->events->get("primary", "eventId");
echo $event->getSummary();
Ruby:
result = client.execute(
:api_method => service.events.get,
:parameters => {'calendarId' => 'primary',
'eventId' => 'eventId'})
print result.data.summary
If you have general questions about the client libraries, be sure to check out the client library forums (PHP and Ruby). For questions on specific Apps APIs come find us in the respective Apps API forum.

Dan Holevoet   profile

Dan joined the Google Developer Relations team in 2007. When not playing StarCraft, he works on Google Apps, with a focus on the Calendar and Contacts APIs. He's previously worked on iGoogle, OpenSocial, Gmail contextual gadgets, and the Google Apps Marketplace.

Read more »

Google Drawings Support added to Documents List API

We are excited to have just launched full support for Google Drawings in the Documents List API. You can now create, import, retrieve, update, and delete your Drawings in the same way as other document types.

Currently, we allow importing Drawings from WMF files. Drawings can be exported as PDF, JPEG, PNG, and SVG images.

This feature has been requested by many users, so we're happy to fulfill those requests. The release was done in stages: you may have noticed that Drawings started showing up in list feeds a few months ago, but our most recent release marks full support for Drawings.

These features are only available in version 3.0 of the Documents List API. Users can read the updated documentation and the release notes for information on using these new features of the API. As always, if you have any questions, please visit the Documents List API support forum.

Want to weigh in on this topic? Discuss on Buzz

Read more »

5 things you didn't know you could do with the Google Drive API

Have you tried using the Google Drive API? If so, you’re aware that it allows you to programmatically manage a user’s Google Drive and build applications to manipulate files stored in the user’s account. However, the API might still be capable of doing a few things you didn’t know about. Here is a list of five specific use cases and how each of them can be addressed with the API.

1) Sharing a file with the world

When a file in Google Drive is shared publicly, it can be downloaded without authentication at the URL provided by the API in the webContentLink field of the Files resource. To retrieve that value, send a GET request to retrieve the file metadata and look for the webContentLink element in the JSON response, as in the following example:


{
"kind": "drive#file",
"id": "0B8E...",
"etag": "WtRjAP...",
"selfLink": "https://www.googleapis.com/drive/v2/files/0B8E...",
"webContentLink": "https://docs.google.com/a/google.com/uc?id=0B8E...",
...
}
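As a sketch, here is how client code might pull that field out of the metadata response. The response object and helper name here are illustrative, shaped like the example above:

```javascript
// Hypothetical metadata response, shaped like the example above.
var body = JSON.stringify({
  "kind": "drive#file",
  "id": "0B8E...",
  "webContentLink": "https://docs.google.com/uc?id=0B8E..."
});

// Parse the JSON body and return the public download link, or null
// if the file is not shared publicly and the field is absent.
function getDownloadLink(jsonBody) {
  var file = JSON.parse(jsonBody);
  return file.webContentLink || null;
}

console.log(getDownloadLink(body)); // prints the webContentLink URL
```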

2) Granting comment-only access to a user

When setting permissions for a file with the Drive API, you can choose one of owner, writer, or reader as the value for the role parameter. The Drive UI also lists another role, commenter, which is not an allowed value for that parameter.

In order to grant comment-only access to a user with the Drive API, you have to set the role parameter to reader and include the value commenter in the list of additionalRoles, as in the following example:


{
"kind": "drive#permission",
...
"role": "reader",
"additionalRoles": [
"commenter"
],

...
}
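A sketch of assembling that request body; the helper name is ours, and the type/value fields follow the Drive API v2 permissions resource:

```javascript
// Build the request body for granting comment-only access:
// role "reader" plus "commenter" in additionalRoles, as described above.
// (Helper name is illustrative; field names follow the v2 permissions resource.)
function buildCommenterPermission(email) {
  return {
    "kind": "drive#permission",
    "type": "user",
    "value": email,
    "role": "reader",
    "additionalRoles": ["commenter"]
  };
}

console.log(JSON.stringify(buildCommenterPermission("user@example.com"), null, 2));
```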

3) Listing all files in the root folder

It is possible to restrict the list of files (and folders) returned by the Drive API by specifying some search criteria in the q query parameter. Each file has a parents collection listing all folders containing it, and the root folder in Google Drive can be conveniently addressed with the alias ‘root’. All you need to do to retrieve all files in that folder is add a search query for elements with ‘root’ in their parents collection, as in the following example:


GET https://www.googleapis.com/drive/v2/files?q='root' in parents

Remember to URL-encode the search query for transmission unless you are using one of the available client libraries.
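In JavaScript, for instance, encodeURIComponent handles the encoding; the query string follows the example above:

```javascript
// Encode the search query before appending it to the request URL.
var query = "'root' in parents";
var url = "https://www.googleapis.com/drive/v2/files?q=" + encodeURIComponent(query);

// Spaces become %20; encodeURIComponent leaves the quotes around 'root' intact.
console.log(url); // https://www.googleapis.com/drive/v2/files?q='root'%20in%20parents
```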

4) Finding how much quota is available in the user’s account

Your application might need to know if users have enough available quota to save a file, in order to handle the case when they don’t. Quota information is available in the About feed of the Drive API:


{
"kind": "drive#about",
...
"quotaBytesTotal": "59055800320",
"quotaBytesUsed": "14547272",
"quotaBytesUsedInTrash": "511494",

...
}

The feed includes three values related to quota management: quotaBytesTotal, quotaBytesUsed and quotaBytesUsedInTrash. The first value indicates the total number of bytes available to the user (new accounts currently get 5GB for free), while the second tells how many of those bytes are in use. The third value tells how many bytes are used by files that have been trashed; an application might use it to recommend emptying the trash bin before suggesting the purchase of additional storage.
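For example, the free space can be computed from those fields. Note that the API returns the byte counts as strings; the figures come from the sample response above:

```javascript
// Quota fields from the sample About response above.
var about = {
  "quotaBytesTotal": "59055800320",
  "quotaBytesUsed": "14547272",
  "quotaBytesUsedInTrash": "511494"
};

// The values arrive as strings, so parse them before doing arithmetic.
function freeBytes(about) {
  return parseInt(about.quotaBytesTotal, 10) - parseInt(about.quotaBytesUsed, 10);
}

// Bytes that would be recovered by emptying the trash.
function reclaimableBytes(about) {
  return parseInt(about.quotaBytesUsedInTrash, 10);
}

console.log(freeBytes(about));        // 59041253048
console.log(reclaimableBytes(about)); // 511494
```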

5) Discovering if one of the user’s apps can open a file

Google Drive allows users to store any kind of file and to install applications to open file types that are not directly supported by the native Google applications. In case you need to know what applications are installed and what file types each of them can open, you can retrieve the Apps feed and look for the primaryMimeTypes and secondaryMimeTypes elements for supported MIME types or primaryFileExtensions and secondaryFileExtensions for file extensions:


{
"kind": "drive#app",
"name": "Pixlr Editor",
...
"primaryMimeTypes": [
"image/psd",
"image/pxd",
"application/vnd.google-apps.drive-sdk.419782477519"
],
"secondaryMimeTypes": [
"image/png",
"image/jpeg",
"image/gif",
"image/bmp"
],
"primaryFileExtensions": [
"pxd",
"psd"
],
"secondaryFileExtensions": [
"jpg",
"png",
"jpeg",
"bmp",
"gif"
],


}

Note: to access the Apps feed you have to request access to the https://www.googleapis.com/auth/drive.apps.readonly OAuth scope.
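A sketch of that lookup, using the sample app entry above (the helper name is ours):

```javascript
// One entry from the sample Apps feed above.
var apps = [{
  "kind": "drive#app",
  "name": "Pixlr Editor",
  "primaryFileExtensions": ["pxd", "psd"],
  "secondaryFileExtensions": ["jpg", "png", "jpeg", "bmp", "gif"]
}];

// Return the names of installed apps that can open the given extension,
// checking both the primary and secondary extension lists.
function appsThatOpen(apps, ext) {
  return apps.filter(function (app) {
    var exts = (app.primaryFileExtensions || [])
      .concat(app.secondaryFileExtensions || []);
    return exts.indexOf(ext) !== -1;
  }).map(function (app) { return app.name; });
}

console.log(appsThatOpen(apps, "png")); // [ 'Pixlr Editor' ]
console.log(appsThatOpen(apps, "txt")); // []
```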

Claudio Cherubino   profile | twitter | blog

Claudio is an engineer in the Google Drive Developer Relations team. Prior to Google, he worked as a software developer, technology evangelist, community manager, consultant, and technical translator, and has contributed to many open-source projects. His current interests include Google APIs, new technologies and coffee.

Read more »

Wednesday, March 11, 2015

Agile scope completion techniques

One of the questions I've received in the past about agile techniques is how to ensure you've captured enough detail about your requirements to proceed without missing major scope elements.

Whether you are using story cards, features or other techniques to capture your requirements, you need to answer this question: "How do I know when I've done enough requirements gathering?" In waterfall this is ‘easy’: gather all the detail and get sign-off (OK, I'm simplifying). In agile, we depend on features or stories, but many worry that major scope elements will be left out, which will either cause many items to grow exponentially in size or reveal that feature X is really features X, Y and Z. For example, when the registration screen has 50 fields instead of the 10-15 that we might have assumed but didn't write down. It is hard to understand how this can be done in 1 or 2 days using feature or story cards that contain only one line of description, a few lines of acceptance criteria and a few assumptions.

Here are three things to consider to help you solve this dilemma:

1. In waterfall techniques, although we take some comfort in our massive requirements documents, we know from experience that even then things will change and things will be missed.

2. My teams estimate using planning poker with the full team, including the client, and we have found this has helped to uncover hidden or unknown scope. We discuss each item together before estimating and talk about the number of screens, inputs, outputs, services, etc. involved. This discussion itself often uncovers additional scope, but so does the estimating that follows each discussion. For example, when most of us say ‘2’ and one person says ‘8’, the person who said ‘8’ enlightens the team on the complex caching required to meet the performance requirements listed as an acceptance test. This is especially important if your client is the one with the highest estimate. Don't ignore it.

3. Lastly, I attended a virtual class on agile estimating that suggested another technique. For every feature or story, categorize the requirements certainty as high, medium or low. Keep challenging your client until the requirements certainty on each story is high.

I'd be interested in other techniques you may be using to keep the initial requirements gathering phase lightweight, yet complete. I think as an industry we are getting better at embracing the changes that are inevitable on all projects, but our clients still require us to have a good understanding of the known scope and the resulting estimate before starting the project.
Read more »

Tuesday, March 10, 2015

Create a Spreadsheet User Directory with Apps Script

As a consultant helping companies move to the Google cloud, I receive many feature requests before, during, and after each migration. Often I’m asked about re-creating small and specific solutions that support particular business needs not fully covered by Google Apps out of the box. In many cases, a simple Google Apps Script solution satisfies the business requirement.

What is the Google Spreadsheet User Directory?

The “Google Spreadsheet User Directory” is a solution I’m frequently asked about. Google Apps domain administrators can save a simple Apps Script into a Google Spreadsheet and set it to run on a schedule, via a “time-driven” trigger. By using the Google Profiles API (available only to domain administrators), they can populate a Google Spreadsheet with Google Apps domain user information. The user profile data can then be consumed and used by other business logic code, either in the spreadsheet itself or elsewhere.
Using Apps Script to provide this kind of solution was an obvious choice for the following reasons.

  1. Apps Script makes the Google Spreadsheet User Directory a simple, flexible solution that the customer can quickly understand and extend. The JavaScript syntax is easy to learn and program in, and there is no need to compile and deploy code.
  2. The Apps Script code is conveniently integrated into Google Spreadsheets, so there is no need to use any other software. Advanced functions can be exposed to end users for data manipulation through the spreadsheet menu, and scheduling an Apps Script to run at a regular interval is trivial via the Spreadsheet “Triggers” mechanism.
  3. Google Apps Script provides services for accessing Google Profiles, Contact Info, and Google Groups plus Google Docs, Google Sites, Google Charts, and more.  The Google Spreadsheet User Directory script makes use of both the new Apps Script Domain Services API and the GData Profiles API, via the “UrlFetch” service.
  4. The Apps Script code can be easily shared through Google Spreadsheet templates and through the Google Script gallery.

Using the Google Spreadsheet User Directory

The Google Spreadsheet User Directory code consists of a primary scanUserProfiles() function and some supporting “utility” functions. The three steps for setting up the code to run are:

1. Set up the “Consumer_Key” and “Consumer_Secret” ScriptProperties and run the scanUserProfiles() function in the Apps Script integrated development environment to get the first “Authorization Required” screen. (I’ve included an illustration below... Choose “Authorize.”)
2. Since scanUserProfiles() uses OAuth with UrlFetch to get User Profile information via the GData API, it needs to be run at least one more time inside of the Apps Script IDE, so that the OAuth “Authorize” prompt can be shown to the programmer and accepted.
3. After authorization, the scanUserProfiles() script is free to make authorized requests to the Google User Profiles feed, as long as the developer who saved it has “domain admin” rights.

Design of the Google Spreadsheet User Directory

The following snippets show the OAuth setup, the user profiles Url setup, and the initial UrlFetch.
var oAuthConfig1 = UrlFetchApp.addOAuthService("googleProfiles");
oAuthConfig1.setRequestTokenUrl("https://www.google.com/accounts/OAuthGetRequestToken?scope=https://www.google.com/m8/feeds/profiles");
oAuthConfig1.setAccessTokenUrl("https://www.google.com/accounts/OAuthGetAccessToken");
oAuthConfig1.setAuthorizationUrl("https://www.google.com/accounts/OAuthAuthorizeToken?oauth_callback=https://spreadsheets.google.com/macros");
oAuthConfig1.setConsumerKey(ScriptProperties.getProperty("Consumer_Key"));
oAuthConfig1.setConsumerSecret(ScriptProperties.getProperty("Consumer_Secret"));
var options1 = {
oAuthServiceName : "googleProfiles",
oAuthUseToken : "always",
method : "GET",
headers : {
"GData-Version" : "3.0"
},
contentType : "application/x-www-form-urlencoded"
};
var theUrl = "";
if (nextUrl == "") {
theUrl =
"https://www.google.com/m8/feeds/profiles/domain/" + domain +
"/full?v=3&max-results=" + profilesPerPass + "&alt=json";
} else {
theUrl = nextUrl;
}
if (theUrl != "DONE") {
var largeString = "";
try {
var response = UrlFetchApp.fetch(theUrl, options1);
largeString = response.getContentText();
} catch (problem) {
recordEvent_(problem.message, largeString, ss);
}
}
var provisioningJSONObj = null;
var jsonObj = JSON.parse(largeString);
var entryArray = jsonObj.feed.entry;
The "nextUrl" variable above (line 74) is being pulled from a cell in the spreadsheet, where I'm saving the "next" link from the fetched data. (If there’s no “next” link, I save "DONE" to the same spreadsheet cell.) To fetch JSON, I’m appending the parameter &alt=json on lines 75 and 76. After I’ve got my JSON object, I create an array to store the data that I will be writing out to the spreadsheet. I set the array default values and make liberal use of try-catch blocks in this code, since there’s no telling which of these fields will be populated and which will not.
for (var i=0; i<entryArray.length; i++) {
var rowArray = new Array();
rowArray[0] = "";
rowArray[1] = "";
rowArray[2] = "";
try { rowArray[0] = entryArray[i].gd$name.gd$fullName.$t; } catch (ex) {} //fullname
try { rowArray[1] = entryArray[i].gd$name.gd$givenName.$t; } catch (ex) {} //firstname
try { rowArray[2] = entryArray[i].gd$name.gd$familyName.$t; } catch (ex) {} //lastname
At the end of the data collection process for a single record/row, I add the rowArray to another single-element array called valueArray (line 207), to create a 2-D array that I can use with range.setValues to commit my data to the spreadsheet in one shot (line 209).
var updateRow = getNextRowIndexByUNID_(rowArray[3],4,stageSheet);
var valueArray = new Array();
valueArray.push(rowArray);
var outputRange = stageSheet.getRange(updateRow, 1, 1, 12);
outputRange.setValues(valueArray);

The function getNextRowIndexByUNID (line 205) just finds the next available row on the “staging” sheet of the spreadsheet, so I can write data to it. The code is inside of a “for” loop (starting on line 106) that executes once for each entry in the current JSON object (created lines 96 and 97).
} else {
// COPY CHANGES TO "PRODUCTION" TAB OF SPREADSHEET
var endTime = new Date();
setSettingFromArray_("LastPassEnded",getZeroPaddedDateTime_(endTime),settingsArray,setSheet);
if (parseInt(getSettingFromArray_("StagingCopiedToProduction",settingsArray)) == 0) {
// THIS DOES A TEST-WRITE, THEN A "WIPE," THEN COPIES STAGING TO
// PRODUCTION
var copied = copySheet_(ss,"Staging","Employees");
if (copied == "SUCCESS") {
var sortRange = empSheet.getRange(2,1,empSheet.getLastRow(),empSheet.getLastColumn());
sortRange.sort([3,2]); // SORT BY COLUMN C, THEN B
// RESET SETTINGS
setSettingFromArray_("NextProfileLink","",settingsArray,setSheet);
setSettingFromArray_("LastRowUpdated",0,settingsArray,setSheet);
setSettingFromArray_("StagingCopiedToProduction",1,settingsArray,setSheet);
}
}
} // end if "DONE"

If the script finds “DONE” in the “NextProfileLink” cell of the spreadsheet, it will skip doing another UrlFetch to the next feed link (line 81). Instead, it will copy all records from the “staging” sheet of the spreadsheet to the “production” one, via a utility function called “copySheet” (line 273). Then it will sort the range, reset the copy settings, and it will mark another designated cell, “StagingCopiedToProduction” as “1” in the spreadsheet, to stop any further runs that day.
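The next-link bookkeeping described above can be sketched as a small pure function; the names are ours, and the URL pattern follows the earlier snippet:

```javascript
// Decide which URL to fetch next, based on the value saved in the
// "NextProfileLink" cell: empty means first pass, "DONE" means finished.
function nextFetchUrl(savedLink, domain, profilesPerPass) {
  if (savedLink === "DONE") return null; // all pages consumed; copy staging to production
  if (savedLink === "") {
    // First pass: build the profiles feed URL, requesting JSON.
    return "https://www.google.com/m8/feeds/profiles/domain/" + domain +
           "/full?v=3&max-results=" + profilesPerPass + "&alt=json";
  }
  return savedLink; // resume from the "next" link saved on the previous run
}

console.log(nextFetchUrl("", "example.com", 50));
console.log(nextFetchUrl("DONE", "example.com", 50)); // null
```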

Scheduling the Google Spreadsheet User Directory Script to Run

Below are the triggers I typically set up for the Spreadsheet User Directory. I recommend setting scanUserProfiles() to run on an interval of less than 30 minutes, since the Google-provided token in each “NextProfileLink” url lasts about that long. I also recommend running the WipeEventLog() utility function at the end of each day, just to clear data from prior runs from the EventLog tab of the spreadsheet.

Conclusion

Above I’ve outlined how to create a basic User Directory out of a Google Spreadsheet and Apps Script that will always keep itself current. Since Google Spreadsheets support the Google Visualization API and a query language for sorting and filtering data, all kinds of possibilities open up for creating corporate “directory” gadgets for Google Sites (see the image at right) and for enabling business processes that require workflows, role lookups, or the manipulation of permissions on content in the various Google Apps.
Using Apps Script made this solution quick and easy to produce and flexible enough to be extended and used in many different ways. The code is easy to share as well. If you’d like to give the Google Spreadsheet User Directory a try, then please copy this spreadsheet template, and modify and re-authorize it to run in your own domain. Enjoy!

Shel Davis

Guest author Shel Davis is a senior consultant with Cloud Sherpas, a company recently named the Google Enterprise 2011 Partner of the Year. When Shel is not working on solutions for customers, he’s either teaching classes on Google Apps and Apps Script (Google Apps Script Training), or he’s at home, playing with his kids.

Read more »

Private Member Variables in JavaScript Objects

The programming language of Google Apps Script is JavaScript (ECMAScript). JavaScript is a very flexible and forgiving language, which suits us perfectly, and there's also a surprising amount of depth and power in the language. To help users get into some of the more useful power features, we're starting a series of articles introducing more advanced topics.

Let's say we want to create an object that counts the number of occurrences of some event. To ensure correctness, we want to guarantee the counter can't be tampered with, like the odometer on your car. It needs to be "monotonically increasing". In other words, it starts at 0, only counts up, and never loses any previously counted events.

Here's a sample implementation:

  Counter = function() {
    this.value = 0;
  };

  Counter.prototype = {    
    get: function() {
      return this.value;
    },
    increment: function() {
      this.value++;
    }
  };

This defines a constructor called Counter which can be used to build new counter objects, initialized to a value of zero. To construct a new object, the user writes counter = new Counter(). The constructor has a prototype object, providing every counter object with the methods counter.increment() and counter.get(). These methods count an event and check the value of the counter, respectively. However, there is nothing to stop a script from erroneously writing to counter.value. We would like to guarantee that the counter's value is monotonically increasing, but lines of code such as counter.value-- or counter.value = 0 roll the counter back, breaking our guarantee.
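To see the problem concretely, here is the rollback in action (the constructor is the prototype-based one from above, declared with var so the sketch is self-contained):

```javascript
// The prototype-based Counter from above: value is an ordinary,
// publicly writable property.
var Counter = function() {
  this.value = 0;
};
Counter.prototype = {
  get: function() { return this.value; },
  increment: function() { this.value++; }
};

var counter = new Counter();
counter.increment();
counter.increment();
console.log(counter.get()); // 2
counter.value = 0;          // nothing stops this rollback
console.log(counter.get()); // 0: the monotonic guarantee is broken
```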

Most programming languages have mechanisms to limit the visibility of variables. Object-oriented languages often feature a private keyword, which limits a variable's visibility to the code within the class. Such a mechanism would be ideal here, ensuring that only the methods counter.increment() and counter.get() could access value. Assuming that these two methods are correctly implemented, we can be sure that our counter can't get rolled back.

JavaScript has this private variable capability as well, despite not having an actual keyword for it. Let's examine the following code:

  Counter = function() {
    var value = 0;
    this.get = function() {
      return value;
    };
    this.increment = function() {
      value++;
    };
  };

This constructor gives you objects that are indistinguishable from those built with the first constructor, except that value is private. The variable value here is not the same variable as counter.value used above. In fact, the latter is undefined for all objects built with this constructor.

How does this work? Instead of making value a member variable of the object, it is a local variable of the constructor function, by use of the var keyword. The get and increment functions are the only functions that can see value because they are defined within the same code block. Only code inside this block can see value; outside code does not have access to it. However, these methods are publicly visible by having been assigned to the this object.
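A quick demonstration that the closed-over value really is hidden (the constructor is the closure-based one from above, declared with var so the sketch is self-contained):

```javascript
// The closure-based Counter from above: value lives in the
// constructor's scope, not on the object.
var Counter = function() {
  var value = 0;
  this.get = function() { return value; };
  this.increment = function() { value++; };
};

var counter = new Counter();
counter.increment();
counter.increment();
console.log(counter.get());  // 2
console.log(counter.value);  // undefined: value is not a property

counter.value = 0;           // creates an unrelated property...
console.log(counter.get());  // still 2: the real count is untouched
```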

Limiting visibility of variables is considered a good practice, because it rules out many buggy states of your program. Make sure to use this technique wherever possible.

Cross-posted from Google Apps Script Blog 

by: Jason Ganetsky, Software Engineer, Google Apps Script

Read more »

Integrating Google Docs with Salesforce.com using Apps Script

Editor's Note: Ferris Argyle is going to present Salesforce Workflow Automation with Google Spreadsheet and Apps Script at Cloudforce. Do not miss Ferris's talk. - Saurabh Gupta

As part of Google's Real Estate and Workplace Services (REWS) Green Team, the Healthy Materials program is charged with ensuring Google has the healthiest workplaces possible. We collect and review information for thousands of building materials to make sure that our offices are free of formaldehyde, heavy metals, PBDEs and other toxins that threaten human health and reduce our productivity.

A Case for using Google Docs and Salesforce.com

My team, as you might imagine, has a great deal of data to collect and manage. We recently implemented Salesforce.com to manage that data, as it can record attributes of an object in a dynamic way, is good at tracking correspondence activity and allows for robust reports on the data, among many other functions.

We needed Salesforce.com to integrate with our processes in Google Apps. We wanted to continue collecting data using a Google Docs form but needed it integrated with Salesforce.com because we:

  1. Liked the way the form looked and functioned
  2. Wanted to retain continuity for our users, including keeping the same URL
  3. Wanted a backup of submissions

And this is where Google Apps Script came to our rescue. We found that we could use Google Apps Script to create a new Case or Lead in Salesforce.com when a form is submitted through our Google Docs form. This allowed us to continue using our existing form and get our data directly and automatically into Salesforce.com.

Google Docs + Apps Script + Salesforce.com = Integrated Goodness!

Salesforce.com has two built-in options for capturing data online: Cases and Leads. Google Docs Forms can capture data for both of them. Set up your Case or Lead object with the desired fields in Salesforce.com. The next step is to generate the HTML for a form. You will use the IDs in the Salesforce.com-generated HTML when writing your Google Apps Script.


A) Getting the HTML in Salesforce.com:

1. Login to Salesforce.com and go to Your Name > Setup > Customize > Leads or Self-Service (for Cases) > Web-to-Lead or Web-to-Case

2. Make sure Web-to-Lead/Web-to-Case is enabled. Click on Edit (Leads) or Modify (Cases) and enable if it is not.

3. Click on the Create Web to Lead Form button (for Leads) or the Generate the HTML link (for Cases)

4. Select the fields you want to capture and click Generate. Save the HTML in a text file. You can leave Return URL blank


B) Setting up Google Apps Form/Spreadsheet:

Create your form and spreadsheet (or open up the one you already have and want to keep using). This is very easy to do. Go to your Docs and click on Create to open a new form. Use the form editor to add the desired fields to your form; they'll show up as column headings in the corresponding spreadsheet. When someone fills out your form, their answers will show up in the right columns under those headings.


C) Writing the Google Apps Script:

The script is set up to take the data in specified cells from the form/spreadsheet and send it into designated fields in your Salesforce.com instance (identified by the org id in the HTML generated above). For example, the form submitter's email is recorded through the form in one cell, and sent into the email field in either the Lead or Case object in Salesforce.com.

1. Create a new script (Tools > Script Manager > New).

2. Write the script below using the pertinent information from your Salesforce.com-generated code (shown further down).


function SendtoSalesforce() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  var row = sheet.getLastRow();
  var firstname = sheet.getRange(row, 2).getValue();
  var lastname = sheet.getRange(row, 3).getValue();
  var email = sheet.getRange(row, 4).getValue();
  var company = sheet.getRange(row, 5).getValue();
  var custom = sheet.getRange(row, 6).getValue();
  var resp = UrlFetchApp.fetch(
    "https://www.salesforce.com/servlet/servlet.WebToLead?encoding=UTF-8",
    {
      method: "post",
      payload: {
        "orgid": "00XXXXXXXX",
        "first_name": firstname,
        "last_name": lastname,
        "email": email,
        "company": company,
        "00YYYYYYYY": custom,
        "external": "1"
      }
    });
  Logger.log(resp.getContentText());
}

Define your variables by directing the script to the correct cell (row, column number). Then, in the payload section, match the field id from your Salesforce.com HTML to the variable you defined. For example, the email address of the submitter is defined as the variable email, can be found in the 4th column of the last row submitted, and maps to the field id email in Salesforce.com.


Note that any custom fields you've created will have an alphanumeric id.

3. Save your script and do a test run.


D) Wiring the Script to a Form Submission:

To send your data automatically into Salesforce.com, you need to set a trigger that will run the script every time a form is submitted. To do this, go to your script and click Resources > Current script's triggers.

1. Create a Trigger for your function so that it runs when a form is submitted.


2. Post the link to your form on your website, send it in an email, link to it on G+, etc. Get it out there!

That's it! Now when someone submits a form, the information will come into your spreadsheet and then immediately be sent into Salesforce.com. You can adjust your Salesforce.com settings to create tasks when the information comes in, send an auto-response to the person filling out the form, and set up rules for who is assigned as owner of the record. You'll also have the information backed up in your spreadsheet.

This has been a great solution for our team, and we hope others find it useful as well!


Beth Sturgeon  

Beth Sturgeon is a member of Google's Green Team in Mountain View, which makes sure that Google's offices are the healthiest, most sustainable workplaces around. Prior to Google, she had a past life as a wildlife researcher.

Read more »

Monday, March 9, 2015

The next Marketing Test Kitchen: celebrating customer success

Thanks to everyone who participated in the first Marketing Test Kitchen initiative: the “Add to Apps” button. Overall, it was a huge success. The number of vendors using “Add to Apps” buttons grew significantly, causing a large increase in installs driven by button traffic. Before kicking off the second Apps Ecosystem Marketing Test Kitchen initiative, we want to recognize the winners of the first one.

Congratulations to the 6 winners, who will get additional exposure on the featured and notable section of the Marketplace front page:
Outright, Producteev, Insync, Mavenlink, Zoho and Manymoon

Established vendors such as Manymoon and Zoho improved the performance of existing buttons, and newer folks like Outright and Producteev added buttons to capture new business. If you didn't get your button up for last week's contest, that doesn't mean you shouldn't do it now! Adding a button helps improve your overall performance in the Marketplace and will prepare you for future initiatives.

Now let’s take a look at the next Marketing Test Kitchen...

The Next Challenge:
Publish your most compelling customer success stories by Thursday, Dec 2nd on your own blog and share it with us at marketing-test-kitchen@google.com. We will feature a few of the top stories on the Google Enterprise Blog (see examples here and here) and also rotate the winning vendors into the featured and notable sections on the Marketplace front page. Note we will feature every submission in the Marketplace Success Stories blog, so just by submitting a story you will end up on the front page of the Marketplace.

It’s easy to participate: Find a compelling customer, tell their story, publish it on your blog, share it with us, and track your performance.

What makes a compelling customer?
It is important to find a customer that demonstrates the value of your integrated features with Google Apps. Make sure that your customer gives explicit approval for using their story. Here are some qualities of a compelling customer.
  • Highlights the value of your app: For example, their use of your app in conjunction with various other web apps, such as other Marketplace apps.
  • Hard data to support success: Numbers that justify strong gains are important, e.g. 50% productivity gains, 10% increase in revenue, 20% reduction in IT costs.
  • Passionate about Google Apps and the cloud: A genuinely passionate customer can explain the advantages of a cloud-based business and more easily help prospects understand and transition.
How can I make it easily consumable?
You can use the standard template from the developer site or find a more creative way to deliver it. You can create your own format that tells the story of the customer’s success. Here are some ideas to go beyond a typical blog post:
  • Be visual: Use tools such as Picnik and Aviary to tell your story with compelling visuals (or choose another creative tool).
  • Organize your presentation: You can use Google Presentations or SlideRocket to succinctly tell your story.
  • Use video: Shoot or animate a video of your customer telling their Apps Marketplace story.
  • Be creative: Combine the above ideas, write a story, or come up with something totally different.
To get a feel for different tones and stories, read some customer stories from various vendors on the Marketplace Success Stories blog. Also check out this example of a strong customer story that uses many of the above elements.



It’s easy to be a part of this new Marketing Test Kitchen. Just find a compelling customer, use a clever way to tell their story, publish it to your blog, and share it by email. If you need more time, email us with your ideas as well! Make sure to track the performance of your blog post (and all other marketing efforts) through Google Analytics; you can learn how to code links and track traffic on the developer site.

Come up with the next Marketing Test Kitchen: Submit your idea via Buzz or email. We’ll evaluate the ideas and use the best ones for future initiatives. If we choose your initiative, we’ll give you a special prize.

Posted by Harrison Shih, Associate Product Marketing Manager, Google Apps Marketplace

Want to weigh in on this topic? Discuss on Buzz
Read more »

Using OAuth 1.0 Long-Lived Tokens from the OAuth Playground with the Python Client Library

The OAuth Playground is a great tool for learning how the OAuth flow works. At the same time, it can be used to generate a "long-lived" access token that can be stored and used later by applications to access data through calls to APIs. These tokens can be used to build command-line tools or to run batch jobs.

In this example, I will be using this token to make calls to the Google Provisioning API using the Python client library for Google Data APIs, but the following method can be used for any of the Google Data APIs. This method requires that the token be pushed onto the token_store, which is a list of all the tokens generated while using the Python client libraries. In general, the library takes care of this, but in cases where it’s easier to request a token out of band, it can be a useful technique.

Step 1: Generate an Access token using the OAuth Playground.
Go through the following process on the OAuth Playground interface:

  • Choose the scope(s) of every API you want to use in your application (https://apps-apis.google.com/a/feeds/user/ for the Provisioning API). Here you can also add scopes that are not visible in the list.
  • Choose a signature method, which is used to sign requests with your consumer credentials (“HMAC-SHA1” is the most common).
  • Enter your consumer_key and consumer_secret in the respective text fields. The consumer_key identifies your domain and is unique to each domain.

After entering all the required details you need to press these buttons on the OAuth Playground in sequence:

  • Request token: This will call Google’s OAuth server to issue you a request token.
  • Authorize: This will redirect you to the authorization URL, where you can authorize or deny access. If you deny access at this point, you will not be able to generate the Access token; accepting will convert the Request token generated in the last step into an Authorized Request token.
  • Access token: Finally, this step will exchange the authorized Request token for an Access token.

After the last step, the text field captioned auth_token in the OAuth Playground holds the required Access token, and the field captioned access_token_secret holds the corresponding token secret for later use.
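Under the hood, each of these Playground steps signs its request with your consumer credentials using HMAC-SHA1. If you are curious what that signing involves, here is a minimal sketch using only the Python standard library; the URL and parameter values are illustrative placeholders, and this is not a full OAuth implementation (a real request also carries oauth_timestamp, oauth_token, and oauth_signature_method parameters):

```python
import base64
import hashlib
import hmac
import urllib.parse


def oauth_escape(value):
    # OAuth 1.0 percent-encoding: only unreserved characters stay literal.
    return urllib.parse.quote(str(value), safe="")


def signature_base_string(method, url, params):
    # Sort the encoded parameters by name, join them as key=value pairs
    # with '&', then concatenate method, URL, and normalized parameters.
    pairs = sorted((oauth_escape(k), oauth_escape(v)) for k, v in params.items())
    normalized = "&".join("%s=%s" % (k, v) for k, v in pairs)
    return "&".join([method.upper(), oauth_escape(url), oauth_escape(normalized)])


def hmac_sha1_signature(base_string, consumer_secret, token_secret=""):
    # The signing key is "consumer_secret&token_secret", each percent-encoded.
    key = "%s&%s" % (oauth_escape(consumer_secret), oauth_escape(token_secret))
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


base = signature_base_string(
    "GET",
    "https://apps-apis.google.com/a/feeds/user/2.0/example.com",
    {"oauth_consumer_key": "example.com", "oauth_nonce": "abc123"})
signature = hmac_sha1_signature(base, "my_consumer_secret")
```

The Playground (and the client library) performs this computation for you; the sketch just makes visible what the “HMAC-SHA1” choice in Step 1 means.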

Step 2: Use the above token when making calls to the API using a Python Client Library.

Here is an example in Python which uses the OAuth access token that was generated from OAuth Playground to retrieve data for a user.

import gdata.apps.service
import gdata.auth

CONSUMER_KEY = "CONSUMER_KEY"
CONSUMER_SECRET = "CONSUMER_SECRET"
SIG_METHOD = gdata.auth.OAuthSignatureMethod.HMAC_SHA1
TOKEN = "GENERATED_TOKEN_FROM_PLAYGROUND"
TOKEN_SECRET = "GENERATED_TOKEN_SECRET_FROM_PLAYGROUND"

DOMAIN = "your_domain"

client = gdata.apps.service.AppsService(source="app", domain=DOMAIN)
client.SetOAuthInputParameters(SIG_METHOD, CONSUMER_KEY, consumer_secret=CONSUMER_SECRET)
# Wrap the Playground credentials in a token object and attach the same
# input parameters before handing it to the client.
temp_token = gdata.auth.OAuthToken(key=TOKEN, secret=TOKEN_SECRET)
temp_token.oauth_input_params = client.GetOAuthInputParameters()
client.SetOAuthToken(temp_token)
# Make the API calls
user_info = client.RetrieveUser("username")

It is important to explicitly set the input parameters as shown above. Whenever you call SetOAuthToken, it creates a new token and pushes it onto the token_store, and that token becomes the current one. Even if you call SetOAuthToken and SetOAuthInputParameters back to back, the input parameters will not be applied to the token you just set.

Other Practices:

You can use the long-lived token to make command-line requests, for example using cURL. This can be useful for cross-checking bugs in the client library, testing new features, or reproducing issues. In most cases, though, developers should use the client libraries as they are designed, as in this example.
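With cURL, the long-lived token travels in an OAuth Authorization header. As a rough illustration (the parameter values below are placeholders, not real credentials), you could assemble that header string like this and pass it to curl with -H:

```python
import urllib.parse


def oauth_header(params):
    # OAuth 1.0 Authorization header: comma-separated key="value" pairs,
    # with each value percent-encoded.
    quoted = ('%s="%s"' % (k, urllib.parse.quote(str(v), safe=""))
              for k, v in sorted(params.items()))
    return "OAuth " + ", ".join(quoted)


header = oauth_header({
    "oauth_consumer_key": "example.com",
    "oauth_token": "LONG_LIVED_TOKEN",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_signature": "COMPUTED_SIGNATURE",
})
# On the command line, the assembled string is used like:
#   curl -H "Authorization: <header>" "<feed URL>"
```

Note that the signature must be recomputed for every request (it covers the URL, method, timestamp, and nonce), which is why batch jobs are usually easier to build on the client library than on raw cURL.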




Gunjan Sharma  Profile | Twitter

Gunjan is a Developer Programs Engineer working on Google Apps APIs. Before joining Google, he completed his degree in Computer Science & Engineering from Indian Institute of Technology, Roorkee.

Read more »

Tuesday, March 3, 2015

Change the Colors/Features of Items on Different Slides in PowerPoint!

We’ve all been there.  You finish making a document (or want to re-design an older document), then you change your mind about what font or color scheme you want to use.  It’s such a pain to switch each item individually... but no worries, it’s super easy to switch!



This one won the poll by a landslide!


Please note, this doesn’t work in all versions of PowerPoint.  Please let me know if it works/doesn’t work in your PowerPoint version!



You can download this tutorial as a PDF by clicking this picture!
https://drive.google.com/file/d/0B4WPihx63tTnNEk4TS01S0djd1E/edit?usp=sharing
Note: This tutorial is hosted on Google Drive.  To save it from there, just open the file and click File > Download to save onto your computer!


If you want to know how to change the formatting in the first place, try these tutorials:







For next week’s poll, I’ll be adding an option based on the request of one of my lovely coworkers... how to make a photo mosaic!  I love making these as gifts!  I can’t take credit for having created this one, but it was easier to take it from the internet than to text all of my friends to see if they’d mind me posting their photos online!

(source)


Read more »

Math Manipulative Labels!

The thing that I love about my math manipulatives is that the organization system is so simple that my Kindergarteners clean them up all on their own!  When the leprechaun visited us last St. Patrick’s Day, my little ones put back all of the manipulatives that he took out perfectly!

I just updated my Math Manipulative set to add a second option for printing (a different font and my cute black border which is a freebie).  I love how the update came out!


The set is $2 at TPT and it includes labels for a ton of manipulatives:
-Color Tiles
-Cuisenaire Rods
-Pattern Blocks
-Snap Cubes
-Color Cubes
-Small Color Cubes
-Attribute Blocks
-Links
-Bear Counters
-Dice
-Jumbo Beads
-Unifix Cubes
-Attribute Buttons
-Two-Color Counters
-Money
-Cards
-Base Ten Blocks
-Geosolids
-Geoboards
-Clocks
-Thermometers
-Fraction Circles
-Fraction Tiles
-Rulers
-Bug Counters
-Dominoes
-Color Chips
-Pegboards
-Small Peg Boards
-Balances
-Parquetry Blocks

If you want me to add any more manipulatives, just email me at aturntolearn@gmail.com!
Read more »

TechCrunch40 Conference: 40 Hottest New Learning Tools!

  • TechCrunch40 Conference
  • TechCrunch40 Companies & Learning Tools

WHAT?
The TechCrunch40 conference was hosted by TechCrunch and Jason Calacanis on September 17th-18th, 2007, at the Palace Hotel in San Francisco, California. The format was simple: forty of the hottest new startups from around the world demoed their products over a two-day period. The 40 companies had been selected from a pool of over 700 applicants from 26 different countries.

JUICE?
Click here to explore 40 of the hottest new learning tools around (out of those 700+ evaluated).

Yes, why not check out some of these learning tools now, because you will probably be using some of them in the near future anyway (head start!). Also, check out the 17 Expert Panelists (MC Hammer? Why weren’t real learning tools experts such as Stephen Downes, George Siemens, Jane Knight or Joseph Hart included in the panel, too?), and Keynote Speakers (including a video chat with Facebook founder and CEO Mark Zuckerberg). These links will introduce you to some of the big success stories and major venture capitalists around today. So, if you have a good tool or idea waiting to be discovered and need capital investment, perhaps you should contact some of these people for guidance and help (including MC Hammer!). Hmm, perhaps not a good idea! Why not?

Finally, I wonder which of these 40 hottest new learning tools will be bought by Google, MSN, Yahoo, Apple or Nokia in the coming year(s)? Got any tips? :)

Read more »

Monday, March 2, 2015

Microsoft Excel 2010 Tutorial

Microsoft Excel 2010 Tutorial
Microsoft Excel 2010 Complete Tutorial

Welcome to the 2nd tutorial in the Microsoft Office 2010 tutorials guide from http://comptutorials-tips.blogspot.com. This is an easy, step-by-step guide to Microsoft Excel 2010, which is generally used for maintaining accounts, data entry, and similar tasks. Follow this easy, step-by-step guide to learn Microsoft Excel 2010.
Microsoft Office 2010 online course.

To download this easy step by step tutorial guide of Microsoft Excel 2010 Click Here

Read more »

How to Open QR Codes

I feel like it’s been forever since I blogged... the beginning of the school year has been beyond crazy!  Well, I’m back to share with you a quick and easy tutorial: how to open up a QR code!  QR codes are all the rage today, so I hope this helps you feel more comfortable with starting to use them!



Here is the poll!


Now for the tutorial...


You can download this tutorial as a PDF by clicking this picture!
Note: This tutorial is hosted on Google Drive.  To save it from there, just open the file and click File > Download to save onto your computer!

Of course, this intro QR code tutorial means that I’ll be adding another QR code tutorial to the poll: how to make pretty QR codes (if you’re going to take the time to do it, you might as well make it look good!)
Read more »

Sunday, March 1, 2015

The Horizon Report, 2005 Edition: Just Read It!

Link to report (399 KB PDF. By NMC: The New Media Consortium)
"...The technologies chosen for the 2005 Horizon Report are framed within three adoption horizons that presume three different assumptions about when the targeted technologies will begin to see significant adoptions on university campuses...

Time-to-Adoption Horizon: One Year or Less
  • Extended Learning - On some campuses, traditional instruction is augmented with technology tools that are familiar to students and used by them in daily life. Extended learning courses can be conceptualized as hybrid courses with an extended set of communication tools and strategies. The classroom serves as a home base for exploration, and integrates online instruction, traditional instruction, and study groups, all supported by a variety of communication tools.
  • Ubiquitous Wireless - With new developments in wireless technology both in terms of transmission and of devices that can connect to wireless networks, connectivity is increasingly available and desired. Campuses and even communities are beginning to regard universal wireless access as a necessity for all.

Time-to-Adoption Horizon: Two to Three Years

  • Intelligent Searching - To support people's growing need to locate, organize, and retrieve information, sophisticated technologies for searching and finding are becoming available. These agents range from personal desktop search 'bots,' to custom tools that catalog and search collections at an individual campus, to specialized search interfaces like Google Scholar.
  • Educational Gaming - Taking a broad view of educational gaming, one finds that games are not new to education. Technology and gaming combine in interesting ways, not all of which are about immersive environments or virtual reality. What is evolving is the way technology is applied to gaming in education, with new combinations of concepts and games appearing on the horizon.

Time-to-Adoption Horizon: Four to Five Years

  • Social Networks & Knowledge Webs - Supplying people's need to connect with each other in meaningful ways, social networks and knowledge webs offer a means of facilitating teamwork and constructing knowledge. The underlying technologies fade into the background while collaboration and communication are paramount.
  • Context-Aware Computing/Augmented Reality - These related technologies deal with computers that can interact with people in richer ways. Context-aware computing uses environmental conditions to customize the user's experience or options. Augmented reality provides additional contextual information that appears as part of the user's world. Goals of both approaches are increased access and ease-of-use..."
Read more »

What We Wish We Would Win: April 16-22 GIVEAWAYS!


Welcome to the second edition of "What We Wish We Would Win!"  Be sure to check here weekly to find links to some amazing giveaways!


This post will be constantly updated throughout the week with any new giveaways I come across, so be sure to follow me so you are always aware of the newest giveaways!

Click the pictures below to bring you to each giveaway!

Nine Items!!!
from: Jillian’s Just Tinkerin’ Around!
 Contest Ends: Wednesday, April 18


b & d Confusion Uno Game
from: Me!
 Contest Ends: Wednesday, April 18


Six Items!!!
from Teaching, Learning, and Loving
 Contest Ends: Friday, April 20


Two DVDs - HeidiSongs & TeacherTipster
from: Run! Miss Nelson’s Got the Camera
 Contest Ends: Friday, April 20


A Secret Stories Kit
from: Run! Miss Nelson’s Got the Camera

Contest Ends: Saturday, April 21


$10 TPT Cash and Four Items!
from: Wild About Teaching

Contest Ends: Sunday, April 22


Seven Items from Kindergarten Teachers!
From: Can Do Kinders, Rowdy in Room 300, Ketchen’s Kindergarten, and Kindergarten Chronicles

Contest Ends: Sunday, April 22


Item of your choice
from: A-B Seymour


Contest Ends: Sunday, April 22


$100 TPT Credit
from: Little Minds at Work
Contest Ends: Wednesday, April 25


5 Items of your Choice!
from: Our Sweet Success
Contest Ends: When she hits 100 followers on TPT, Facebook, and her blog!


Subtraction Game (Everyone wins!)
from: Teachable Moments



If I missed your giveaway, leave a comment or email me at aturntolearn@gmail.com so I can add it!
Read more »