LaTeX: Source Code Display

Suppose you have source code that you want to showcase in your paper. To that end, I use the listings package with the following settings:

% loading packages, setting up colors and fonts
\usepackage{listings}
\usepackage{xcolor}
\renewcommand{\ttdefault}{pxtt}  % use pxfonts
\definecolor{backcolour}{rgb}{0.906, 0.937, 0.965}
% setting up the listing style
\lstset{
  backgroundcolor=\color{backcolour},
  morekeywords={__global__, __device__, float2}, % CUDA-specific keywords
}

% listing example
__device__ void pbdCollisionConstraint(
    int i,             // agent identifiers
    int j,
    float margin,      // minimum distance allowed
    float wi,          // typically wi = wj = 0.5
    float wj,
    float2* x,         // current agent positions
    float2* deltaX,    // positional correction buffer
    int* deltaXCtr)    // positional correction counter
{
    float f = distance(x[i], x[j]) - margin;
    float2 contact = make_float2(0.f, 0.f);
    // ...
}

Which gives you this:
(screenshot: the listing rendered with the pxtt typewriter font)

Without the pxfonts, you get something like this:
(screenshot: the listing rendered with the default typewriter font)

Which is a bit bland…

Compiled from here:
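Putting the pieces together, here is a minimal compilable sketch. The colors, keywords, and font setup come from the settings above; the language=C++, basicstyle, and keywordstyle options (and the dummy listing body) are my additions:

```latex
\documentclass{article}
\usepackage{listings}
\usepackage{xcolor}
\renewcommand{\ttdefault}{pxtt}  % use pxfonts' typewriter face
\definecolor{backcolour}{rgb}{0.906, 0.937, 0.965}
\lstset{
  language=C++,                          % base language; CUDA keywords added below
  backgroundcolor=\color{backcolour},
  basicstyle=\ttfamily\small,
  keywordstyle=\bfseries,
  morekeywords={__global__, __device__, float2}, % CUDA-specific keywords
}
\begin{document}
\begin{lstlisting}
__device__ void square(float2* x) { /* ... */ }
\end{lstlisting}
\end{document}
```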

LaTeX: Position different-scale figures on the same row

Happy Sunday!
I am working on a document with several figures that come in different scales and aspect ratios, yet I want to compare them to each other by placing them on the same row. LaTeX documents typically come in a regular single-column format or a 2-column format; I will go over the steps for both. Before anything else, resize the images to the same height if you can (this step is not a must), even if they have different resolutions. Go for the lowest common denominator that is still clearly visible on an A4 page, say 6cm. In the LaTeX code below, I set the images to take the same height while keeping their aspect ratio, following this example.

For the single-column case do:

\begin{figure}
  \centering
  \subcaptionbox{Modern \&\\ Contemporary}{\includegraphics[height=1.69cm,keepaspectratio]{figures/intromodern_contemporary.jpg}}
  \subcaptionbox{Cottage \& Country}{\includegraphics[height=1.69cm,keepaspectratio]{figures/introcottage_country.jpg}}
  \caption{Examples of major room styles.}
\end{figure}

And here are the results:

For the 2-column (full paper row) case do:

\caption{Mixed Classification}
\caption{Classification results: the leftmost 3 images are classified as modern, while the rightmost 3 are classified as traditional. The middle panel shows the images with the highest uncertainty, where the classifier's output probability is around $0.5$.}

And here are the results:


Note that, experimentally, I didn’t find a way to use the 2-column version for t
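As a sketch, the 2-column layout can be built with the starred figure* environment, which spans both columns of a two-column document. This assumes the subcaption package is loaded as in the single-column case, and the image file names here are placeholders:

```latex
\begin{figure*}
  \centering
  \subcaptionbox{Mixed Classification}{%
    \includegraphics[height=1.69cm,keepaspectratio]{figures/mixed.jpg}}%
  \hfill
  \subcaptionbox{Traditional}{%
    \includegraphics[height=1.69cm,keepaspectratio]{figures/traditional.jpg}}
  \caption{Classification results, laid out on one full-width row.}
\end{figure*}
```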


macOS OpenCL starter

OpenCL provides a C-like language for writing programs that run on a GPU. It is an alternative to CUDA. To get started on an Apple computer, here’s the code for a ‘hello world’:

#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

#define DATA_SIZE (1024)

const char *KernelSource =
  "__kernel void square(__global float* input, __global float* output, const unsigned int count) { \n" \
  "   int i = get_global_id(0);                                                                    \n" \
  "   if(i < count) { output[i] = input[i] * input[i]; }                                           \n" \
  "}                                                                                               \n";

int main(void) {
  int err;
  cl_device_id device_id;
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 1, &device_id, NULL);
  cl_context context = clCreateContext(0, 1, &device_id, NULL, NULL, &err);
  cl_command_queue commands = clCreateCommandQueue(context, device_id, 0, &err);
  cl_program program = clCreateProgramWithSource(context, 1, (const char **) &KernelSource, NULL, &err);
  clBuildProgram(program, 0, NULL, NULL, NULL, NULL);
  cl_kernel kernel = clCreateKernel(program, "square", &err);
  cl_mem input = clCreateBuffer(context,  CL_MEM_READ_ONLY,  sizeof(float) * DATA_SIZE, NULL, NULL);
  cl_mem output = clCreateBuffer(context, CL_MEM_WRITE_ONLY, sizeof(float) * DATA_SIZE, NULL, NULL);
  float data[DATA_SIZE];
  for (int i = 0; i < DATA_SIZE; i++) { data[i] = i; }
  err = clEnqueueWriteBuffer(commands, input, CL_TRUE, 0, sizeof(float) * DATA_SIZE, data, 0, NULL, NULL);
  clSetKernelArg(kernel, 0, sizeof(cl_mem), &input);
  clSetKernelArg(kernel, 1, sizeof(cl_mem), &output);
  unsigned int count = DATA_SIZE;
  clSetKernelArg(kernel, 2, sizeof(unsigned int), &count);
  size_t local;
  clGetKernelWorkGroupInfo(kernel, device_id, CL_KERNEL_WORK_GROUP_SIZE, sizeof(local), &local, NULL);
  size_t global = count;
  clEnqueueNDRangeKernel(commands, kernel, 1, NULL, &global, &local, 0, NULL, NULL);
  float results[DATA_SIZE];
  clEnqueueReadBuffer(commands, output, CL_TRUE, 0, sizeof(float) * count, results, 0, NULL, NULL);
  unsigned int correct = 0;
  for (int i = 0; i < (int)count; i++) {
      if (results[i] == data[i] * data[i]) { correct++; }
  }
  printf("Computed '%d/%d' correct values!\n", correct, count);
  return 0;
}

Save this code to a file named ‘hello.c’ and run the line below. If the output matches, all is good!

$ clang hello.c -o hello -framework OpenCL && ./hello
Computed '1024/1024' correct values!

Great, you’re keeping up! The next code to run will check your OpenCL devices and platforms:

#include <stdio.h>
#include <stdlib.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main() {

    int i, j;
    char* value;
    size_t valueSize;
    cl_uint platformCount;
    cl_platform_id* platforms;
    cl_uint deviceCount;
    cl_device_id* devices;
    cl_uint maxComputeUnits;

    // get all platforms
    clGetPlatformIDs(0, NULL, &platformCount);
    platforms = (cl_platform_id*) malloc(sizeof(cl_platform_id) * platformCount);
    clGetPlatformIDs(platformCount, platforms, NULL);

    for (i = 0; i < platformCount; i++) {

        // get all devices
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 0, NULL, &deviceCount);
        devices = (cl_device_id*) malloc(sizeof(cl_device_id) * deviceCount);
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, deviceCount, devices, NULL);

        // for each device print critical attributes
        for (j = 0; j < deviceCount; j++) {

            // print device name
            clGetDeviceInfo(devices[j], CL_DEVICE_NAME, 0, NULL, &valueSize);
            value = (char*) malloc(valueSize);
            clGetDeviceInfo(devices[j], CL_DEVICE_NAME, valueSize, value, NULL);
            printf("%d. Device: %s\n", j+1, value);
            free(value);

            // print hardware device version
            clGetDeviceInfo(devices[j], CL_DEVICE_VERSION, 0, NULL, &valueSize);
            value = (char*) malloc(valueSize);
            clGetDeviceInfo(devices[j], CL_DEVICE_VERSION, valueSize, value, NULL);
            printf(" %d.%d Hardware version: %s\n", j+1, 1, value);
            free(value);

            // print software driver version
            clGetDeviceInfo(devices[j], CL_DRIVER_VERSION, 0, NULL, &valueSize);
            value = (char*) malloc(valueSize);
            clGetDeviceInfo(devices[j], CL_DRIVER_VERSION, valueSize, value, NULL);
            printf(" %d.%d Software version: %s\n", j+1, 2, value);
            free(value);

            // print c version supported by compiler for device
            clGetDeviceInfo(devices[j], CL_DEVICE_OPENCL_C_VERSION, 0, NULL, &valueSize);
            value = (char*) malloc(valueSize);
            clGetDeviceInfo(devices[j], CL_DEVICE_OPENCL_C_VERSION, valueSize, value, NULL);
            printf(" %d.%d OpenCL C version: %s\n", j+1, 3, value);
            free(value);

            // print parallel compute units
            clGetDeviceInfo(devices[j], CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof(maxComputeUnits), &maxComputeUnits, NULL);
            printf(" %d.%d Parallel compute units: %d\n", j+1, 4, maxComputeUnits);
        }

        free(devices);
    }

    free(platforms);
    return 0;
}

Save this code to a file named ‘devices.c’ and run the line below. It will print details for all the available OpenCL devices.

$ clang devices.c -o devices -framework OpenCL && ./devices

#include <stdio.h>
#include <stdlib.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main() {

    int i, j;
    char* info;
    size_t infoSize;
    cl_uint platformCount;
    cl_platform_id *platforms;
    const char* attributeNames[5] = { "Name", "Vendor",
        "Version", "Profile", "Extensions" };
    const cl_platform_info attributeTypes[5] = { CL_PLATFORM_NAME, CL_PLATFORM_VENDOR,
        CL_PLATFORM_VERSION, CL_PLATFORM_PROFILE, CL_PLATFORM_EXTENSIONS };
    const int attributeCount = sizeof(attributeNames) / sizeof(char*);

    // get platform count
    clGetPlatformIDs(0, NULL, &platformCount);

    // get all platforms
    platforms = (cl_platform_id*) malloc(sizeof(cl_platform_id) * platformCount);
    clGetPlatformIDs(platformCount, platforms, NULL);

    // for each platform print all attributes
    for (i = 0; i < platformCount; i++) {

        printf("\n %d. Platform \n", i+1);

        for (j = 0; j < attributeCount; j++) {

            // get platform attribute value size
            clGetPlatformInfo(platforms[i], attributeTypes[j], 0, NULL, &infoSize);
            info = (char*) malloc(infoSize);

            // get platform attribute value
            clGetPlatformInfo(platforms[i], attributeTypes[j], infoSize, info, NULL);

            printf("  %d.%d %-11s: %s\n", i+1, j+1, attributeNames[j], info);
            free(info);
        }
    }

    free(platforms);
    return 0;
}


Save this code to a file named ‘platforms.c’, and run:

$ clang platforms.c -o platforms -framework OpenCL && ./platforms

Curated from link1, and link2.

On AI Compensation in Industry


  • In the entire world, fewer than 22K people have the skills necessary to tackle serious artificial intelligence research, according to Element AI, an independent lab in Montreal (as of 2019).
  • Most of all, there is a shortage of talent, and the big companies are trying to land as much of it as they can.
  • Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them.
  • Well-known names in the A.I. field have received compensation in salary and shares in a company’s stock that total single- or double-digit millions over a four- or five-year period. And at some point they renew or negotiate a new contract, much like a professional athlete.
  • OpenAI:
    • paid its top researcher, Ilya Sutskever, more than $1.9 million in 2016.
    • It paid another leading researcher, Ian Goodfellow, more than $800,000 — even though he was not hired until March of that year. Both were recruited from Google.
    • A third big name in the field, the roboticist Pieter Abbeel, made $425,000, though he did not join until June 2016, after taking a leave from his job as a professor at the University of California, Berkeley. Those figures all include signing bonuses.
    • OpenAI spent about $11 million in its first year, with more than $7 million going to salaries and other employee benefits. It employed 52 people in 2016, which works out to about $134K per employee in salaries and benefits, on average.
    • Greg Brockman, who leads the lab alongside Mr. Sutskever, did not receive such high salaries during the lab’s first year.

In 2016, according to the tax forms, Mr. Brockman, who had served as chief technology officer at the financial technology start-up Stripe, made $175,000. As one of the founders of the organization, however, he most likely took a salary below market value.

  • Two other researchers with more experience in the field — though still very young — made between $275,000 and $300,000 in salary alone in 2016, according to the tax forms.
  • At DeepMind, a London A.I. lab now owned by Google, costs for 400 employees totaled $138 million in 2016, according to the company’s annual financial filings in Britain. That translates to $345,000 per employee, including researchers and other staff.
  • At the top end are executives with experience managing A.I. projects. In a court filing this year, Google revealed that Anthony Levandowski, one of the leaders of its self-driving-car division and a longtime employee who started with Google in 2007, took home over $120 million in incentives before joining Uber last year through the acquisition of a start-up he had co-founded. That acquisition drew the two companies into a court fight over intellectual property.

Curated from NY Times 1, NY Times 2