Spotlight on AI Visual Artist Tarqeeb aka Ashish Jose

AI-generated artwork is pretty much anywhere and everywhere you turn these days. If you were among those monitoring the different leaps that artificial imagination and artificial intelligence took over the years, then you’d also know about DeepDream, a confounding program from Google that learned from images fed into it and turned them into tripped-out, spiraling, multicolored works of art.

Ashish Jose was likely following such developments in the visual art and graphic design realm, considering he was already into photo editing and photo manipulation through apps like Mextures back in 2012. A DJ-producer under the moniker Tarqeeb, and a seasoned hand at event and artist curation and management over the years with festivals like NH7 Weekender and the conference All About Music, Ashish shows a whole different side of himself on the Tarqeeb Instagram page.

From skateboarding dadis to pop culture twists on historical figures and moments, Ashish’s works have always proved there’s more than meets the eye. He’s imagining an alternate history and we’re here for it. Read his interview with Anurag Tagat below:

What is it like managing these different artistic pursuits for you in music and visual art? Was it always part of the agenda to want to do both as much as possible?

The freedom to pursue my artistic passions without the weight of expectation and pressure gives me immense joy and gratitude. It helps me manage my creative pursuits effortlessly and has taught me how to balance the two.

How did you start making visual art? Can you give us a brief timeline of how you ended up moving toward this particular kind of AI-generated art?

In 2012, I discovered an app called Mextures that ignited my passion for photo editing and photo manipulation. For the next two years, I posted artwork on Instagram almost every day. I picked up Photoshop along the way but made a majority of my art on phone editing apps like Union, Fragments, Picsart and Lightroom.

By 2016, I found myself balancing my role as a full-time musician while co-running the design agency Pythagoras alongside my friend Nikhil Kaul, who also pursued music and graphic design as a career. In 2017, I relocated to Bombay with my wife and began working for Only Much Louder as a programmer shortly after. During this time, both my music and visual art projects took a backseat, a conscious choice I had made.

Then, in October 2022, I started beta testing Midjourney, a generative AI app. Over the next six months, I dedicated an hour or two every day to understanding how to communicate effectively with the AI and get closer to my desired results. It allowed me to quickly express an idea and move on.

Your work has this balance of storytelling and visual appeal. How long did it take you to get to a space where you were feeling fully comfortable with using things like Midjourney?

Thank you! It took me around 6-8 months of using Midjourney daily to get to a point where I was happy with the results. The software also evolved during this time and became highly intuitive and efficient.

What kind of tips would you have for anyone getting into this sort of space, especially considering everyone wants to give it a go?

The Midjourney Discord server has chat rooms where users post their work, share prompts and renders. This is a great place to learn how to communicate with the software. Having a clear idea about your concept and style of execution also really helps.

What’s been your favorite tool or fun element of creating AI art that keeps you creatively stimulated?

I love that apps like Midjourney help me visualise an idea instantly.

I’m guessing you’ve mostly done this for yourself, and for your page, but I’m sure you’ve had some offers to monetize your art? How have you approached that so far?

Initially, I felt conflicted about monetising AI art but over time I’ve become open to the idea of it. I’ve been approached for some interesting projects but most of them have crazy timelines. In my experience, it’s also tougher to work on brand briefs with AI as most of these projects need very specific things. It requires a hybrid model of working where you mix generative art and traditional editing tools to achieve the desired outcome.
