consistent or non-consistent memory as it sees fit. By using this API,
you are guaranteeing to the platform that you have all the correct and
necessary sync points for this memory in the driver.
+
+DMA_ATTR_NO_KERNEL_MAPPING
+--------------------------
+
+DMA_ATTR_NO_KERNEL_MAPPING lets the platform avoid creating a kernel
+virtual mapping for the allocated buffer. On some architectures creating
+such a mapping is a non-trivial task and consumes very limited resources
+(like kernel virtual address space or dma consistent address space).
+Buffers allocated with this attribute can only be passed to user space
+by calling dma_mmap_attrs(). By using this API, you are guaranteeing
+that you won't dereference the pointer returned by dma_alloc_attrs(). You
+can treat it as a cookie that must be passed to dma_mmap_attrs() and
+dma_free_attrs(). Make sure that both of these also get this attribute
+set on each call.
+
+Since it is optional for platforms to implement
+DMA_ATTR_NO_KERNEL_MAPPING, those that do not will simply ignore the
+attribute and exhibit default behavior.
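+
+A minimal sketch of the expected usage (assuming a driver with a struct
+device pointer 'dev' and an mmap file operation; names and sizes are
+illustrative):
+
+	DEFINE_DMA_ATTRS(attrs);
+	dma_addr_t dma_handle;
+	void *cookie;
+	int ret;
+
+	dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);
+
+	/* 'cookie' must not be dereferenced; it is only a handle */
+	cookie = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL, &attrs);
+
+	/* in the driver's mmap handler, with the same attrs */
+	ret = dma_mmap_attrs(dev, vma, cookie, dma_handle, size, &attrs);
+
+	/* on release, again with the same attrs */
+	dma_free_attrs(dev, size, cookie, dma_handle, &attrs);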
+
+DMA_ATTR_SKIP_CPU_SYNC
+----------------------
+
+By default the dma_map_{single,page,sg} family of functions transfers a
+given buffer from the CPU domain to the device domain. Some advanced use
+cases might require sharing a buffer between more than one device. This
+requires having a mapping created separately for each device and is usually
+performed by calling the dma_map_{single,page,sg} function more than once
+for the given buffer, with a device pointer to each device taking part in
+the buffer sharing. The first call transfers the buffer from the 'CPU'
+domain to the 'device' domain, which synchronizes the CPU caches for the
+given region (usually it means that the cache has been flushed or
+invalidated depending on the dma direction). However, subsequent calls to
+dma_map_{single,page,sg}() for other devices will perform exactly the
+same synchronization operation on the CPU cache. CPU cache synchronization
+might be a time consuming operation, especially if the buffers are
+large, so it is highly recommended to avoid it if possible.
+DMA_ATTR_SKIP_CPU_SYNC allows platform code to skip synchronization of
+the CPU cache for the given buffer assuming that it has already been
+transferred to the 'device' domain. This attribute can also be used for
+the dma_unmap_{single,page,sg} family of functions to force the buffer to
+stay in the device domain after releasing a mapping for it. Use this
+attribute with care!
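+
+A minimal sketch of sharing one buffer between two devices (assuming two
+struct device pointers 'dev1' and 'dev2'; names are illustrative):
+
+	DEFINE_DMA_ATTRS(attrs);
+	dma_addr_t dma1, dma2;
+
+	dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);
+
+	/* the first mapping performs the CPU cache synchronization */
+	dma1 = dma_map_single(dev1, buf, size, DMA_TO_DEVICE);
+
+	/* the buffer is already in the 'device' domain; skip the sync */
+	dma2 = dma_map_single_attrs(dev2, buf, size, DMA_TO_DEVICE, &attrs);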
<listitem>
<para>Selection API. <xref linkend="selection-api" /></para>
</listitem>
+ <listitem>
+ <para>Importing DMABUF file descriptors as a new IO method described
+ in <xref linkend="dmabuf" />.</para>
+ </listitem>
+ <listitem>
+ <para>Exporting DMABUF files using &VIDIOC-EXPBUF; ioctl.</para>
+ </listitem>
</itemizedlist>
</section>
</footnote></para>
</section>
+ <section id="dmabuf">
+ <title>Streaming I/O (DMA buffer importing)</title>
+
+ <note>
+ <title>Experimental</title>
+ <para>This is an <link linkend="experimental"> experimental </link>
+ interface and may change in the future.</para>
+ </note>
+
+<para>The DMABUF framework provides a generic means for sharing buffers between
+multiple devices. Device drivers that support DMABUF can export a DMA buffer
+to userspace as a file descriptor (known as the exporter role), import a DMA
+buffer from userspace using a file descriptor previously exported for a
+different or the same device (known as the importer role), or both. This
+section describes the DMABUF importer role API in V4L2.</para>
+
+  <para>Refer to <link linkend="vidioc-expbuf"> DMABUF exporting </link> for
+details about exporting V4L2 buffers as DMABUF file descriptors.</para>
+
+<para>Input and output devices support the streaming I/O method when the
+<constant>V4L2_CAP_STREAMING</constant> flag in the
+<structfield>capabilities</structfield> field of &v4l2-capability; returned by
+the &VIDIOC-QUERYCAP; ioctl is set. Whether importing DMA buffers through
+DMABUF file descriptors is supported is determined by calling the
+&VIDIOC-REQBUFS; ioctl with the memory type set to
+<constant>V4L2_MEMORY_DMABUF</constant>.</para>
+
+  <para>This I/O method is dedicated to sharing DMA buffers between V4L and
+other APIs. Buffers (planes) are allocated by a driver on behalf of the
+application, and exported to the application as file descriptors using an API
+specific to the allocator driver. Only those file descriptors are exchanged;
+these files and meta-information are passed in &v4l2-buffer; (or in
+&v4l2-plane; in the multi-planar API case). The driver must be switched into
+DMABUF I/O mode by calling the &VIDIOC-REQBUFS; ioctl with the desired buffer
+type. No buffers (planes) are allocated beforehand, consequently they are not
+indexed and cannot be queried like mapped buffers with the
+<constant>VIDIOC_QUERYBUF</constant> ioctl.</para>
+
+ <example>
+ <title>Initiating streaming I/O with DMABUF file descriptors</title>
+
+ <programlisting>
+&v4l2-requestbuffers; reqbuf;
+
+memset (&reqbuf, 0, sizeof (reqbuf));
+reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+reqbuf.memory = V4L2_MEMORY_DMABUF;
+
+if (ioctl (fd, &VIDIOC-REQBUFS;, &reqbuf) == -1) {
+ if (errno == EINVAL)
+ printf ("Video capturing or DMABUF streaming is not supported\n");
+ else
+ perror ("VIDIOC_REQBUFS");
+
+ exit (EXIT_FAILURE);
+}
+ </programlisting>
+ </example>
+
+  <para>The buffer (plane) file descriptor is passed on the fly with the
+&VIDIOC-QBUF; ioctl. In case of multi-planar buffers, every plane can be
+associated with a different DMABUF descriptor. Although buffers are commonly
+cycled, applications can pass a different DMABUF descriptor at each
+<constant>VIDIOC_QBUF</constant> call.</para>
+
+ <example>
+ <title>Queueing DMABUF using single plane API</title>
+
+ <programlisting>
+int buffer_queue(int v4lfd, int index, int dmafd)
+{
+ &v4l2-buffer; buf;
+
+ memset(&buf, 0, sizeof buf);
+ buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ buf.memory = V4L2_MEMORY_DMABUF;
+ buf.index = index;
+ buf.m.fd = dmafd;
+
+ if (ioctl (v4lfd, &VIDIOC-QBUF;, &buf) == -1) {
+ perror ("VIDIOC_QBUF");
+ return -1;
+ }
+
+ return 0;
+}
+ </programlisting>
+ </example>
+
+ <example>
+ <title>Queueing DMABUF using multi plane API</title>
+
+ <programlisting>
+int buffer_queue_mp(int v4lfd, int index, int dmafd[], int n_planes)
+{
+ &v4l2-buffer; buf;
+ &v4l2-plane; planes[VIDEO_MAX_PLANES];
+ int i;
+
+ memset(&buf, 0, sizeof buf);
+ buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
+ buf.memory = V4L2_MEMORY_DMABUF;
+ buf.index = index;
+ buf.m.planes = planes;
+ buf.length = n_planes;
+
+ memset(&planes, 0, sizeof planes);
+
+ for (i = 0; i < n_planes; ++i)
+ buf.m.planes[i].m.fd = dmafd[i];
+
+ if (ioctl (v4lfd, &VIDIOC-QBUF;, &buf) == -1) {
+ perror ("VIDIOC_QBUF");
+ return -1;
+ }
+
+ return 0;
+}
+ </programlisting>
+ </example>
+
+  <para>Filled or displayed buffers are dequeued with the
+&VIDIOC-DQBUF; ioctl. The driver can unlock the buffer at any
+time between the completion of the DMA and this ioctl. The memory is
+also unlocked when &VIDIOC-STREAMOFF; or &VIDIOC-REQBUFS; is called, or
+when the device is closed.</para>
+
+  <para>For capturing applications it is customary to enqueue a
+number of empty buffers, to start capturing and enter the read loop.
+Here the application waits until a filled buffer can be dequeued, and
+re-enqueues the buffer when the data is no longer needed. Output
+applications fill and enqueue buffers; when enough buffers are stacked
+up, output is started. In the write loop, when the application
+runs out of free buffers it must wait until an empty buffer can be
+dequeued and reused. Two methods exist to suspend execution of the
+application until one or more buffers can be dequeued. By default
+<constant>VIDIOC_DQBUF</constant> blocks when no buffer is in the
+outgoing queue. When the <constant>O_NONBLOCK</constant> flag was
+given to the &func-open; function, <constant>VIDIOC_DQBUF</constant>
+returns immediately with an &EAGAIN; when no buffer is available. The
+&func-select; or &func-poll; functions are always available.</para>
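+
+  <para>A minimal sketch of such a dequeue step, using
+<function>select()</function> to wait for a filled buffer (error handling
+shortened; names are illustrative):</para>
+
+  <example>
+    <title>Dequeueing a filled DMABUF buffer</title>
+
+    <programlisting>
+int buffer_dequeue(int v4lfd)
+{
+	&v4l2-buffer; buf;
+	fd_set fds;
+
+	FD_ZERO(&fds);
+	FD_SET(v4lfd, &fds);
+
+	/* suspend until a filled buffer can be dequeued */
+	if (select(v4lfd + 1, &fds, NULL, NULL, NULL) == -1) {
+		perror ("select");
+		return -1;
+	}
+
+	memset(&buf, 0, sizeof buf);
+	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+	buf.memory = V4L2_MEMORY_DMABUF;
+
+	if (ioctl (v4lfd, &VIDIOC-DQBUF;, &buf) == -1) {
+		perror ("VIDIOC_DQBUF");
+		return -1;
+	}
+
+	/* buf.index identifies the dequeued buffer */
+	return buf.index;
+}
+    </programlisting>
+  </example>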
+
+ <para>To start and stop capturing or output applications call the
+&VIDIOC-STREAMON; and &VIDIOC-STREAMOFF; ioctls. Note that
+<constant>VIDIOC_STREAMOFF</constant> removes all buffers from both queues and
+unlocks all buffers as a side effect. Since there is no notion of doing
+anything "now" on a multitasking system, if an application needs to synchronize
+with another event it should examine the &v4l2-buffer;
+<structfield>timestamp</structfield> of captured buffers, or set the field
+before enqueuing buffers for output.</para>
+
+  <para>Drivers implementing DMABUF importing I/O must support the
+<constant>VIDIOC_REQBUFS</constant>, <constant>VIDIOC_QBUF</constant>,
+<constant>VIDIOC_DQBUF</constant>, <constant>VIDIOC_STREAMON</constant> and
+<constant>VIDIOC_STREAMOFF</constant> ioctls, and the
+<function>select()</function> and <function>poll()</function>
+functions.</para>
+
+ </section>
+
<section id="async">
<title>Asynchronous I/O</title>
in the <structfield>length</structfield> field of this
<structname>v4l2_buffer</structname> structure.</entry>
</row>
+ <row>
+ <entry></entry>
+ <entry>int</entry>
+ <entry><structfield>fd</structfield></entry>
+ <entry>For the single-plane API and when
+<structfield>memory</structfield> is <constant>V4L2_MEMORY_DMABUF</constant> this
+is the file descriptor associated with a DMABUF buffer.</entry>
+ </row>
<row>
<entry>__u32</entry>
<entry><structfield>length</structfield></entry>
pointer to the memory allocated for this plane by an application.
</entry>
</row>
+ <row>
+ <entry></entry>
+ <entry>int</entry>
+ <entry><structfield>fd</structfield></entry>
+ <entry>When the memory type in the containing &v4l2-buffer; is
+ <constant>V4L2_MEMORY_DMABUF</constant>, this is a file
+ descriptor associated with a DMABUF buffer, similar to the
+ <structfield>fd</structfield> field in &v4l2-buffer;.</entry>
+ </row>
<row>
<entry>__u32</entry>
<entry><structfield>data_offset</structfield></entry>
<entry>3</entry>
<entry>[to do]</entry>
</row>
+ <row>
+ <entry><constant>V4L2_MEMORY_DMABUF</constant></entry>
+ <entry>4</entry>
+ <entry>The buffer is used for <link linkend="dmabuf">DMA shared
+buffer</link> I/O.</entry>
+ </row>
</tbody>
</tgroup>
</table>
&sub-log-status;
&sub-overlay;
&sub-qbuf;
+ &sub-expbuf;
&sub-querybuf;
&sub-querycap;
&sub-queryctrl;
<entry>&v4l2-memory;</entry>
<entry><structfield>memory</structfield></entry>
<entry>Applications set this field to
-<constant>V4L2_MEMORY_MMAP</constant> or
-<constant>V4L2_MEMORY_USERPTR</constant>.</entry>
+<constant>V4L2_MEMORY_MMAP</constant>,
+<constant>V4L2_MEMORY_DMABUF</constant> or
+<constant>V4L2_MEMORY_USERPTR</constant>. See <xref linkend="v4l2-memory"
+/></entry>
</row>
<row>
<entry>&v4l2-format;</entry>
--- /dev/null
+<refentry id="vidioc-expbuf">
+
+ <refmeta>
+ <refentrytitle>ioctl VIDIOC_EXPBUF</refentrytitle>
+ &manvol;
+ </refmeta>
+
+ <refnamediv>
+ <refname>VIDIOC_EXPBUF</refname>
+ <refpurpose>Export a buffer as a DMABUF file descriptor.</refpurpose>
+ </refnamediv>
+
+ <refsynopsisdiv>
+ <funcsynopsis>
+ <funcprototype>
+ <funcdef>int <function>ioctl</function></funcdef>
+ <paramdef>int <parameter>fd</parameter></paramdef>
+ <paramdef>int <parameter>request</parameter></paramdef>
+ <paramdef>struct v4l2_exportbuffer *<parameter>argp</parameter></paramdef>
+ </funcprototype>
+ </funcsynopsis>
+ </refsynopsisdiv>
+
+ <refsect1>
+ <title>Arguments</title>
+
+ <variablelist>
+ <varlistentry>
+ <term><parameter>fd</parameter></term>
+ <listitem>
+ <para>&fd;</para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term><parameter>request</parameter></term>
+ <listitem>
+ <para>VIDIOC_EXPBUF</para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term><parameter>argp</parameter></term>
+ <listitem>
+ <para></para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </refsect1>
+
+ <refsect1>
+ <title>Description</title>
+
+ <note>
+ <title>Experimental</title>
+ <para>This is an <link linkend="experimental"> experimental </link>
+ interface and may change in the future.</para>
+ </note>
+
+<para>This ioctl is an extension to the <link linkend="mmap">memory
+mapping</link> I/O method therefore it is available only for
+<constant>V4L2_MEMORY_MMAP</constant> buffers. It can be used to export a
+buffer as DMABUF file at any time after buffers have been allocated with the
+&VIDIOC-REQBUFS; ioctl.</para>
+
+<para>Prior to exporting, an application calls <link
+linkend="vidioc-querybuf">VIDIOC_QUERYBUF</link> to obtain memory offsets. When
+using the <link linkend="planar-apis">multi-planar API</link> every plane has
+its own offset.</para>
+
+<para>To export a buffer, the application fills &v4l2-exportbuffer;. The
+<structfield> mem_offset </structfield> field is set to the offset obtained
+from <constant> VIDIOC_QUERYBUF </constant>. Additional flags may be passed in
+the <structfield> flags </structfield> field. Refer to the manual of the open()
+syscall for details. Currently only O_CLOEXEC is guaranteed to be supported.
+All other fields must be set to zero. In case of the multi-planar API, every
+plane is exported separately using multiple <constant> VIDIOC_EXPBUF
+</constant> calls.</para>
+
+<para> After calling <constant>VIDIOC_EXPBUF</constant> the <structfield> fd
+</structfield> field will be set by the driver. This is a DMABUF file
+descriptor. The application may pass it to other APIs. Refer to <link
+linkend="dmabuf">DMABUF importing</link> for details about importing DMABUF
+files into V4L2 nodes. A developer is encouraged to close a DMABUF file when it
+is no longer used.</para>
+
+ </refsect1>
+ <refsect1>
+ <section>
+ <title>Examples</title>
+
+ <example>
+ <title>Exporting a buffer.</title>
+ <programlisting>
+int buffer_export(int v4lfd, &v4l2-buf-type; bt, int index, int *dmafd)
+{
+ &v4l2-buffer; buf;
+ &v4l2-exportbuffer; expbuf;
+
+ memset(&buf, 0, sizeof buf);
+ buf.type = bt;
+ buf.memory = V4L2_MEMORY_MMAP;
+ buf.index = index;
+
+ if (ioctl (v4lfd, &VIDIOC-QUERYBUF;, &buf) == -1) {
+ perror ("VIDIOC_QUERYBUF");
+ return -1;
+ }
+
+ memset(&expbuf, 0, sizeof expbuf);
+ expbuf.mem_offset = buf.m.offset;
+ if (ioctl (v4lfd, &VIDIOC-EXPBUF;, &expbuf) == -1) {
+ perror ("VIDIOC_EXPBUF");
+ return -1;
+ }
+
+ *dmafd = expbuf.fd;
+
+ return 0;
+}
+ </programlisting>
+ </example>
+
+ <example>
+ <title>Exporting a buffer using multi plane API.</title>
+ <programlisting>
+int buffer_export_mp(int v4lfd, &v4l2-buf-type; bt, int index,
+ int dmafd[], int n_planes)
+{
+ &v4l2-buffer; buf;
+ &v4l2-plane; planes[VIDEO_MAX_PLANES];
+ int i;
+
+ memset(&buf, 0, sizeof buf);
+ buf.type = bt;
+ buf.memory = V4L2_MEMORY_MMAP;
+ buf.index = index;
+ buf.m.planes = planes;
+ buf.length = n_planes;
+ memset(&planes, 0, sizeof planes);
+
+ if (ioctl (v4lfd, &VIDIOC-QUERYBUF;, &buf) == -1) {
+ perror ("VIDIOC_QUERYBUF");
+ return -1;
+ }
+
+ for (i = 0; i < n_planes; ++i) {
+ &v4l2-exportbuffer; expbuf;
+
+ memset(&expbuf, 0, sizeof expbuf);
+		expbuf.mem_offset = planes[i].m.offset;
+ if (ioctl (v4lfd, &VIDIOC-EXPBUF;, &expbuf) == -1) {
+ perror ("VIDIOC_EXPBUF");
+ while (i)
+ close(dmafd[--i]);
+ return -1;
+ }
+ dmafd[i] = expbuf.fd;
+ }
+
+ return 0;
+}
+ </programlisting>
+ </example>
+ </section>
+ </refsect1>
+
+ <refsect1>
+ <table pgwide="1" frame="none" id="v4l2-exportbuffer">
+ <title>struct <structname>v4l2_exportbuffer</structname></title>
+ <tgroup cols="3">
+ &cs-str;
+ <tbody valign="top">
+ <row>
+ <entry>__u32</entry>
+ <entry><structfield>fd</structfield></entry>
+ <entry>The DMABUF file descriptor associated with a buffer. Set by
+ a driver.</entry>
+ </row>
+ <row>
+ <entry>__u32</entry>
+ <entry><structfield>reserved0</structfield></entry>
+ <entry>Reserved field for future use. Must be set to zero.</entry>
+ </row>
+ <row>
+ <entry>__u32</entry>
+ <entry><structfield>mem_offset</structfield></entry>
+ <entry>Buffer memory offset as returned by <constant>
+VIDIOC_QUERYBUF </constant> in &v4l2-buffer;<structfield> ::m.offset
+</structfield> (for single-plane formats) or &v4l2-plane;<structfield>
+::m.offset </structfield> (for multi-planar formats)</entry>
+ </row>
+ <row>
+ <entry>__u32</entry>
+ <entry><structfield>flags</structfield></entry>
+ <entry>Flags for newly created file, currently only <constant>
+O_CLOEXEC </constant> is supported, refer to manual of open syscall for more
+details.</entry>
+ </row>
+ <row>
+ <entry>__u32</entry>
+ <entry><structfield>reserved[12]</structfield></entry>
+ <entry>Reserved field for future use. Must be set to zero.</entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+
+ </refsect1>
+
+ <refsect1>
+ &return-value;
+ <variablelist>
+ <varlistentry>
+ <term><errorcode>EINVAL</errorcode></term>
+ <listitem>
+	  <para>A queue is not in MMAP mode, DMABUF exporting is not
+supported, or the <structfield> flags </structfield> or <structfield>
+mem_offset </structfield> fields are invalid.</para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </refsect1>
+
+</refentry>
dequeued, until the &VIDIOC-STREAMOFF; or &VIDIOC-REQBUFS; ioctl is
called, or until the device is closed.</para>
+  <para>To enqueue a <link linkend="dmabuf">DMABUF</link> buffer applications
+set the <structfield>memory</structfield> field to
+<constant>V4L2_MEMORY_DMABUF</constant> and the <structfield>m.fd</structfield>
+field to a file descriptor associated with a DMABUF buffer. When the
+multi-planar API is used, the <structfield>m.fd</structfield> fields of the
+passed array of &v4l2-plane; have to be used instead. When
+<constant>VIDIOC_QBUF</constant> is called with a
+pointer to this structure the driver sets the
+<constant>V4L2_BUF_FLAG_QUEUED</constant> flag and clears the
+<constant>V4L2_BUF_FLAG_MAPPED</constant> and
+<constant>V4L2_BUF_FLAG_DONE</constant> flags in the
+<structfield>flags</structfield> field, or it returns an error code. This
+ioctl locks the buffer. Buffers remain locked until dequeued,
+until the &VIDIOC-STREAMOFF; or &VIDIOC-REQBUFS; ioctl is called, or until the
+device is closed.</para>
+
<para>Applications call the <constant>VIDIOC_DQBUF</constant>
ioctl to dequeue a filled (capturing) or displayed (output) buffer
from the driver's outgoing queue. They just set the
<refsect1>
<title>Description</title>
- <para>This ioctl is used to initiate <link linkend="mmap">memory
-mapped</link> or <link linkend="userp">user pointer</link>
-I/O. Memory mapped buffers are located in device memory and must be
-allocated with this ioctl before they can be mapped into the
-application's address space. User buffers are allocated by
-applications themselves, and this ioctl is merely used to switch the
-driver into user pointer I/O mode and to setup some internal structures.</para>
+<para>This ioctl is used to initiate <link linkend="mmap">memory mapped</link>,
+<link linkend="userp">user pointer</link> or <link
+linkend="dmabuf">DMABUF</link> based I/O. Memory mapped buffers are located in
+device memory and must be allocated with this ioctl before they can be mapped
+into the application's address space. User buffers are allocated by
+applications themselves, and this ioctl is merely used to switch the driver
+into user pointer I/O mode and to set up some internal structures.
+Similarly, DMABUF buffers are allocated by applications through a device
+driver, and this ioctl only configures the driver into DMABUF I/O mode without
+performing any direct allocation.</para>
- <para>To allocate device buffers applications initialize all
-fields of the <structname>v4l2_requestbuffers</structname> structure.
-They set the <structfield>type</structfield> field to the respective
-stream or buffer type, the <structfield>count</structfield> field to
-the desired number of buffers, <structfield>memory</structfield>
-must be set to the requested I/O method and the <structfield>reserved</structfield> array
-must be zeroed. When the ioctl
-is called with a pointer to this structure the driver will attempt to allocate
-the requested number of buffers and it stores the actual number
-allocated in the <structfield>count</structfield> field. It can be
-smaller than the number requested, even zero, when the driver runs out
-of free memory. A larger number is also possible when the driver requires
-more buffers to function correctly. For example video output requires at least two buffers,
-one displayed and one filled by the application.</para>
+ <para>To allocate device buffers applications initialize all fields of the
+<structname>v4l2_requestbuffers</structname> structure. They set the
+<structfield>type</structfield> field to the respective stream or buffer type,
+the <structfield>count</structfield> field to the desired number of buffers,
+<structfield>memory</structfield> must be set to the requested I/O method and
+the <structfield>reserved</structfield> array must be zeroed. When the ioctl is
+called with a pointer to this structure the driver will attempt to allocate the
+requested number of buffers and it stores the actual number allocated in the
+<structfield>count</structfield> field. It can be smaller than the number
+requested, even zero, when the driver runs out of free memory. A larger number
+is also possible when the driver requires more buffers to function correctly.
+For example video output requires at least two buffers, one displayed and one
+filled by the application.</para>
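+
+  <para>A short sketch of this allocation step, verifying how many buffers
+were actually allocated (the minimum of three buffers is only an illustrative
+choice):</para>
+
+  <programlisting>
+&v4l2-requestbuffers; reqbuf;
+
+memset(&reqbuf, 0, sizeof(reqbuf));
+reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+reqbuf.memory = V4L2_MEMORY_MMAP;
+reqbuf.count = 5;
+
+if (ioctl(fd, &VIDIOC-REQBUFS;, &reqbuf) == -1) {
+	perror("VIDIOC_REQBUFS");
+	exit(EXIT_FAILURE);
+}
+
+/* the driver stores the number actually allocated in 'count' */
+if (reqbuf.count < 3) {
+	printf("Not enough buffer memory\n");
+	exit(EXIT_FAILURE);
+}
+  </programlisting>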
<para>When the I/O method is not supported the ioctl
returns an &EINVAL;.</para>
<entry>&v4l2-memory;</entry>
<entry><structfield>memory</structfield></entry>
<entry>Applications set this field to
-<constant>V4L2_MEMORY_MMAP</constant> or
-<constant>V4L2_MEMORY_USERPTR</constant>.</entry>
+<constant>V4L2_MEMORY_MMAP</constant>,
+<constant>V4L2_MEMORY_DMABUF</constant> or
+<constant>V4L2_MEMORY_USERPTR</constant>. See <xref linkend="v4l2-memory"
+/>.</entry>
</row>
<row>
<entry>__u32</entry>
--- /dev/null
+* Samsung Exynos Interrupt Combiner Controller
+
+Samsung's Exynos4 architecture includes an interrupt combiner controller which
+can combine interrupt sources as a group and provide a single interrupt request
+for the group. The interrupt requests from each group are connected to a parent
+interrupt controller, such as the GIC in case of Exynos4210.
+
+The interrupt combiner controller consists of multiple combiners. Up to eight
+interrupt sources can be connected to a combiner. The combiner outputs one
+combined interrupt for its eight interrupt sources. The combined interrupt
+is usually connected to a parent interrupt controller.
+
+A single node in the device tree is used to describe the interrupt combiner
+controller module (which includes multiple combiners). A combiner in the
+interrupt controller module shares config/control registers with other
+combiners. For example, a 32-bit interrupt enable/disable config register
+can accommodate up to 4 interrupt combiners (with each combiner supporting
+up to 8 interrupt sources).
+
+Required properties:
+- compatible: should be "samsung,exynos4210-combiner".
+- interrupt-controller: Identifies the node as an interrupt controller.
+- #interrupt-cells: should be <2>. The meaning of the cells is as follows:
+ * First Cell: Combiner Group Number.
+ * Second Cell: Interrupt number within the group.
+- reg: Base address and size of interrupt combiner registers.
+- interrupts: The list of interrupts generated by the combiners which are then
+ connected to a parent interrupt controller. The format of the interrupt
+  specifier depends on the interrupt parent controller.
+
+Optional properties:
+- samsung,combiner-nr: The number of interrupt combiners supported. If this
+ property is not specified, the default number of combiners is assumed
+ to be 16.
+- interrupt-parent: phandle of the parent interrupt controller, if not
+ inherited from the parent node.
+
+
+Example:
+
+	The following is an example from the Exynos4210 SoC dtsi file.
+
+ combiner:interrupt-controller@10440000 {
+ compatible = "samsung,exynos4210-combiner";
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ reg = <0x10440000 0x1000>;
+ interrupts = <0 0 0>, <0 1 0>, <0 2 0>, <0 3 0>,
+ <0 4 0>, <0 5 0>, <0 6 0>, <0 7 0>,
+ <0 8 0>, <0 9 0>, <0 10 0>, <0 11 0>,
+ <0 12 0>, <0 13 0>, <0 14 0>, <0 15 0>;
+ };
- compatible: value should be either of the following.
(a) "samsung, s3c2410-i2c", for i2c compatible with s3c2410 i2c.
(b) "samsung, s3c2440-i2c", for i2c compatible with s3c2440 i2c.
+	(c) "samsung, s3c2440-hdmiphy-i2c", for s3c2440-like i2c used
+	    inside the HDMIPHY block found on several Samsung SoCs
- reg: physical base address of the controller and length of memory mapped
region.
- interrupts: interrupt number to the cpu.
- samsung,i2c-sda-delay: Delay (in ns) applied to data line (SDA) edges.
- - gpios: The order of the gpios should be the following: <SDA, SCL>.
- The gpio specifier depends on the gpio controller.
Optional properties:
+ - gpios: The order of the gpios should be the following: <SDA, SCL>.
+   The gpio specifier depends on the gpio controller. Required in all
+   cases except for "samsung,s3c2440-hdmiphy-i2c" whose input/output
+   lines are permanently wired to the respective client.
 - samsung,i2c-slave-addr: Slave address in multi-master environment. If not
specified, default value is 0.
- samsung,i2c-max-bus-freq: Desired frequency in Hz of the bus. If not
--- /dev/null
+* Synopsis Designware Mobile Storage Host Controller
+
+The Synopsis designware mobile storage host controller is used to interface
+an SoC with storage media such as eMMC or SD/MMC cards.
+
+Required Properties:
+
+* compatible: should be one of the following
+ - synopsis,dw-mshc: for controllers compliant with synopsis dw-mshc.
+ - synopsis,dw-mshc-exynos5250: for controllers with Samsung
+    Exynos5250 specific extensions.
+
+* reg: physical base address of the dw-mshc controller and size of its memory
+ region.
+
+* interrupts: interrupt specifier for the controller. The format and value of
+ the interrupt specifier depends on the interrupt parent for the controller.
+
+* Slots: The slot specific information is contained within child-nodes with
+  each child-node representing a supported slot. There should be at least one
+  child node representing a card slot. The name of the slot child node should
+  be 'slot{n}' where n is the unique number of the slot connected to the
+  controller. The following are optional properties which can be included in
+  the slot child node.
+
+ * bus-width: specifies the width of the data bus connected from the
+ controller to the card slot. The value should be 1, 4 or 8. In case
+ this property is not specified, a default value of 1 is assumed for
+ this property.
+
+ * cd-gpios: specifies the card detect gpio line. The format of the
+ gpio specifier depends on the gpio controller.
+
+ * wp-gpios: specifies the write protect gpio line. The format of the
+ gpio specifier depends on the gpio controller.
+
+ * gpios: specifies a list of gpios used for command, clock and data
+ bus. The first gpio is the command line and the second gpio is the
+ clock line. The rest of the gpios (depending on the bus-width
+ property) are the data lines in no particular order. The format of
+ the gpio specifier depends on the gpio controller.
+
+Optional properties:
+
+* fifo-depth: The maximum size of the tx/rx FIFOs. If this property is not
+ specified, the default value of the fifo size is determined from the
+ controller registers.
+
+* card-detect-delay: Delay in milliseconds before detecting a card after a
+  card insert event. The default value is 0.
+
+* supports-highspeed: Enables support for high speed cards (up to 50 MHz)
+
+* card-detection-broken: The card detection functionality is not available on
+ any of the slots.
+
+* no-write-protect: The write protect pad of the controller is not connected
+ to the write protect pin on the slot.
+
+Samsung Exynos5250 specific properties:
+
+* samsung,dw-mshc-sdr-timing: Specifies the value of CIU clock divider, CIU
+  clock phase shift value in transmit mode and CIU clock phase shift value in
+  receive mode for single data rate mode operation. Refer to the notes on
+  valid values below.
+
+* samsung,dw-mshc-ddr-timing: Specifies the value of CIU clock divider, CIU
+  clock phase shift value in transmit mode and CIU clock phase shift value in
+  receive mode for double data rate mode operation. Refer to the notes on
+  valid values below. The order of the cells should be
+
+ - First Cell: CIU clock divider value.
+ - Second Cell: CIU clock phase shift value for tx mode.
+ - Third Cell: CIU clock phase shift value for rx mode.
+
+ Valid values for SDR and DDR CIU clock timing:
+
+  - valid values for the CIU clock divider, tx phase shift and rx phase shift
+    are 0 to 7.
+
+ - When CIU clock divider value is set to 3, all possible 8 phase shift
+ values can be used.
+
+ - If CIU clock divider value is 0 (that is divide by 1), both tx and rx
+ phase shift clocks should be 0.
+
+Example:
+
+ The MSHC controller node can be split into two portions, SoC specific and
+ board specific portions as listed below.
+
+ dwmmc0@12200000 {
+ compatible = "synopsis,dw-mshc-exynos5250";
+ reg = <0x12200000 0x1000>;
+ interrupts = <0 75 0>;
+ };
+
+ dwmmc0@12200000 {
+ supports-highspeed;
+ card-detection-broken;
+ no-write-protect;
+ fifo-depth = <0x80>;
+ card-detect-delay = <200>;
+ samsung,dw-mshc-sdr-timing = <2 3 3>;
+ samsung,dw-mshc-ddr-timing = <1 2 3>;
+
+ slot0 {
+ bus-width = <8>;
+ cd-gpios = <&gpc0 2 2 3 3>;
+ gpios = <&gpc0 0 2 0 3>, <&gpc0 1 2 0 3>,
+ <&gpc1 0 2 3 3>, <&gpc1 1 2 3 3>,
+ <&gpc1 2 2 3 3>, <&gpc1 3 2 3 3>,
+ <&gpc0 3 2 3 3>, <&gpc0 4 2 3 3>,
+ <&gpc0 5 2 3 3>, <&gpc0 6 2 3 3>;
+ };
+ };
--- /dev/null
+* Samsung SPI Controller
+
+The Samsung SPI controller is used to interface with various devices such as flash
+and display controllers using the SPI communication interface.
+
+Required SoC Specific Properties:
+
+- compatible: should be one of the following.
+ - samsung,s3c2443-spi: for s3c2443, s3c2416 and s3c2450 platforms
+ - samsung,s3c6410-spi: for s3c6410 platforms
+ - samsung,s5p6440-spi: for s5p6440 and s5p6450 platforms
+ - samsung,s5pv210-spi: for s5pv210 and s5pc110 platforms
+ - samsung,exynos4210-spi: for exynos4 and exynos5 platforms
+
+- reg: physical base address of the controller and length of memory mapped
+ region.
+
+- interrupts: The interrupt number to the cpu. The interrupt specifier format
+ depends on the interrupt controller.
+
+- tx-dma-channel: The dma channel specifier for tx operations. The format of
+ the dma specifier depends on the dma controller.
+
+- rx-dma-channel: The dma channel specifier for rx operations. The format of
+ the dma specifier depends on the dma controller.
+
+Required Board Specific Properties:
+
+- #address-cells: should be 1.
+- #size-cells: should be 0.
+- gpios: The gpio specifier for clock, mosi and miso interface lines (in the
+ order specified). The format of the gpio specifier depends on the gpio
+ controller.
+
+Optional Board Specific Properties:
+
+- samsung,spi-src-clk: If the spi controller includes an internal clock mux to
+  select the clock source for the spi bus clock, this property can be used to
+  indicate the clock to be used for driving the spi bus clock. If not
+  specified, clock number 0 is used as default.
+
+- num-cs: Specifies the number of chip select lines supported. If
+ not specified, the default number of chip select lines is set to 1.
+
+SPI Controller specific data in SPI slave nodes:
+
+- The spi slave nodes should provide the following information which is required
+ by the spi controller.
+
+ - cs-gpio: A gpio specifier that specifies the gpio line used as
+ the slave select line by the spi controller. The format of the gpio
+ specifier depends on the gpio controller.
+
+ - samsung,spi-feedback-delay: The sampling phase shift to be applied on the
+ miso line (to account for any lag in the miso line). The following are the
+ valid values.
+
+ - 0: No phase shift.
+ - 1: 90 degree phase shift sampling.
+ - 2: 180 degree phase shift sampling.
+ - 3: 270 degree phase shift sampling.
+
+Aliases:
+
+- All the SPI controller nodes should be represented in the aliases node using
+ the following format 'spi{n}' where n is a unique number for the alias.
+
+
+Example:
+
+- SoC Specific Portion:
+
+ spi_0: spi@12d20000 {
+ compatible = "samsung,exynos4210-spi";
+ reg = <0x12d20000 0x100>;
+ interrupts = <0 66 0>;
+ tx-dma-channel = <&pdma0 5>;
+ rx-dma-channel = <&pdma0 4>;
+ };
+
+- Board Specific Portion:
+
+ spi_0: spi@12d20000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ gpios = <&gpa2 4 2 3 0>,
+ <&gpa2 6 2 3 0>,
+ <&gpa2 7 2 3 0>;
+
+ w25q80bw@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "w25x80";
+ reg = <0>;
+ spi-max-frequency = <10000>;
+
+ controller-data {
+ cs-gpio = <&gpa2 5 1 0 3>;
+ samsung,spi-feedback-delay = <0>;
+ };
+
+ partition@0 {
+ label = "U-Boot";
+ reg = <0x0 0x40000>;
+ read-only;
+ };
+
+ partition@40000 {
+ label = "Kernel";
+ reg = <0x40000 0xc0000>;
+ };
+ };
+ };
in memory, mapped into its own address space, so it can access the same area
of memory.
-*IMPORTANT*: [see https://lkml.org/lkml/2011/12/20/211 for more details]
-For this first version, A buffer shared using the dma_buf sharing API:
-- *may* be exported to user space using "mmap" *ONLY* by exporter, outside of
- this framework.
-- with this new iteration of the dma-buf api cpu access from the kernel has been
- enable, see below for the details.
-
dma-buf operations for device dma only
--------------------------------------
Note that these calls need to always succeed. The exporter needs to complete
any preparations that might fail in begin_cpu_access.
+   For some cases the overhead of kmap can be too high, so a vmap interface
+   is introduced. This interface should be used very carefully, as vmalloc
+   space is a limited resource on many architectures.
+
+ Interfaces:
+ void *dma_buf_vmap(struct dma_buf *dmabuf)
+ void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
+
+ The vmap call can fail if there is no vmap support in the exporter, or if it
+ runs out of vmalloc space. Fallback to kmap should be implemented.
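+
+   A minimal sketch of the intended fallback pattern (error handling and
+   page iteration elided; names are illustrative):
+
+      void *vaddr = dma_buf_vmap(dmabuf);
+      if (!vaddr) {
+              /* no vmap support or vmalloc space exhausted:
+               * fall back to accessing the buffer page by page */
+              void *page_addr = dma_buf_kmap(dmabuf, page_num);
+              /* ... access this page ... */
+              dma_buf_kunmap(dmabuf, page_num, page_addr);
+      } else {
+              /* ... access the whole buffer through vaddr ... */
+              dma_buf_vunmap(dmabuf, vaddr);
+      }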
+
3. Finish access
When the importer is done accessing the range specified in begin_cpu_access,
enum dma_data_direction dir);
+Direct Userspace Access/mmap Support
+------------------------------------
+
+Being able to mmap an exported dma-buf buffer object has two main use cases:
+- CPU fallback processing in a pipeline and
+- supporting existing mmap interfaces in importers.
+
+1. CPU fallback processing in a pipeline
+
+ In many processing pipelines it is sometimes required that the cpu can access
+ the data in a dma-buf (e.g. for thumbnail creation, snapshots, ...). To avoid
+ the need to handle this specially in userspace frameworks for buffer sharing
+ it's ideal if the dma_buf fd itself can be used to access the backing storage
+ from userspace using mmap.
+
+   Furthermore Android's ION framework already supports this (and is otherwise
+   rather similar to dma-buf from a userspace consumer side, using fds as
+   handles, too). So it's beneficial to support this in a similar fashion on
+   dma-buf to have a good transition path for existing Android userspace.
+
+ No special interfaces, userspace simply calls mmap on the dma-buf fd.
+
+2. Supporting existing mmap interfaces in importers
+
+   Similar to the motivation for kernel cpu access it is again important that
+   the userspace code of a given importing subsystem can use the same
+   interfaces with an imported dma-buf buffer object as with a native buffer
+   object. This is especially important for drm where the userspace part of
+   contemporary OpenGL, X, and other drivers is huge, and reworking them to
+   use a different way to mmap a buffer would be rather invasive.
+
+ The assumption in the current dma-buf interfaces is that redirecting the
+ initial mmap is all that's needed. A survey of some of the existing
+ subsystems shows that no driver seems to do any nefarious thing like syncing
+ up with outstanding asynchronous processing on the device or allocating
+ special resources at fault time. So hopefully this is good enough, since
+ adding interfaces to intercept pagefaults and allow pte shootdowns would
+ increase the complexity quite a bit.
+
+ Interface:
+ int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
+ unsigned long);
+
+ If the importing subsystem simply provides a special-purpose mmap call to set
+ up a mapping in userspace, calling do_mmap with dma_buf->file will equally
+ achieve that for a dma-buf object.
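+
+   A minimal sketch of such a special-purpose mmap handler in an importer
+   (names are illustrative):
+
+      static int importer_mmap(struct file *file, struct vm_area_struct *vma)
+      {
+              struct dma_buf *dmabuf = file->private_data;
+
+              /* redirect the mapping to the exporter's backing storage */
+              return dma_buf_mmap(dmabuf, vma, vma->vm_pgoff);
+      }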
+
+3. Implementation notes for exporters
+
+ Because dma-buf buffers have invariant size over their lifetime, the dma-buf
+ core checks whether a vma is too large and rejects such mappings. The
+ exporter hence does not need to duplicate this check.
+
+ Because existing importing subsystems might presume coherent mappings for
+ userspace, the exporter needs to set up a coherent mapping. If that's not
+ possible, it needs to fake coherency by manually shooting down ptes when
+ leaving the cpu domain and flushing caches at fault time. Note that all the
+   dma_buf files share the same anon inode, hence the exporter needs to
+   replace the dma_buf file stored in vma->vm_file with its own if pte
+   shootdown is required. This is because the kernel uses the underlying
+   inode's address_space for vma tracking (and hence pte tracking at shootdown
+   time with unmap_mapping_range).
+
+ If the above shootdown dance turns out to be too expensive in certain
+ scenarios, we can extend dma-buf with a more explicit cache tracking scheme
+ for userspace mappings. But the current assumption is that using mmap is
+ always a slower path, so some inefficiencies should be acceptable.
+
+   Exporters that shoot down mappings (for any reason) shall not do any
+   synchronization at fault time with outstanding device operations.
+   Synchronization is an orthogonal issue to sharing the backing storage of a
+   buffer and hence should not be handled by dma-buf itself. This is
+   explicitly mentioned here because many people seem to want something like
+   this, but if different exporters handle this differently, buffer sharing
+   can fail in interesting ways depending upon the exporter (if userspace
+   starts depending upon this implicit synchronization).
+
Miscellaneous notes
-------------------
the exporting driver to create a dmabuf fd must provide a way to let
userspace control setting of O_CLOEXEC flag passed in to dma_buf_fd().
+- If an exporter needs to manually flush caches and hence needs to fake
+ coherency for mmap support, it needs to be able to zap all the ptes pointing
+ at the backing storage. Now linux mm needs a struct address_space associated
+ with the struct file stored in vma->vm_file to do that with the function
+ unmap_mapping_range. But the dma_buf framework only backs every dma_buf fd
+ with the anon_file struct file, i.e. all dma_bufs share the same file.
+
+  Hence exporters need to set up their own file (and address_space) association
+ by setting vma->vm_file and adjusting vma->vm_pgoff in the dma_buf mmap
+ callback. In the specific case of a gem driver the exporter could use the
+ shmem file already provided by gem (and set vm_pgoff = 0). Exporters can then
+ zap ptes by unmapping the corresponding range of the struct address_space
+ associated with their own file.
+
References:
[1] struct dma_buf_ops in include/linux/dma-buf.h
[2] All interfaces mentioned above defined in include/linux/dma-buf.h
+++ /dev/null
-Kernel driver exynos4_tmu
-=================
-
-Supported chips:
-* ARM SAMSUNG EXYNOS4 series of SoC
- Prefix: 'exynos4-tmu'
- Datasheet: Not publicly available
-
-Authors: Donggeun Kim <dg77.kim@samsung.com>
-
-Description
------------
-
-This driver allows to read temperature inside SAMSUNG EXYNOS4 series of SoC.
-
-The chip only exposes the measured 8-bit temperature code value
-through a register.
-Temperature can be taken from the temperature code.
-There are three equations converting from temperature to temperature code.
-
-The three equations are:
- 1. Two point trimming
- Tc = (T - 25) * (TI2 - TI1) / (85 - 25) + TI1
-
- 2. One point trimming
- Tc = T + TI1 - 25
-
- 3. No trimming
- Tc = T + 50
-
- Tc: Temperature code, T: Temperature,
- TI1: Trimming info for 25 degree Celsius (stored at TRIMINFO register)
- Temperature code measured at 25 degree Celsius which is unchanged
- TI2: Trimming info for 85 degree Celsius (stored at TRIMINFO register)
- Temperature code measured at 85 degree Celsius which is unchanged
-
-TMU(Thermal Management Unit) in EXYNOS4 generates interrupt
-when temperature exceeds pre-defined levels.
-The maximum number of configurable threshold is four.
-The threshold levels are defined as follows:
- Level_0: current temperature > trigger_level_0 + threshold
- Level_1: current temperature > trigger_level_1 + threshold
- Level_2: current temperature > trigger_level_2 + threshold
- Level_3: current temperature > trigger_level_3 + threshold
-
- The threshold and each trigger_level are set
- through the corresponding registers.
-
-When an interrupt occurs, this driver notify user space of
-one of four threshold levels for the interrupt
-through kobject_uevent_env and sysfs_notify functions.
-Although an interrupt condition for level_0 can be set,
-it is not notified to user space through sysfs_notify function.
-
-Sysfs Interface
----------------
-name name of the temperature sensor
- RO
-
-temp1_input temperature
- RO
-
-temp1_max temperature for level_1 interrupt
- RO
-
-temp1_crit temperature for level_2 interrupt
- RO
-
-temp1_emergency temperature for level_3 interrupt
- RO
-
-temp1_max_alarm alarm for level_1 interrupt
- RO
-
-temp1_crit_alarm
- alarm for level_2 interrupt
- RO
-
-temp1_emergency_alarm
- alarm for level_3 interrupt
- RO
Also note the kernel might malfunction if you disable
some critical bits.
+ cma=nn[MG] [ARM,KNL]
+			Sets the size of the kernel global memory area for
+			contiguous memory allocations. For more information,
+			see include/linux/dma-contiguous.h
+
cmo_free_hint= [PPC] Format: { yes | no }
Specify whether pages are marked as being inactive
when they are freed. This is used in CMO environments
a hypervisor.
Default: yes
+ coherent_pool=nn[KMG] [ARM,KNL]
+			Sets the size of the memory pool for coherent, atomic
+			dma allocations.
+
code_bytes [X86] How many bytes of object code to print
in an oops report.
Range: 0 - 8192
--- /dev/null
+CPU cooling APIs How To
+===================================
+
+Written by Amit Daniel Kachhap <amit.kachhap@linaro.org>
+
+Updated: 12 May 2012
+
+Copyright (c) 2012 Samsung Electronics Co., Ltd(http://www.samsung.com)
+
+0. Introduction
+
+The generic cpu cooling (frequency clipping, cpu hotplug, etc.) provides
+registration/unregistration APIs to the caller. The binding of the cooling
+devices to the trip point is left for the user. The registration APIs return
+the cooling device pointer.
+
+1. cpu cooling APIs
+
+1.1 cpufreq registration/unregistration APIs
+1.1.1 struct thermal_cooling_device *cpufreq_cooling_register(
+ struct freq_clip_table *tab_ptr, unsigned int tab_size)
+
+ This interface function registers the cpufreq cooling device with the name
+  "thermal-cpufreq-%x". This API can support multiple instances of cpufreq
+ cooling devices.
+
+  tab_ptr: The table containing the maximum value of frequency to be clipped
+  for each cooling state.
+	.freq_clip_max: Value of frequency to be clipped for the allowed
+	cpus.
+	.temp_level: Temperature level at which the frequency clamping will
+	happen.
+	.mask_val: cpumask of the allowed cpus.
+  tab_size: the total number of cpufreq cooling states.
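+
+  A minimal usage sketch (the frequency and temperature values are purely
+  illustrative, and treating .mask_val as a struct cpumask pointer is an
+  assumption about the fields described above):
+
+	static struct freq_clip_table clip_tab[] = {
+		{
+			.freq_clip_max	= 800000,	/* kHz, illustrative */
+			.temp_level	= 85,
+			.mask_val	= cpu_all_mask,	/* assumed type */
+		},
+	};
+
+	struct thermal_cooling_device *cdev;
+
+	cdev = cpufreq_cooling_register(clip_tab, ARRAY_SIZE(clip_tab));
+	if (IS_ERR(cdev))
+		return PTR_ERR(cdev);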
+
+1.1.2 void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
+
+ This interface function unregisters the "thermal-cpufreq-%x" cooling device.
+
+ cdev: Cooling device pointer which has to be unregistered.
+
+
+1.2 CPU cooling action notifier register/unregister interface
+1.2.1 int cputherm_register_notifier(struct notifier_block *nb,
+ unsigned int list)
+
+  This interface registers a driver with the cpu cooling layer. The driver
+  will be notified when any cpu cooling action is called.
+
+ nb: notifier function to register
+ list: CPUFREQ_COOLING_START or CPUFREQ_COOLING_STOP
+
+1.2.2 int cputherm_unregister_notifier(struct notifier_block *nb,
+ unsigned int list)
+
+  This interface unregisters a driver from the cpu cooling layer. The driver
+  will no longer be notified when any cpu cooling action is called.
+
+  nb: notifier function to unregister
+  list: CPUFREQ_COOLING_START or CPUFREQ_COOLING_STOP
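+
+  A minimal sketch of registering a notifier (the callback uses the standard
+  kernel notifier signature; names are illustrative):
+
+	static int my_cooling_event(struct notifier_block *nb,
+				    unsigned long event, void *data)
+	{
+		/* react to the cpu cooling action here */
+		return NOTIFY_OK;
+	}
+
+	static struct notifier_block my_nb = {
+		.notifier_call = my_cooling_event,
+	};
+
+	cputherm_register_notifier(&my_nb, CPUFREQ_COOLING_START);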
--- /dev/null
+Kernel driver exynos4_tmu
+=================
+
+Supported chips:
+* ARM SAMSUNG EXYNOS4 series of SoC
+ Prefix: 'exynos4-tmu'
+ Datasheet: Not publicly available
+
+Authors: Donggeun Kim <dg77.kim@samsung.com>
+
+Description
+-----------
+
+This driver allows reading the temperature inside the SAMSUNG EXYNOS4 series
+of SoCs.
+
+The chip only exposes the measured 8-bit temperature code value
+through a register.
+Temperature can be taken from the temperature code.
+There are three equations converting from temperature to temperature code.
+
+The three equations are:
+ 1. Two point trimming
+ Tc = (T - 25) * (TI2 - TI1) / (85 - 25) + TI1
+
+ 2. One point trimming
+ Tc = T + TI1 - 25
+
+ 3. No trimming
+ Tc = T + 50
+
+ Tc: Temperature code, T: Temperature,
+ TI1: Trimming info for 25 degree Celsius (stored at TRIMINFO register)
+ Temperature code measured at 25 degree Celsius which is unchanged
+ TI2: Trimming info for 85 degree Celsius (stored at TRIMINFO register)
+ Temperature code measured at 85 degree Celsius which is unchanged
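+
+For example, with illustrative trimming values TI1 = 75 and TI2 = 135, two
+point trimming maps T = 55 degree Celsius to
+Tc = (55 - 25) * (135 - 75) / (85 - 25) + 75 = 105.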
+
+TMU(Thermal Management Unit) in EXYNOS4 generates interrupt
+when temperature exceeds pre-defined levels.
+The maximum number of configurable thresholds is four.
+The threshold levels are defined as follows:
+ Level_0: current temperature > trigger_level_0 + threshold
+ Level_1: current temperature > trigger_level_1 + threshold
+ Level_2: current temperature > trigger_level_2 + threshold
+ Level_3: current temperature > trigger_level_3 + threshold
+
+ The threshold and each trigger_level are set
+ through the corresponding registers.
+
+When an interrupt occurs, this driver notifies the kernel thermal framework
+by calling the function exynos4_report_trigger.
+An interrupt condition for level_0 can also be set;
+it can be used to synchronize the cooling action.
You should also set these fields:
- v4l2_dev: set to the v4l2_device parent device.
+
- name: set to something descriptive and unique.
+
- fops: set to the v4l2_file_operations struct.
+
- ioctl_ops: if you use the v4l2_ioctl_ops to simplify ioctl maintenance
(highly recommended to use this and it might become compulsory in the
future!), then set this to your v4l2_ioctl_ops struct.
+
- lock: leave to NULL if you want to do all the locking in the driver.
Otherwise you give it a pointer to a struct mutex_lock and before any
of the v4l2_file_operations is called this lock will be taken by the
- core and released afterwards.
+ core and released afterwards. See the next section for more details.
+
- prio: keeps track of the priorities. Used to implement VIDIOC_G/S_PRIORITY.
If left to NULL, then it will use the struct v4l2_prio_state in v4l2_device.
If you want to have a separate priority state per (group of) device node(s),
then you can point it to your own struct v4l2_prio_state.
+
- parent: you only set this if v4l2_device was registered with NULL as
the parent device struct. This only happens in cases where one hardware
device has multiple PCI devices that all share the same v4l2_device core.
(cx8802). Since the v4l2_device cannot be associated with a particular
PCI device it is setup without a parent device. But when the struct
video_device is setup you do know which parent PCI device to use.
+
- flags: optional. Set to V4L2_FL_USE_FH_PRIO if you want to let the framework
handle the VIDIOC_G/S_PRIORITY ioctls. This requires that you use struct
v4l2_fh. Eventually this flag will disappear once all drivers use the core
--------------------------------
You can set a pointer to a mutex_lock in struct video_device. Usually this
-will be either a top-level mutex or a mutex per device node. If you want
-finer-grained locking then you have to set it to NULL and do you own locking.
+will be either a top-level mutex or a mutex per device node. By default this
+lock will be used for each file operation and ioctl, but you can disable
+locking for selected ioctls by calling:
+
+ void v4l2_dont_use_lock(struct video_device *vdev, unsigned int cmd);
+
+E.g.: v4l2_dont_use_lock(vdev, VIDIOC_DQBUF);
+
+You have to call this before you register the video_device.
+
+Particularly with USB drivers, where certain commands such as setting controls
+can take a long time, you may want to do your own locking for the buffer
+queuing ioctls.
+
+If you still want finer-grained locking then you have to set mutex_lock to NULL
+and do your own locking completely.
It is up to the driver developer to decide which method to use. However, if
your driver has high-latency operations (for example, changing the exposure
--- /dev/null
+[Hook Overrides]
+cros_license_check: false
+tab_check: false
config HAVE_DMA_ATTRS
bool
+config HAVE_DMA_CONTIGUOUS
+ bool
+
config USE_GENERIC_SMP_HELPERS
bool
select HAVE_AOUT
select HAVE_DMA_API_DEBUG
select HAVE_IDE if PCI || ISA || PCMCIA
+ select HAVE_DMA_ATTRS
+ select HAVE_DMA_CONTIGUOUS if (CPU_V6 || CPU_V6K || CPU_V7)
+ select CMA if (CPU_V6 || CPU_V6K || CPU_V7)
select HAVE_MEMBLOCK
select RTC_LIB
select SYS_SUPPORTS_APM_EMULATION
config ARM_HAS_SG_CHAIN
bool
+config NEED_SG_DMA_LENGTH
+ bool
+
+config ARM_DMA_USE_IOMMU
+ select NEED_SG_DMA_LENGTH
+ select ARM_HAS_SG_CHAIN
+ bool
+
config HAVE_PWM
bool
select HAVE_S3C2410_I2C if I2C
select HAVE_S3C2410_WATCHDOG if WATCHDOG
select NEED_MACH_MEMORY_H
+ select USB_ARCH_HAS_XHCI
help
Support for SAMSUNG's EXYNOS SoCs (EXYNOS4/5)
Say Y here if you want the debug print routines to direct
their output to the serial port on MSM 8960 devices.
+ config DEBUG_S3C_UART3
+ depends on PLAT_SAMSUNG
+ bool "Use S3C UART 3 for low-level debug"
+ help
+ Say Y here if you want the debug print routines to direct
+ their output to UART 3. The port must have been initialised
+ by the boot-loader before use.
+
+ The uncompressor code port configuration is now handled
+ by CONFIG_S3C_LOWLEVEL_UART_PORT.
+
config DEBUG_REALVIEW_STD_PORT
bool "RealView Default UART"
depends on ARCH_REALVIEW
--- /dev/null
+/*
+ * Common device tree include for all Exynos 5250 boards based off of Daisy.
+ *
+ * Copyright (c) 2012 Google, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+/ {
+ memory {
+ reg = <0x40000000 0x80000000>;
+ };
+
+ chosen {
+ };
+
+ aliases {
+ sysmmu2 = &sysmmu_2;
+ sysmmu3 = &sysmmu_3;
+ sysmmu4 = &sysmmu_4;
+ sysmmu27 = &sysmmu_27;
+ sysmmu28 = &sysmmu_28;
+ sysmmu23 = &sysmmu_23;
+ sysmmu24 = &sysmmu_24;
+ sysmmu25 = &sysmmu_25;
+ sysmmu26 = &sysmmu_26;
+ gsc0 = &gsc_0;
+ gsc1 = &gsc_1;
+ gsc2 = &gsc_2;
+ gsc3 = &gsc_3;
+ i2s0 = &i2s_0;
+ };
+
+ i2c@12C60000 {
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <66000>;
+ gpios = <&gpb3 0 2 3 0>,
+ <&gpb3 1 2 3 0>;
+
+ max77686_pmic@9 {
+ compatible = "maxim,max77686-pmic";
+ interrupt-parent = <&wakeup_eint>;
+ interrupts = <26 0>;
+ reg = <0x9>;
+
+ max77686,buck_ramp_delay = <2>; /* default */
+
+ voltage-regulators {
+ ldo11_reg: LDO11 {
+ regulator-name = "vdd_ldo11";
+ regulator-min-microvolt = <1900000>;
+ regulator-max-microvolt = <1900000>;
+ regulator-always-on;
+ };
+
+ ldo14_reg: LDO14 {
+ regulator-name = "vdd_ldo14";
+ regulator-min-microvolt = <1900000>;
+ regulator-max-microvolt = <1900000>;
+ regulator-always-on;
+ };
+
+ buck1_reg: BUCK1 {
+ regulator-name = "vdd_mif";
+ regulator-min-microvolt = <950000>;
+ regulator-max-microvolt = <1300000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ buck2_reg: BUCK2 {
+ regulator-name = "vdd_arm";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1350000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ buck3_reg: BUCK3 {
+ regulator-name = "vdd_int";
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1200000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ buck4_reg: BUCK4 {
+ regulator-name = "vdd_g3d";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1300000>;
+ regulator-boot-on;
+ };
+
+ buck8_reg: BUCK8 {
+ regulator-name = "vdd_ummc";
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ en32khz_ap: EN32KHZ_AP {
+ regulator-name = "en32khz_ap";
+ regulator-boot-on;
+ };
+
+ en32khz_cp: EN32KHZ_CP {
+ regulator-name = "en32khz_cp";
+ regulator-boot-on;
+ };
+
+ enp32kh: ENP32KHZ {
+ regulator-name = "enp32khz";
+ regulator-boot-on;
+ };
+ };
+ };
+ };
+
+ i2c@12C70000 {
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <66000>;
+ gpios = <&gpb3 2 2 3 0>,
+ <&gpb3 3 2 3 0>;
+
+ trackpad {
+ reg = <0x67>;
+ compatible = "cypress,cyapa";
+ interrupts = <10 0>;
+ interrupt-parent = <&wakeup_eint>;
+ };
+ };
+
+ i2c@12C80000 {
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <66000>;
+ gpios = <&gpa0 6 3 3 0>,
+ <&gpa0 7 3 3 0>;
+
+ exynos_hdcp@3a {
+ compatible = "samsung,exynos_hdcp";
+ reg = <0x3a>;
+ };
+ };
+
+ i2c@12C90000 {
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <66000>;
+ gpios = <&gpa1 2 3 3 0>,
+ <&gpa1 3 3 3 0>;
+
+ tpm {
+ compatible = "infineon,slb9635tt";
+ reg = <0x20>;
+ };
+ };
+
+ i2c@12CA0000 {
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <66000>;
+ gpios = <&gpa2 0 3 3 0>,
+ <&gpa2 1 3 3 0>;
+
+ power-regulator {
+ compatible = "ti,tps65090";
+ reg = <0x48>;
+
+ voltage-regulators {
+ VFET1 {
+ tps65090-control-reg-offset = <15>;
+ regulator-name = "vcd_led";
+ regulator-min-microvolt = <12000000>;
+ regulator-max-microvolt = <12000000>;
+ };
+ VFET2 {
+ tps65090-control-reg-offset = <16>;
+ regulator-name = "video_mid";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ };
+ VFET3 {
+ tps65090-control-reg-offset = <17>;
+ regulator-name = "wwan_r";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+ VFET4 {
+ tps65090-control-reg-offset = <18>;
+ regulator-name = "sdcard";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
+ VFET5 {
+ tps65090-control-reg-offset = <19>;
+ regulator-name = "camout";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+ VFET6 {
+ tps65090-control-reg-offset = <20>;
+ regulator-name = "lcd_vdd";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
+ VFET7 {
+ tps65090-control-reg-offset = <21>;
+ regulator-name = "ts";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ };
+ };
+ };
+
+ ec {
+ compatible = "google,chromeos-ec";
+ reg = <0x1e>;
+ interrupts = <14 0>;
+ interrupt-parent = <&wakeup_eint>;
+ };
+
+ // i2c4 hsic hub @0x8, eeprom @0x50, batt @0xb
+ };
+
+ i2c@12CB0000 {
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <66000>;
+ gpios = <&gpa2 2 3 3 0>,
+ <&gpa2 3 3 3 0>;
+
+ // i2c5 conn? ts?
+ };
+
+ i2c@12CC0000 {
+ status = "disabled";
+
+ // i2c6 is not used on any cros5250 boards
+ };
+
+ i2c@12CD0000 {
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <66000>;
+ gpios = <&gpb2 2 3 3 0>,
+ <&gpb2 3 3 3 0>;
+
+ codec {
+ compatible = "maxim,max98095";
+ reg = <0x11>;
+ };
+
+ light-sensor {
+ compatible = "invn,isl29018";
+ reg = <0x44>;
+ };
+
+ // i2c7
+ // mipi cam codec 0x11 gyro @0x68
+ // LCD @0x50-57 ALS @? mic-detect @0x3b
+ };
+
+ spi_0: spi@12d20000 {
+ status = "disabled";
+ };
+
+ spi_1: spi@12d30000 {
+ gpios = <&gpa2 4 2 3 0>,
+ <&gpa2 6 2 3 0>,
+ <&gpa2 7 2 3 0>;
+ samsung,spi-src-clk = <0>;
+ num-cs = <1>;
+ };
+
+ spi_2: spi@12d40000 {
+ status = "disabled";
+ };
+
+ dwmmc0@12200000 {
+ supports-highspeed;
+ card-detection-broken;
+ no-write-protect;
+ fifo-depth = <0x80>;
+ card-detect-delay = <200>;
+ samsung,dw-mshc-sdr-timing = <2 3 3>;
+ samsung,dw-mshc-ddr-timing = <1 2 3>;
+
+ slot0 {
+ bus-width = <8>;
+ cd-gpios = <&gpc0 2 2 3 3>;
+ gpios = <&gpc0 0 2 0 3>, <&gpc0 1 2 0 3>,
+ <&gpc1 0 2 3 3>, <&gpc1 1 2 3 3>,
+ <&gpc1 2 2 3 3>, <&gpc1 3 2 3 3>,
+ <&gpc0 3 2 3 3>, <&gpc0 4 2 3 3>,
+ <&gpc0 5 2 3 3>, <&gpc0 6 2 3 3>;
+ };
+ };
+
+ dwmmc1@12210000 {
+ status = "disabled";
+ };
+
+ dwmmc2@12220000 {
+ supports-highspeed;
+ card-detection-broken;
+ no-write-protect;
+ fifo-depth = <0x80>;
+ card-detect-delay = <200>;
+ samsung,dw-mshc-sdr-timing = <2 3 3>;
+ samsung,dw-mshc-ddr-timing = <1 2 3>;
+
+ slot0 {
+ bus-width = <4>;
+ cd-gpios = <&gpc3 2 2 3 3>;
+ gpios = <&gpc3 1 2 0 3>, <&gpc3 0 2 0 3>,
+ <&gpc3 3 2 3 3>, <&gpc3 4 2 3 3>,
+ <&gpc3 5 2 3 3>, <&gpc3 6 2 3 3>;
+ };
+ };
+
+ dwmmc3@12230000 {
+ supports-highspeed;
+ card-detection-broken;
+ no-write-protect;
+ fifo-depth = <0x80>;
+ card-detect-delay = <200>;
+ samsung,dw-mshc-sdr-timing = <2 3 3>;
+ samsung,dw-mshc-ddr-timing = <1 2 3>;
+
+ slot0 {
+ bus-width = <4>;
+ gpios = <&gpc4 1 2 3 3>, <&gpc4 0 2 0 3>,
+ <&gpc4 3 2 3 3>, <&gpc4 4 2 3 3>,
+ <&gpc4 5 2 3 3>, <&gpc4 6 2 3 3>;
+ };
+ };
+
+ i2c@12CE0000 {
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <66000>;
+ };
+
+ ehci {
+ samsung,vbus-gpio = <&gpx1 1 1 3 3>;
+ };
+
+ xhci {
+ samsung,vbus-gpio = <&gpx2 7 1 3 3>;
+ };
+
+ fixed-regulator {
+ compatible = "regulator-fixed";
+ regulator-name = "hsichub-reset-l";
+ gpio = <&gpe1 0 1 0 0>;
+ enable-active-high;
+ regulator-always-on;
+ };
+
+ // NB: nodes must be at root for regulator-fixed to probe
+ // NB: must set regulator-boot-on for enable-active-high to be used
+ // NB: set regulator-always-on to suppress complaints
+ // "incomplete constraints, leaving on"
+ wifi-en {
+ compatible = "regulator-fixed";
+ regulator-name = "wifi-en";
+ gpio = <&gpx0 1 0 0 0>;
+ enable-active-high;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+ wifi-rst {
+ compatible = "regulator-fixed";
+ regulator-name = "wifi-rst-l";
+ gpio = <&gpx0 2 0 0 0>;
+ enable-active-high;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+ bt-rst {
+ compatible = "regulator-fixed";
+ regulator-name = "bt-reset-l";
+ gpio = <&gpx3 2 0 0 0>;
+ enable-active-high;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+ wwan-en {
+ compatible = "regulator-fixed";
+ regulator-name = "wwan-en";
+ gpio = <&gpe0 0 0 0 0>;
+ enable-active-high;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+ max98095-en {
+ compatible = "regulator-fixed";
+ regulator-name = "codec-en";
+ gpio = <&gpx1 7 0 0 0>;
+ enable-active-high;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ gpio-keys {
+ compatible = "gpio-keys";
+
+ power {
+ label = "Power";
+ gpios = <&gpx1 3 0 0 0>;
+ linux,code = <116>; /* KEY_POWER */
+ gpio-key,wakeup;
+ };
+ };
+};
i2c@138D0000 {
status = "disabled";
};
+
+ spi_0: spi@13920000 {
+ status = "disabled";
+ };
+
+ spi_1: spi@13930000 {
+ status = "disabled";
+ };
+
+ spi_2: spi@13940000 {
+ status = "disabled";
+ };
};
i2c@138D0000 {
status = "disabled";
};
+
+ spi_0: spi@13920000 {
+ status = "disabled";
+ };
+
+ spi_1: spi@13930000 {
+ status = "disabled";
+ };
+
+ spi_2: spi@13940000 {
+ gpios = <&gpc1 1 5 3 0>,
+ <&gpc1 3 5 3 0>,
+ <&gpc1 4 5 3 0>;
+
+ w25x80@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "w25x80";
+ reg = <0>;
+ spi-max-frequency = <1000000>;
+
+ controller-data {
+ cs-gpio = <&gpc1 2 1 0 3>;
+ samsung,spi-feedback-delay = <0>;
+ };
+
+ partition@0 {
+ label = "U-Boot";
+ reg = <0x0 0x40000>;
+ read-only;
+ };
+
+ partition@40000 {
+ label = "Kernel";
+ reg = <0x40000 0xc0000>;
+ };
+ };
+ };
};
compatible = "samsung,exynos4210";
interrupt-parent = <&gic>;
+ aliases {
+ spi0 = &spi_0;
+ spi1 = &spi_1;
+ spi2 = &spi_2;
+ };
+
gic:interrupt-controller@10490000 {
compatible = "arm,cortex-a9-gic";
#interrupt-cells = <3>;
interrupts = <0 65 0>;
};
+ spi_0: spi@13920000 {
+ compatible = "samsung,exynos4210-spi";
+ reg = <0x13920000 0x100>;
+ interrupts = <0 66 0>;
+ tx-dma-channel = <&pdma0 7>;
+ rx-dma-channel = <&pdma0 6>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ spi_1: spi@13930000 {
+ compatible = "samsung,exynos4210-spi";
+ reg = <0x13930000 0x100>;
+ interrupts = <0 67 0>;
+ tx-dma-channel = <&pdma1 7>;
+ rx-dma-channel = <&pdma1 6>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ spi_2: spi@13940000 {
+ compatible = "samsung,exynos4210-spi";
+ reg = <0x13940000 0x100>;
+ interrupts = <0 68 0>;
+ tx-dma-channel = <&pdma0 9>;
+ rx-dma-channel = <&pdma0 8>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
amba {
#address-cells = <1>;
#size-cells = <1>;
--- /dev/null
+/*
+ * Google Daisy board device tree source
+ *
+ * Copyright (c) 2012 Google, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+/dts-v1/;
+/include/ "exynos5250.dtsi"
+/include/ "cros5250-common.dtsi"
+
+/ {
+ model = "Google Daisy";
+ compatible = "google,daisy", "samsung,exynos5250";
+
+ sromc-bus {
+ lan9215@3,0 {
+ compatible = "smsc,lan9215", "smsc,lan9115";
+ reg = <3 0 0x20000>;
+ interrupts = <5 0>;
+ interrupt-parent = <&wakeup_eint>;
+ phy-mode = "mii";
+ smsc,irq-push-pull;
+ smsc,force-internal-phy;
+ local-mac-address = [00 80 00 23 45 67];
+ };
+ };
+
+ display-port-controller {
+ status = "disabled";
+ };
+};
};
chosen {
- bootargs = "root=/dev/ram0 rw ramdisk=8192 console=ttySAC1,115200";
+ bootargs = "root=/dev/ram0 rw ramdisk=8192 initrd=0x41000000,8M console=ttySAC3,115200 init=/linuxrc";
+ };
+
+ sromc-bus {
+ lan9215@1,0 {
+ compatible = "smsc,lan9215", "smsc,lan9115";
+ reg = <1 0 0x20000>;
+ interrupts = <5 0>;
+ interrupt-parent = <&wakeup_eint>;
+ phy-mode = "mii";
+ smsc,irq-push-pull;
+ smsc,force-internal-phy;
+ };
+ };
+
+ i2c@12C60000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <66000>;
+ gpios = <&gpb3 0 2 3 0>,
+ <&gpb3 1 2 3 0>;
+
+ eeprom@50 {
+ compatible = "samsung,s524ad0xd1";
+ reg = <0x50>;
+ };
+
+ max77686_pmic@9 {
+ compatible = "maxim,max77686-pmic";
+ interrupt-parent = <&wakeup_eint>;
+ interrupts = <26 0>;
+ reg = <0x9>;
+
+ max77686,buck_ramp_delay = <2>; /* default */
+
+ voltage-regulators {
+ ldo11_reg: LDO11 {
+ regulator-name = "vdd_ldo11";
+ regulator-min-microvolt = <1900000>;
+ regulator-max-microvolt = <1900000>;
+ regulator-always-on;
+ };
+
+ ldo14_reg: LDO14 {
+ regulator-name = "vdd_ldo14";
+ regulator-min-microvolt = <1900000>;
+ regulator-max-microvolt = <1900000>;
+ regulator-always-on;
+ };
+
+ buck1_reg: BUCK1 {
+ regulator-name = "vdd_mif";
+ regulator-min-microvolt = <950000>;
+ regulator-max-microvolt = <1300000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ buck2_reg: BUCK2 {
+ regulator-name = "vdd_arm";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1350000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ buck3_reg: BUCK3 {
+ regulator-name = "vdd_int";
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1200000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ buck4_reg: BUCK4 {
+ regulator-name = "vdd_g3d";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1300000>;
+ regulator-boot-on;
+ };
+
+ buck8_reg: BUCK8 {
+ regulator-name = "vdd_ummc";
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+ };
+ };
+ };
+
+ i2c@12C70000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <66000>;
+ gpios = <&gpb3 2 2 3 0>,
+ <&gpb3 3 2 3 0>;
+
+ eeprom@51 {
+ compatible = "samsung,s524ad0xd1";
+ reg = <0x51>;
+ };
+ };
+
+ i2c@12C80000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <66000>;
+ gpios = <&gpa0 6 3 3 0>,
+ <&gpa0 7 3 3 0>;
+
+ exynos_hdcp@3a {
+ compatible = "samsung,exynos_hdcp";
+ reg = <0x3a>;
+ };
+ };
+
+ i2c@12C90000 {
+ status = "disabled";
+ };
+
+ i2c@12CA0000 {
+ status = "disabled";
+ };
+
+ i2c@12CB0000 {
+ status = "disabled";
+ };
+
+ i2c@12CC0000 {
+ status = "disabled";
+ };
+
+ i2c@12CD0000 {
+ status = "disabled";
+ };
+
+ dwmmc0@12200000 {
+ supports-highspeed;
+ card-detection-broken;
+ no-write-protect;
+ fifo-depth = <0x80>;
+ card-detect-delay = <200>;
+ samsung,dw-mshc-sdr-timing = <2 3 3>;
+ samsung,dw-mshc-ddr-timing = <1 2 3>;
+
+ slot0 {
+ bus-width = <8>;
+ cd-gpios = <&gpc0 2 2 3 3>;
+ gpios = <&gpc0 0 2 0 3>, <&gpc0 1 2 0 3>,
+ <&gpc1 0 2 3 3>, <&gpc1 1 2 3 3>,
+ <&gpc1 2 2 3 3>, <&gpc1 3 2 3 3>,
+ <&gpc0 3 2 3 3>, <&gpc0 4 2 3 3>,
+ <&gpc0 5 2 3 3>, <&gpc0 6 2 3 3>;
+ };
+ };
+
+ dwmmc1@12210000 {
+ status = "disabled";
+ };
+
+ dwmmc2@12220000 {
+ card-detection-broken;
+ no-write-protect;
+ fifo-depth = <0x80>;
+ card-detect-delay = <200>;
+ samsung,dw-mshc-sdr-timing = <2 3 3>;
+ samsung,dw-mshc-ddr-timing = <1 2 3>;
+
+ slot0 {
+ bus-width = <4>;
+ cd-gpios = <&gpc3 2 2 3 3>;
+ gpios = <&gpc3 0 2 0 3>, <&gpc3 1 2 0 3>,
+ <&gpc3 3 2 3 3>, <&gpc3 4 2 3 3>,
+ <&gpc3 5 2 3 3>, <&gpc3 6 2 3 3>,
+				<&gpc4 3 3 3 3>, <&gpc4 4 3 3 3>,
+ <&gpc4 5 3 3 3>, <&gpc4 6 3 3 3>;
+ };
+ };
+
+ dwmmc3@12230000 {
+ status = "disabled";
+ };
+
+ i2c@12CE0000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <66000>;
+ };
+
+ spi_0: spi@12d20000 {
+ status = "disabled";
+ };
+
+ spi_1: spi@12d30000 {
+ gpios = <&gpa2 4 2 3 0>,
+ <&gpa2 6 2 3 0>,
+ <&gpa2 7 2 3 0>;
+
+ w25q80bw@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "w25x80";
+ reg = <0>;
+ spi-max-frequency = <1000000>;
+
+ controller-data {
+ cs-gpio = <&gpa2 5 1 0 3>;
+ samsung,spi-feedback-delay = <0>;
+ };
+
+ partition@0 {
+ label = "U-Boot";
+ reg = <0x0 0x40000>;
+ read-only;
+ };
+
+ partition@40000 {
+ label = "Kernel";
+ reg = <0x40000 0xc0000>;
+ };
+ };
+ };
+
+ spi_2: spi@12d40000 {
+ status = "disabled";
+ };
+
+ display-port-controller {
+ status = "disabled";
};
};
--- /dev/null
+/*
+ * Google Snow board device tree source
+ *
+ * Copyright (c) 2012 Google, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+/dts-v1/;
+/include/ "exynos5250.dtsi"
+/include/ "cros5250-common.dtsi"
+
+/ {
+ model = "Google Snow";
+ compatible = "google,snow", "samsung,exynos5250";
+
+ mipi {
+ status = "disabled";
+ };
+};
compatible = "samsung,exynos5250";
interrupt-parent = <&gic>;
- gic:interrupt-controller@10490000 {
+ aliases {
+ mshc0 = &mshc_0;
+ mshc1 = &mshc_1;
+ mshc2 = &mshc_2;
+ mshc3 = &mshc_3;
+ sysmmu2 = &sysmmu_2;
+ sysmmu3 = &sysmmu_3;
+ sysmmu4 = &sysmmu_4;
+ sysmmu27 = &sysmmu_27;
+ sysmmu28 = &sysmmu_28;
+ sysmmu23 = &sysmmu_23;
+ sysmmu24 = &sysmmu_24;
+ sysmmu25 = &sysmmu_25;
+ sysmmu26 = &sysmmu_26;
+ gsc0 = &gsc_0;
+ gsc1 = &gsc_1;
+ gsc2 = &gsc_2;
+ gsc3 = &gsc_3;
+ i2s0 = &i2s_0;
+ i2c0 = &i2c_0;
+ i2c1 = &i2c_1;
+ i2c2 = &i2c_2;
+ i2c3 = &i2c_3;
+ i2c4 = &i2c_4;
+ i2c5 = &i2c_5;
+ i2c6 = &i2c_6;
+ i2c7 = &i2c_7;
+ i2c8 = &i2c_8;
+ spi0 = &spi_0;
+ spi1 = &spi_1;
+ spi2 = &spi_2;
+ };
+
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a15";
+ reg = <0>;
+ };
+
+ cpu@1 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a15";
+ reg = <1>;
+ };
+ };
+
+ gic:interrupt-controller@10481000 {
compatible = "arm,cortex-a9-gic";
#interrupt-cells = <3>;
+ #address-cells = <0>;
+ #size-cells = <0>;
+ interrupt-controller;
+ reg = <0x10481000 0x1000>, <0x10482000 0x2000>;
+ };
+
+ combiner:interrupt-controller@10440000 {
+ compatible = "samsung,exynos4210-combiner";
+ #interrupt-cells = <2>;
+ interrupt-controller;
+ samsung,combiner-nr = <32>;
+ reg = <0x10440000 0x1000>;
+ interrupts = <0 0 0>, <0 1 0>, <0 2 0>, <0 3 0>,
+ <0 4 0>, <0 5 0>, <0 6 0>, <0 7 0>,
+ <0 8 0>, <0 9 0>, <0 10 0>, <0 11 0>,
+ <0 12 0>, <0 13 0>, <0 14 0>, <0 15 0>,
+ <0 16 0>, <0 17 0>, <0 18 0>, <0 19 0>,
+ <0 20 0>, <0 21 0>, <0 22 0>, <0 23 0>,
+ <0 24 0>, <0 25 0>, <0 26 0>, <0 27 0>,
+ <0 28 0>, <0 29 0>, <0 30 0>, <0 31 0>;
+ };
+
+ wakeup_eint: interrupt-controller@11400000 {
+ compatible = "samsung,exynos5210-wakeup-eint";
+ reg = <0x11400000 0x1000>;
interrupt-controller;
- reg = <0x10490000 0x1000>, <0x10480000 0x100>;
+ #interrupt-cells = <2>;
+ interrupt-parent = <&wakeup_map>;
+ interrupts = <0x0 0>, <0x1 0>, <0x2 0>, <0x3 0>,
+ <0x4 0>, <0x5 0>, <0x6 0>, <0x7 0>,
+ <0x8 0>, <0x9 0>, <0xa 0>, <0xb 0>,
+ <0xc 0>, <0xd 0>, <0xe 0>, <0xf 0>,
+ <0x10 0>;
+
+ wakeup_map: interrupt-map {
+ compatible = "samsung,exynos5210-wakeup-eint-map";
+ #interrupt-cells = <2>;
+ #address-cells = <0>;
+ #size-cells = <0>;
+ interrupt-map = <0x0 0 &combiner 23 0>,
+ <0x1 0 &combiner 24 0>,
+ <0x2 0 &combiner 25 0>,
+ <0x3 0 &combiner 25 1>,
+ <0x4 0 &combiner 26 0>,
+ <0x5 0 &combiner 26 1>,
+ <0x6 0 &combiner 27 0>,
+ <0x7 0 &combiner 27 1>,
+ <0x8 0 &combiner 28 0>,
+ <0x9 0 &combiner 28 1>,
+ <0xa 0 &combiner 29 0>,
+ <0xb 0 &combiner 29 1>,
+ <0xc 0 &combiner 30 0>,
+ <0xd 0 &combiner 30 1>,
+ <0xe 0 &combiner 31 0>,
+ <0xf 0 &combiner 31 1>,
+ <0x10 0 &gic 0 32 0>;
+ };
+ };
+
+ pmu {
+ compatible = "arm,cortex-a15-pmu", "arm,cortex-a9-pmu";
+ interrupts = <1 2>,
+ <22 4>;
+ interrupt-parent = <&combiner>;
};
watchdog {
interrupts = <0 42 0>;
};
- rtc {
- compatible = "samsung,s3c6410-rtc";
- reg = <0x101E0000 0x100>;
- interrupts = <0 43 0>, <0 44 0>;
+ mfc {
+ compatible = "samsung,s5p-mfc-v6";
+ reg = <0x11000000 0x10000>;
+ interrupts = <0 96 0>;
+ sysmmu_l = <&sysmmu_3>;
+ sysmmu_r = <&sysmmu_4>;
};
- sdhci@12200000 {
- compatible = "samsung,exynos4210-sdhci";
- reg = <0x12200000 0x100>;
- interrupts = <0 75 0>;
+ ohci {
+ compatible = "samsung,exynos-ohci";
+ reg = <0x12120000 0x100>;
+ interrupts = <0 71 0>;
};
- sdhci@12210000 {
- compatible = "samsung,exynos4210-sdhci";
- reg = <0x12210000 0x100>;
- interrupts = <0 76 0>;
+ ehci {
+ compatible = "samsung,exynos-ehci";
+ reg = <0x12110000 0x100>;
+ interrupts = <0 71 0>;
};
- sdhci@12220000 {
- compatible = "samsung,exynos4210-sdhci";
- reg = <0x12220000 0x100>;
- interrupts = <0 77 0>;
+ xhci {
+ compatible = "samsung,exynos-xhci";
+ reg = <0x12000000 0x10000>;
+ interrupts = <0 72 0>;
};
- sdhci@12230000 {
- compatible = "samsung,exynos4210-sdhci";
- reg = <0x12230000 0x100>;
- interrupts = <0 78 0>;
+ rtc {
+ compatible = "samsung,s3c6410-rtc";
+ reg = <0x101E0000 0x100>;
+ interrupts = <0 43 0>, <0 44 0>;
};
serial@12C00000 {
interrupts = <0 54 0>;
};
- i2c@12C60000 {
+ i2c_0: i2c@12C60000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12C60000 0x100>;
interrupts = <0 56 0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
};
- i2c@12C70000 {
+ i2c_1: i2c@12C70000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12C70000 0x100>;
interrupts = <0 57 0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
};
- i2c@12C80000 {
+ i2c_2: i2c@12C80000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12C80000 0x100>;
interrupts = <0 58 0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
};
- i2c@12C90000 {
+ i2c_3: i2c@12C90000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12C90000 0x100>;
interrupts = <0 59 0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
};
- i2c@12CA0000 {
+ i2c_4: i2c@12CA0000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12CA0000 0x100>;
interrupts = <0 60 0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
};
- i2c@12CB0000 {
+ i2c_5: i2c@12CB0000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12CB0000 0x100>;
interrupts = <0 61 0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
};
- i2c@12CC0000 {
+ i2c_6: i2c@12CC0000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12CC0000 0x100>;
interrupts = <0 62 0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
};
- i2c@12CD0000 {
+ i2c_7: i2c@12CD0000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12CD0000 0x100>;
interrupts = <0 63 0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ i2c_8: i2c@12CE0000 {
+ compatible = "samsung,s3c2440-hdmiphy-i2c";
+ reg = <0x12CE0000 0x1000>;
+ interrupts = <0 64 0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ sysmmu_2: sysmmu@0x10A60000 {
+ compatible = "samsung,s5p-sysmmu";
+ reg = <0x10A60000 0x100>;
+ interrupts = <24 5>;
+ interrupt-parent = <&combiner>;
+ };
+
+ sysmmu_3: sysmmu@0x11210000 {
+ compatible = "samsung,s5p-sysmmu";
+ reg = <0x11210000 0x100>;
+ interrupts = <8 5>;
+ interrupt-parent = <&combiner>;
+ };
+
+ sysmmu_4: sysmmu@0x11200000 {
+ compatible = "samsung,s5p-sysmmu";
+ reg = <0x11200000 0x100>;
+ interrupts = <6 2>;
+ interrupt-parent = <&combiner>;
+ };
+
+ sysmmu_27: sysmmu@0x14640000 {
+ compatible = "samsung,s5p-sysmmu";
+ reg = <0x14640000 0x100>;
+ interrupts = <3 2>;
+ interrupt-parent = <&combiner>;
+ };
+
+ sysmmu_28: sysmmu@0x14650000 {
+ compatible = "samsung,s5p-sysmmu";
+ reg = <0x14650000 0x100>;
+ interrupts = <7 4>;
+ interrupt-parent = <&combiner>;
+ };
+
+ sysmmu_23: sysmmu@0x13E80000 {
+ compatible = "samsung,s5p-sysmmu";
+ reg = <0x13E80000 0x100>;
+ interrupts = <2 0>;
+ interrupt-parent = <&combiner>;
+ };
+
+ sysmmu_24: sysmmu@0x13E90000 {
+ compatible = "samsung,s5p-sysmmu";
+ reg = <0x13E90000 0x100>;
+ interrupts = <2 2>;
+ interrupt-parent = <&combiner>;
+ };
+
+ sysmmu_25: sysmmu@0x13EA0000 {
+ compatible = "samsung,s5p-sysmmu";
+ reg = <0x13EA0000 0x100>;
+ interrupts = <2 4>;
+ interrupt-parent = <&combiner>;
+ };
+
+ sysmmu_26: sysmmu@0x13EB0000 {
+ compatible = "samsung,s5p-sysmmu";
+ reg = <0x13EB0000 0x100>;
+ interrupts = <2 6>;
+ interrupt-parent = <&combiner>;
+ };
+
+ mshc_0: dwmmc0@12200000 {
+ compatible = "synopsis,dw-mshc-exynos5250";
+ reg = <0x12200000 0x1000>;
+ interrupts = <0 75 0>;
+ };
+
+ mshc_1: dwmmc1@12210000 {
+ compatible = "synopsis,dw-mshc-exynos5250";
+ reg = <0x12210000 0x1000>;
+ interrupts = <0 76 0>;
+ };
+
+ mshc_2: dwmmc2@12220000 {
+ compatible = "synopsis,dw-mshc-exynos5250";
+ reg = <0x12220000 0x1000>;
+ interrupts = <0 77 0>;
+ };
+
+ mshc_3: dwmmc3@12230000 {
+ compatible = "synopsis,dw-mshc-exynos5250";
+ reg = <0x12230000 0x1000>;
+ interrupts = <0 78 0>;
+ };
+
+ i2s_0: i2s@03830000 {
+ compatible = "samsung,i2s";
+ reg = <0x03830000 0x100>;
+ tx-dma-channel-secondary = <&pdma0 8>;
+ tx-dma-channel = <&pdma0 10>;
+ rx-dma-channel = <&pdma0 9>;
+ };
+
+ spi_0: spi@12d20000 {
+ compatible = "samsung,exynos4210-spi";
+ reg = <0x12d20000 0x100>;
+ interrupts = <0 66 0>;
+ tx-dma-channel = <&pdma0 5>;
+ rx-dma-channel = <&pdma0 4>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ spi_1: spi@12d30000 {
+ compatible = "samsung,exynos4210-spi";
+ reg = <0x12d30000 0x100>;
+ interrupts = <0 67 0>;
+ tx-dma-channel = <&pdma1 5>;
+ rx-dma-channel = <&pdma1 4>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ spi_2: spi@12d40000 {
+ compatible = "samsung,exynos4210-spi";
+ reg = <0x12d40000 0x100>;
+ interrupts = <0 68 0>;
+ tx-dma-channel = <&pdma0 7>;
+ rx-dma-channel = <&pdma0 6>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ tmu@10060000 {
+ compatible = "samsung,exynos5-tmu";
+ reg = <0x10060000 0x100>;
+ interrupts = <0 65 0>;
};
amba {
interrupts = <0 35 0>;
};
- mdma0: pdma@10800000 {
+ mdma0: mdma@10800000 {
compatible = "arm,pl330", "arm,primecell";
reg = <0x10800000 0x1000>;
interrupts = <0 33 0>;
};
- mdma1: pdma@11C10000 {
+ mdma1: mdma@11C10000 {
compatible = "arm,pl330", "arm,primecell";
reg = <0x11C10000 0x1000>;
interrupts = <0 124 0>;
#gpio-cells = <4>;
};
+ gpc4: gpio-controller@114002E0 {
+ compatible = "samsung,exynos4-gpio";
+ reg = <0x114002E0 0x20>;
+ #gpio-cells = <4>;
+ };
+
gpd0: gpio-controller@11400160 {
compatible = "samsung,exynos4-gpio";
reg = <0x11400160 0x20>;
gpv2: gpio-controller@10D10040 {
compatible = "samsung,exynos4-gpio";
- reg = <0x10D10040 0x20>;
+ reg = <0x10D10060 0x20>;
#gpio-cells = <4>;
};
gpv3: gpio-controller@10D10060 {
compatible = "samsung,exynos4-gpio";
- reg = <0x10D10060 0x20>;
+ reg = <0x10D10080 0x20>;
#gpio-cells = <4>;
};
gpv4: gpio-controller@10D10080 {
compatible = "samsung,exynos4-gpio";
- reg = <0x10D10080 0x20>;
+ reg = <0x10D100C0 0x20>;
#gpio-cells = <4>;
};
#gpio-cells = <4>;
};
};
+
+ fimd {
+ compatible = "samsung,exynos5-fb";
+ interrupt-parent = <&combiner>;
+ reg = <0x14400000 0x40000>;
+ interrupts = <18 4>, <18 5>, <18 6>;
+ sysmmu = <&sysmmu_27>;
+ };
+
+ mipi {
+ compatible = "samsung,exynos5-mipi";
+ reg = <0x14500000 0x10000>;
+ interrupts = <0 82 0>;
+ };
+
+ display-port-controller {
+ compatible = "samsung,exynos5-dp";
+ reg = <0x145B0000 0x10000>;
+ interrupts = <10 3>;
+ interrupt-parent = <&combiner>;
+ };
+
+ gsc_0: gsc@0x13e00000 {
+ compatible = "samsung,exynos-gsc";
+ reg = <0x13e00000 0x1000>;
+ interrupts = <0 85 0>;
+ sysmmu = <&sysmmu_23>;
+ };
+
+ gsc_1: gsc@0x13e10000 {
+ compatible = "samsung,exynos-gsc";
+ reg = <0x13e10000 0x1000>;
+ interrupts = <0 86 0>;
+ sysmmu = <&sysmmu_24>;
+ };
+
+ gsc_2: gsc@0x13e20000 {
+ compatible = "samsung,exynos-gsc";
+ reg = <0x13e20000 0x1000>;
+ interrupts = <0 87 0>;
+ sysmmu = <&sysmmu_25>;
+ };
+
+ gsc_3: gsc@0x13e30000 {
+ compatible = "samsung,exynos-gsc";
+ reg = <0x13e30000 0x1000>;
+ interrupts = <0 88 0>;
+ sysmmu = <&sysmmu_26>;
+ };
+
+ g2d {
+ compatible = "samsung,s5p-g2d";
+ reg = <0x10850000 0x400>;
+ interrupts = <0 91 0>;
+ };
+
+ hdmi {
+ compatible = "samsung,exynos5-hdmi";
+ reg = <0x14530000 0x100000>;
+ interrupts = <0 95 0>, <5 0>;
+ interrupt-parent = <&wakeup_eint>;
+ };
+
+ mixer {
+ compatible = "samsung,s5p-mixer";
+ reg = <0x14450000 0x10000>;
+ interrupts = <0 94 0>;
+ sysmmu = <&sysmmu_28>;
+ };
+
+ sromc-bus {
+ compatible = "samsung,exynos-sromc-bus", "simple-bus";
+
+ #address-cells = <2>;
+ #size-cells = <1>;
+ ranges = < 0 0 0x04000000 0x20000
+ 1 0 0x05000000 0x20000
+ 2 0 0x06000000 0x20000
+ 3 0 0x07000000 0x20000>;
+ };
};
read_lock_irqsave(&device_info->lock, flags);
list_for_each_entry(b, &device_info->safe_buffers, node)
- if (b->safe_dma_addr == safe_dma_addr) {
+ if (b->safe_dma_addr <= safe_dma_addr &&
+ b->safe_dma_addr + b->size > safe_dma_addr) {
rb = b;
break;
}
if (buf == NULL) {
dev_err(dev, "%s: unable to map unsafe buffer %p!\n",
__func__, ptr);
- return ~0;
+ return DMA_ERROR_CODE;
}
dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
* substitute the safe buffer for the unsafe one.
* (basically move the buffer from an unsafe area to a safe one)
*/
-dma_addr_t __dma_map_page(struct device *dev, struct page *page,
- unsigned long offset, size_t size, enum dma_data_direction dir)
+static dma_addr_t dmabounce_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
{
dma_addr_t dma_addr;
int ret;
ret = needs_bounce(dev, dma_addr, size);
if (ret < 0)
- return ~0;
+ return DMA_ERROR_CODE;
if (ret == 0) {
- __dma_page_cpu_to_dev(page, offset, size, dir);
+ arm_dma_ops.sync_single_for_device(dev, dma_addr, size, dir);
return dma_addr;
}
if (PageHighMem(page)) {
dev_err(dev, "DMA buffer bouncing of HIGHMEM pages is not supported\n");
- return ~0;
+ return DMA_ERROR_CODE;
}
return map_single(dev, page_address(page) + offset, size, dir);
}
-EXPORT_SYMBOL(__dma_map_page);
/*
* see if a mapped address was really a "safe" buffer and if so, copy
* the safe buffer. (basically return things back to the way they
* should be)
*/
-void __dma_unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
- enum dma_data_direction dir)
+static void dmabounce_unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
+ enum dma_data_direction dir, struct dma_attrs *attrs)
{
struct safe_buffer *buf;
buf = find_safe_buffer_dev(dev, dma_addr, __func__);
if (!buf) {
- __dma_page_dev_to_cpu(pfn_to_page(dma_to_pfn(dev, dma_addr)),
- dma_addr & ~PAGE_MASK, size, dir);
+ arm_dma_ops.sync_single_for_cpu(dev, dma_addr, size, dir);
return;
}
unmap_single(dev, buf, size, dir);
}
-EXPORT_SYMBOL(__dma_unmap_page);
-int dmabounce_sync_for_cpu(struct device *dev, dma_addr_t addr,
- unsigned long off, size_t sz, enum dma_data_direction dir)
+static int __dmabounce_sync_for_cpu(struct device *dev, dma_addr_t addr,
+ size_t sz, enum dma_data_direction dir)
{
struct safe_buffer *buf;
+ unsigned long off;
- dev_dbg(dev, "%s(dma=%#x,off=%#lx,sz=%zx,dir=%x)\n",
- __func__, addr, off, sz, dir);
+ dev_dbg(dev, "%s(dma=%#x,sz=%zx,dir=%x)\n",
+ __func__, addr, sz, dir);
buf = find_safe_buffer_dev(dev, addr, __func__);
if (!buf)
return 1;
+ off = addr - buf->safe_dma_addr;
+
BUG_ON(buf->direction != dir);
- dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
- __func__, buf->ptr, virt_to_dma(dev, buf->ptr),
+ dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x off=%#lx) mapped to %p (dma=%#x)\n",
+ __func__, buf->ptr, virt_to_dma(dev, buf->ptr), off,
buf->safe, buf->safe_dma_addr);
DO_STATS(dev->archdata.dmabounce->bounce_count++);
}
return 0;
}
-EXPORT_SYMBOL(dmabounce_sync_for_cpu);
-int dmabounce_sync_for_device(struct device *dev, dma_addr_t addr,
- unsigned long off, size_t sz, enum dma_data_direction dir)
+static void dmabounce_sync_for_cpu(struct device *dev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+ if (!__dmabounce_sync_for_cpu(dev, handle, size, dir))
+ return;
+
+ arm_dma_ops.sync_single_for_cpu(dev, handle, size, dir);
+}
+
+static int __dmabounce_sync_for_device(struct device *dev, dma_addr_t addr,
+ size_t sz, enum dma_data_direction dir)
{
struct safe_buffer *buf;
+ unsigned long off;
- dev_dbg(dev, "%s(dma=%#x,off=%#lx,sz=%zx,dir=%x)\n",
- __func__, addr, off, sz, dir);
+ dev_dbg(dev, "%s(dma=%#x,sz=%zx,dir=%x)\n",
+ __func__, addr, sz, dir);
buf = find_safe_buffer_dev(dev, addr, __func__);
if (!buf)
return 1;
+ off = addr - buf->safe_dma_addr;
+
BUG_ON(buf->direction != dir);
- dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
- __func__, buf->ptr, virt_to_dma(dev, buf->ptr),
+ dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x off=%#lx) mapped to %p (dma=%#x)\n",
+ __func__, buf->ptr, virt_to_dma(dev, buf->ptr), off,
buf->safe, buf->safe_dma_addr);
DO_STATS(dev->archdata.dmabounce->bounce_count++);
}
return 0;
}
-EXPORT_SYMBOL(dmabounce_sync_for_device);
+
+static void dmabounce_sync_for_device(struct device *dev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+ if (!__dmabounce_sync_for_device(dev, handle, size, dir))
+ return;
+
+ arm_dma_ops.sync_single_for_device(dev, handle, size, dir);
+}
+
+static int dmabounce_set_mask(struct device *dev, u64 dma_mask)
+{
+ if (dev->archdata.dmabounce)
+ return 0;
+
+ return arm_dma_ops.set_dma_mask(dev, dma_mask);
+}
+
+static struct dma_map_ops dmabounce_ops = {
+ .alloc = arm_dma_alloc,
+ .free = arm_dma_free,
+ .mmap = arm_dma_mmap,
+ .get_sgtable = arm_dma_get_sgtable,
+ .map_page = dmabounce_map_page,
+ .unmap_page = dmabounce_unmap_page,
+ .sync_single_for_cpu = dmabounce_sync_for_cpu,
+ .sync_single_for_device = dmabounce_sync_for_device,
+ .map_sg = arm_dma_map_sg,
+ .unmap_sg = arm_dma_unmap_sg,
+ .sync_sg_for_cpu = arm_dma_sync_sg_for_cpu,
+ .sync_sg_for_device = arm_dma_sync_sg_for_device,
+ .set_dma_mask = dmabounce_set_mask,
+};
static int dmabounce_init_pool(struct dmabounce_pool *pool, struct device *dev,
const char *name, unsigned long size)
#endif
dev->archdata.dmabounce = device_info;
+ set_dma_ops(dev, &dmabounce_ops);
dev_info(dev, "dmabounce: registered device\n");
struct dmabounce_device_info *device_info = dev->archdata.dmabounce;
dev->archdata.dmabounce = NULL;
+ set_dma_ops(dev, NULL);
if (!device_info) {
dev_warn(dev,
void (*flush_user_range)(unsigned long, unsigned long, unsigned int);
void (*coherent_kern_range)(unsigned long, unsigned long);
- void (*coherent_user_range)(unsigned long, unsigned long);
+ int (*coherent_user_range)(unsigned long, unsigned long);
void (*flush_kern_dcache_area)(void *, size_t);
void (*dma_map_area)(const void *, size_t, int);
extern void __cpuc_flush_user_all(void);
extern void __cpuc_flush_user_range(unsigned long, unsigned long, unsigned int);
extern void __cpuc_coherent_kern_range(unsigned long, unsigned long);
-extern void __cpuc_coherent_user_range(unsigned long, unsigned long);
+extern int __cpuc_coherent_user_range(unsigned long, unsigned long);
extern void __cpuc_flush_dcache_area(void *, size_t);
/*
* Harvard caches are synchronised for the user space address range.
* This is used for the ARM private sys_cacheflush system call.
*/
-#define flush_cache_user_range(vma,start,end) \
+#define flush_cache_user_range(start,end) \
__cpuc_coherent_user_range((start) & PAGE_MASK, PAGE_ALIGN(end))
/*
#define ASMARM_DEVICE_H
struct dev_archdata {
+ struct dma_map_ops *dma_ops;
#ifdef CONFIG_DMABOUNCE
struct dmabounce_device_info *dmabounce;
#endif
#ifdef CONFIG_IOMMU_API
void *iommu; /* private IOMMU data */
#endif
+#ifdef CONFIG_ARM_DMA_USE_IOMMU
+ struct dma_iommu_mapping *mapping;
+#endif
};
struct omap_device;
--- /dev/null
+#ifndef ASMARM_DMA_CONTIGUOUS_H
+#define ASMARM_DMA_CONTIGUOUS_H
+
+#ifdef __KERNEL__
+#ifdef CONFIG_CMA
+
+#include <linux/types.h>
+#include <asm-generic/dma-contiguous.h>
+
+void dma_contiguous_early_fixup(phys_addr_t base, unsigned long size);
+
+#endif
+#endif
+
+#endif
--- /dev/null
+#ifndef ASMARM_DMA_IOMMU_H
+#define ASMARM_DMA_IOMMU_H
+
+#ifdef __KERNEL__
+
+#include <linux/mm_types.h>
+#include <linux/scatterlist.h>
+#include <linux/dma-debug.h>
+#include <linux/kmemcheck.h>
+
+struct dma_iommu_mapping {
+ /* iommu specific data */
+ struct iommu_domain *domain;
+
+ void *bitmap;
+ size_t bits;
+ unsigned int order;
+ dma_addr_t base;
+
+ spinlock_t lock;
+ struct kref kref;
+};
+
+struct dma_iommu_mapping *
+arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size,
+ int order);
+
+void arm_iommu_release_mapping(struct dma_iommu_mapping *mapping);
+
+int arm_iommu_attach_device(struct device *dev,
+ struct dma_iommu_mapping *mapping);
+
+#endif /* __KERNEL__ */
+#endif
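(Editorial sketch, not part of the patch: one plausible way platform code could use the interface declared above. The base address, size, and helper name are assumptions for illustration; arm_iommu_create_mapping() is assumed to return an ERR_PTR on failure.)

	#include <linux/err.h>
	#include <linux/platform_device.h>
	#include <asm/sizes.h>
	#include <asm/dma-iommu.h>

	static int example_attach_iommu(struct device *dev)
	{
		struct dma_iommu_mapping *mapping;
		int ret;

		/* 128MiB of IO virtual address space at 0x80000000, 4KiB pages */
		mapping = arm_iommu_create_mapping(&platform_bus_type,
						   0x80000000, SZ_128M, 0);
		if (IS_ERR(mapping))
			return PTR_ERR(mapping);

		ret = arm_iommu_attach_device(dev, mapping);
		if (ret)
			arm_iommu_release_mapping(mapping);

		return ret;
	}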
#include <linux/mm_types.h>
#include <linux/scatterlist.h>
+#include <linux/dma-attrs.h>
#include <linux/dma-debug.h>
#include <asm-generic/dma-coherent.h>
#include <asm/memory.h>
+#define DMA_ERROR_CODE (~0)
+extern struct dma_map_ops arm_dma_ops;
+
+static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+{
+ if (dev && dev->archdata.dma_ops)
+ return dev->archdata.dma_ops;
+ return &arm_dma_ops;
+}
+
+static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops)
+{
+ BUG_ON(!dev);
+ dev->archdata.dma_ops = ops;
+}
+
+#include <asm-generic/dma-mapping-common.h>
+
+static inline int dma_set_mask(struct device *dev, u64 mask)
+{
+ return get_dma_ops(dev)->set_dma_mask(dev, mask);
+}
+
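(Editorial sketch, not part of the patch: once set_dma_ops() has installed a per-device dma_map_ops, the generic wrappers pulled in from <asm-generic/dma-mapping-common.h> dispatch through it. The function below uses a hypothetical name and omits the dma-debug hooks; it only shows the shape of that dispatch.)

	static inline dma_addr_t example_dma_map_page(struct device *dev,
			struct page *page, unsigned long offset, size_t size,
			enum dma_data_direction dir)
	{
		struct dma_map_ops *ops = get_dma_ops(dev);

		/*
		 * ops->map_page is arm_dma_ops.map_page unless a subsystem
		 * such as dmabounce installed its own operations.
		 */
		return ops->map_page(dev, page, offset, size, dir, NULL);
	}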
#ifdef __arch_page_to_dma
#error Please update to __arch_pfn_to_dma
#endif
}
#endif
-/*
- * The DMA API is built upon the notion of "buffer ownership". A buffer
- * is either exclusively owned by the CPU (and therefore may be accessed
- * by it) or exclusively owned by the DMA device. These helper functions
- * represent the transitions between these two ownership states.
- *
- * Note, however, that on later ARMs, this notion does not work due to
- * speculative prefetches. We model our approach on the assumption that
- * the CPU does do speculative prefetches, which means we clean caches
- * before transfers and delay cache invalidation until transfer completion.
- *
- * Private support functions: these are not part of the API and are
- * liable to change. Drivers must not use these.
- */
-static inline void __dma_single_cpu_to_dev(const void *kaddr, size_t size,
- enum dma_data_direction dir)
-{
- extern void ___dma_single_cpu_to_dev(const void *, size_t,
- enum dma_data_direction);
-
- if (!arch_is_coherent())
- ___dma_single_cpu_to_dev(kaddr, size, dir);
-}
-
-static inline void __dma_single_dev_to_cpu(const void *kaddr, size_t size,
- enum dma_data_direction dir)
-{
- extern void ___dma_single_dev_to_cpu(const void *, size_t,
- enum dma_data_direction);
-
- if (!arch_is_coherent())
- ___dma_single_dev_to_cpu(kaddr, size, dir);
-}
-
-static inline void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
- size_t size, enum dma_data_direction dir)
-{
- extern void ___dma_page_cpu_to_dev(struct page *, unsigned long,
- size_t, enum dma_data_direction);
-
- if (!arch_is_coherent())
- ___dma_page_cpu_to_dev(page, off, size, dir);
-}
-
-static inline void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
- size_t size, enum dma_data_direction dir)
-{
- extern void ___dma_page_dev_to_cpu(struct page *, unsigned long,
- size_t, enum dma_data_direction);
-
- if (!arch_is_coherent())
- ___dma_page_dev_to_cpu(page, off, size, dir);
-}
-
-extern int dma_supported(struct device *, u64);
-extern int dma_set_mask(struct device *, u64);
-
/*
* DMA errors are defined by all-bits-set in the DMA address.
*/
static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
- return dma_addr == ~0;
+ return dma_addr == DMA_ERROR_CODE;
}
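(A hedged usage sketch, editorial rather than from the patch: callers test the returned handle with dma_mapping_error() instead of comparing against a literal, so the error cookie stays private to the architecture.)

	dma_addr_t handle;

	handle = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;	/* e.g. no bounce buffer could be allocated */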
/*
{
}
+extern int dma_supported(struct device *dev, u64 mask);
+
/**
- * dma_alloc_coherent - allocate consistent memory for DMA
+ * arm_dma_alloc - allocate consistent memory for DMA
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @size: required memory size
* @handle: bus-specific DMA address
+ * @attrs: optional attributes that specify mapping properties
*
- * Allocate some uncached, unbuffered memory for a device for
- * performing DMA. This function allocates pages, and will
- * return the CPU-viewed address, and sets @handle to be the
- * device-viewed address.
+ * Allocate some memory for a device for performing DMA. This function
+ * allocates pages, and will return the CPU-viewed address, and sets @handle
+ * to be the device-viewed address.
*/
-extern void *dma_alloc_coherent(struct device *, size_t, dma_addr_t *, gfp_t);
+extern void *arm_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
+ gfp_t gfp, struct dma_attrs *attrs);
+
+#define dma_alloc_coherent(d, s, h, f) dma_alloc_attrs(d, s, h, f, NULL)
+
+static inline void *dma_alloc_attrs(struct device *dev, size_t size,
+ dma_addr_t *dma_handle, gfp_t flag,
+ struct dma_attrs *attrs)
+{
+ struct dma_map_ops *ops = get_dma_ops(dev);
+ void *cpu_addr;
+ BUG_ON(!ops);
+
+ cpu_addr = ops->alloc(dev, size, dma_handle, flag, attrs);
+ debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr);
+ return cpu_addr;
+}
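(A hedged usage sketch, editorial; `dev` and `size` are assumed to exist in the caller. Drivers that need explicit attributes call the new entry point directly, while dma_alloc_coherent() remains the attribute-free shorthand defined above.)

	DEFINE_DMA_ATTRS(attrs);
	dma_addr_t handle;
	void *cpu_addr;

	dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
	cpu_addr = dma_alloc_attrs(dev, size, &handle, GFP_KERNEL, &attrs);
	if (!cpu_addr)
		return -ENOMEM;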
/**
- * dma_free_coherent - free memory allocated by dma_alloc_coherent
+ * arm_dma_free - free memory allocated by arm_dma_alloc
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @size: size of memory originally requested in dma_alloc_coherent
* @cpu_addr: CPU-view address returned from dma_alloc_coherent
* @handle: device-view address returned from dma_alloc_coherent
+ * @attrs: optional attributes that specify mapping properties
*
* Free (and unmap) a DMA buffer previously allocated by
- * dma_alloc_coherent().
+ * arm_dma_alloc().
*
* References to memory and mappings associated with cpu_addr/handle
* during and after this call executing are illegal.
*/
-extern void dma_free_coherent(struct device *, size_t, void *, dma_addr_t);
+extern void arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
+ dma_addr_t handle, struct dma_attrs *attrs);
+
+#define dma_free_coherent(d, s, c, h) dma_free_attrs(d, s, c, h, NULL)
+
+static inline void dma_free_attrs(struct device *dev, size_t size,
+ void *cpu_addr, dma_addr_t dma_handle,
+ struct dma_attrs *attrs)
+{
+ struct dma_map_ops *ops = get_dma_ops(dev);
+ BUG_ON(!ops);
+
+ debug_dma_free_coherent(dev, size, cpu_addr, dma_handle);
+ ops->free(dev, size, cpu_addr, dma_handle, attrs);
+}
/**
- * dma_mmap_coherent - map a coherent DMA allocation into user space
+ * arm_dma_mmap - map a coherent DMA allocation into user space
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @vma: vm_area_struct describing requested user mapping
* @cpu_addr: kernel CPU-view address returned from dma_alloc_coherent
* @handle: device-view address returned from dma_alloc_coherent
* @size: size of memory originally requested in dma_alloc_coherent
+ * @attrs: optional attributes that specify mapping properties
*
* Map a coherent DMA buffer previously allocated by dma_alloc_coherent
* into user space. The coherent DMA buffer must not be freed by the
* driver until the user space mapping has been released.
*/
-int dma_mmap_coherent(struct device *, struct vm_area_struct *,
- void *, dma_addr_t, size_t);
+extern int arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+ void *cpu_addr, dma_addr_t dma_addr, size_t size,
+ struct dma_attrs *attrs);
+#define dma_mmap_coherent(d, v, c, h, s) dma_mmap_attrs(d, v, c, h, s, NULL)
-/**
- * dma_alloc_writecombine - allocate writecombining memory for DMA
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @size: required memory size
- * @handle: bus-specific DMA address
- *
- * Allocate some uncached, buffered memory for a device for
- * performing DMA. This function allocates pages, and will
- * return the CPU-viewed address, and sets @handle to be the
- * device-viewed address.
- */
-extern void *dma_alloc_writecombine(struct device *, size_t, dma_addr_t *,
- gfp_t);
+static inline int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
+ void *cpu_addr, dma_addr_t dma_addr,
+ size_t size, struct dma_attrs *attrs)
+{
+ struct dma_map_ops *ops = get_dma_ops(dev);
+ BUG_ON(!ops);
+ return ops->mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
+}
+
+static inline void *dma_alloc_writecombine(struct device *dev, size_t size,
+ dma_addr_t *dma_handle, gfp_t flag)
+{
+ DEFINE_DMA_ATTRS(attrs);
+ dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
+ return dma_alloc_attrs(dev, size, dma_handle, flag, &attrs);
+}
-#define dma_free_writecombine(dev,size,cpu_addr,handle) \
- dma_free_coherent(dev,size,cpu_addr,handle)
+static inline void dma_free_writecombine(struct device *dev, size_t size,
+ void *cpu_addr, dma_addr_t dma_handle)
+{
+ DEFINE_DMA_ATTRS(attrs);
+ dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
+ return dma_free_attrs(dev, size, cpu_addr, dma_handle, &attrs);
+}
-int dma_mmap_writecombine(struct device *, struct vm_area_struct *,
- void *, dma_addr_t, size_t);
+static inline int dma_mmap_writecombine(struct device *dev, struct vm_area_struct *vma,
+ void *cpu_addr, dma_addr_t dma_addr, size_t size)
+{
+ DEFINE_DMA_ATTRS(attrs);
+ dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
+ return dma_mmap_attrs(dev, vma, cpu_addr, dma_addr, size, &attrs);
+}
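(A hedged sketch of a typical caller, with all driver names hypothetical and not part of the patch. The legacy writecombine helpers above now simply set DMA_ATTR_WRITE_COMBINE and forward to the attrs-based calls, so existing drivers keep working unchanged.)

	/* in a hypothetical driver's mmap file operation */
	static int foo_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct foo_dev *fd = file->private_data;	/* hypothetical */

		return dma_mmap_writecombine(fd->dev, vma, fd->cpu_addr,
					     fd->dma_handle, fd->size);
	}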
/*
* This can be called during boot to increase the size of the consistent
* DMA region above it's default value of 2MB. It must be called before the
* memory allocator is initialised, i.e. before any core_initcall.
*/
-extern void __init init_consistent_dma_size(unsigned long size);
-
+static inline void init_consistent_dma_size(unsigned long size) { }
-#ifdef CONFIG_DMABOUNCE
/*
* For SA-1111, IXP425, and ADI systems the dma-mapping functions are "magic"
* and utilize bounce buffers as needed to work around limited DMA windows.
*/
extern void dmabounce_unregister_dev(struct device *);
-/*
- * The DMA API, implemented by dmabounce.c. See below for descriptions.
- */
-extern dma_addr_t __dma_map_page(struct device *, struct page *,
- unsigned long, size_t, enum dma_data_direction);
-extern void __dma_unmap_page(struct device *, dma_addr_t, size_t,
- enum dma_data_direction);
-
-/*
- * Private functions
- */
-int dmabounce_sync_for_cpu(struct device *, dma_addr_t, unsigned long,
- size_t, enum dma_data_direction);
-int dmabounce_sync_for_device(struct device *, dma_addr_t, unsigned long,
- size_t, enum dma_data_direction);
-#else
-static inline int dmabounce_sync_for_cpu(struct device *d, dma_addr_t addr,
- unsigned long offset, size_t size, enum dma_data_direction dir)
-{
- return 1;
-}
-static inline int dmabounce_sync_for_device(struct device *d, dma_addr_t addr,
- unsigned long offset, size_t size, enum dma_data_direction dir)
-{
- return 1;
-}
-
-
-static inline dma_addr_t __dma_map_page(struct device *dev, struct page *page,
- unsigned long offset, size_t size, enum dma_data_direction dir)
-{
- __dma_page_cpu_to_dev(page, offset, size, dir);
- return pfn_to_dma(dev, page_to_pfn(page)) + offset;
-}
-
-static inline void __dma_unmap_page(struct device *dev, dma_addr_t handle,
- size_t size, enum dma_data_direction dir)
-{
- __dma_page_dev_to_cpu(pfn_to_page(dma_to_pfn(dev, handle)),
- handle & ~PAGE_MASK, size, dir);
-}
-#endif /* CONFIG_DMABOUNCE */
-
-/**
- * dma_map_single - map a single buffer for streaming DMA
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @cpu_addr: CPU direct mapped address of buffer
- * @size: size of buffer to map
- * @dir: DMA transfer direction
- *
- * Ensure that any data held in the cache is appropriately discarded
- * or written back.
- *
- * The device owns this memory once this call has completed. The CPU
- * can regain ownership by calling dma_unmap_single() or
- * dma_sync_single_for_cpu().
- */
-static inline dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
- size_t size, enum dma_data_direction dir)
-{
- unsigned long offset;
- struct page *page;
- dma_addr_t addr;
-
- BUG_ON(!virt_addr_valid(cpu_addr));
- BUG_ON(!virt_addr_valid(cpu_addr + size - 1));
- BUG_ON(!valid_dma_direction(dir));
-
- page = virt_to_page(cpu_addr);
- offset = (unsigned long)cpu_addr & ~PAGE_MASK;
- addr = __dma_map_page(dev, page, offset, size, dir);
- debug_dma_map_page(dev, page, offset, size, dir, addr, true);
-
- return addr;
-}
-
-/**
- * dma_map_page - map a portion of a page for streaming DMA
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @page: page that buffer resides in
- * @offset: offset into page for start of buffer
- * @size: size of buffer to map
- * @dir: DMA transfer direction
- *
- * Ensure that any data held in the cache is appropriately discarded
- * or written back.
- *
- * The device owns this memory once this call has completed. The CPU
- * can regain ownership by calling dma_unmap_page().
- */
-static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
- unsigned long offset, size_t size, enum dma_data_direction dir)
-{
- dma_addr_t addr;
-
- BUG_ON(!valid_dma_direction(dir));
-
- addr = __dma_map_page(dev, page, offset, size, dir);
- debug_dma_map_page(dev, page, offset, size, dir, addr, false);
-
- return addr;
-}
-
-/**
- * dma_unmap_single - unmap a single buffer previously mapped
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @handle: DMA address of buffer
- * @size: size of buffer (same as passed to dma_map_single)
- * @dir: DMA transfer direction (same as passed to dma_map_single)
- *
- * Unmap a single streaming mode DMA translation. The handle and size
- * must match what was provided in the previous dma_map_single() call.
- * All other usages are undefined.
- *
- * After this call, reads by the CPU to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-static inline void dma_unmap_single(struct device *dev, dma_addr_t handle,
- size_t size, enum dma_data_direction dir)
-{
- debug_dma_unmap_page(dev, handle, size, dir, true);
- __dma_unmap_page(dev, handle, size, dir);
-}
-
-/**
- * dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @handle: DMA address of buffer
- * @size: size of buffer (same as passed to dma_map_page)
- * @dir: DMA transfer direction (same as passed to dma_map_page)
- *
- * Unmap a page streaming mode DMA translation. The handle and size
- * must match what was provided in the previous dma_map_page() call.
- * All other usages are undefined.
- *
- * After this call, reads by the CPU to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-static inline void dma_unmap_page(struct device *dev, dma_addr_t handle,
- size_t size, enum dma_data_direction dir)
-{
- debug_dma_unmap_page(dev, handle, size, dir, false);
- __dma_unmap_page(dev, handle, size, dir);
-}
-
-/**
- * dma_sync_single_range_for_cpu
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @handle: DMA address of buffer
- * @offset: offset of region to start sync
- * @size: size of region to sync
- * @dir: DMA transfer direction (same as passed to dma_map_single)
- *
- * Make physical memory consistent for a single streaming mode DMA
- * translation after a transfer.
- *
- * If you perform a dma_map_single() but wish to interrogate the
- * buffer using the cpu, yet do not wish to teardown the PCI dma
- * mapping, you must call this function before doing so. At the
- * next point you give the PCI dma address back to the card, you
- * must first the perform a dma_sync_for_device, and then the
- * device again owns the buffer.
- */
-static inline void dma_sync_single_range_for_cpu(struct device *dev,
- dma_addr_t handle, unsigned long offset, size_t size,
- enum dma_data_direction dir)
-{
- BUG_ON(!valid_dma_direction(dir));
-
- debug_dma_sync_single_for_cpu(dev, handle + offset, size, dir);
-
- if (!dmabounce_sync_for_cpu(dev, handle, offset, size, dir))
- return;
-
- __dma_single_dev_to_cpu(dma_to_virt(dev, handle) + offset, size, dir);
-}
-
-static inline void dma_sync_single_range_for_device(struct device *dev,
- dma_addr_t handle, unsigned long offset, size_t size,
- enum dma_data_direction dir)
-{
- BUG_ON(!valid_dma_direction(dir));
-
- debug_dma_sync_single_for_device(dev, handle + offset, size, dir);
-
- if (!dmabounce_sync_for_device(dev, handle, offset, size, dir))
- return;
-
- __dma_single_cpu_to_dev(dma_to_virt(dev, handle) + offset, size, dir);
-}
-
-static inline void dma_sync_single_for_cpu(struct device *dev,
- dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
- dma_sync_single_range_for_cpu(dev, handle, 0, size, dir);
-}
-
-static inline void dma_sync_single_for_device(struct device *dev,
- dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
- dma_sync_single_range_for_device(dev, handle, 0, size, dir);
-}
/*
* The scatter list versions of the above methods.
*/
-extern int dma_map_sg(struct device *, struct scatterlist *, int,
- enum dma_data_direction);
-extern void dma_unmap_sg(struct device *, struct scatterlist *, int,
+extern int arm_dma_map_sg(struct device *, struct scatterlist *, int,
+ enum dma_data_direction, struct dma_attrs *attrs);
+extern void arm_dma_unmap_sg(struct device *, struct scatterlist *, int,
+ enum dma_data_direction, struct dma_attrs *attrs);
+extern void arm_dma_sync_sg_for_cpu(struct device *, struct scatterlist *, int,
enum dma_data_direction);
-extern void dma_sync_sg_for_cpu(struct device *, struct scatterlist *, int,
+extern void arm_dma_sync_sg_for_device(struct device *, struct scatterlist *, int,
enum dma_data_direction);
-extern void dma_sync_sg_for_device(struct device *, struct scatterlist *, int,
- enum dma_data_direction);
-
+extern int arm_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t dma_addr, size_t size,
+ struct dma_attrs *attrs);
#endif /* __KERNEL__ */
#endif
#define MT_MEMORY_DTCM 12
#define MT_MEMORY_ITCM 13
#define MT_MEMORY_SO 14
+#define MT_MEMORY_DMA_READY 15
#ifdef CONFIG_MMU
extern void iotable_init(struct map_desc *, int);
extern void paging_init(struct machine_desc *desc);
extern void sanity_check_meminfo(void);
extern void reboot_setup(char *str);
+extern void setup_dma_zone(struct machine_desc *desc);
unsigned int processor_id;
EXPORT_SYMBOL(processor_id);
machine_desc = mdesc;
machine_name = mdesc->name;
-#ifdef CONFIG_ZONE_DMA
- if (mdesc->dma_zone_size) {
- extern unsigned long arm_dma_zone_size;
- arm_dma_zone_size = mdesc->dma_zone_size;
- }
-#endif
+ setup_dma_zone(mdesc);
+
if (mdesc->restart_mode)
reboot_setup(&mdesc->restart_mode);
return regs->ARM_r0;
}
-static inline void
+static inline int
do_cache_op(unsigned long start, unsigned long end, int flags)
{
struct mm_struct *mm = current->active_mm;
struct vm_area_struct *vma;
if (end < start || flags)
- return;
+ return -EINVAL;
down_read(&mm->mmap_sem);
vma = find_vma(mm, start);
if (end > vma->vm_end)
end = vma->vm_end;
- flush_cache_user_range(vma, start, end);
+ up_read(&mm->mmap_sem);
+ return flush_cache_user_range(start, end);
}
up_read(&mm->mmap_sem);
+ return -EINVAL;
}
/*
* the specified region).
*/
case NR(cacheflush):
- do_cache_op(regs->ARM_r0, regs->ARM_r1, regs->ARM_r2);
- return 0;
+ return do_cache_op(regs->ARM_r0, regs->ARM_r1, regs->ARM_r2);
case NR(usr26):
if (!(elf_hwcap & HWCAP_26BIT))
bool "SAMSUNG EXYNOS5250"
default y
depends on ARCH_EXYNOS5
+ select SAMSUNG_DMADEV
+ select ARM_CPU_SUSPEND if (PM || CPU_IDLE)
+ select S5P_PM if PM
+ select S5P_SLEEP if (PM || CPU_IDLE)
+
help
Enable EXYNOS5250 SoC support
help
Use MCT (Multi Core Timer) as kernel timers
-config EXYNOS4_DEV_DMA
+config EXYNOS_DEV_DMA
bool
help
Compile in amba device definitions for DMA controller
help
Common setup code for FIMD0.
-config EXYNOS4_DEV_SYSMMU
+config EXYNOS4_SETUP_DP
+ bool
+ help
+ Common setup code for DP.
+
+config EXYNOS4_SETUP_FIMD
bool
help
- Common setup code for SYSTEM MMU in EXYNOS4
+ Common setup code for FIMD.
+
+config EXYNOS_DEV_SYSMMU
+ bool
+ help
+ Common setup code for SYSTEM MMU in EXYNOS
config EXYNOS4_DEV_DWMCI
bool
help
Common setup code for SPI GPIO configurations.
+config EXYNOS4_SETUP_MIPI_DSIM
+ bool
+ depends on FB_MIPI_DSIM
+ help
+ Common setup code for MIPI_DSIM to support mainline style fimd.
+
+config EXYNOS4_SETUP_TVOUT
+ bool
+ help
+ Common setup code for TVOUT
+
# machine support
if ARCH_EXYNOS4
select S3C_DEV_HSMMC2
select S3C_DEV_HSMMC3
select EXYNOS4_DEV_AHCI
- select EXYNOS4_DEV_DMA
+ select EXYNOS_DEV_DMA
select EXYNOS4_DEV_SYSMMU
select EXYNOS4_SETUP_SDHCI
help
select SAMSUNG_DEV_BACKLIGHT
select SAMSUNG_DEV_KEYPAD
select SAMSUNG_DEV_PWM
- select EXYNOS4_DEV_DMA
+ select EXYNOS_DEV_DMA
select EXYNOS4_SETUP_I2C1
select EXYNOS4_SETUP_I2C3
select EXYNOS4_SETUP_I2C7
select SOC_EXYNOS5250
select USE_OF
select ARM_AMBA
+ select EXYNOS_DEV_SYSMMU
+ select EXYNOS4_SETUP_MIPI_DSIM
+ select EXYNOS4_SETUP_DP
+ select EXYNOS4_SETUP_USB_PHY
+ select SAMSUNG_DEV_BACKLIGHT
+ select SAMSUNG_DEV_PWM
+ select EXYNOS4_SETUP_TVOUT
help
Machine support for Samsung Exynos4 machine with device tree enabled.
Select this if a fdt blob is available for the EXYNOS4 SoC based board.
obj-$(CONFIG_PM_GENERIC_DOMAINS) += pm_domains.o
obj-$(CONFIG_CPU_IDLE) += cpuidle.o
-obj-$(CONFIG_ARCH_EXYNOS4) += pmu.o
+obj-$(CONFIG_ARCH_EXYNOS) += pmu.o
obj-$(CONFIG_SMP) += platsmp.o headsmp.o
obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o
+obj-$(CONFIG_ARCH_EXYNOS) += clock-audss.o
+
# machine support
obj-$(CONFIG_MACH_SMDKC210) += mach-smdkv310.o
obj-y += dev-uart.o
obj-$(CONFIG_ARCH_EXYNOS4) += dev-audio.o
obj-$(CONFIG_EXYNOS4_DEV_AHCI) += dev-ahci.o
-obj-$(CONFIG_EXYNOS4_DEV_SYSMMU) += dev-sysmmu.o
+obj-$(CONFIG_EXYNOS_DEV_SYSMMU) += dev-sysmmu.o
obj-$(CONFIG_EXYNOS4_DEV_DWMCI) += dev-dwmci.o
-obj-$(CONFIG_EXYNOS4_DEV_DMA) += dma.o
+obj-$(CONFIG_EXYNOS_DEV_DMA) += dma.o
obj-$(CONFIG_EXYNOS4_DEV_USB_OHCI) += dev-ohci.o
obj-$(CONFIG_ARCH_EXYNOS) += setup-i2c0.o
obj-$(CONFIG_EXYNOS4_SETUP_FIMC) += setup-fimc.o
obj-$(CONFIG_EXYNOS4_SETUP_FIMD0) += setup-fimd0.o
+obj-$(CONFIG_EXYNOS4_SETUP_DP) += setup-dp.o
+obj-$(CONFIG_EXYNOS4_SETUP_FIMD) += setup-fimd.o
+obj-$(CONFIG_EXYNOS4_SETUP_MIPI_DSIM) += setup-mipidsim.o
obj-$(CONFIG_EXYNOS4_SETUP_I2C1) += setup-i2c1.o
obj-$(CONFIG_EXYNOS4_SETUP_I2C2) += setup-i2c2.o
obj-$(CONFIG_EXYNOS4_SETUP_I2C3) += setup-i2c3.o
obj-$(CONFIG_EXYNOS4_SETUP_SDHCI_GPIO) += setup-sdhci-gpio.o
obj-$(CONFIG_EXYNOS4_SETUP_USB_PHY) += setup-usb-phy.o
obj-$(CONFIG_EXYNOS4_SETUP_SPI) += setup-spi.o
+obj-$(CONFIG_EXYNOS4_SETUP_TVOUT) += setup-tvout.o
zreladdr-y += 0x40008000
params_phys-y := 0x40000100
+
+dtb-$(CONFIG_MACH_EXYNOS4_DT) += exynos4210-origen.dtb exynos4210-smdkv310.dtb
+dtb-$(CONFIG_MACH_EXYNOS5_DT) += exynos5250-smdk5250.dtb exynos5250-daisy.dtb exynos5250-snow.dtb
--- /dev/null
+/*
+ * Copyright (c) 2012 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * Clock support for EXYNOS Audio Subsystem
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#include <linux/kernel.h>
+#include <linux/err.h>
+
+#include <plat/clock.h>
+#include <plat/s5p-clock.h>
+#include <plat/clock-clksrc.h>
+
+#include <mach/map.h>
+#include <mach/regs-audss.h>
+
+static int exynos_clk_audss_ctrl(struct clk *clk, int enable)
+{
+ return s5p_gatectrl(EXYNOS_CLKGATE_AUDSS, clk, enable);
+}
+
+static struct clk *exynos_clkset_mout_audss_list[] = {
+ &clk_ext_xtal_mux,
+ &clk_fout_epll,
+};
+
+static struct clksrc_sources clkset_mout_audss = {
+ .sources = exynos_clkset_mout_audss_list,
+ .nr_sources = ARRAY_SIZE(exynos_clkset_mout_audss_list),
+};
+
+static struct clksrc_clk exynos_clk_mout_audss = {
+ .clk = {
+ .name = "mout_audss",
+ },
+ .sources = &clkset_mout_audss,
+ .reg_src = { .reg = EXYNOS_CLKSRC_AUDSS, .shift = 0, .size = 1 },
+};
+
+static struct clksrc_clk exynos_clk_dout_audss_srp = {
+ .clk = {
+ .name = "dout_srp",
+ .parent = &exynos_clk_mout_audss.clk,
+ },
+ .reg_div = { .reg = EXYNOS_CLKDIV_AUDSS, .shift = 0, .size = 4 },
+};
+
+static struct clksrc_clk exynos_clk_dout_audss_bus = {
+ .clk = {
+ .name = "dout_bus",
+ .parent = &exynos_clk_dout_audss_srp.clk,
+ },
+ .reg_div = { .reg = EXYNOS_CLKDIV_AUDSS, .shift = 4, .size = 4 },
+};
+
+static struct clksrc_clk exynos_clk_dout_audss_i2s = {
+ .clk = {
+ .name = "dout_i2s",
+ .parent = &exynos_clk_mout_audss.clk,
+ },
+ .reg_div = { .reg = EXYNOS_CLKDIV_AUDSS, .shift = 8, .size = 4 },
+};
+
+/* Clock initialization code */
+static struct clksrc_clk *exynos_audss_clks[] = {
+ &exynos_clk_mout_audss,
+ &exynos_clk_dout_audss_srp,
+ &exynos_clk_dout_audss_bus,
+ &exynos_clk_dout_audss_i2s,
+};
+
+static struct clk exynos_init_audss_clocks[] = {
+ {
+ .name = "srpclk",
+ .parent = &exynos_clk_dout_audss_srp.clk,
+ .enable = exynos_clk_audss_ctrl,
+ .ctrlbit = EXYNOS_AUDSS_CLKGATE_RP
+ | EXYNOS_AUDSS_CLKGATE_UART
+ | EXYNOS_AUDSS_CLKGATE_TIMER,
+ }, {
+ .name = "iis",
+ .devname = "samsung-i2s.0",
+ .enable = exynos_clk_audss_ctrl,
+ .ctrlbit = EXYNOS_AUDSS_CLKGATE_I2SSPECIAL
+ | EXYNOS_AUDSS_CLKGATE_I2SBUS,
+ }, {
+ .name = "iis",
+ .devname = "samsung-i2s.4",
+ .enable = exynos_clk_audss_ctrl,
+ .ctrlbit = EXYNOS_AUDSS_CLKGATE_I2SSPECIAL
+ | EXYNOS_AUDSS_CLKGATE_I2SBUS,
+ }, {
+ .name = "pcm",
+ .devname = "samsung-pcm.0",
+ .enable = exynos_clk_audss_ctrl,
+ .ctrlbit = EXYNOS_AUDSS_CLKGATE_I2SSPECIAL
+ | EXYNOS_AUDSS_CLKGATE_I2SBUS,
+ }, {
+ .name = "pcm",
+ .devname = "samsung-pcm.4",
+ .enable = exynos_clk_audss_ctrl,
+ .ctrlbit = EXYNOS_AUDSS_CLKGATE_I2SSPECIAL
+ | EXYNOS_AUDSS_CLKGATE_I2SBUS,
+ },
+};
+
+void __init exynos_register_audss_clocks(void)
+{
+ int ptr;
+
+ for (ptr = 0; ptr < ARRAY_SIZE(exynos_audss_clks); ptr++)
+ s3c_register_clksrc(exynos_audss_clks[ptr], 1);
+
+ s3c_register_clocks(exynos_init_audss_clocks,
+ ARRAY_SIZE(exynos_init_audss_clocks));
+ s3c_disable_clocks(exynos_init_audss_clocks,
+ ARRAY_SIZE(exynos_init_audss_clocks));
+}
.ctrlbit = (1 << 13),
}, {
.name = "spi",
- .devname = "s3c64xx-spi.0",
+ .devname = "exynos4210-spi.0",
.enable = exynos4_clk_ip_peril_ctrl,
.ctrlbit = (1 << 16),
}, {
.name = "spi",
- .devname = "s3c64xx-spi.1",
+ .devname = "exynos4210-spi.1",
.enable = exynos4_clk_ip_peril_ctrl,
.ctrlbit = (1 << 17),
}, {
.name = "spi",
- .devname = "s3c64xx-spi.2",
+ .devname = "exynos4210-spi.2",
.enable = exynos4_clk_ip_peril_ctrl,
.ctrlbit = (1 << 18),
}, {
.reg_div = { .reg = EXYNOS4_CLKDIV_FSYS2, .shift = 24, .size = 8 },
};
+static struct clksrc_clk exynos4_clk_mdout_spi0 = {
+ .clk = {
+ .name = "sclk_spi_mdout",
+ .devname = "exynos4210-spi.0",
+ },
+ .sources = &exynos4_clkset_group,
+ .reg_src = { .reg = EXYNOS4_CLKSRC_PERIL1, .shift = 16, .size = 4 },
+ .reg_div = { .reg = EXYNOS4_CLKDIV_PERIL1, .shift = 0, .size = 4 },
+};
+
static struct clksrc_clk exynos4_clk_sclk_spi0 = {
.clk = {
.name = "sclk_spi",
- .devname = "s3c64xx-spi.0",
+ .devname = "exynos4210-spi.0",
+ .parent = &exynos4_clk_mdout_spi0.clk,
.enable = exynos4_clksrc_mask_peril1_ctrl,
.ctrlbit = (1 << 16),
},
+ .reg_div = { .reg = EXYNOS4_CLKDIV_PERIL1, .shift = 8, .size = 8 },
+};
+
+static struct clksrc_clk exynos4_clk_mdout_spi1 = {
+ .clk = {
+ .name = "sclk_spi_mdout",
+ .devname = "exynos4210-spi.1",
+ },
.sources = &exynos4_clkset_group,
- .reg_src = { .reg = EXYNOS4_CLKSRC_PERIL1, .shift = 16, .size = 4 },
- .reg_div = { .reg = EXYNOS4_CLKDIV_PERIL1, .shift = 0, .size = 4 },
+ .reg_src = { .reg = EXYNOS4_CLKSRC_PERIL1, .shift = 20, .size = 4 },
+ .reg_div = { .reg = EXYNOS4_CLKDIV_PERIL1, .shift = 16, .size = 4 },
};
static struct clksrc_clk exynos4_clk_sclk_spi1 = {
.clk = {
.name = "sclk_spi",
- .devname = "s3c64xx-spi.1",
+ .devname = "exynos4210-spi.1",
+ .parent = &exynos4_clk_mdout_spi1.clk,
.enable = exynos4_clksrc_mask_peril1_ctrl,
.ctrlbit = (1 << 20),
},
+ .reg_div = { .reg = EXYNOS4_CLKDIV_PERIL1, .shift = 24, .size = 8 },
+};
+
+static struct clksrc_clk exynos4_clk_mdout_spi2 = {
+ .clk = {
+ .name = "sclk_spi_mdout",
+ .devname = "exynos4210-spi.2",
+ },
.sources = &exynos4_clkset_group,
- .reg_src = { .reg = EXYNOS4_CLKSRC_PERIL1, .shift = 20, .size = 4 },
- .reg_div = { .reg = EXYNOS4_CLKDIV_PERIL1, .shift = 16, .size = 4 },
+ .reg_src = { .reg = EXYNOS4_CLKSRC_PERIL1, .shift = 24, .size = 4 },
+ .reg_div = { .reg = EXYNOS4_CLKDIV_PERIL2, .shift = 0, .size = 4 },
};
static struct clksrc_clk exynos4_clk_sclk_spi2 = {
.clk = {
.name = "sclk_spi",
- .devname = "s3c64xx-spi.2",
+ .devname = "exynos4210-spi.2",
+ .parent = &exynos4_clk_mdout_spi2.clk,
.enable = exynos4_clksrc_mask_peril1_ctrl,
.ctrlbit = (1 << 24),
},
- .sources = &exynos4_clkset_group,
- .reg_src = { .reg = EXYNOS4_CLKSRC_PERIL1, .shift = 24, .size = 4 },
- .reg_div = { .reg = EXYNOS4_CLKDIV_PERIL2, .shift = 0, .size = 4 },
+ .reg_div = { .reg = EXYNOS4_CLKDIV_PERIL2, .shift = 8, .size = 8 },
};
/* Clock initialization code */
&exynos4_clk_sclk_spi0,
&exynos4_clk_sclk_spi1,
&exynos4_clk_sclk_spi2,
-
+ &exynos4_clk_mdout_spi0,
+ &exynos4_clk_mdout_spi1,
+ &exynos4_clk_mdout_spi2,
};
static struct clk_lookup exynos4_clk_lookup[] = {
CLKDEV_INIT("dma-pl330.0", "apb_pclk", &exynos4_clk_pdma0),
CLKDEV_INIT("dma-pl330.1", "apb_pclk", &exynos4_clk_pdma1),
CLKDEV_INIT("dma-pl330.2", "apb_pclk", &exynos4_clk_mdma1),
- CLKDEV_INIT("s3c64xx-spi.0", "spi_busclk0", &exynos4_clk_sclk_spi0.clk),
- CLKDEV_INIT("s3c64xx-spi.1", "spi_busclk0", &exynos4_clk_sclk_spi1.clk),
- CLKDEV_INIT("s3c64xx-spi.2", "spi_busclk0", &exynos4_clk_sclk_spi2.clk),
+ CLKDEV_INIT("exynos4210-spi.0", "spi_busclk0", &exynos4_clk_sclk_spi0.clk),
+ CLKDEV_INIT("exynos4210-spi.1", "spi_busclk0", &exynos4_clk_sclk_spi1.clk),
+ CLKDEV_INIT("exynos4210-spi.2", "spi_busclk0", &exynos4_clk_sclk_spi2.clk),
};
static int xtal_rate;
#include <plat/pll.h>
#include <plat/s5p-clock.h>
#include <plat/clock-clksrc.h>
+#include <plat/devs.h>
#include <plat/pm.h>
+#include <plat/cpu.h>
#include <mach/map.h>
#include <mach/regs-clock.h>
+#include <mach/regs-audss.h>
#include <mach/sysmmu.h>
#include "common.h"
#ifdef CONFIG_PM_SLEEP
static struct sleep_save exynos5_clock_save[] = {
- /* will be implemented */
+ SAVE_ITEM(EXYNOS5_CLKSRC_MASK_TOP),
+ SAVE_ITEM(EXYNOS5_CLKSRC_MASK_GSCL),
+ SAVE_ITEM(EXYNOS5_CLKSRC_MASK_DISP1_0),
+ SAVE_ITEM(EXYNOS5_CLKSRC_MASK_FSYS),
+ SAVE_ITEM(EXYNOS5_CLKSRC_MASK_MAUDIO),
+ SAVE_ITEM(EXYNOS5_CLKSRC_MASK_PERIC0),
+ SAVE_ITEM(EXYNOS5_CLKSRC_MASK_PERIC1),
+ SAVE_ITEM(EXYNOS5_CLKGATE_IP_GSCL),
+ SAVE_ITEM(EXYNOS5_CLKGATE_IP_DISP1),
+ SAVE_ITEM(EXYNOS5_CLKGATE_IP_MFC),
+ SAVE_ITEM(EXYNOS5_CLKGATE_IP_G3D),
+ SAVE_ITEM(EXYNOS5_CLKGATE_IP_GEN),
+ SAVE_ITEM(EXYNOS5_CLKGATE_IP_FSYS),
+ SAVE_ITEM(EXYNOS5_CLKGATE_IP_GPS),
+ SAVE_ITEM(EXYNOS5_CLKGATE_IP_PERIC),
+ SAVE_ITEM(EXYNOS5_CLKGATE_IP_PERIS),
+ SAVE_ITEM(EXYNOS5_CLKGATE_BLOCK),
+ SAVE_ITEM(EXYNOS5_CLKDIV_TOP0),
+ SAVE_ITEM(EXYNOS5_CLKDIV_TOP1),
+ SAVE_ITEM(EXYNOS5_CLKDIV_GSCL),
+ SAVE_ITEM(EXYNOS5_CLKDIV_DISP1_0),
+ SAVE_ITEM(EXYNOS5_CLKDIV_GEN),
+ SAVE_ITEM(EXYNOS5_CLKDIV_MAUDIO),
+ SAVE_ITEM(EXYNOS5_CLKDIV_FSYS0),
+ SAVE_ITEM(EXYNOS5_CLKDIV_FSYS1),
+ SAVE_ITEM(EXYNOS5_CLKDIV_FSYS2),
+ SAVE_ITEM(EXYNOS5_CLKDIV_FSYS3),
+ SAVE_ITEM(EXYNOS5_CLKDIV_PERIC0),
+ SAVE_ITEM(EXYNOS5_CLKDIV_PERIC1),
+ SAVE_ITEM(EXYNOS5_CLKDIV_PERIC2),
+ SAVE_ITEM(EXYNOS5_CLKDIV_PERIC3),
+ SAVE_ITEM(EXYNOS5_CLKDIV_PERIC4),
+ SAVE_ITEM(EXYNOS5_CLKDIV_PERIC5),
+ SAVE_ITEM(EXYNOS5_SCLK_DIV_ISP),
+ SAVE_ITEM(EXYNOS5_CLKSRC_TOP0),
+ SAVE_ITEM(EXYNOS5_CLKSRC_TOP1),
+ SAVE_ITEM(EXYNOS5_CLKSRC_TOP2),
+ SAVE_ITEM(EXYNOS5_CLKSRC_TOP3),
+ SAVE_ITEM(EXYNOS5_CLKSRC_GSCL),
+ SAVE_ITEM(EXYNOS5_CLKSRC_DISP1_0),
+ SAVE_ITEM(EXYNOS5_CLKSRC_MAUDIO),
+ SAVE_ITEM(EXYNOS5_CLKSRC_FSYS),
+ SAVE_ITEM(EXYNOS5_CLKSRC_PERIC0),
+ SAVE_ITEM(EXYNOS5_CLKSRC_PERIC1),
+ SAVE_ITEM(EXYNOS5_SCLK_SRC_ISP),
+ SAVE_ITEM(EXYNOS5_EPLL_CON0),
+ SAVE_ITEM(EXYNOS5_EPLL_CON1),
+ SAVE_ITEM(EXYNOS5_EPLL_CON2),
+ SAVE_ITEM(EXYNOS5_VPLL_CON0),
+ SAVE_ITEM(EXYNOS5_VPLL_CON1),
+ SAVE_ITEM(EXYNOS5_VPLL_CON2),
};
#endif
.rate = 48000000,
};
+struct clksrc_clk exynos5_clk_audiocdclk0 = {
+ .clk = {
+ .name = "audiocdclk",
+ .rate = 16934400,
+ },
+};
+
static int exynos5_clksrc_mask_top_ctrl(struct clk *clk, int enable)
{
return s5p_gatectrl(EXYNOS5_CLKSRC_MASK_TOP, clk, enable);
return s5p_gatectrl(EXYNOS5_CLKSRC_MASK_FSYS, clk, enable);
}
+static int exynos5_clk_ip_gscl_ctrl(struct clk *clk, int enable)
+{
+ return s5p_gatectrl(EXYNOS5_CLKGATE_IP_GSCL, clk, enable);
+}
+
static int exynos5_clksrc_mask_gscl_ctrl(struct clk *clk, int enable)
{
return s5p_gatectrl(EXYNOS5_CLKSRC_MASK_GSCL, clk, enable);
return s5p_gatectrl(EXYNOS5_CLKGATE_IP_GPS, clk, enable);
}
+static int exynos5_clksrc_mask_maudio_ctrl(struct clk *clk, int enable)
+{
+ return s5p_gatectrl(EXYNOS5_CLKSRC_MASK_MAUDIO, clk, enable);
+}
+
+static int exynos5_clk_audss_ctrl(struct clk *clk, int enable)
+{
+ return s5p_gatectrl(EXYNOS_CLKGATE_AUDSS, clk, enable);
+}
+
static int exynos5_clk_ip_mfc_ctrl(struct clk *clk, int enable)
{
return s5p_gatectrl(EXYNOS5_CLKGATE_IP_MFC, clk, enable);
return s5p_gatectrl(EXYNOS5_CLKGATE_IP_PERIS, clk, enable);
}
+static int exynos5_clksrc_mask_peric1_ctrl(struct clk *clk, int enable)
+{
+ return s5p_gatectrl(EXYNOS5_CLKSRC_MASK_PERIC1, clk, enable);
+}
+
+static int exynos5_clk_ip_acp_ctrl(struct clk *clk, int enable)
+{
+ return s5p_gatectrl(EXYNOS5_CLKGATE_IP_ACP, clk, enable);
+}
+
+static int exynos5_clk_ip_isp0_ctrl(struct clk *clk, int enable)
+{
+ return s5p_gatectrl(EXYNOS5_CLKGATE_IP_ISP0, clk, enable);
+}
+
+static int exynos5_clk_ip_isp1_ctrl(struct clk *clk, int enable)
+{
+ return s5p_gatectrl(EXYNOS5_CLKGATE_IP_ISP1, clk, enable);
+}
+
+static int exynos5_clk_hdmiphy_ctrl(struct clk *clk, int enable)
+{
+ return s5p_gatectrl(S5P_HDMI_PHY_CONTROL, clk, enable);
+}
+
/* Core list of CMU_CPU side */
static struct clksrc_clk exynos5_clk_mout_apll = {
.reg_div = { .reg = EXYNOS5_CLKDIV_CPU0, .shift = 24, .size = 3 },
};
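+/*
+ * BPLL FOUT can optionally be halved: the mout_bpll_fout mux
+ * (PLL_DIV2_SEL) selects between fout_bpll_div2 and fout_bpll before
+ * the result feeds the BPLL source mux.
+ */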
+static struct clk clk_fout_bpll_div2 = {
+ .name = "fout_bpll_div2",
+};
+
+static struct clk *exynos5_clkset_mout_bpll_fout_list[] = {
+ [0] = &clk_fout_bpll_div2,
+ [1] = &clk_fout_bpll,
+};
+
+static struct clksrc_sources exynos5_clkset_mout_bpll_fout = {
+ .sources = exynos5_clkset_mout_bpll_fout_list,
+ .nr_sources = ARRAY_SIZE(exynos5_clkset_mout_bpll_fout_list),
+};
+
+static struct clksrc_clk exynos5_clk_mout_bpll_fout = {
+ .clk = {
+ .name = "mout_bpll_fout",
+ },
+ .sources = &exynos5_clkset_mout_bpll_fout,
+ .reg_src = { .reg = EXYNOS5_PLL_DIV2_SEL, .shift = 0, .size = 1 },
+};
+
+/* Possible clock sources for BPLL Mux */
+static struct clk *clk_src_bpll_list[] = {
+ [0] = &clk_fin_bpll,
+ [1] = &exynos5_clk_mout_bpll_fout.clk,
+};
+
+struct clksrc_sources clk_src_bpll = {
+ .sources = clk_src_bpll_list,
+ .nr_sources = ARRAY_SIZE(clk_src_bpll_list),
+};
+
static struct clksrc_clk exynos5_clk_mout_bpll = {
.clk = {
.name = "mout_bpll",
.reg_src = { .reg = EXYNOS5_CLKSRC_TOP2, .shift = 12, .size = 1 },
};
+static struct clk clk_fout_mpll_div2 = {
+ .name = "fout_mpll_div2",
+};
+
+static struct clk *exynos5_clkset_mout_mpll_fout_list[] = {
+ [0] = &clk_fout_mpll_div2,
+ [1] = &clk_fout_mpll,
+};
+
+static struct clksrc_sources exynos5_clkset_mout_mpll_fout = {
+ .sources = exynos5_clkset_mout_mpll_fout_list,
+ .nr_sources = ARRAY_SIZE(exynos5_clkset_mout_mpll_fout_list),
+};
+
+static struct clksrc_clk exynos5_clk_mout_mpll_fout = {
+ .clk = {
+ .name = "mout_mpll_fout",
+ },
+ .sources = &exynos5_clkset_mout_mpll_fout,
+ .reg_src = { .reg = EXYNOS5_PLL_DIV2_SEL, .shift = 4, .size = 1 },
+};
+
+static struct clk *exynos5_clk_src_mpll_list[] = {
+ [0] = &clk_fin_mpll,
+ [1] = &exynos5_clk_mout_mpll_fout.clk,
+};
+
+struct clksrc_sources exynos5_clk_src_mpll = {
+ .sources = exynos5_clk_src_mpll_list,
+ .nr_sources = ARRAY_SIZE(exynos5_clk_src_mpll_list),
+};
+
struct clksrc_clk exynos5_clk_mout_mpll = {
.clk = {
.name = "mout_mpll",
},
- .sources = &clk_src_mpll,
+ .sources = &exynos5_clk_src_mpll,
.reg_src = { .reg = EXYNOS5_CLKSRC_CORE1, .shift = 8, .size = 1 },
};
.enable = exynos5_clk_ip_peris_ctrl,
.ctrlbit = (1 << 20),
}, {
- .name = "hsmmc",
- .devname = "exynos4-sdhci.0",
+ .name = "watchdog",
+ .parent = &exynos5_clk_aclk_66.clk,
+ .enable = exynos5_clk_ip_peris_ctrl,
+ .ctrlbit = (1 << 19),
+ }, {
+ .name = "biu",
+ .devname = "dw_mmc.0",
.parent = &exynos5_clk_aclk_200.clk,
.enable = exynos5_clk_ip_fsys_ctrl,
.ctrlbit = (1 << 12),
}, {
- .name = "hsmmc",
- .devname = "exynos4-sdhci.1",
+ .name = "biu",
+ .devname = "dw_mmc.1",
.parent = &exynos5_clk_aclk_200.clk,
.enable = exynos5_clk_ip_fsys_ctrl,
.ctrlbit = (1 << 13),
}, {
- .name = "hsmmc",
- .devname = "exynos4-sdhci.2",
+ .name = "biu",
+ .devname = "dw_mmc.2",
.parent = &exynos5_clk_aclk_200.clk,
.enable = exynos5_clk_ip_fsys_ctrl,
.ctrlbit = (1 << 14),
}, {
- .name = "hsmmc",
- .devname = "exynos4-sdhci.3",
+ .name = "biu",
+ .devname = "dw_mmc.3",
.parent = &exynos5_clk_aclk_200.clk,
.enable = exynos5_clk_ip_fsys_ctrl,
.ctrlbit = (1 << 15),
- }, {
- .name = "dwmci",
- .parent = &exynos5_clk_aclk_200.clk,
- .enable = exynos5_clk_ip_fsys_ctrl,
- .ctrlbit = (1 << 16),
}, {
.name = "sata",
.devname = "ahci",
.ctrlbit = (1 << 25),
}, {
.name = "mfc",
- .devname = "s5p-mfc",
+ .devname = "s5p-mfc-v6",
.enable = exynos5_clk_ip_mfc_ctrl,
.ctrlbit = (1 << 0),
}, {
.name = "hdmi",
- .devname = "exynos4-hdmi",
+ .devname = "exynos5-hdmi",
.enable = exynos5_clk_ip_disp1_ctrl,
.ctrlbit = (1 << 6),
+ }, {
+ .name = "hdmiphy",
+ .devname = "exynos5-hdmi",
+ .enable = exynos5_clk_hdmiphy_ctrl,
+ .ctrlbit = (1 << 0),
}, {
.name = "mixer",
.devname = "s5p-mixer",
.name = "dsim0",
.enable = exynos5_clk_ip_disp1_ctrl,
.ctrlbit = (1 << 3),
+ }, {
+ .name = "fimd",
+ .devname = "exynos5-fb",
+ .enable = exynos5_clk_ip_disp1_ctrl,
+ .ctrlbit = (1 << 0),
}, {
.name = "iis",
.devname = "samsung-i2s.1",
.name = "usbhost",
.enable = exynos5_clk_ip_fsys_ctrl ,
.ctrlbit = (1 << 18),
+ }, {
+ .name = "usbdrd30",
+ .parent = &exynos5_clk_aclk_200.clk,
+ .enable = exynos5_clk_ip_fsys_ctrl,
+ .ctrlbit = (1 << 19),
}, {
.name = "usbotg",
.enable = exynos5_clk_ip_fsys_ctrl,
.ctrlbit = (1 << 13),
}, {
.name = "i2c",
- .devname = "s3c2440-hdmiphy-i2c",
.parent = &exynos5_clk_aclk_66.clk,
.enable = exynos5_clk_ip_peric_ctrl,
.ctrlbit = (1 << 14),
+ }, {
+ .name = "gscl",
+ .devname = "exynos-gsc.0",
+ .enable = exynos5_clk_ip_gscl_ctrl,
+ .ctrlbit = (1 << 0),
+ }, {
+ .name = "gscl",
+ .devname = "exynos-gsc.1",
+ .enable = exynos5_clk_ip_gscl_ctrl,
+ .ctrlbit = (1 << 1),
+ }, {
+ .name = "gscl",
+ .devname = "exynos-gsc.2",
+ .enable = exynos5_clk_ip_gscl_ctrl,
+ .ctrlbit = (1 << 2),
+ }, {
+ .name = "gscl",
+ .devname = "exynos-gsc.3",
+ .enable = exynos5_clk_ip_gscl_ctrl,
+ .ctrlbit = (1 << 3),
+ }, {
+ .name = "fimg2d",
+ .enable = exynos5_clk_ip_acp_ctrl,
+ .ctrlbit = (1 << 3),
+ }, {
+ .name = "dp",
+ .devname = "s5p-dp",
+ .enable = exynos5_clk_ip_disp1_ctrl,
+ .ctrlbit = (1 << 4),
+ }, {
+ .name = SYSMMU_CLOCK_NAME,
+ .devname = SYSMMU_CLOCK_DEVNAME(mfc_l, 3),
+ .enable = &exynos5_clk_ip_mfc_ctrl,
+ /* The MFC_L and MFC_R clock assignments changed in MFC v6.5 */
+ .ctrlbit = (1 << 2),
+ .parent = &exynos5_init_clocks_off[10],
+ }, {
+ .name = SYSMMU_CLOCK_NAME,
+ .devname = SYSMMU_CLOCK_DEVNAME(mfc_r, 4),
+ .enable = &exynos5_clk_ip_mfc_ctrl,
+ .ctrlbit = (1 << 1),
+ .parent = &exynos5_init_clocks_off[10],
+ }, {
+ .name = SYSMMU_CLOCK_NAME,
+ .devname = SYSMMU_CLOCK_DEVNAME(tv, 28),
+ .enable = &exynos5_clk_ip_disp1_ctrl,
+ .ctrlbit = (1 << 9),
+ .parent = &exynos5_init_clocks_off[13],
+ }, {
+ .name = SYSMMU_CLOCK_NAME,
+ .devname = SYSMMU_CLOCK_DEVNAME(jpeg, 7),
+ .enable = &exynos5_clk_ip_gen_ctrl,
+ .ctrlbit = (1 << 7),
+ }, {
+ .name = SYSMMU_CLOCK_NAME,
+ .devname = SYSMMU_CLOCK_DEVNAME(rot, 5),
+ .enable = &exynos5_clk_ip_gen_ctrl,
+ .ctrlbit = (1 << 6)
+ }, {
+ .name = SYSMMU_CLOCK_NAME,
+ .devname = SYSMMU_CLOCK_DEVNAME(gsc0, 23),
+ .enable = &exynos5_clk_ip_gscl_ctrl,
+ .ctrlbit = (1 << 7),
+ .parent = &exynos5_init_clocks_off[40],
+ }, {
+ .name = SYSMMU_CLOCK_NAME,
+ .devname = SYSMMU_CLOCK_DEVNAME(gsc1, 24),
+ .enable = &exynos5_clk_ip_gscl_ctrl,
+ .ctrlbit = (1 << 8),
+ .parent = &exynos5_init_clocks_off[41],
+ }, {
+ .name = SYSMMU_CLOCK_NAME,
+ .devname = SYSMMU_CLOCK_DEVNAME(gsc2, 25),
+ .enable = &exynos5_clk_ip_gscl_ctrl,
+ .ctrlbit = (1 << 9),
+ .parent = &exynos5_init_clocks_off[42],
+ }, {
+ .name = SYSMMU_CLOCK_NAME,
+ .devname = SYSMMU_CLOCK_DEVNAME(gsc3, 26),
+ .enable = &exynos5_clk_ip_gscl_ctrl,
+ .ctrlbit = (1 << 10),
+ .parent = &exynos5_init_clocks_off[43],
+ }, {
+ .name = SYSMMU_CLOCK_NAME,
+ .devname = SYSMMU_CLOCK_DEVNAME(isp, 11),
+ .enable = &exynos5_clk_ip_isp0_ctrl,
+ .ctrlbit = (0x3F << 8),
+ }, {
+ .name = SYSMMU_CLOCK_NAME2,
+ .devname = SYSMMU_CLOCK_DEVNAME(isp, 11),
+ .enable = &exynos5_clk_ip_isp1_ctrl,
+ .ctrlbit = (0xF << 4),
+ }, {
+ .name = SYSMMU_CLOCK_NAME,
+ .devname = SYSMMU_CLOCK_DEVNAME(2d, 2),
+ .enable = &exynos5_clk_ip_acp_ctrl,
+ .ctrlbit = (1 << 7),
+ }, {
+ .name = "spi",
+ .devname = "exynos4210-spi.0",
+ .parent = &exynos5_clk_aclk_66.clk,
+ .enable = exynos5_clk_ip_peric_ctrl,
+ .ctrlbit = (1 << 16),
+ }, {
+ .name = "spi",
+ .devname = "exynos4210-spi.1",
+ .parent = &exynos5_clk_aclk_66.clk,
+ .enable = exynos5_clk_ip_peric_ctrl,
+ .ctrlbit = (1 << 17),
+ }, {
+ .name = "spi",
+ .devname = "exynos4210-spi.2",
+ .parent = &exynos5_clk_aclk_66.clk,
+ .enable = exynos5_clk_ip_peric_ctrl,
+ .ctrlbit = (1 << 18),
+ }, {
+ .name = "tmu_apbif",
+ .parent = &exynos5_clk_aclk_66.clk,
+ .enable = exynos5_clk_ip_peris_ctrl,
+ .ctrlbit = (1 << 21),
}
};
}
};
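+/* Mux inputs for the MAUDIO "audio-bus" clock (sclk_audio0). */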
+static struct clk *clkset_sclk_audio0_list[] = {
+ [0] = &exynos5_clk_audiocdclk0.clk,
+ [1] = &clk_ext_xtal_mux,
+ [2] = &exynos5_clk_sclk_hdmi27m,
+ [3] = &exynos5_clk_sclk_dptxphy,
+ [4] = &exynos5_clk_sclk_usbphy,
+ [5] = &exynos5_clk_sclk_hdmiphy,
+ [6] = &exynos5_clk_mout_mpll.clk,
+ [7] = &exynos5_clk_mout_epll.clk,
+ [8] = &exynos5_clk_sclk_vpll.clk,
+ [9] = &exynos5_clk_mout_cpll.clk,
+};
+
+static struct clksrc_sources exynos5_clkset_sclk_audio0 = {
+ .sources = clkset_sclk_audio0_list,
+ .nr_sources = ARRAY_SIZE(clkset_sclk_audio0_list),
+};
+
+static struct clksrc_clk exynos5_clk_sclk_audio0 = {
+ .clk = {
+ .name = "audio-bus",
+ .enable = exynos5_clksrc_mask_maudio_ctrl,
+ .ctrlbit = (1 << 0),
+ },
+ .sources = &exynos5_clkset_sclk_audio0,
+ .reg_src = { .reg = EXYNOS5_CLKSRC_MAUDIO, .shift = 0, .size = 4 },
+ .reg_div = { .reg = EXYNOS5_CLKDIV_MAUDIO, .shift = 0, .size = 4 },
+};
+
+static struct clk *exynos5_clkset_mout_audss_list[] = {
+ &clk_ext_xtal_mux,
+ &clk_fout_epll,
+};
+
+static struct clksrc_sources clkset_mout_audss = {
+ .sources = exynos5_clkset_mout_audss_list,
+ .nr_sources = ARRAY_SIZE(exynos5_clkset_mout_audss_list),
+};
+
+static struct clksrc_clk exynos5_clk_mout_audss = {
+ .clk = {
+ .name = "mout_audss",
+ },
+ .sources = &clkset_mout_audss,
+ .reg_src = { .reg = EXYNOS_CLKSRC_AUDSS, .shift = 0, .size = 1 },
+};
+
+static struct clk *exynos5_clkset_sclk_audss_list[] = {
+ &exynos5_clk_mout_audss.clk,
+ &exynos5_clk_audiocdclk0.clk,
+ &exynos5_clk_sclk_audio0.clk,
+};
+
+static struct clksrc_sources exynos5_clkset_sclk_audss = {
+ .sources = exynos5_clkset_sclk_audss_list,
+ .nr_sources = ARRAY_SIZE(exynos5_clkset_sclk_audss_list),
+};
+
+static struct clksrc_clk exynos5_clk_sclk_audss_i2s = {
+ .clk = {
+ .name = "i2sclk",
+ .enable = exynos5_clk_audss_ctrl,
+ .ctrlbit = EXYNOS_AUDSS_CLKGATE_I2SSPECIAL,
+ },
+ .sources = &exynos5_clkset_sclk_audss,
+ .reg_src = { .reg = EXYNOS_CLKSRC_AUDSS, .shift = 2, .size = 2 },
+ .reg_div = { .reg = EXYNOS_CLKDIV_AUDSS, .shift = 8, .size = 4 },
+};
+
+static struct clksrc_clk exynos5_clk_dout_audss_srp = {
+ .clk = {
+ .name = "dout_srp",
+ .parent = &exynos5_clk_mout_audss.clk,
+ },
+ .reg_div = { .reg = EXYNOS_CLKDIV_AUDSS, .shift = 0, .size = 4 },
+};
+
+static struct clksrc_clk exynos5_clk_sclk_audss_bus = {
+ .clk = {
+ .name = "busclk",
+ .parent = &exynos5_clk_dout_audss_srp.clk,
+ .enable = exynos5_clk_audss_ctrl,
+ .ctrlbit = EXYNOS_AUDSS_CLKGATE_I2SBUS,
+ },
+ .reg_div = { .reg = EXYNOS_CLKDIV_AUDSS, .shift = 4, .size = 4 },
+};
+
static struct clk exynos5_clk_pdma0 = {
.name = "dma",
.devname = "dma-pl330.0",
.nr_sources = ARRAY_SIZE(exynos5_clkset_group_list),
};
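+/* Possible parent clocks for the USB 3.0 DRD controller: MPLL or CPLL. */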
+struct clk *exynos5_clkset_usbdrd30_list[] = {
+ [0] = &exynos5_clk_mout_mpll.clk,
+ [1] = &exynos5_clk_mout_cpll.clk,
+};
+
+struct clksrc_sources exynos5_clkset_usbdrd30 = {
+ .sources = exynos5_clkset_usbdrd30_list,
+ .nr_sources = ARRAY_SIZE(exynos5_clkset_usbdrd30_list),
+};
+
/* Possible clock sources for aclk_266_gscl_sub Mux */
static struct clk *clk_src_gscl_266_list[] = {
[0] = &clk_ext_xtal_mux,
static struct clksrc_clk exynos5_clk_sclk_mmc0 = {
.clk = {
- .name = "sclk_mmc",
- .devname = "exynos4-sdhci.0",
+ .name = "ciu",
+ .devname = "dw_mmc.0",
.parent = &exynos5_clk_dout_mmc0.clk,
.enable = exynos5_clksrc_mask_fsys_ctrl,
.ctrlbit = (1 << 0),
static struct clksrc_clk exynos5_clk_sclk_mmc1 = {
.clk = {
- .name = "sclk_mmc",
- .devname = "exynos4-sdhci.1",
+ .name = "ciu",
+ .devname = "dw_mmc.1",
.parent = &exynos5_clk_dout_mmc1.clk,
.enable = exynos5_clksrc_mask_fsys_ctrl,
.ctrlbit = (1 << 4),
static struct clksrc_clk exynos5_clk_sclk_mmc2 = {
.clk = {
- .name = "sclk_mmc",
- .devname = "exynos4-sdhci.2",
+ .name = "ciu",
+ .devname = "dw_mmc.2",
.parent = &exynos5_clk_dout_mmc2.clk,
.enable = exynos5_clksrc_mask_fsys_ctrl,
.ctrlbit = (1 << 8),
static struct clksrc_clk exynos5_clk_sclk_mmc3 = {
.clk = {
- .name = "sclk_mmc",
- .devname = "exynos4-sdhci.3",
+ .name = "ciu",
+ .devname = "dw_mmc.3",
.parent = &exynos5_clk_dout_mmc3.clk,
.enable = exynos5_clksrc_mask_fsys_ctrl,
.ctrlbit = (1 << 12),
.reg_div = { .reg = EXYNOS5_CLKDIV_FSYS2, .shift = 24, .size = 8 },
};
+static struct clksrc_clk exynos5_clk_mdout_spi0 = {
+ .clk = {
+ .name = "sclk_spi_mdout",
+ .devname = "exynos4210-spi.0",
+ },
+ .sources = &exynos5_clkset_group,
+ .reg_src = { .reg = EXYNOS5_CLKSRC_PERIC1, .shift = 16, .size = 4 },
+ .reg_div = { .reg = EXYNOS5_CLKDIV_PERIC1, .shift = 0, .size = 4 },
+};
+
+static struct clksrc_clk exynos5_clk_sclk_spi0 = {
+ .clk = {
+ .name = "sclk_spi",
+ .devname = "exynos4210-spi.0",
+ .parent = &exynos5_clk_mdout_spi0.clk,
+ .enable = exynos5_clksrc_mask_peric1_ctrl,
+ .ctrlbit = (1 << 16),
+ },
+ .reg_div = { .reg = EXYNOS5_CLKDIV_PERIC1, .shift = 8, .size = 8 },
+};
+
+static struct clksrc_clk exynos5_clk_mdout_spi1 = {
+ .clk = {
+ .name = "sclk_spi_mdout",
+ .devname = "exynos4210-spi.1",
+ },
+ .sources = &exynos5_clkset_group,
+ .reg_src = { .reg = EXYNOS5_CLKSRC_PERIC1, .shift = 20, .size = 4 },
+ .reg_div = { .reg = EXYNOS5_CLKDIV_PERIC1, .shift = 16, .size = 4 },
+};
+
+static struct clksrc_clk exynos5_clk_sclk_spi1 = {
+ .clk = {
+ .name = "sclk_spi",
+ .devname = "exynos4210-spi.1",
+ .parent = &exynos5_clk_mdout_spi1.clk,
+ .enable = exynos5_clksrc_mask_peric1_ctrl,
+ .ctrlbit = (1 << 20),
+ },
+ .reg_div = { .reg = EXYNOS5_CLKDIV_PERIC1, .shift = 24, .size = 8 },
+};
+
+static struct clksrc_clk exynos5_clk_mdout_spi2 = {
+ .clk = {
+ .name = "sclk_spi_mdout",
+ .devname = "exynos4210-spi.2",
+ },
+ .sources = &exynos5_clkset_group,
+ .reg_src = { .reg = EXYNOS5_CLKSRC_PERIC1, .shift = 24, .size = 4 },
+ .reg_div = { .reg = EXYNOS5_CLKDIV_PERIC2, .shift = 0, .size = 4 },
+};
+
+static struct clksrc_clk exynos5_clk_sclk_spi2 = {
+ .clk = {
+ .name = "sclk_spi",
+ .devname = "exynos4210-spi.2",
+ .parent = &exynos5_clk_mdout_spi2.clk,
+ .enable = exynos5_clksrc_mask_peric1_ctrl,
+ .ctrlbit = (1 << 24),
+ },
+ .reg_div = { .reg = EXYNOS5_CLKDIV_PERIC2, .shift = 8, .size = 8 },
+};
+
static struct clksrc_clk exynos5_clksrcs[] = {
{
- .clk = {
- .name = "sclk_dwmci",
- .parent = &exynos5_clk_dout_mmc4.clk,
- .enable = exynos5_clksrc_mask_fsys_ctrl,
- .ctrlbit = (1 << 16),
- },
- .reg_div = { .reg = EXYNOS5_CLKDIV_FSYS3, .shift = 8, .size = 8 },
- }, {
.clk = {
.name = "sclk_fimd",
- .devname = "s3cfb.1",
+ .devname = "exynos5-fb",
.enable = exynos5_clksrc_mask_disp1_0_ctrl,
.ctrlbit = (1 << 0),
},
.parent = &exynos5_clk_mout_cpll.clk,
},
.reg_div = { .reg = EXYNOS5_CLKDIV_GEN, .shift = 4, .size = 3 },
+ }, {
+ .clk = {
+ .name = "sclk_usbdrd30",
+ .enable = exynos5_clksrc_mask_fsys_ctrl,
+ .ctrlbit = (1 << 28),
+ },
+ .sources = &exynos5_clkset_usbdrd30,
+ .reg_src = { .reg = EXYNOS5_CLKSRC_FSYS, .shift = 28, .size = 1 },
+ .reg_div = { .reg = EXYNOS5_CLKDIV_FSYS0, .shift = 24, .size = 4 },
+ },
+};
+
+/* For ACLK_300_gscl_mid */
+static struct clksrc_clk exynos5_clk_mout_aclk_300_gscl_mid = {
+ .clk = {
+ .name = "mout_aclk_300_gscl_mid",
+ },
+ .sources = &exynos5_clkset_aclk,
+ .reg_src = { .reg = EXYNOS5_CLKSRC_TOP0, .shift = 24, .size = 1 },
+};
+
+/* For ACLK_300_gscl */
+struct clk *exynos5_clkset_aclk_300_gscl_list[] = {
+ [0] = &exynos5_clk_mout_aclk_300_gscl_mid.clk,
+ [1] = &exynos5_clk_sclk_vpll.clk,
+};
+
+struct clksrc_sources exynos5_clkset_aclk_300_gscl = {
+ .sources = exynos5_clkset_aclk_300_gscl_list,
+ .nr_sources = ARRAY_SIZE(exynos5_clkset_aclk_300_gscl_list),
+};
+
+static struct clksrc_clk exynos5_clk_mout_aclk_300_gscl = {
+ .clk = {
+ .name = "mout_aclk_300_gscl",
+ },
+ .sources = &exynos5_clkset_aclk_300_gscl,
+ .reg_src = { .reg = EXYNOS5_CLKSRC_TOP0, .shift = 25, .size = 1 },
+};
+
+static struct clksrc_clk exynos5_clk_dout_aclk_300_gscl = {
+ .clk = {
+ .name = "dout_aclk_300_gscl",
+ .parent = &exynos5_clk_mout_aclk_300_gscl.clk,
+ },
+ .reg_div = { .reg = EXYNOS5_CLKDIV_TOP1, .shift = 12, .size = 3 },
+};
+
+/* Possible clock sources for aclk_300_gscl_sub Mux */
+static struct clk *clk_src_gscl_300_list[] = {
+ [0] = &clk_ext_xtal_mux,
+ [1] = &exynos5_clk_dout_aclk_300_gscl.clk,
+};
+
+static struct clksrc_sources clk_src_gscl_300 = {
+ .sources = clk_src_gscl_300_list,
+ .nr_sources = ARRAY_SIZE(clk_src_gscl_300_list),
+};
+
+static struct clksrc_clk exynos5_clk_aclk_300_gscl = {
+ .clk = {
+ .name = "aclk_300_gscl",
},
+ .sources = &clk_src_gscl_300,
+ .reg_src = { .reg = EXYNOS5_CLKSRC_TOP3, .shift = 10, .size = 1 },
};
/* Clock initialization code */
&exynos5_clk_mout_apll,
&exynos5_clk_sclk_apll,
&exynos5_clk_mout_bpll,
+ &exynos5_clk_mout_bpll_fout,
&exynos5_clk_mout_bpll_user,
&exynos5_clk_mout_cpll,
&exynos5_clk_mout_epll,
&exynos5_clk_mout_mpll,
+ &exynos5_clk_mout_mpll_fout,
&exynos5_clk_mout_mpll_user,
&exynos5_clk_vpllsrc,
&exynos5_clk_sclk_vpll,
&exynos5_clk_aclk_266,
&exynos5_clk_aclk_200,
&exynos5_clk_aclk_166,
+ &exynos5_clk_mout_aclk_300_gscl_mid,
+ &exynos5_clk_mout_aclk_300_gscl,
+ &exynos5_clk_dout_aclk_300_gscl,
+ &exynos5_clk_aclk_300_gscl,
&exynos5_clk_aclk_66_pre,
&exynos5_clk_aclk_66,
&exynos5_clk_dout_mmc0,
&exynos5_clk_dout_mmc4,
&exynos5_clk_aclk_acp,
&exynos5_clk_pclk_acp,
+ &exynos5_clk_sclk_spi0,
+ &exynos5_clk_sclk_spi1,
+ &exynos5_clk_sclk_spi2,
+ &exynos5_clk_mdout_spi0,
+ &exynos5_clk_mdout_spi1,
+ &exynos5_clk_mdout_spi2,
+ &exynos5_clk_mout_audss,
+ &exynos5_clk_sclk_audss_bus,
+ &exynos5_clk_sclk_audss_i2s,
+ &exynos5_clk_dout_audss_srp,
+ &exynos5_clk_sclk_audio0,
};
static struct clk *exynos5_clk_cdev[] = {
CLKDEV_INIT("exynos4-sdhci.1", "mmc_busclk.2", &exynos5_clk_sclk_mmc1.clk),
CLKDEV_INIT("exynos4-sdhci.2", "mmc_busclk.2", &exynos5_clk_sclk_mmc2.clk),
CLKDEV_INIT("exynos4-sdhci.3", "mmc_busclk.2", &exynos5_clk_sclk_mmc3.clk),
+ CLKDEV_INIT("exynos4210-spi.0", "spi_busclk0", &exynos5_clk_sclk_spi0.clk),
+ CLKDEV_INIT("exynos4210-spi.1", "spi_busclk0", &exynos5_clk_sclk_spi1.clk),
+ CLKDEV_INIT("exynos4210-spi.2", "spi_busclk0", &exynos5_clk_sclk_spi2.clk),
CLKDEV_INIT("dma-pl330.0", "apb_pclk", &exynos5_clk_pdma0),
CLKDEV_INIT("dma-pl330.1", "apb_pclk", &exynos5_clk_pdma1),
CLKDEV_INIT("dma-pl330.2", "apb_pclk", &exynos5_clk_mdma1),
&exynos5_clk_sclk_hdmi27m,
&exynos5_clk_sclk_hdmiphy,
&clk_fout_bpll,
+ &clk_fout_bpll_div2,
+ &clk_fout_mpll_div2,
&clk_fout_cpll,
&exynos5_clk_armclk,
};
clk_fout_apll.ops = &exynos5_fout_apll_ops;
clk_fout_bpll.rate = bpll;
+ clk_fout_bpll_div2.rate = bpll >> 1;
clk_fout_cpll.rate = cpll;
clk_fout_mpll.rate = mpll;
+ clk_fout_mpll_div2.rate = mpll >> 1;
clk_fout_epll.rate = epll;
clk_fout_vpll.rate = vpll;
#include <linux/serial_core.h>
#include <linux/of.h>
#include <linux/of_irq.h>
+#include <linux/export.h>
+#include <linux/irqdomain.h>
+#include <linux/of_address.h>
#include <asm/proc-fns.h>
#include <asm/exception.h>
static void exynos5_init_clocks(int xtal);
static void exynos_init_uarts(struct s3c2410_uartcfg *cfg, int no);
static int exynos_init(void);
+static int exynos_init_irq_eint(struct device_node *np,
+ struct device_node *parent);
static struct cpu_table cpu_ids[] __initdata = {
{
.pfn = __phys_to_pfn(EXYNOS4_PA_HSPHY),
.length = SZ_4K,
.type = MT_DEVICE,
+ }, {
+ .virtual = (unsigned long)S5P_VA_AUDSS,
+ .pfn = __phys_to_pfn(EXYNOS_PA_AUDSS),
+ .length = SZ_4K,
+ .type = MT_DEVICE,
},
};
}, {
.virtual = (unsigned long)S5P_VA_GIC_CPU,
.pfn = __phys_to_pfn(EXYNOS5_PA_GIC_CPU),
- .length = SZ_64K,
+ .length = SZ_8K,
.type = MT_DEVICE,
}, {
.virtual = (unsigned long)S5P_VA_GIC_DIST,
.pfn = __phys_to_pfn(EXYNOS5_PA_GIC_DIST),
- .length = SZ_64K,
+ .length = SZ_4K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = (unsigned long)S3C_VA_USB_HSPHY,
+ .pfn = __phys_to_pfn(EXYNOS5_PA_USB_PHY),
+ .length = SZ_256K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = (unsigned long)S5P_VA_DRD_PHY,
+ .pfn = __phys_to_pfn(EXYNOS5_PA_DRD_PHY),
+ .length = SZ_256K,
+ .type = MT_DEVICE,
+ }, {
+ .virtual = (unsigned long)S5P_VA_AUDSS,
+ .pfn = __phys_to_pfn(EXYNOS_PA_AUDSS),
+ .length = SZ_4K,
.type = MT_DEVICE,
},
};
__raw_writel(0x1, EXYNOS_SWRESET);
}
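+/*
+ * Let watchdog timeouts reset the chip: clear the SYS_WDTRESET bit in
+ * both the automatic-reset-disable and the reset-request-mask registers.
+ */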
+static void wdt_reset_init(void)
+{
+ unsigned int value;
+
+ value = __raw_readl(EXYNOS5_AUTOMATIC_WDT_RESET_DISABLE);
+ value &= ~EXYNOS5_SYS_WDTRESET;
+ __raw_writel(value, EXYNOS5_AUTOMATIC_WDT_RESET_DISABLE);
+
+ value = __raw_readl(EXYNOS5_MASK_WDT_RESET_REQUEST);
+ value &= ~EXYNOS5_SYS_WDTRESET;
+ __raw_writel(value, EXYNOS5_MASK_WDT_RESET_REQUEST);
+}
+
/*
* exynos_map_io
*
s5p_init_cpu(S5P_VA_CHIPID);
s3c_init_cpu(samsung_cpu_id, cpu_ids, ARRAY_SIZE(cpu_ids));
+
+ /* To support watchdog reset */
+ wdt_reset_init();
}
static void __init exynos4_map_io(void)
exynos4_register_clocks();
exynos4_setup_clocks();
+ exynos_register_audss_clocks();
}
static void __init exynos5_init_clocks(int xtal)
exynos5_register_clocks();
exynos5_setup_clocks();
+ exynos_register_audss_clocks();
}
#define COMBINER_ENABLE_SET 0x0
unsigned int irq_offset;
unsigned int irq_mask;
void __iomem *base;
+ unsigned int gic_irq;
};
+static struct irq_domain *combiner_irq_domain;
static struct combiner_chip_data combiner_data[MAX_COMBINER_NR];
static inline void __iomem *combiner_base(struct irq_data *data)
static void combiner_mask_irq(struct irq_data *data)
{
- u32 mask = 1 << (data->irq % 32);
+ u32 mask = 1 << (data->hwirq % 32);
__raw_writel(mask, combiner_base(data) + COMBINER_ENABLE_CLEAR);
}
static void combiner_unmask_irq(struct irq_data *data)
{
- u32 mask = 1 << (data->irq % 32);
+ u32 mask = 1 << (data->hwirq % 32);
__raw_writel(mask, combiner_base(data) + COMBINER_ENABLE_SET);
}
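+/*
+ * The combiner cannot route its inputs to a particular CPU itself, so
+ * affinity requests are forwarded to the parent GIC SPI that the
+ * combiner group is cascaded from.
+ */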
+static int combiner_set_affinity(struct irq_data *data,
+ const struct cpumask *dest, bool force)
+{
+ struct combiner_chip_data *chip_data = data->chip_data;
+
+ if (!chip_data)
+ return -EINVAL;
+
+ return irq_set_affinity(chip_data->gic_irq, dest);
+}
+
static void combiner_handle_cascade_irq(unsigned int irq, struct irq_desc *desc)
{
struct combiner_chip_data *chip_data = irq_get_handler_data(irq);
.name = "COMBINER",
.irq_mask = combiner_mask_irq,
.irq_unmask = combiner_unmask_irq,
+ .irq_set_affinity = combiner_set_affinity,
};
static void __init combiner_cascade_irq(unsigned int combiner_nr, unsigned int irq)
if (irq_set_handler_data(irq, &combiner_data[combiner_nr]) != 0)
BUG();
irq_set_chained_handler(irq, combiner_handle_cascade_irq);
+ combiner_data[combiner_nr].gic_irq = irq;
}
-static void __init combiner_init(unsigned int combiner_nr, void __iomem *base,
- unsigned int irq_start)
+static void __init combiner_init_one(unsigned int combiner_nr,
+ void __iomem *base)
{
- unsigned int i;
- unsigned int max_nr;
-
- if (soc_is_exynos5250())
- max_nr = EXYNOS5_MAX_COMBINER_NR;
- else
- max_nr = EXYNOS4_MAX_COMBINER_NR;
-
- if (combiner_nr >= max_nr)
- BUG();
-
combiner_data[combiner_nr].base = base;
- combiner_data[combiner_nr].irq_offset = irq_start;
+ combiner_data[combiner_nr].irq_offset = irq_find_mapping(
+ combiner_irq_domain, combiner_nr * MAX_IRQ_IN_COMBINER);
combiner_data[combiner_nr].irq_mask = 0xff << ((combiner_nr % 4) << 3);
/* Disable all interrupts */
__raw_writel(combiner_data[combiner_nr].irq_mask,
base + COMBINER_ENABLE_CLEAR);
+}
+
+#ifdef CONFIG_OF
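+/*
+ * Translate a two-cell DT specifier <group line> into a linear hwirq
+ * number: group * MAX_IRQ_IN_COMBINER + line.
+ */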
+static int combiner_irq_domain_xlate(struct irq_domain *d,
+ struct device_node *controller, const u32 *intspec,
+ unsigned int intsize, unsigned long *out_hwirq,
+ unsigned int *out_type)
+{
+ if (d->of_node != controller)
+ return -EINVAL;
+ if (intsize < 2)
+ return -EINVAL;
+ *out_hwirq = intspec[0] * MAX_IRQ_IN_COMBINER + intspec[1];
+ *out_type = 0;
+ return 0;
+}
+#else
+static int combiner_irq_domain_xlate(struct irq_domain *d,
+ struct device_node *controller, const u32 *intspec,
+ unsigned int intsize, unsigned long *out_hwirq,
+ unsigned int *out_type)
+{
+ return -EINVAL;
+}
+#endif
+
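+/*
+ * Each combiner group serves 8 interrupt lines (MAX_IRQ_IN_COMBINER),
+ * so hwirq >> 3 selects the per-group chip data for this interrupt.
+ */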
+static int combiner_irq_domain_map(struct irq_domain *d, unsigned int irq,
+ irq_hw_number_t hw)
+{
+ irq_set_chip_and_handler(irq, &combiner_chip, handle_level_irq);
+ irq_set_chip_data(irq, &combiner_data[hw >> 3]);
+ set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
+ return 0;
+}
- /* Setup the Linux IRQ subsystem */
+static struct irq_domain_ops combiner_irq_domain_ops = {
+ .xlate = combiner_irq_domain_xlate,
+ .map = combiner_irq_domain_map,
+};
- for (i = irq_start; i < combiner_data[combiner_nr].irq_offset
- + MAX_IRQ_IN_COMBINER; i++) {
- irq_set_chip_and_handler(i, &combiner_chip, handle_level_irq);
- irq_set_chip_data(i, &combiner_data[combiner_nr]);
- set_irq_flags(i, IRQF_VALID | IRQF_PROBE);
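+/*
+ * Single entry point for both DT and non-DT boots: with a device tree
+ * node the combiner count comes from the "samsung,combiner-nr"
+ * property, otherwise it is derived from the SoC type.
+ */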
+void __init combiner_init(void __iomem *combiner_base, struct device_node *np)
+{
+ int i, irq, irq_base;
+ unsigned int max_nr, nr_irq;
+
+ if (np) {
+ if (of_property_read_u32(np, "samsung,combiner-nr", &max_nr)) {
+ pr_warning("%s: number of combiners not specified, "
+ "setting default as %d.\n",
+ __func__, EXYNOS4_MAX_COMBINER_NR);
+ max_nr = EXYNOS4_MAX_COMBINER_NR;
+ }
+ } else {
+ max_nr = soc_is_exynos5250() ? EXYNOS5_MAX_COMBINER_NR :
+ EXYNOS4_MAX_COMBINER_NR;
+ }
+ nr_irq = max_nr * MAX_IRQ_IN_COMBINER;
+
+ irq_base = irq_alloc_descs(COMBINER_IRQ(0, 0), 1, nr_irq, 0);
+ if (IS_ERR_VALUE(irq_base)) {
+ irq_base = COMBINER_IRQ(0, 0);
+ pr_warning("%s: irq desc alloc failed. Continuing with %d as "
+ "linux irq base\n", __func__, irq_base);
+ }
+
+ combiner_irq_domain = irq_domain_add_legacy(np, nr_irq, irq_base, 0,
+ &combiner_irq_domain_ops, &combiner_data);
+ if (WARN_ON(!combiner_irq_domain)) {
+ pr_warning("%s: irq domain init failed\n", __func__);
+ return;
+ }
+
+ for (i = 0; i < max_nr; i++) {
+ combiner_init_one(i, combiner_base + (i >> 2) * 0x10);
+ irq = np ? irq_of_parse_and_map(np, i) : IRQ_SPI(i);
+ combiner_cascade_irq(i, irq);
}
}
#ifdef CONFIG_OF
+int __init combiner_of_init(struct device_node *np, struct device_node *parent)
+{
+ void __iomem *combiner_base;
+
+ combiner_base = of_iomap(np, 0);
+ if (!combiner_base) {
+ pr_err("%s: failed to map combiner registers\n", __func__);
+ return -ENXIO;
+ }
+
+ combiner_init(combiner_base, np);
+ return 0;
+}
+
static const struct of_device_id exynos4_dt_irq_match[] = {
{ .compatible = "arm,cortex-a9-gic", .data = gic_of_init, },
+ { .compatible = "samsung,exynos4210-combiner",
+ .data = combiner_of_init, },
+ { .compatible = "samsung,exynos5210-wakeup-eint-map",
+ .data = exynos_init_irq_eint, },
{},
};
#endif
void __init exynos4_init_irq(void)
{
- int irq;
unsigned int gic_bank_offset;
gic_bank_offset = soc_is_exynos4412() ? 0x4000 : 0x8000;
of_irq_init(exynos4_dt_irq_match);
#endif
- for (irq = 0; irq < EXYNOS4_MAX_COMBINER_NR; irq++) {
-
- combiner_init(irq, (void __iomem *)S5P_VA_COMBINER(irq),
- COMBINER_IRQ(irq, 0));
- combiner_cascade_irq(irq, IRQ_SPI(irq));
+ if (!of_have_populated_dt()) {
+ combiner_init(S5P_VA_COMBINER_BASE, NULL);
+ exynos_init_irq_eint(NULL, NULL);
}
/*
void __init exynos5_init_irq(void)
{
- int irq;
-
#ifdef CONFIG_OF
of_irq_init(exynos4_dt_irq_match);
#endif
-
- for (irq = 0; irq < EXYNOS5_MAX_COMBINER_NR; irq++) {
- combiner_init(irq, (void __iomem *)S5P_VA_COMBINER(irq),
- COMBINER_IRQ(irq, 0));
- combiner_cascade_irq(irq, IRQ_SPI(irq));
- }
-
/*
* The parameters of s5p_init_irq() are for VIC init.
* Theses parameters should be NULL and 0 because EXYNOS4
* uses GIC instead of VIC.
*/
s5p_init_irq(NULL, 0);
-}
-struct bus_type exynos4_subsys = {
- .name = "exynos4-core",
- .dev_name = "exynos4-core",
-};
+ gic_arch_extn.irq_set_wake = s3c_irq_wake;
+}
-struct bus_type exynos5_subsys = {
- .name = "exynos5-core",
- .dev_name = "exynos5-core",
+struct bus_type exynos_subsys = {
+ .name = "exynos-core",
+ .dev_name = "exynos-core",
};
static struct device exynos4_dev = {
- .bus = &exynos4_subsys,
-};
-
-static struct device exynos5_dev = {
- .bus = &exynos5_subsys,
+ .bus = &exynos_subsys,
};
static int __init exynos_core_init(void)
{
- if (soc_is_exynos5250())
- return subsys_system_register(&exynos5_subsys, NULL);
- else
- return subsys_system_register(&exynos4_subsys, NULL);
+ return subsys_system_register(&exynos_subsys, NULL);
}
core_initcall(exynos_core_init);
{
printk(KERN_INFO "EXYNOS: Initializing architecture\n");
- if (soc_is_exynos5250())
- return device_register(&exynos5_dev);
- else
- return device_register(&exynos4_dev);
+ return device_register(&exynos4_dev);
}
/* uart registration process */
static unsigned int eint0_15_data[16];
+#define EXYNOS_EINT_NR 32
+static struct irq_domain *irq_domain;
+
static inline int exynos4_irq_to_gpio(unsigned int irq)
{
if (irq < IRQ_EINT(0))
u32 mask;
spin_lock(&eint_lock);
- mask = __raw_readl(EINT_MASK(exynos_eint_base, data->irq));
- mask |= EINT_OFFSET_BIT(data->irq);
- __raw_writel(mask, EINT_MASK(exynos_eint_base, data->irq));
+ mask = __raw_readl(EINT_MASK(exynos_eint_base, data->hwirq));
+ mask |= EINT_OFFSET_BIT(data->hwirq);
+ __raw_writel(mask, EINT_MASK(exynos_eint_base, data->hwirq));
spin_unlock(&eint_lock);
}
u32 mask;
spin_lock(&eint_lock);
- mask = __raw_readl(EINT_MASK(exynos_eint_base, data->irq));
- mask &= ~(EINT_OFFSET_BIT(data->irq));
- __raw_writel(mask, EINT_MASK(exynos_eint_base, data->irq));
+ mask = __raw_readl(EINT_MASK(exynos_eint_base, data->hwirq));
+ mask &= ~(EINT_OFFSET_BIT(data->hwirq));
+ __raw_writel(mask, EINT_MASK(exynos_eint_base, data->hwirq));
spin_unlock(&eint_lock);
}
static inline void exynos_irq_eint_ack(struct irq_data *data)
{
- __raw_writel(EINT_OFFSET_BIT(data->irq),
- EINT_PEND(exynos_eint_base, data->irq));
+ __raw_writel(EINT_OFFSET_BIT(data->hwirq),
+ EINT_PEND(exynos_eint_base, data->hwirq));
}
static void exynos_irq_eint_maskack(struct irq_data *data)
static int exynos_irq_eint_set_type(struct irq_data *data, unsigned int type)
{
- int offs = EINT_OFFSET(data->irq);
+ int offs = data->hwirq;
int shift;
u32 ctrl, mask;
u32 newvalue = 0;
mask = 0x7 << shift;
spin_lock(&eint_lock);
- ctrl = __raw_readl(EINT_CON(exynos_eint_base, data->irq));
+ ctrl = __raw_readl(EINT_CON(exynos_eint_base, data->hwirq));
ctrl &= ~mask;
ctrl |= newvalue << shift;
- __raw_writel(ctrl, EINT_CON(exynos_eint_base, data->irq));
+ __raw_writel(ctrl, EINT_CON(exynos_eint_base, data->hwirq));
spin_unlock(&eint_lock);
if (soc_is_exynos5250())
while (status) {
irq = fls(status) - 1;
- generic_handle_irq(irq + start);
+ generic_handle_irq(irq_find_mapping(irq_domain, irq + start));
status &= ~(1 << irq);
}
}
{
struct irq_chip *chip = irq_get_chip(irq);
chained_irq_enter(chip, desc);
- exynos_irq_demux_eint(IRQ_EINT(16));
- exynos_irq_demux_eint(IRQ_EINT(24));
+ exynos_irq_demux_eint(16);
+ exynos_irq_demux_eint(24);
chained_irq_exit(chip, desc);
}
{
u32 *irq_data = irq_get_handler_data(irq);
struct irq_chip *chip = irq_get_chip(irq);
+ int eint_irq;
chained_irq_enter(chip, desc);
chip->irq_mask(&desc->irq_data);
if (chip->irq_ack)
chip->irq_ack(&desc->irq_data);
- generic_handle_irq(*irq_data);
+ eint_irq = irq_find_mapping(irq_domain, *irq_data);
+ generic_handle_irq(eint_irq);
chip->irq_unmask(&desc->irq_data);
chained_irq_exit(chip, desc);
}
-static int __init exynos_init_irq_eint(void)
+static int exynos_eint_irq_domain_map(struct irq_domain *d, unsigned int irq,
+ irq_hw_number_t hw)
{
- int irq;
+ irq_set_chip_and_handler(irq, &exynos_irq_eint, handle_level_irq);
+ set_irq_flags(irq, IRQF_VALID);
+ return 0;
+}
- if (soc_is_exynos5250())
- exynos_eint_base = ioremap(EXYNOS5_PA_GPIO1, SZ_4K);
- else
- exynos_eint_base = ioremap(EXYNOS4_PA_GPIO2, SZ_4K);
+#ifdef CONFIG_OF
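+/*
+ * Translate a two-cell DT specifier <eint-number trigger-type>,
+ * mapping the S5P trigger encoding onto the generic IRQ_TYPE_* flags.
+ */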
+static int exynos_eint_irq_domain_xlate(struct irq_domain *d,
+ struct device_node *controller, const u32 *intspec,
+ unsigned int intsize, unsigned long *out_hwirq,
+ unsigned int *out_type)
+{
+ if (d->of_node != controller)
+ return -EINVAL;
+ if (intsize < 2)
+ return -EINVAL;
+ *out_hwirq = intspec[0];
- if (exynos_eint_base == NULL) {
+ switch (intspec[1]) {
+ case S5P_IRQ_TYPE_LEVEL_LOW:
+ *out_type = IRQ_TYPE_LEVEL_LOW;
+ break;
+ case S5P_IRQ_TYPE_LEVEL_HIGH:
+ *out_type = IRQ_TYPE_LEVEL_HIGH;
+ break;
+ case S5P_IRQ_TYPE_EDGE_FALLING:
+ *out_type = IRQ_TYPE_EDGE_FALLING;
+ break;
+ case S5P_IRQ_TYPE_EDGE_RISING:
+ *out_type = IRQ_TYPE_EDGE_RISING;
+ break;
+ case S5P_IRQ_TYPE_EDGE_BOTH:
+ *out_type = IRQ_TYPE_EDGE_BOTH;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+#else
+static int exynos_eint_irq_domain_xlate(struct irq_domain *d,
+ struct device_node *controller, const u32 *intspec,
+ unsigned int intsize, unsigned long *out_hwirq,
+ unsigned int *out_type)
+{
+ return -EINVAL;
+}
+#endif
+
+static struct irq_domain_ops exynos_eint_irq_domain_ops = {
+ .xlate = exynos_eint_irq_domain_xlate,
+ .map = exynos_eint_irq_domain_map,
+};
+
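+/*
+ * Wakeup EINT setup. The EINT0..15 parent interrupts may not be
+ * resolvable yet when this runs early from of_irq_init(); in that case
+ * return -EAGAIN so the caller can invoke it again, and redo only the
+ * per-pin cascade wiring on the retry.
+ */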
+static int __init exynos_init_irq_eint(struct device_node *eint_np,
+ struct device_node *parent)
+{
+ int irq, *src_int, irq_base, irq_eint;
+ unsigned int paddr;
+ static unsigned int retry;
+ static struct device_node *np;
+
+ if (retry)
+ goto retry_init;
+
+ if (!eint_np) {
+ paddr = soc_is_exynos5250() ? EXYNOS5_PA_GPIO1 :
+ EXYNOS4_PA_GPIO2;
+ exynos_eint_base = ioremap(paddr, SZ_4K);
+ } else {
+ np = of_get_parent(eint_np);
+ exynos_eint_base = of_iomap(np, 0);
+ }
+ if (!exynos_eint_base) {
pr_err("unable to ioremap for EINT base address\n");
- return -ENOMEM;
+ return -ENXIO;
}
- for (irq = 0 ; irq <= 31 ; irq++) {
- irq_set_chip_and_handler(IRQ_EINT(irq), &exynos_irq_eint,
- handle_level_irq);
- set_irq_flags(IRQ_EINT(irq), IRQF_VALID);
+ irq_base = irq_alloc_descs(IRQ_EINT(0), 1, EXYNOS_EINT_NR, 0);
+ if (IS_ERR_VALUE(irq_base)) {
+ irq_base = IRQ_EINT(0);
+ pr_warning("%s: irq desc alloc failed. Continuing with %d as "
+ "linux irq base\n", __func__, irq_base);
+ }
+
+ irq_domain = irq_domain_add_legacy(np, EXYNOS_EINT_NR, irq_base, 0,
+ &exynos_eint_irq_domain_ops, NULL);
+ if (WARN_ON(!irq_domain)) {
+ pr_warning("%s: irq domain init failed\n", __func__);
+ return 0;
}
- irq_set_chained_handler(EXYNOS_IRQ_EINT16_31, exynos_irq_demux_eint16_31);
-
- for (irq = 0 ; irq <= 15 ; irq++) {
- eint0_15_data[irq] = IRQ_EINT(irq);
-
- if (soc_is_exynos5250()) {
- irq_set_handler_data(exynos5_eint0_15_src_int[irq],
- &eint0_15_data[irq]);
- irq_set_chained_handler(exynos5_eint0_15_src_int[irq],
- exynos_irq_eint0_15);
- } else {
- irq_set_handler_data(exynos4_eint0_15_src_int[irq],
- &eint0_15_data[irq]);
- irq_set_chained_handler(exynos4_eint0_15_src_int[irq],
- exynos_irq_eint0_15);
+ irq_eint = eint_np ? irq_of_parse_and_map(np, 16) : EXYNOS_IRQ_EINT16_31;
+ irq_set_chained_handler(irq_eint, exynos_irq_demux_eint16_31);
+
+retry_init:
+ for (irq = 0; irq <= 15; irq++) {
+ eint0_15_data[irq] = irq;
+ src_int = soc_is_exynos5250() ? exynos5_eint0_15_src_int :
+ exynos4_eint0_15_src_int;
+ irq_eint = eint_np ? irq_of_parse_and_map(np, irq) : src_int[irq];
+ if (!irq_eint) {
+ of_node_put(np);
+ retry = 1;
+ return -EAGAIN;
}
+ irq_set_handler_data(irq_eint, &eint0_15_data[irq]);
+ irq_set_chained_handler(irq_eint, exynos_irq_eint0_15);
}
return 0;
}
-arch_initcall(exynos_init_irq_eint);
extern struct sys_timer exynos4_timer;
void exynos_init_io(struct map_desc *mach_desc, int size);
+void exynos_register_audss_clocks(void);
void exynos4_init_irq(void);
void exynos5_init_irq(void);
void exynos4_restart(char mode, const char *cmd);
#include <asm/smp_scu.h>
#include <asm/suspend.h>
#include <asm/unified.h>
+#include <asm/cpuidle.h>
#include <mach/regs-pmu.h>
#include <mach/pmu.h>
#define S5P_CHECK_AFTR 0xFCBA0D10
-static int exynos4_enter_idle(struct cpuidle_device *dev,
- struct cpuidle_driver *drv,
- int index);
static int exynos4_enter_lowpower(struct cpuidle_device *dev,
struct cpuidle_driver *drv,
int index);
static struct cpuidle_state exynos4_cpuidle_set[] __initdata = {
- [0] = {
- .enter = exynos4_enter_idle,
- .exit_latency = 1,
- .target_residency = 100000,
- .flags = CPUIDLE_FLAG_TIME_VALID,
- .name = "C0",
- .desc = "ARM clock gating(WFI)",
- },
+ [0] = ARM_CPUIDLE_WFI_STATE,
[1] = {
.enter = exynos4_enter_lowpower,
.exit_latency = 300,
static DEFINE_PER_CPU(struct cpuidle_device, exynos4_cpuidle_device);
static struct cpuidle_driver exynos4_idle_driver = {
- .name = "exynos4_idle",
- .owner = THIS_MODULE,
+ .name = "exynos4_idle",
+ .owner = THIS_MODULE,
+ .en_core_tk_irqen = 1,
};
/* Ext-GIC nIRQ/nFIQ is the only wakeup source in AFTR */
struct cpuidle_driver *drv,
int index)
{
- struct timeval before, after;
- int idle_time;
unsigned long tmp;
- local_irq_disable();
- do_gettimeofday(&before);
-
exynos4_set_wakeupmask();
/* Set value of power down register for aftr mode */
cpu_suspend(0, idle_finisher);
#ifdef CONFIG_SMP
- scu_enable(S5P_VA_SCU);
+ if (!soc_is_exynos5250())
+ scu_enable(S5P_VA_SCU);
#endif
cpu_pm_exit();
/* Clear wakeup state register */
__raw_writel(0x0, S5P_WAKEUP_STAT);
- do_gettimeofday(&after);
-
- local_irq_enable();
- idle_time = (after.tv_sec - before.tv_sec) * USEC_PER_SEC +
- (after.tv_usec - before.tv_usec);
-
- dev->last_residency = idle_time;
- return index;
-}
-
-static int exynos4_enter_idle(struct cpuidle_device *dev,
- struct cpuidle_driver *drv,
- int index)
-{
- struct timeval before, after;
- int idle_time;
-
- local_irq_disable();
- do_gettimeofday(&before);
-
- cpu_do_idle();
-
- do_gettimeofday(&after);
- local_irq_enable();
- idle_time = (after.tv_sec - before.tv_sec) * USEC_PER_SEC +
- (after.tv_usec - before.tv_usec);
-
- dev->last_residency = idle_time;
return index;
}
new_index = drv->safe_state_index;
if (new_index == 0)
- return exynos4_enter_idle(dev, drv, new_index);
+ return arm_cpuidle_simple_enter(dev, drv, new_index);
else
return exynos4_enter_core0_aftr(dev, drv, new_index);
}
-/* linux/arch/arm/mach-exynos4/dev-sysmmu.c
+/* linux/arch/arm/mach-exynos/dev-sysmmu.c
*
* Copyright (c) 2010 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
- * EXYNOS4 - System MMU support
+ * EXYNOS - System MMU support
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
-#include <linux/export.h>
-
+#include <linux/of_platform.h>
#include <mach/map.h>
#include <mach/irqs.h>
#include <mach/sysmmu.h>
#include <plat/s5p-clock.h>
+#include <asm/dma-iommu.h>
+#include <linux/slab.h>
-/* These names must be equal to the clock names in mach-exynos4/clock.c */
-const char *sysmmu_ips_name[EXYNOS4_SYSMMU_TOTAL_IPNUM] = {
- "SYSMMU_MDMA" ,
- "SYSMMU_SSS" ,
- "SYSMMU_FIMC0" ,
- "SYSMMU_FIMC1" ,
- "SYSMMU_FIMC2" ,
- "SYSMMU_FIMC3" ,
- "SYSMMU_JPEG" ,
- "SYSMMU_FIMD0" ,
- "SYSMMU_FIMD1" ,
- "SYSMMU_PCIe" ,
- "SYSMMU_G2D" ,
- "SYSMMU_ROTATOR",
- "SYSMMU_MDMA2" ,
- "SYSMMU_TV" ,
- "SYSMMU_MFC_L" ,
- "SYSMMU_MFC_R" ,
-};
-
-static struct resource exynos4_sysmmu_resource[] = {
- [0] = {
- .start = EXYNOS4_PA_SYSMMU_MDMA,
- .end = EXYNOS4_PA_SYSMMU_MDMA + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [1] = {
- .start = IRQ_SYSMMU_MDMA0_0,
- .end = IRQ_SYSMMU_MDMA0_0,
- .flags = IORESOURCE_IRQ,
- },
- [2] = {
- .start = EXYNOS4_PA_SYSMMU_SSS,
- .end = EXYNOS4_PA_SYSMMU_SSS + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [3] = {
- .start = IRQ_SYSMMU_SSS_0,
- .end = IRQ_SYSMMU_SSS_0,
- .flags = IORESOURCE_IRQ,
- },
- [4] = {
- .start = EXYNOS4_PA_SYSMMU_FIMC0,
- .end = EXYNOS4_PA_SYSMMU_FIMC0 + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [5] = {
- .start = IRQ_SYSMMU_FIMC0_0,
- .end = IRQ_SYSMMU_FIMC0_0,
- .flags = IORESOURCE_IRQ,
- },
- [6] = {
- .start = EXYNOS4_PA_SYSMMU_FIMC1,
- .end = EXYNOS4_PA_SYSMMU_FIMC1 + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [7] = {
- .start = IRQ_SYSMMU_FIMC1_0,
- .end = IRQ_SYSMMU_FIMC1_0,
- .flags = IORESOURCE_IRQ,
- },
- [8] = {
- .start = EXYNOS4_PA_SYSMMU_FIMC2,
- .end = EXYNOS4_PA_SYSMMU_FIMC2 + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [9] = {
- .start = IRQ_SYSMMU_FIMC2_0,
- .end = IRQ_SYSMMU_FIMC2_0,
- .flags = IORESOURCE_IRQ,
- },
- [10] = {
- .start = EXYNOS4_PA_SYSMMU_FIMC3,
- .end = EXYNOS4_PA_SYSMMU_FIMC3 + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [11] = {
- .start = IRQ_SYSMMU_FIMC3_0,
- .end = IRQ_SYSMMU_FIMC3_0,
- .flags = IORESOURCE_IRQ,
- },
- [12] = {
- .start = EXYNOS4_PA_SYSMMU_JPEG,
- .end = EXYNOS4_PA_SYSMMU_JPEG + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [13] = {
- .start = IRQ_SYSMMU_JPEG_0,
- .end = IRQ_SYSMMU_JPEG_0,
- .flags = IORESOURCE_IRQ,
- },
- [14] = {
- .start = EXYNOS4_PA_SYSMMU_FIMD0,
- .end = EXYNOS4_PA_SYSMMU_FIMD0 + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [15] = {
- .start = IRQ_SYSMMU_LCD0_M0_0,
- .end = IRQ_SYSMMU_LCD0_M0_0,
- .flags = IORESOURCE_IRQ,
- },
- [16] = {
- .start = EXYNOS4_PA_SYSMMU_FIMD1,
- .end = EXYNOS4_PA_SYSMMU_FIMD1 + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [17] = {
- .start = IRQ_SYSMMU_LCD1_M1_0,
- .end = IRQ_SYSMMU_LCD1_M1_0,
- .flags = IORESOURCE_IRQ,
- },
- [18] = {
- .start = EXYNOS4_PA_SYSMMU_PCIe,
- .end = EXYNOS4_PA_SYSMMU_PCIe + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [19] = {
- .start = IRQ_SYSMMU_PCIE_0,
- .end = IRQ_SYSMMU_PCIE_0,
- .flags = IORESOURCE_IRQ,
- },
- [20] = {
- .start = EXYNOS4_PA_SYSMMU_G2D,
- .end = EXYNOS4_PA_SYSMMU_G2D + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [21] = {
- .start = IRQ_SYSMMU_2D_0,
- .end = IRQ_SYSMMU_2D_0,
- .flags = IORESOURCE_IRQ,
- },
- [22] = {
- .start = EXYNOS4_PA_SYSMMU_ROTATOR,
- .end = EXYNOS4_PA_SYSMMU_ROTATOR + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [23] = {
- .start = IRQ_SYSMMU_ROTATOR_0,
- .end = IRQ_SYSMMU_ROTATOR_0,
- .flags = IORESOURCE_IRQ,
- },
- [24] = {
- .start = EXYNOS4_PA_SYSMMU_MDMA2,
- .end = EXYNOS4_PA_SYSMMU_MDMA2 + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [25] = {
- .start = IRQ_SYSMMU_MDMA1_0,
- .end = IRQ_SYSMMU_MDMA1_0,
- .flags = IORESOURCE_IRQ,
- },
- [26] = {
- .start = EXYNOS4_PA_SYSMMU_TV,
- .end = EXYNOS4_PA_SYSMMU_TV + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [27] = {
- .start = IRQ_SYSMMU_TV_M0_0,
- .end = IRQ_SYSMMU_TV_M0_0,
- .flags = IORESOURCE_IRQ,
- },
- [28] = {
- .start = EXYNOS4_PA_SYSMMU_MFC_L,
- .end = EXYNOS4_PA_SYSMMU_MFC_L + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [29] = {
- .start = IRQ_SYSMMU_MFC_M0_0,
- .end = IRQ_SYSMMU_MFC_M0_0,
- .flags = IORESOURCE_IRQ,
- },
- [30] = {
- .start = EXYNOS4_PA_SYSMMU_MFC_R,
- .end = EXYNOS4_PA_SYSMMU_MFC_R + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- [31] = {
- .start = IRQ_SYSMMU_MFC_M1_0,
- .end = IRQ_SYSMMU_MFC_M1_0,
- .flags = IORESOURCE_IRQ,
- },
-};
+#ifdef CONFIG_ARM_DMA_USE_IOMMU
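+/*
+ * Create (or reuse) an ARM DMA-IOMMU mapping covering the given IOVA
+ * range and attach the client device to it, so the client's dma_map_*
+ * calls allocate I/O virtual addresses from that range.
+ */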
+struct dma_iommu_mapping *s5p_create_iommu_mapping(struct device *client,
+ dma_addr_t base, unsigned int size,
+ int order, struct dma_iommu_mapping *mapping)
+{
+ if (!client)
+ return NULL;
-struct platform_device exynos4_device_sysmmu = {
- .name = "s5p-sysmmu",
- .id = 32,
- .num_resources = ARRAY_SIZE(exynos4_sysmmu_resource),
- .resource = exynos4_sysmmu_resource,
-};
-EXPORT_SYMBOL(exynos4_device_sysmmu);
+ if (mapping == NULL) {
+ mapping = arm_iommu_create_mapping(&platform_bus_type,
+ base, size, order);
+ if (!mapping)
+ return NULL;
+ }
-static struct clk *sysmmu_clk[S5P_SYSMMU_TOTAL_IPNUM];
-void sysmmu_clk_init(struct device *dev, sysmmu_ips ips)
-{
- sysmmu_clk[ips] = clk_get(dev, sysmmu_ips_name[ips]);
- if (IS_ERR(sysmmu_clk[ips]))
- sysmmu_clk[ips] = NULL;
- else
- clk_put(sysmmu_clk[ips]);
+ client->dma_parms = kzalloc(sizeof(*client->dma_parms), GFP_KERNEL);
+ if (!client->dma_parms)
+ return NULL;
+ dma_set_max_seg_size(client, 0xffffffffu);
+ arm_iommu_attach_device(client, mapping);
+ return mapping;
}
-void sysmmu_clk_enable(sysmmu_ips ips)
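+/*
+ * Resolve the System MMU platform device referenced by a phandle
+ * property (named by @name) in the client's device tree node.
+ */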
+struct platform_device *find_sysmmu_dt(struct platform_device *pdev,
+ char *name)
{
- if (sysmmu_clk[ips])
- clk_enable(sysmmu_clk[ips]);
-}
+ struct device_node *dn, *dns;
+ struct platform_device *pds;
+ const __be32 *parp;
-void sysmmu_clk_disable(sysmmu_ips ips)
-{
- if (sysmmu_clk[ips])
- clk_disable(sysmmu_clk[ips]);
+ dn = pdev->dev.of_node;
+ parp = of_get_property(dn, name, NULL);
+ if (parp == NULL) {
+ printk(KERN_ERR "Could not find sysmmu phandle property '%s'\n", name);
+ return NULL;
+ }
+ dns = of_find_node_by_phandle(be32_to_cpup(parp));
+ if (dns == NULL) {
+ printk(KERN_ERR "Could not find the sysmmu device node\n");
+ return NULL;
+ }
+
+ pds = of_find_device_by_node(dns);
+ if (!pds) {
+ printk(KERN_ERR "No platform device found\n");
+ return NULL;
+ }
+ return pds;
}
+#endif
+
DMACH_MIPI_HSI5,
};
-struct dma_pl330_platdata exynos4_pdma0_pdata;
+static u8 exynos5250_pdma0_peri[] = {
+ DMACH_PCM0_RX,
+ DMACH_PCM0_TX,
+ DMACH_PCM2_RX,
+ DMACH_PCM2_TX,
+ DMACH_SPI0_RX,
+ DMACH_SPI0_TX,
+ DMACH_SPI2_RX,
+ DMACH_SPI2_TX,
+ DMACH_I2S0S_TX,
+ DMACH_I2S0_RX,
+ DMACH_I2S0_TX,
+ DMACH_I2S2_RX,
+ DMACH_I2S2_TX,
+ DMACH_UART0_RX,
+ DMACH_UART0_TX,
+ DMACH_UART2_RX,
+ DMACH_UART2_TX,
+ DMACH_UART4_RX,
+ DMACH_UART4_TX,
+ DMACH_SLIMBUS0_RX,
+ DMACH_SLIMBUS0_TX,
+ DMACH_SLIMBUS2_RX,
+ DMACH_SLIMBUS2_TX,
+ DMACH_SLIMBUS4_RX,
+ DMACH_SLIMBUS4_TX,
+ DMACH_AC97_MICIN,
+ DMACH_AC97_PCMIN,
+ DMACH_AC97_PCMOUT,
+ DMACH_MIPI_HSI0,
+ DMACH_MIPI_HSI2,
+ DMACH_MIPI_HSI4,
+ DMACH_MIPI_HSI6,
+};
+
+static struct dma_pl330_platdata exynos_pdma0_pdata;
static AMBA_AHB_DEVICE(exynos4_pdma0, "dma-pl330.0", 0x00041330,
- EXYNOS4_PA_PDMA0, {EXYNOS4_IRQ_PDMA0}, &exynos4_pdma0_pdata);
+ EXYNOS4_PA_PDMA0, {EXYNOS4_IRQ_PDMA0}, &exynos_pdma0_pdata);
static u8 exynos4210_pdma1_peri[] = {
DMACH_PCM0_RX,
DMACH_MIPI_HSI7,
};
-static struct dma_pl330_platdata exynos4_pdma1_pdata;
+static u8 exynos5250_pdma1_peri[] = {
+ DMACH_PCM0_RX,
+ DMACH_PCM0_TX,
+ DMACH_PCM1_RX,
+ DMACH_PCM1_TX,
+ DMACH_SPI1_RX,
+ DMACH_SPI1_TX,
+ DMACH_PWM,
+ DMACH_SPDIF,
+ DMACH_I2S0S_TX,
+ DMACH_I2S0_RX,
+ DMACH_I2S0_TX,
+ DMACH_I2S1_RX,
+ DMACH_I2S1_TX,
+ DMACH_UART0_RX,
+ DMACH_UART0_TX,
+ DMACH_UART1_RX,
+ DMACH_UART1_TX,
+ DMACH_UART3_RX,
+ DMACH_UART3_TX,
+ DMACH_SLIMBUS1_RX,
+ DMACH_SLIMBUS1_TX,
+ DMACH_SLIMBUS3_RX,
+ DMACH_SLIMBUS3_TX,
+ DMACH_SLIMBUS5_RX,
+ DMACH_SLIMBUS5_TX,
+ DMACH_SLIMBUS0AUX_RX,
+ DMACH_SLIMBUS0AUX_TX,
+ DMACH_DISP1,
+ DMACH_MIPI_HSI1,
+ DMACH_MIPI_HSI3,
+ DMACH_MIPI_HSI5,
+ DMACH_MIPI_HSI7,
+};
+
+static struct dma_pl330_platdata exynos_pdma1_pdata;
static AMBA_AHB_DEVICE(exynos4_pdma1, "dma-pl330.1", 0x00041330,
- EXYNOS4_PA_PDMA1, {EXYNOS4_IRQ_PDMA1}, &exynos4_pdma1_pdata);
+ EXYNOS4_PA_PDMA1, {EXYNOS4_IRQ_PDMA1}, &exynos_pdma1_pdata);
static u8 mdma_peri[] = {
DMACH_MTOM_0,
DMACH_MTOM_7,
};
-static struct dma_pl330_platdata exynos4_mdma1_pdata = {
+static struct dma_pl330_platdata exynos_mdma1_pdata = {
.nr_valid_peri = ARRAY_SIZE(mdma_peri),
.peri_id = mdma_peri,
};
static AMBA_AHB_DEVICE(exynos4_mdma1, "dma-pl330.2", 0x00041330,
- EXYNOS4_PA_MDMA1, {EXYNOS4_IRQ_MDMA1}, &exynos4_mdma1_pdata);
+ EXYNOS4_PA_MDMA1, {EXYNOS4_IRQ_MDMA1}, &exynos_mdma1_pdata);
-static int __init exynos4_dma_init(void)
+static int __init exynos_dma_init(void)
{
if (of_have_populated_dt())
return 0;
if (soc_is_exynos4210()) {
- exynos4_pdma0_pdata.nr_valid_peri =
+ exynos_pdma0_pdata.nr_valid_peri =
ARRAY_SIZE(exynos4210_pdma0_peri);
- exynos4_pdma0_pdata.peri_id = exynos4210_pdma0_peri;
- exynos4_pdma1_pdata.nr_valid_peri =
+ exynos_pdma0_pdata.peri_id = exynos4210_pdma0_peri;
+ exynos_pdma1_pdata.nr_valid_peri =
ARRAY_SIZE(exynos4210_pdma1_peri);
- exynos4_pdma1_pdata.peri_id = exynos4210_pdma1_peri;
+ exynos_pdma1_pdata.peri_id = exynos4210_pdma1_peri;
} else if (soc_is_exynos4212() || soc_is_exynos4412()) {
- exynos4_pdma0_pdata.nr_valid_peri =
+ exynos_pdma0_pdata.nr_valid_peri =
ARRAY_SIZE(exynos4212_pdma0_peri);
- exynos4_pdma0_pdata.peri_id = exynos4212_pdma0_peri;
- exynos4_pdma1_pdata.nr_valid_peri =
+ exynos_pdma0_pdata.peri_id = exynos4212_pdma0_peri;
+ exynos_pdma1_pdata.nr_valid_peri =
ARRAY_SIZE(exynos4212_pdma1_peri);
- exynos4_pdma1_pdata.peri_id = exynos4212_pdma1_peri;
+ exynos_pdma1_pdata.peri_id = exynos4212_pdma1_peri;
+ } else if (soc_is_exynos5250()) {
+ exynos_pdma0_pdata.nr_valid_peri =
+ ARRAY_SIZE(exynos5250_pdma0_peri);
+ exynos_pdma0_pdata.peri_id = exynos5250_pdma0_peri;
+ exynos_pdma1_pdata.nr_valid_peri =
+ ARRAY_SIZE(exynos5250_pdma1_peri);
+ exynos_pdma1_pdata.peri_id = exynos5250_pdma1_peri;
+
+ exynos4_pdma0_device.res.start = EXYNOS5_PA_PDMA0;
+ exynos4_pdma0_device.res.end = EXYNOS5_PA_PDMA0 + SZ_4K - 1;
+ exynos4_pdma0_device.irq[0] = EXYNOS5_IRQ_PDMA0;
+ exynos4_pdma1_device.res.start = EXYNOS5_PA_PDMA1;
+ exynos4_pdma1_device.res.end = EXYNOS5_PA_PDMA1 + SZ_4K - 1;
+ exynos4_pdma1_device.irq[0] = EXYNOS5_IRQ_PDMA1;
+ exynos4_mdma1_device.res.start = EXYNOS5_PA_MDMA1;
+ exynos4_mdma1_device.res.end = EXYNOS5_PA_MDMA1 + SZ_4K - 1;
+ exynos4_mdma1_device.irq[0] = EXYNOS5_IRQ_MDMA1;
}
- dma_cap_set(DMA_SLAVE, exynos4_pdma0_pdata.cap_mask);
- dma_cap_set(DMA_CYCLIC, exynos4_pdma0_pdata.cap_mask);
+ dma_cap_set(DMA_SLAVE, exynos_pdma0_pdata.cap_mask);
+ dma_cap_set(DMA_CYCLIC, exynos_pdma0_pdata.cap_mask);
amba_device_register(&exynos4_pdma0_device, &iomem_resource);
- dma_cap_set(DMA_SLAVE, exynos4_pdma1_pdata.cap_mask);
- dma_cap_set(DMA_CYCLIC, exynos4_pdma1_pdata.cap_mask);
+ dma_cap_set(DMA_SLAVE, exynos_pdma1_pdata.cap_mask);
+ dma_cap_set(DMA_CYCLIC, exynos_pdma1_pdata.cap_mask);
amba_device_register(&exynos4_pdma1_device, &iomem_resource);
- dma_cap_set(DMA_MEMCPY, exynos4_mdma1_pdata.cap_mask);
+ dma_cap_set(DMA_MEMCPY, exynos_mdma1_pdata.cap_mask);
amba_device_register(&exynos4_mdma1_device, &iomem_resource);
return 0;
}
-arch_initcall(exynos4_dma_init);
+arch_initcall(exynos_dma_init);
#include <asm/cp15.h>
#include <asm/smp_plat.h>
+#include <plat/cpu.h>
#include <mach/regs-pmu.h>
extern volatile int pen_release;
-static inline void cpu_enter_lowpower(void)
+static inline void cpu_enter_lowpower_a9(void)
{
unsigned int v;
: "cc");
}
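+/*
+ * Cortex-A15 power-down path: disable the D-cache, flush all cache
+ * levels, then clear the SMP bit in the auxiliary control register to
+ * leave coherency before the core is shut down.
+ */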
+static inline void cpu_enter_lowpower_a15(void)
+{
+ unsigned int v;
+
+ asm volatile(
+ " mrc p15, 0, %0, c1, c0, 0\n"
+ " bic %0, %0, %1\n"
+ " mcr p15, 0, %0, c1, c0, 0\n"
+ : "=&r" (v)
+ : "Ir" (CR_C)
+ : "cc");
+
+ flush_cache_all();
+
+ asm volatile(
+ /*
+ * Turn off coherency
+ */
+ " mrc p15, 0, %0, c1, c0, 1\n"
+ " bic %0, %0, %1\n"
+ " mcr p15, 0, %0, c1, c0, 1\n"
+ : "=&r" (v)
+ : "Ir" (0x40)
+ : "cc");
+
+ isb();
+ dsb();
+}
+
static inline void cpu_leave_lowpower(void)
{
unsigned int v;
/*
* we're ready for shutdown now, so do it
*/
- cpu_enter_lowpower();
+ if (soc_is_exynos5250())
+ cpu_enter_lowpower_a15();
+ else
+ cpu_enter_lowpower_a9();
platform_do_lowpower(cpu, &spurious);
/*
#define EXYNOS5_GPIO_B2_NR (4)
#define EXYNOS5_GPIO_B3_NR (4)
#define EXYNOS5_GPIO_C0_NR (7)
-#define EXYNOS5_GPIO_C1_NR (7)
+#define EXYNOS5_GPIO_C1_NR (4)
#define EXYNOS5_GPIO_C2_NR (7)
#define EXYNOS5_GPIO_C3_NR (7)
-#define EXYNOS5_GPIO_D0_NR (8)
+#define EXYNOS5_GPIO_C4_NR (7)
+#define EXYNOS5_GPIO_D0_NR (4)
#define EXYNOS5_GPIO_D1_NR (8)
#define EXYNOS5_GPIO_Y0_NR (6)
#define EXYNOS5_GPIO_Y1_NR (4)
EXYNOS5_GPIO_C1_START = EXYNOS_GPIO_NEXT(EXYNOS5_GPIO_C0),
EXYNOS5_GPIO_C2_START = EXYNOS_GPIO_NEXT(EXYNOS5_GPIO_C1),
EXYNOS5_GPIO_C3_START = EXYNOS_GPIO_NEXT(EXYNOS5_GPIO_C2),
- EXYNOS5_GPIO_D0_START = EXYNOS_GPIO_NEXT(EXYNOS5_GPIO_C3),
+ EXYNOS5_GPIO_C4_START = EXYNOS_GPIO_NEXT(EXYNOS5_GPIO_C3),
+ EXYNOS5_GPIO_D0_START = EXYNOS_GPIO_NEXT(EXYNOS5_GPIO_C4),
EXYNOS5_GPIO_D1_START = EXYNOS_GPIO_NEXT(EXYNOS5_GPIO_D0),
EXYNOS5_GPIO_Y0_START = EXYNOS_GPIO_NEXT(EXYNOS5_GPIO_D1),
EXYNOS5_GPIO_Y1_START = EXYNOS_GPIO_NEXT(EXYNOS5_GPIO_Y0),
#define EXYNOS5_GPC1(_nr) (EXYNOS5_GPIO_C1_START + (_nr))
#define EXYNOS5_GPC2(_nr) (EXYNOS5_GPIO_C2_START + (_nr))
#define EXYNOS5_GPC3(_nr) (EXYNOS5_GPIO_C3_START + (_nr))
+#define EXYNOS5_GPC4(_nr) (EXYNOS5_GPIO_C4_START + (_nr))
#define EXYNOS5_GPD0(_nr) (EXYNOS5_GPIO_D0_START + (_nr))
#define EXYNOS5_GPD1(_nr) (EXYNOS5_GPIO_D1_START + (_nr))
#define EXYNOS5_GPY0(_nr) (EXYNOS5_GPIO_Y0_START + (_nr))
#define IRQ_IIC6 EXYNOS4_IRQ_IIC6
#define IRQ_IIC7 EXYNOS4_IRQ_IIC7
+#define IRQ_SPI0 EXYNOS4_IRQ_SPI0
+#define IRQ_SPI1 EXYNOS4_IRQ_SPI1
+#define IRQ_SPI2 EXYNOS4_IRQ_SPI2
+
#define IRQ_USB_HOST EXYNOS4_IRQ_USB_HOST
#define IRQ_HSMMC0 EXYNOS4_IRQ_HSMMC0
#define EXYNOS5_IRQ_MIPICSI1 IRQ_SPI(80)
#define EXYNOS5_IRQ_EFNFCON_DMA_ABORT IRQ_SPI(81)
#define EXYNOS5_IRQ_MIPIDSI0 IRQ_SPI(82)
+#define EXYNOS5_IRQ_WDT_IOP IRQ_SPI(83)
#define EXYNOS5_IRQ_ROTATOR IRQ_SPI(84)
#define EXYNOS5_IRQ_GSC0 IRQ_SPI(85)
#define EXYNOS5_IRQ_GSC1 IRQ_SPI(86)
#define EXYNOS5_IRQ_JPEG IRQ_SPI(89)
#define EXYNOS5_IRQ_EFNFCON_DMA IRQ_SPI(90)
#define EXYNOS5_IRQ_2D IRQ_SPI(91)
-#define EXYNOS5_IRQ_SFMC0 IRQ_SPI(92)
-#define EXYNOS5_IRQ_SFMC1 IRQ_SPI(93)
+#define EXYNOS5_IRQ_EFNFCON_0 IRQ_SPI(92)
+#define EXYNOS5_IRQ_EFNFCON_1 IRQ_SPI(93)
#define EXYNOS5_IRQ_MIXER IRQ_SPI(94)
#define EXYNOS5_IRQ_HDMI IRQ_SPI(95)
#define EXYNOS5_IRQ_MFC IRQ_SPI(96)
#define EXYNOS5_IRQ_PCM2 IRQ_SPI(104)
#define EXYNOS5_IRQ_SPDIF IRQ_SPI(105)
#define EXYNOS5_IRQ_ADC0 IRQ_SPI(106)
-
+#define EXYNOS5_IRQ_ADC1 IRQ_SPI(107)
#define EXYNOS5_IRQ_SATA_PHY IRQ_SPI(108)
#define EXYNOS5_IRQ_SATA_PMEMREQ IRQ_SPI(109)
#define EXYNOS5_IRQ_CAM_C IRQ_SPI(110)
#define EXYNOS5_IRQ_DP1_INTP1 IRQ_SPI(113)
#define EXYNOS5_IRQ_CEC IRQ_SPI(114)
#define EXYNOS5_IRQ_SATA IRQ_SPI(115)
-#define EXYNOS5_IRQ_NFCON IRQ_SPI(116)
+#define EXYNOS5_GPU_IRQ_NUMBER IRQ_SPI(117)
+#define EXYNOS5_JOB_IRQ_NUMBER IRQ_SPI(118)
+#define EXYNOS5_MMU_IRQ_NUMBER IRQ_SPI(119)
+#define EXYNOS5_IRQ_MCT_L0 IRQ_SPI(120)
+#define EXYNOS5_IRQ_MCT_L1 IRQ_SPI(121)
#define EXYNOS5_IRQ_MMC44 IRQ_SPI(123)
#define EXYNOS5_IRQ_MDMA1 IRQ_SPI(124)
#define EXYNOS5_IRQ_FIMC_LITE0 IRQ_SPI(125)
#define EXYNOS5_IRQ_RP_TIMER IRQ_SPI(127)
#define EXYNOS5_IRQ_PMU COMBINER_IRQ(1, 2)
-#define EXYNOS5_IRQ_PMU_CPU1 COMBINER_IRQ(1, 6)
#define EXYNOS5_IRQ_SYSMMU_GSC0_0 COMBINER_IRQ(2, 0)
#define EXYNOS5_IRQ_SYSMMU_GSC0_1 COMBINER_IRQ(2, 1)
#define EXYNOS5_IRQ_SYSMMU_GSC3_0 COMBINER_IRQ(2, 6)
#define EXYNOS5_IRQ_SYSMMU_GSC3_1 COMBINER_IRQ(2, 7)
+#define EXYNOS5_IRQ_SYSMMU_LITE2_0 COMBINER_IRQ(3, 0)
+#define EXYNOS5_IRQ_SYSMMU_LITE2_1 COMBINER_IRQ(3, 1)
#define EXYNOS5_IRQ_SYSMMU_FIMD1_0 COMBINER_IRQ(3, 2)
#define EXYNOS5_IRQ_SYSMMU_FIMD1_1 COMBINER_IRQ(3, 3)
#define EXYNOS5_IRQ_SYSMMU_LITE0_0 COMBINER_IRQ(3, 4)
#define EXYNOS5_IRQ_SYSMMU_ARM_0 COMBINER_IRQ(6, 0)
#define EXYNOS5_IRQ_SYSMMU_ARM_1 COMBINER_IRQ(6, 1)
-#define EXYNOS5_IRQ_SYSMMU_MFC_L_0 COMBINER_IRQ(6, 2)
-#define EXYNOS5_IRQ_SYSMMU_MFC_L_1 COMBINER_IRQ(6, 3)
+#define EXYNOS5_IRQ_SYSMMU_MFC_R_0 COMBINER_IRQ(6, 2)
+#define EXYNOS5_IRQ_SYSMMU_MFC_R_1 COMBINER_IRQ(6, 3)
#define EXYNOS5_IRQ_SYSMMU_RTIC_0 COMBINER_IRQ(6, 4)
#define EXYNOS5_IRQ_SYSMMU_RTIC_1 COMBINER_IRQ(6, 5)
#define EXYNOS5_IRQ_SYSMMU_SSS_0 COMBINER_IRQ(6, 6)
#define EXYNOS5_IRQ_SYSMMU_MDMA1_1 COMBINER_IRQ(7, 3)
#define EXYNOS5_IRQ_SYSMMU_TV_0 COMBINER_IRQ(7, 4)
#define EXYNOS5_IRQ_SYSMMU_TV_1 COMBINER_IRQ(7, 5)
-#define EXYNOS5_IRQ_SYSMMU_GPSX_0 COMBINER_IRQ(7, 6)
-#define EXYNOS5_IRQ_SYSMMU_GPSX_1 COMBINER_IRQ(7, 7)
+#define EXYNOS5_IRQ_SYSMMU_GPSX_0 COMBINER_IRQ(7, 6)
+#define EXYNOS5_IRQ_SYSMMU_GPSX_1 COMBINER_IRQ(7, 7)
-#define EXYNOS5_IRQ_SYSMMU_MFC_R_0 COMBINER_IRQ(8, 5)
-#define EXYNOS5_IRQ_SYSMMU_MFC_R_1 COMBINER_IRQ(8, 6)
+#define EXYNOS5_IRQ_SYSMMU_MFC_L_0 COMBINER_IRQ(8, 5)
+#define EXYNOS5_IRQ_SYSMMU_MFC_L_1 COMBINER_IRQ(8, 6)
#define EXYNOS5_IRQ_SYSMMU_DIS1_0 COMBINER_IRQ(9, 4)
#define EXYNOS5_IRQ_SYSMMU_DIS1_1 COMBINER_IRQ(9, 5)
#define EXYNOS5_IRQ_SYSMMU_DRC_0 COMBINER_IRQ(11, 6)
#define EXYNOS5_IRQ_SYSMMU_DRC_1 COMBINER_IRQ(11, 7)
+#define EXYNOS5_IRQ_MDMA1_ABORT COMBINER_IRQ(13, 1)
+
+#define EXYNOS5_IRQ_MDMA0_ABORT COMBINER_IRQ(15, 3)
+
#define EXYNOS5_IRQ_FIMD1_FIFO COMBINER_IRQ(18, 4)
#define EXYNOS5_IRQ_FIMD1_VSYNC COMBINER_IRQ(18, 5)
#define EXYNOS5_IRQ_FIMD1_SYSTEM COMBINER_IRQ(18, 6)
+#define EXYNOS5_IRQ_ARMIOP_GIC COMBINER_IRQ(19, 0)
+#define EXYNOS5_IRQ_ARMISP_GIC COMBINER_IRQ(19, 1)
+#define EXYNOS5_IRQ_IOP_GIC COMBINER_IRQ(19, 3)
+#define EXYNOS5_IRQ_ISP_GIC COMBINER_IRQ(19, 4)
+
+#define EXYNOS5_IRQ_PMU_CPU1 COMBINER_IRQ(22, 4)
+
#define EXYNOS5_IRQ_EINT0 COMBINER_IRQ(23, 0)
-#define EXYNOS5_IRQ_MCT_L0 COMBINER_IRQ(23, 1)
-#define EXYNOS5_IRQ_MCT_L1 COMBINER_IRQ(23, 2)
#define EXYNOS5_IRQ_MCT_G0 COMBINER_IRQ(23, 3)
#define EXYNOS5_IRQ_MCT_G1 COMBINER_IRQ(23, 4)
-#define EXYNOS5_IRQ_MCT_G2 COMBINER_IRQ(23, 5)
-#define EXYNOS5_IRQ_MCT_G3 COMBINER_IRQ(23, 6)
#define EXYNOS5_IRQ_EINT1 COMBINER_IRQ(24, 0)
#define EXYNOS5_IRQ_SYSMMU_LITE1_0 COMBINER_IRQ(24, 1)
#define EXYNOS5_MAX_COMBINER_NR 32
-#define EXYNOS5_IRQ_GPIO1_NR_GROUPS 13
+#define EXYNOS5_IRQ_GPIO1_NR_GROUPS 14
#define EXYNOS5_IRQ_GPIO2_NR_GROUPS 9
#define EXYNOS5_IRQ_GPIO3_NR_GROUPS 5
#define EXYNOS5_IRQ_GPIO4_NR_GROUPS 1
#define EXYNOS4_PA_G2D 0x12800000
+#define EXYNOS_PA_AUDSS 0x03810000
#define EXYNOS4_PA_I2S0 0x03830000
#define EXYNOS4_PA_I2S1 0xE3100000
#define EXYNOS4_PA_I2S2 0xE2A00000
#define EXYNOS4_PA_GIC_CPU 0x10480000
#define EXYNOS4_PA_GIC_DIST 0x10490000
-#define EXYNOS5_PA_GIC_CPU 0x10480000
-#define EXYNOS5_PA_GIC_DIST 0x10490000
+#define EXYNOS5_PA_GIC_CPU 0x10482000
+#define EXYNOS5_PA_GIC_DIST 0x10481000
#define EXYNOS4_PA_COREPERI 0x10500000
#define EXYNOS4_PA_TWD 0x10500600
#define EXYNOS4_PA_SPI0 0x13920000
#define EXYNOS4_PA_SPI1 0x13930000
#define EXYNOS4_PA_SPI2 0x13940000
+#define EXYNOS5_PA_SPI0 0x12D20000
+#define EXYNOS5_PA_SPI1 0x12D30000
+#define EXYNOS5_PA_SPI2 0x12D40000
#define EXYNOS4_PA_GPIO1 0x11400000
#define EXYNOS4_PA_GPIO2 0x11000000
#define EXYNOS4_PA_UART 0x13800000
#define EXYNOS5_PA_UART 0x12C00000
+#define EXYNOS5_PA_USB_PHY 0x12130000
+#define EXYNOS5_PA_DRD_PHY 0x12100000
+
#define EXYNOS4_PA_VP 0x12C00000
#define EXYNOS4_PA_MIXER 0x12C10000
#define EXYNOS4_PA_SDO 0x12C20000
#define EXYNOS4_PA_SDRAM 0x40000000
#define EXYNOS5_PA_SDRAM 0x40000000
+#define EXYNOS5_PA_G3D 0x11800000
+
/* Compatibility Defines */
#define S3C_PA_HSMMC0 EXYNOS4_PA_HSMMC(0)
__raw_writel(tmp, S5P_WAKEUP_MASK);
__raw_writel(s3c_irqwake_intmask, S5P_WAKEUP_MASK);
- __raw_writel(s3c_irqwake_eintmask, S5P_EINT_WAKEUP_MASK);
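+ /* keep bit 0 cleared so the EINT0 wakeup source stays enabled */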
+ __raw_writel(s3c_irqwake_eintmask & 0xFFFFFFFE, S5P_EINT_WAKEUP_MASK);
}
static inline void s3c_pm_arch_stop_clocks(void)
#define EXYNOS4_AUDSS_INT_MEM (0x03000000)
+#define EXYNOS_AUDSSREG(x) (S5P_VA_AUDSS + (x))
+
+#define EXYNOS_CLKSRC_AUDSS_OFFSET 0x0
+#define EXYNOS_CLKDIV_AUDSS_OFFSET 0x4
+#define EXYNOS_CLKGATE_AUDSS_OFFSET 0x8
+
+#define EXYNOS_CLKSRC_AUDSS (EXYNOS_AUDSSREG \
+ (EXYNOS_CLKSRC_AUDSS_OFFSET))
+#define EXYNOS_CLKDIV_AUDSS (EXYNOS_AUDSSREG \
+ (EXYNOS_CLKDIV_AUDSS_OFFSET))
+#define EXYNOS_CLKGATE_AUDSS (EXYNOS_AUDSSREG \
+ (EXYNOS_CLKGATE_AUDSS_OFFSET))
+
+/* IP Clock Gate 0 Registers */
+#define EXYNOS_AUDSS_CLKGATE_RP (1<<0)
+#define EXYNOS_AUDSS_CLKGATE_I2SBUS (1<<2)
+#define EXYNOS_AUDSS_CLKGATE_I2SSPECIAL (1<<3)
+#define EXYNOS_AUDSS_CLKGATE_PCMBUS (1<<4)
+#define EXYNOS_AUDSS_CLKGATE_PCMSPECIAL (1<<5)
+#define EXYNOS_AUDSS_CLKGATE_UART (1<<7)
+#define EXYNOS_AUDSS_CLKGATE_TIMER (1<<8)
+
#endif /* _PLAT_REGS_AUDSS_H */
--- /dev/null
+/* linux/arch/arm/mach-exynos/include/mach/regs-cec.h
+ *
+ * Copyright (c) 2012 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
+ *
+ * CEC register header file for Samsung TVOUT driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __ARCH_ARM_REGS_CEC_H
+#define __ARCH_ARM_REGS_CEC_H
+
+/*
+ * Register part
+ */
+#define S5P_CEC_TX_STATUS_0 (0x0000)
+#define S5P_CEC_TX_STATUS_1 (0x0004)
+#define S5P_CEC_RX_STATUS_0 (0x0008)
+#define S5P_CEC_RX_STATUS_1 (0x000C)
+#define S5P_CEC_IRQ_MASK (0x0010)
+#define S5P_CEC_IRQ_CLEAR (0x0014)
+#define S5P_CEC_LOGIC_ADDR (0x0020)
+#define S5P_CEC_DIVISOR_0 (0x0030)
+#define S5P_CEC_DIVISOR_1 (0x0034)
+#define S5P_CEC_DIVISOR_2 (0x0038)
+#define S5P_CEC_DIVISOR_3 (0x003C)
+
+#define S5P_CEC_TX_CTRL (0x0040)
+#define S5P_CEC_TX_BYTES (0x0044)
+#define S5P_CEC_TX_STATUS_2 (0x0060)
+#define S5P_CEC_TX_STATUS_3 (0x0064)
+#define S5P_CEC_TX_BUFF0 (0x0080)
+#define S5P_CEC_TX_BUFF1 (0x0084)
+#define S5P_CEC_TX_BUFF2 (0x0088)
+#define S5P_CEC_TX_BUFF3 (0x008C)
+#define S5P_CEC_TX_BUFF4 (0x0090)
+#define S5P_CEC_TX_BUFF5 (0x0094)
+#define S5P_CEC_TX_BUFF6 (0x0098)
+#define S5P_CEC_TX_BUFF7 (0x009C)
+#define S5P_CEC_TX_BUFF8 (0x00A0)
+#define S5P_CEC_TX_BUFF9 (0x00A4)
+#define S5P_CEC_TX_BUFF10 (0x00A8)
+#define S5P_CEC_TX_BUFF11 (0x00AC)
+#define S5P_CEC_TX_BUFF12 (0x00B0)
+#define S5P_CEC_TX_BUFF13 (0x00B4)
+#define S5P_CEC_TX_BUFF14 (0x00B8)
+#define S5P_CEC_TX_BUFF15 (0x00BC)
+
+#define S5P_CEC_RX_CTRL (0x00C0)
+#define S5P_CEC_RX_STATUS_2 (0x00E0)
+#define S5P_CEC_RX_STATUS_3 (0x00E4)
+#define S5P_CEC_RX_BUFF0 (0x0100)
+#define S5P_CEC_RX_BUFF1 (0x0104)
+#define S5P_CEC_RX_BUFF2 (0x0108)
+#define S5P_CEC_RX_BUFF3 (0x010C)
+#define S5P_CEC_RX_BUFF4 (0x0110)
+#define S5P_CEC_RX_BUFF5 (0x0114)
+#define S5P_CEC_RX_BUFF6 (0x0118)
+#define S5P_CEC_RX_BUFF7 (0x011C)
+#define S5P_CEC_RX_BUFF8 (0x0120)
+#define S5P_CEC_RX_BUFF9 (0x0124)
+#define S5P_CEC_RX_BUFF10 (0x0128)
+#define S5P_CEC_RX_BUFF11 (0x012C)
+#define S5P_CEC_RX_BUFF12 (0x0130)
+#define S5P_CEC_RX_BUFF13 (0x0134)
+#define S5P_CEC_RX_BUFF14 (0x0138)
+#define S5P_CEC_RX_BUFF15 (0x013C)
+
+#define S5P_CEC_RX_FILTER_CTRL (0x0180)
+#define S5P_CEC_RX_FILTER_TH (0x0184)
+
+/*
+ * Bit definition part
+ */
+#define S5P_CEC_IRQ_TX_DONE (1 << 0)
+#define S5P_CEC_IRQ_TX_ERROR (1 << 1)
+#define S5P_CEC_IRQ_RX_DONE (1 << 4)
+#define S5P_CEC_IRQ_RX_ERROR (1 << 5)
+
+#define S5P_CEC_TX_CTRL_START (1 << 0)
+#define S5P_CEC_TX_CTRL_BCAST (1 << 1)
+#define S5P_CEC_TX_CTRL_RETRY (5 << 4)
+#define S5P_CEC_TX_CTRL_RESET (1 << 7)
+
+#define S5P_CEC_RX_CTRL_ENABLE (1 << 0)
+#define S5P_CEC_RX_CTRL_RESET (1 << 7)
+
+#define S5P_CEC_LOGIC_ADDR_MASK (0xF)
+
+#endif /* __ARCH_ARM_REGS_CEC_H */
#define EXYNOS5_CLKSRC_CORE1 EXYNOS_CLKREG(0x04204)
#define EXYNOS5_CLKGATE_IP_CORE EXYNOS_CLKREG(0x04900)
+#define EXYNOS5_CLKGATE_ISP0 EXYNOS_CLKREG(0x0C800)
+
#define EXYNOS5_CLKDIV_ACP EXYNOS_CLKREG(0x08500)
-#define EXYNOS5_CLKSRC_TOP2 EXYNOS_CLKREG(0x10218)
#define EXYNOS5_EPLL_CON0 EXYNOS_CLKREG(0x10130)
#define EXYNOS5_EPLL_CON1 EXYNOS_CLKREG(0x10134)
+#define EXYNOS5_EPLL_CON2 EXYNOS_CLKREG(0x10138)
#define EXYNOS5_VPLL_CON0 EXYNOS_CLKREG(0x10140)
#define EXYNOS5_VPLL_CON1 EXYNOS_CLKREG(0x10144)
+#define EXYNOS5_VPLL_CON2 EXYNOS_CLKREG(0x10148)
#define EXYNOS5_CPLL_CON0 EXYNOS_CLKREG(0x10120)
#define EXYNOS5_CLKSRC_TOP0 EXYNOS_CLKREG(0x10210)
+#define EXYNOS5_CLKSRC_TOP1 EXYNOS_CLKREG(0x10214)
+#define EXYNOS5_CLKSRC_TOP2 EXYNOS_CLKREG(0x10218)
#define EXYNOS5_CLKSRC_TOP3 EXYNOS_CLKREG(0x1021C)
#define EXYNOS5_CLKSRC_GSCL EXYNOS_CLKREG(0x10220)
#define EXYNOS5_CLKSRC_DISP1_0 EXYNOS_CLKREG(0x1022C)
+#define EXYNOS5_CLKSRC_MAUDIO EXYNOS_CLKREG(0x10240)
#define EXYNOS5_CLKSRC_FSYS EXYNOS_CLKREG(0x10244)
#define EXYNOS5_CLKSRC_PERIC0 EXYNOS_CLKREG(0x10250)
+#define EXYNOS5_CLKSRC_PERIC1 EXYNOS_CLKREG(0x10254)
+#define EXYNOS5_SCLK_SRC_ISP EXYNOS_CLKREG(0x10270)
#define EXYNOS5_CLKSRC_MASK_TOP EXYNOS_CLKREG(0x10310)
#define EXYNOS5_CLKSRC_MASK_GSCL EXYNOS_CLKREG(0x10320)
#define EXYNOS5_CLKSRC_MASK_DISP1_0 EXYNOS_CLKREG(0x1032C)
+#define EXYNOS5_CLKSRC_MASK_MAUDIO EXYNOS_CLKREG(0x10334)
#define EXYNOS5_CLKSRC_MASK_FSYS EXYNOS_CLKREG(0x10340)
#define EXYNOS5_CLKSRC_MASK_PERIC0 EXYNOS_CLKREG(0x10350)
+#define EXYNOS5_CLKSRC_MASK_PERIC1 EXYNOS_CLKREG(0x10354)
#define EXYNOS5_CLKDIV_TOP0 EXYNOS_CLKREG(0x10510)
#define EXYNOS5_CLKDIV_TOP1 EXYNOS_CLKREG(0x10514)
#define EXYNOS5_CLKDIV_GSCL EXYNOS_CLKREG(0x10520)
#define EXYNOS5_CLKDIV_DISP1_0 EXYNOS_CLKREG(0x1052C)
#define EXYNOS5_CLKDIV_GEN EXYNOS_CLKREG(0x1053C)
+#define EXYNOS5_CLKDIV_MAUDIO EXYNOS_CLKREG(0x10544)
#define EXYNOS5_CLKDIV_FSYS0 EXYNOS_CLKREG(0x10548)
#define EXYNOS5_CLKDIV_FSYS1 EXYNOS_CLKREG(0x1054C)
#define EXYNOS5_CLKDIV_FSYS2 EXYNOS_CLKREG(0x10550)
#define EXYNOS5_CLKDIV_FSYS3 EXYNOS_CLKREG(0x10554)
#define EXYNOS5_CLKDIV_PERIC0 EXYNOS_CLKREG(0x10558)
+#define EXYNOS5_CLKDIV_PERIC1 EXYNOS_CLKREG(0x1055C)
+#define EXYNOS5_CLKDIV_PERIC2 EXYNOS_CLKREG(0x10560)
+#define EXYNOS5_CLKDIV_PERIC3 EXYNOS_CLKREG(0x10564)
+#define EXYNOS5_CLKDIV_PERIC4 EXYNOS_CLKREG(0x10568)
+#define EXYNOS5_CLKDIV_PERIC5 EXYNOS_CLKREG(0x1056C)
+#define EXYNOS5_SCLK_DIV_ISP EXYNOS_CLKREG(0x10580)
+#define EXYNOS5_CLKDIV_STAT_TOP0 EXYNOS_CLKREG(0x10610)
#define EXYNOS5_CLKGATE_IP_ACP EXYNOS_CLKREG(0x08800)
+#define EXYNOS5_CLKGATE_IP_ISP0 EXYNOS_CLKREG(0x0C800)
+#define EXYNOS5_CLKGATE_IP_ISP1 EXYNOS_CLKREG(0x0C804)
#define EXYNOS5_CLKGATE_IP_GSCL EXYNOS_CLKREG(0x10920)
#define EXYNOS5_CLKGATE_IP_DISP1 EXYNOS_CLKREG(0x10928)
#define EXYNOS5_CLKGATE_IP_MFC EXYNOS_CLKREG(0x1092C)
+#define EXYNOS5_CLKGATE_IP_G3D EXYNOS_CLKREG(0x10930)
#define EXYNOS5_CLKGATE_IP_GEN EXYNOS_CLKREG(0x10934)
#define EXYNOS5_CLKGATE_IP_FSYS EXYNOS_CLKREG(0x10944)
#define EXYNOS5_CLKGATE_IP_GPS EXYNOS_CLKREG(0x1094C)
#define EXYNOS5_CLKGATE_IP_PERIC EXYNOS_CLKREG(0x10950)
#define EXYNOS5_CLKGATE_IP_PERIS EXYNOS_CLKREG(0x10960)
#define EXYNOS5_CLKGATE_BLOCK EXYNOS_CLKREG(0x10980)
+#define EXYNOS5_CLKOUT_CMU_TOP EXYNOS_CLKREG(0x10A00)
+
+#define EXYNOS5_PLL_DIV2_SEL EXYNOS_CLKREG(0x20A24)
#define EXYNOS5_BPLL_CON0 EXYNOS_CLKREG(0x20110)
#define EXYNOS5_CLKSRC_CDREX EXYNOS_CLKREG(0x20200)
#include <mach/map.h>
#include <mach/irqs.h>
-#define EINT_REG_NR(x) (EINT_OFFSET(x) >> 3)
+#define EINT_REG_NR(x) ((x) >> 3)
#define EINT_CON(b, x) (b + 0xE00 + (EINT_REG_NR(x) * 4))
#define EINT_FLTCON(b, x) (b + 0xE80 + (EINT_REG_NR(x) * 4))
#define EINT_MASK(b, x) (b + 0xF00 + (EINT_REG_NR(x) * 4))
#define EINT_PEND(b, x) (b + 0xF40 + (EINT_REG_NR(x) * 4))
-#define EINT_OFFSET_BIT(x) (1 << (EINT_OFFSET(x) & 0x7))
+#define EINT_OFFSET_BIT(x) (1 << ((x) & 0x7))
/* compatibility for plat-s5p/irq-pm.c */
#define EXYNOS4_EINT40CON (S5P_VA_GPIO2 + 0xE00)
-/* linux/arch/arm/mach-exynos4/include/mach/regs-pmu.h
- *
- * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+/*
+ * Copyright (c) 2010-2012 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
- * EXYNOS4 - Power management unit definition
+ * EXYNOS - Power management unit definition
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
#include <mach/map.h>
#define S5P_PMUREG(x) (S5P_VA_PMU + (x))
+#define S5P_SYSREG(x) (S3C_VA_SYS + (x))
#define S5P_CENTRAL_SEQ_CONFIGURATION S5P_PMUREG(0x0200)
#define S5P_HDMI_PHY_CONTROL S5P_PMUREG(0x0700)
#define S5P_HDMI_PHY_ENABLE (1 << 0)
+/* only for EXYNOS5250 */
+#define S5P_USBDRD_PHY_CONTROL S5P_PMUREG(0x0704)
+#define S5P_USBDRD_PHY_ENABLE (1 << 0)
+
#define S5P_DAC_PHY_CONTROL S5P_PMUREG(0x070C)
#define S5P_DAC_PHY_ENABLE (1 << 0)
#define S5P_MIPI_DPHY_SRESETN (1 << 1)
#define S5P_MIPI_DPHY_MRESETN (1 << 2)
+#define S5P_DPTX_PHY_CONTROL S5P_PMUREG(0x720)
+#define S5P_DPTX_PHY_ENABLE (1 << 0)
+
#define S5P_INFORM0 S5P_PMUREG(0x0800)
#define S5P_INFORM1 S5P_PMUREG(0x0804)
#define S5P_INFORM2 S5P_PMUREG(0x0808)
#define S5P_SECSS_MEM_OPTION S5P_PMUREG(0x2EC8)
#define S5P_ROTATOR_MEM_OPTION S5P_PMUREG(0x2F48)
+/* For EXYNOS5 */
+
+#define EXYNOS5_GPS_LPI S5P_PMUREG(0x0004)
+
+#define EXYNOS5_USB_CFG S5P_PMUREG(0x0230)
+
+#define EXYNOS5_SYS_WDTRESET (1 << 20)
+#define EXYNOS5_AUTOMATIC_WDT_RESET_DISABLE S5P_PMUREG(0x0408)
+#define EXYNOS5_MASK_WDT_RESET_REQUEST S5P_PMUREG(0x040C)
+
+#define EXYNOS5_ARM_CORE0_SYS_PWR_REG S5P_PMUREG(0x1000)
+#define EXYNOS5_DIS_IRQ_ARM_CORE0_LOCAL_SYS_PWR_REG S5P_PMUREG(0x1004)
+#define EXYNOS5_DIS_IRQ_ARM_CORE0_CENTRAL_SYS_PWR_REG S5P_PMUREG(0x1008)
+#define EXYNOS5_ARM_CORE1_SYS_PWR_REG S5P_PMUREG(0x1010)
+#define EXYNOS5_DIS_IRQ_ARM_CORE1_LOCAL_SYS_PWR_REG S5P_PMUREG(0x1014)
+#define EXYNOS5_DIS_IRQ_ARM_CORE1_CENTRAL_SYS_PWR_REG S5P_PMUREG(0x1018)
+#define EXYNOS5_FSYS_ARM_SYS_PWR_REG S5P_PMUREG(0x1040)
+#define EXYNOS5_DIS_IRQ_FSYS_ARM_CENTRAL_SYS_PWR_REG S5P_PMUREG(0x1048)
+#define EXYNOS5_ISP_ARM_SYS_PWR_REG S5P_PMUREG(0x1050)
+#define EXYNOS5_DIS_IRQ_ISP_ARM_LOCAL_SYS_PWR_REG S5P_PMUREG(0x1054)
+#define EXYNOS5_DIS_IRQ_ISP_ARM_CENTRAL_SYS_PWR_REG S5P_PMUREG(0x1058)
+#define EXYNOS5_ARM_COMMON_SYS_PWR_REG S5P_PMUREG(0x1080)
+#define EXYNOS5_ARM_L2_SYS_PWR_REG S5P_PMUREG(0x10C0)
+#define EXYNOS5_CMU_ACLKSTOP_SYS_PWR_REG S5P_PMUREG(0x1100)
+#define EXYNOS5_CMU_SCLKSTOP_SYS_PWR_REG S5P_PMUREG(0x1104)
+#define EXYNOS5_CMU_RESET_SYS_PWR_REG S5P_PMUREG(0x110C)
+#define EXYNOS5_CMU_ACLKSTOP_SYSMEM_SYS_PWR_REG S5P_PMUREG(0x1120)
+#define EXYNOS5_CMU_SCLKSTOP_SYSMEM_SYS_PWR_REG S5P_PMUREG(0x1124)
+#define EXYNOS5_CMU_RESET_SYSMEM_SYS_PWR_REG S5P_PMUREG(0x112C)
+#define EXYNOS5_DRAM_FREQ_DOWN_SYS_PWR_REG S5P_PMUREG(0x1130)
+#define EXYNOS5_DDRPHY_DLLOFF_SYS_PWR_REG S5P_PMUREG(0x1134)
+#define EXYNOS5_DDRPHY_DLLLOCK_SYS_PWR_REG S5P_PMUREG(0x1138)
+#define EXYNOS5_APLL_SYSCLK_SYS_PWR_REG S5P_PMUREG(0x1140)
+#define EXYNOS5_MPLL_SYSCLK_SYS_PWR_REG S5P_PMUREG(0x1144)
+#define EXYNOS5_VPLL_SYSCLK_SYS_PWR_REG S5P_PMUREG(0x1148)
+#define EXYNOS5_EPLL_SYSCLK_SYS_PWR_REG S5P_PMUREG(0x114C)
+#define EXYNOS5_BPLL_SYSCLK_SYS_PWR_REG S5P_PMUREG(0x1150)
+#define EXYNOS5_CPLL_SYSCLK_SYS_PWR_REG S5P_PMUREG(0x1154)
+#define EXYNOS5_MPLLUSER_SYSCLK_SYS_PWR_REG S5P_PMUREG(0x1164)
+#define EXYNOS5_BPLLUSER_SYSCLK_SYS_PWR_REG S5P_PMUREG(0x1170)
+#define EXYNOS5_TOP_BUS_SYS_PWR_REG S5P_PMUREG(0x1180)
+#define EXYNOS5_TOP_RETENTION_SYS_PWR_REG S5P_PMUREG(0x1184)
+#define EXYNOS5_TOP_PWR_SYS_PWR_REG S5P_PMUREG(0x1188)
+#define EXYNOS5_TOP_BUS_SYSMEM_SYS_PWR_REG S5P_PMUREG(0x1190)
+#define EXYNOS5_TOP_RETENTION_SYSMEM_SYS_PWR_REG S5P_PMUREG(0x1194)
+#define EXYNOS5_TOP_PWR_SYSMEM_SYS_PWR_REG S5P_PMUREG(0x1198)
+#define EXYNOS5_LOGIC_RESET_SYS_PWR_REG S5P_PMUREG(0x11A0)
+#define EXYNOS5_OSCCLK_GATE_SYS_PWR_REG S5P_PMUREG(0x11A4)
+#define EXYNOS5_LOGIC_RESET_SYSMEM_SYS_PWR_REG S5P_PMUREG(0x11B0)
+#define EXYNOS5_OSCCLK_GATE_SYSMEM_SYS_PWR_REG S5P_PMUREG(0x11B4)
+#define EXYNOS5_USBOTG_MEM_SYS_PWR_REG S5P_PMUREG(0x11C0)
+#define EXYNOS5_G2D_MEM_SYS_PWR_REG S5P_PMUREG(0x11C8)
+#define EXYNOS5_USBDRD_MEM_SYS_PWR_REG S5P_PMUREG(0x11CC)
+#define EXYNOS5_SDMMC_MEM_SYS_PWR_REG S5P_PMUREG(0x11D0)
+#define EXYNOS5_CSSYS_MEM_SYS_PWR_REG S5P_PMUREG(0x11D4)
+#define EXYNOS5_SECSS_MEM_SYS_PWR_REG S5P_PMUREG(0x11D8)
+#define EXYNOS5_ROTATOR_MEM_SYS_PWR_REG S5P_PMUREG(0x11DC)
+#define EXYNOS5_INTRAM_MEM_SYS_PWR_REG S5P_PMUREG(0x11E0)
+#define EXYNOS5_INTROM_MEM_SYS_PWR_REG S5P_PMUREG(0x11E4)
+#define EXYNOS5_JPEG_MEM_SYS_PWR_REG S5P_PMUREG(0x11E8)
+#define EXYNOS5_HSI_MEM_SYS_PWR_REG S5P_PMUREG(0x11EC)
+#define EXYNOS5_MCUIOP_MEM_SYS_PWR_REG S5P_PMUREG(0x11F4)
+#define EXYNOS5_SATA_MEM_SYS_PWR_REG S5P_PMUREG(0x11FC)
+#define EXYNOS5_PAD_RETENTION_DRAM_SYS_PWR_REG S5P_PMUREG(0x1200)
+#define EXYNOS5_PAD_RETENTION_MAU_SYS_PWR_REG S5P_PMUREG(0x1204)
+#define EXYNOS5_PAD_RETENTION_EFNAND_SYS_PWR_REG S5P_PMUREG(0x1208)
+#define EXYNOS5_PAD_RETENTION_GPIO_SYS_PWR_REG S5P_PMUREG(0x1220)
+#define EXYNOS5_PAD_RETENTION_UART_SYS_PWR_REG S5P_PMUREG(0x1224)
+#define EXYNOS5_PAD_RETENTION_MMCA_SYS_PWR_REG S5P_PMUREG(0x1228)
+#define EXYNOS5_PAD_RETENTION_MMCB_SYS_PWR_REG S5P_PMUREG(0x122C)
+#define EXYNOS5_PAD_RETENTION_EBIA_SYS_PWR_REG S5P_PMUREG(0x1230)
+#define EXYNOS5_PAD_RETENTION_EBIB_SYS_PWR_REG S5P_PMUREG(0x1234)
+#define EXYNOS5_PAD_RETENTION_SPI_SYS_PWR_REG S5P_PMUREG(0x1238)
+#define EXYNOS5_PAD_RETENTION_GPIO_SYSMEM_SYS_PWR_REG S5P_PMUREG(0x123C)
+#define EXYNOS5_PAD_ISOLATION_SYS_PWR_REG S5P_PMUREG(0x1240)
+#define EXYNOS5_PAD_ISOLATION_SYSMEM_SYS_PWR_REG S5P_PMUREG(0x1250)
+#define EXYNOS5_PAD_ALV_SEL_SYS_PWR_REG S5P_PMUREG(0x1260)
+#define EXYNOS5_XUSBXTI_SYS_PWR_REG S5P_PMUREG(0x1280)
+#define EXYNOS5_XXTI_SYS_PWR_REG S5P_PMUREG(0x1284)
+#define EXYNOS5_EXT_REGULATOR_SYS_PWR_REG S5P_PMUREG(0x12C0)
+#define EXYNOS5_GPIO_MODE_SYS_PWR_REG S5P_PMUREG(0x1300)
+#define EXYNOS5_GPIO_MODE_SYSMEM_SYS_PWR_REG S5P_PMUREG(0x1320)
+#define EXYNOS5_GPIO_MODE_MAU_SYS_PWR_REG S5P_PMUREG(0x1340)
+#define EXYNOS5_TOP_ASB_RESET_SYS_PWR_REG S5P_PMUREG(0x1344)
+#define EXYNOS5_TOP_ASB_ISOLATION_SYS_PWR_REG S5P_PMUREG(0x1348)
+#define EXYNOS5_GSCL_SYS_PWR_REG S5P_PMUREG(0x1400)
+#define EXYNOS5_ISP_SYS_PWR_REG S5P_PMUREG(0x1404)
+#define EXYNOS5_MFC_SYS_PWR_REG S5P_PMUREG(0x1408)
+#define EXYNOS5_G3D_SYS_PWR_REG S5P_PMUREG(0x140C)
+#define EXYNOS5_DISP1_SYS_PWR_REG S5P_PMUREG(0x1414)
+#define EXYNOS5_MAU_SYS_PWR_REG S5P_PMUREG(0x1418)
+#define EXYNOS5_GPS_SYS_PWR_REG S5P_PMUREG(0x141C)
+#define EXYNOS5_CMU_CLKSTOP_GSCL_SYS_PWR_REG S5P_PMUREG(0x1480)
+#define EXYNOS5_CMU_CLKSTOP_ISP_SYS_PWR_REG S5P_PMUREG(0x1484)
+#define EXYNOS5_CMU_CLKSTOP_MFC_SYS_PWR_REG S5P_PMUREG(0x1488)
+#define EXYNOS5_CMU_CLKSTOP_G3D_SYS_PWR_REG S5P_PMUREG(0x148C)
+#define EXYNOS5_CMU_CLKSTOP_DISP1_SYS_PWR_REG S5P_PMUREG(0x1494)
+#define EXYNOS5_CMU_CLKSTOP_MAU_SYS_PWR_REG S5P_PMUREG(0x1498)
+#define EXYNOS5_CMU_CLKSTOP_GPS_SYS_PWR_REG S5P_PMUREG(0x149C)
+#define EXYNOS5_CMU_SYSCLK_GSCL_SYS_PWR_REG S5P_PMUREG(0x14C0)
+#define EXYNOS5_CMU_SYSCLK_ISP_SYS_PWR_REG S5P_PMUREG(0x14C4)
+#define EXYNOS5_CMU_SYSCLK_MFC_SYS_PWR_REG S5P_PMUREG(0x14C8)
+#define EXYNOS5_CMU_SYSCLK_G3D_SYS_PWR_REG S5P_PMUREG(0x14CC)
+#define EXYNOS5_CMU_SYSCLK_DISP1_SYS_PWR_REG S5P_PMUREG(0x14D4)
+#define EXYNOS5_CMU_SYSCLK_MAU_SYS_PWR_REG S5P_PMUREG(0x14D8)
+#define EXYNOS5_CMU_SYSCLK_GPS_SYS_PWR_REG S5P_PMUREG(0x14DC)
+#define EXYNOS5_CMU_RESET_GSCL_SYS_PWR_REG S5P_PMUREG(0x1580)
+#define EXYNOS5_CMU_RESET_ISP_SYS_PWR_REG S5P_PMUREG(0x1584)
+#define EXYNOS5_CMU_RESET_MFC_SYS_PWR_REG S5P_PMUREG(0x1588)
+#define EXYNOS5_CMU_RESET_G3D_SYS_PWR_REG S5P_PMUREG(0x158C)
+#define EXYNOS5_CMU_RESET_DISP1_SYS_PWR_REG S5P_PMUREG(0x1594)
+#define EXYNOS5_CMU_RESET_MAU_SYS_PWR_REG S5P_PMUREG(0x1598)
+#define EXYNOS5_CMU_RESET_GPS_SYS_PWR_REG S5P_PMUREG(0x159C)
+
+#define EXYNOS5_ARM_CORE0_OPTION S5P_PMUREG(0x2008)
+#define EXYNOS5_ARM_CORE1_OPTION S5P_PMUREG(0x2088)
+#define EXYNOS5_FSYS_ARM_OPTION S5P_PMUREG(0x2208)
+#define EXYNOS5_ISP_ARM_OPTION S5P_PMUREG(0x2288)
+#define EXYNOS5_ARM_COMMON_OPTION S5P_PMUREG(0x2408)
+#define EXYNOS5_TOP_PWR_OPTION S5P_PMUREG(0x2C48)
+#define EXYNOS5_TOP_PWR_SYSMEM_OPTION S5P_PMUREG(0x2CC8)
+#define EXYNOS5_JPEG_MEM_OPTION S5P_PMUREG(0x2F48)
+#define EXYNOS5_PS_HOLD_CONTROL S5P_PMUREG(0x330C)
+#define EXYNOS5_GSCL_STATUS S5P_PMUREG(0x4004)
+#define EXYNOS5_ISP_STATUS S5P_PMUREG(0x4024)
+#define EXYNOS5_GSCL_OPTION S5P_PMUREG(0x4008)
+#define EXYNOS5_ISP_OPTION S5P_PMUREG(0x4028)
+#define EXYNOS5_MFC_OPTION S5P_PMUREG(0x4048)
+#define EXYNOS5_G3D_CONFIGURATION S5P_PMUREG(0x4060)
+#define EXYNOS5_G3D_STATUS S5P_PMUREG(0x4064)
+#define EXYNOS5_G3D_OPTION S5P_PMUREG(0x4068)
+#define EXYNOS5_DISP1_OPTION S5P_PMUREG(0x40A8)
+#define EXYNOS5_MAU_OPTION S5P_PMUREG(0x40C8)
+#define EXYNOS5_GPS_OPTION S5P_PMUREG(0x40E8)
+
+#define EXYNOS5_USE_SC_FEEDBACK (1 << 1)
+#define EXYNOS5_USE_SC_COUNTER (1 << 0)
+
+#define EXYNOS5_MANUAL_L2RSTDISABLE_CONTROL (1 << 2)
+#define EXYNOS5_SKIP_DEACTIVATE_ACEACP_IN_PWDN (1 << 7)
+
+#define EXYNOS5_OPTION_USE_STANDBYWFE (1 << 24)
+#define EXYNOS5_OPTION_USE_STANDBYWFI (1 << 16)
+
+#define EXYNOS5_OPTION_USE_RETENTION (1 << 4)
+
+#define EXYNOS5_SYS_I2C_CFG S5P_SYSREG(0x234)
+#define EXYNOS5_SYS_DISP1BLK_CFG S5P_SYSREG(0x214)
+#define ENABLE_FIMDBYPASS_DISP1 (1 << 15)
+
#endif /* __ASM_ARCH_REGS_PMU_H */
#define EXYNOS4_HSOTG_PHYREG(x) ((x) + S3C_VA_USB_HSPHY)
+/* Exynos 4 */
#define EXYNOS4_PHYPWR EXYNOS4_HSOTG_PHYREG(0x00)
#define PHY1_HSIC_NORMAL_MASK (0xf << 9)
#define PHY1_HSIC1_SLEEP (1 << 12)
#define EXYNOS4_PHY1CON EXYNOS4_HSOTG_PHYREG(0x34)
#define FPENABLEN (1 << 0)
+
+/* Exynos 5 */
+#define EXYNOS5_PHY_HOST_CTRL0 EXYNOS4_HSOTG_PHYREG(0x00)
+#define HOST_CTRL0_PHYSWRSTALL (0x1 << 31)
+#define HOST_CTRL0_REFCLKSEL(val) ((val) << 19)
+#define EXYNOS5_CLKSEL_50M (0x7)
+#define EXYNOS5_CLKSEL_24M (0x5)
+#define EXYNOS5_CLKSEL_20M (0x4)
+#define EXYNOS5_CLKSEL_19200K (0x3)
+#define EXYNOS5_CLKSEL_12M (0x2)
+#define EXYNOS5_CLKSEL_10M (0x1)
+#define EXYNOS5_CLKSEL_9600K (0x0)
+#define HOST_CTRL0_CLKSEL_SHIFT (16)
+#define HOST_CTRL0_FSEL_MASK (0x7 << 16)
+
+#define HOST_CTRL0_COMMONON_N (0x1 << 9)
+#define HOST_CTRL0_SIDDQ (0x1 << 6)
+#define HOST_CTRL0_FORCESLEEP (0x1 << 5)
+#define HOST_CTRL0_FORCESUSPEND (0x1 << 4)
+#define HOST_CTRL0_WORDINTERFACE (0x1 << 3)
+#define HOST_CTRL0_UTMISWRST (0x1 << 2)
+#define HOST_CTRL0_LINKSWRST (0x1 << 1)
+#define HOST_CTRL0_PHYSWRST (0x1 << 0)
+
+#define EXYNOS5_PHY_HOST_TUNE0 EXYNOS4_HSOTG_PHYREG(0x04)
+#define EXYNOS5_PHY_HOST_TEST0 EXYNOS4_HSOTG_PHYREG(0x08)
+
+#define EXYNOS5_PHY_HSIC_CTRL1 EXYNOS4_HSOTG_PHYREG(0x10)
+#define EXYNOS5_PHY_HSIC_CTRL2 EXYNOS4_HSOTG_PHYREG(0x20)
+#define HSIC_CTRL_REFCLKSEL(val) (((val) & 0x3) << 23)
+#define HSIC_CTRL_REFCLKDIV(val) (((val) & 0x7f) << 16)
+#define HSIC_CTRL_SIDDQ (0x1 << 6)
+#define HSIC_CTRL_FORCESLEEP (0x1 << 5)
+#define HSIC_CTRL_FORCESUSPEND (0x1 << 4)
+#define HSIC_CTRL_WORDINTERFACE (0x1 << 3)
+#define HSIC_CTRL_UTMISWRST (0x1 << 2)
+#define HSIC_CTRL_PHYSWRST (0x1 << 0)
+
+#define EXYNOS5_PHY_HOST_EHCICTRL EXYNOS4_HSOTG_PHYREG(0x30)
+#define EHCICTRL_ENAINCRXALIGN (0x1 << 29)
+#define EHCICTRL_ENAINCR4 (0x1 << 28)
+#define EHCICTRL_ENAINCR8 (0x1 << 27)
+#define EHCICTRL_ENAINCR16 (0x1 << 26)
+
+#define EXYNOS5_PHY_HOST_OHCICTRL EXYNOS4_HSOTG_PHYREG(0x34)
+
+#define EXYNOS5_PHY_OTG_SYS EXYNOS4_HSOTG_PHYREG(0x38)
+#define OTG_SYS_PHYLINK_SW_RESET (0x1 << 14)
+#define OTG_SYS_LINK_SW_RST_UOTG (0x1 << 13)
+#define OTG_SYS_PHY0_SW_RST (0x1 << 12)
+#define OTG_SYS_REF_CLK_SEL(val) (((val) & 0x3) << 9)
+#define OTG_SYS_REF_CLK_SEL_MASK (0x3 << 9)
+#define OTG_SYS_IP_PULLUP_UOTG (0x1 << 8)
+#define OTG_SYS_COMMON_ON (0x1 << 7)
+#define OTG_SYS_CLKSEL_SHIFT (4)
+#define OTG_SYS_CTRL0_FSEL_MASK (0x7 << 4)
+#define OTG_SYS_FORCE_SLEEP (0x1 << 3)
+#define OTG_SYS_OTGDISABLE (0x1 << 2)
+#define OTG_SYS_SIDDQ_UOTG (0x1 << 1)
+#define OTG_SYS_FORCE_SUSPEND (0x1 << 0)
+
#endif /* __PLAT_S5P_REGS_USB_PHY_H */
-/* linux/arch/arm/mach-exynos4/include/mach/sysmmu.h
+/* linux/arch/arm/mach-exynos/include/mach/sysmmu.h
*
* Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
* http://www.samsung.com
#ifndef __ASM_ARM_ARCH_SYSMMU_H
#define __ASM_ARM_ARCH_SYSMMU_H __FILE__
+#include <linux/device.h>
-enum exynos4_sysmmu_ips {
- SYSMMU_MDMA,
- SYSMMU_SSS,
- SYSMMU_FIMC0,
- SYSMMU_FIMC1,
- SYSMMU_FIMC2,
- SYSMMU_FIMC3,
- SYSMMU_JPEG,
- SYSMMU_FIMD0,
- SYSMMU_FIMD1,
- SYSMMU_PCIe,
- SYSMMU_G2D,
- SYSMMU_ROTATOR,
- SYSMMU_MDMA2,
- SYSMMU_TV,
- SYSMMU_MFC_L,
- SYSMMU_MFC_R,
- EXYNOS4_SYSMMU_TOTAL_IPNUM,
-};
-
-#define S5P_SYSMMU_TOTAL_IPNUM EXYNOS4_SYSMMU_TOTAL_IPNUM
+#define SYSMMU_CLOCK_NAME "sysmmu"
+#define SYSMMU_CLOCK_NAME2 "sysmmu_mc"
+#define SYSMMU_DEVNAME_BASE "s5p-sysmmu"
+#define SYSMMU_CLOCK_DEVNAME(ipname, id) (SYSMMU_DEVNAME_BASE "." #id)
-extern const char *sysmmu_ips_name[EXYNOS4_SYSMMU_TOTAL_IPNUM];
-
-typedef enum exynos4_sysmmu_ips sysmmu_ips;
+struct sysmmu_platform_data {
+ char *dbgname;
+ char *clockname;
+};
-void sysmmu_clk_init(struct device *dev, sysmmu_ips ips);
-void sysmmu_clk_enable(sysmmu_ips ips);
-void sysmmu_clk_disable(sysmmu_ips ips);
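+/* Attach a client device to its System MMU by storing the sysmmu device
+ * in the client's archdata, where the IOMMU driver can look it up. */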
+static inline void platform_set_sysmmu(
+ struct device *sysmmu, struct device *dev)
+{
+ dev->archdata.iommu = sysmmu;
+}
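+/* Create an IOMMU DMA mapping of 'size' bytes at bus address 'base' for
+ * the client device; 'mapping' may pass in an existing mapping to reuse. */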
+struct dma_iommu_mapping *s5p_create_iommu_mapping(struct device *client,
+ dma_addr_t base, unsigned int size, int order,
+ struct dma_iommu_mapping *mapping);
+struct platform_device *find_sysmmu_dt(struct platform_device *pdev,
+ char *name);
#endif /* __ASM_ARM_ARCH_SYSMMU_H */
--- /dev/null
+/* linux/arch/arm/mach-exynos/include/mach/videonode-exynos5.h
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * EXYNOS5 - Video node definitions
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __MACH_VIDEONODE_EXYNOS5_H
+#define __MACH_VIDEONODE_EXYNOS5_H __FILE__
+
+#define S5P_VIDEONODE_MFC_DEC 6
+#define S5P_VIDEONODE_MFC_ENC 7
+
+#define EXYNOS_VIDEONODE_ROTATOR 21
+
+#define EXYNOS_VIDEONODE_GSC_M2M(x) (23 + (x) * 3)
+#define EXYNOS_VIDEONODE_GSC_OUT(x) (24 + (x) * 3)
+#define EXYNOS_VIDEONODE_GSC_CAP(x) (25 + (x) * 3)
+
+#define EXYNOS_VIDEONODE_FLITE(x) (36 + (x))
+/* Exynos4x12 supports the video and graphic0~1 layers;
+ * Exynos5250 supports the graphic0~3 layers. */
+#define EXYNOS_VIDEONODE_MXR_GRP(x) (16 + (x))
+#define EXYNOS_VIDEONODE_MXR_VIDEO 20
+#define EXYNOS_VIDEONODE_FIMC_IS (40)
+
+#endif /* __MACH_VIDEONODE_EXYNOS5_H */
"exynos4-sdhci.3", NULL),
OF_DEV_AUXDATA("samsung,s3c2440-i2c", EXYNOS4_PA_IIC(0),
"s3c2440-i2c.0", NULL),
+ OF_DEV_AUXDATA("samsung,exynos4210-spi", EXYNOS4_PA_SPI0,
+ "exynos4210-spi.0", NULL),
+ OF_DEV_AUXDATA("samsung,exynos4210-spi", EXYNOS4_PA_SPI1,
+ "exynos4210-spi.1", NULL),
+ OF_DEV_AUXDATA("samsung,exynos4210-spi", EXYNOS4_PA_SPI2,
+ "exynos4210-spi.2", NULL),
OF_DEV_AUXDATA("arm,pl330", EXYNOS4_PA_PDMA0, "dma-pl330.0", NULL),
OF_DEV_AUXDATA("arm,pl330", EXYNOS4_PA_PDMA1, "dma-pl330.1", NULL),
{},
* published by the Free Software Foundation.
*/
+#include <linux/gpio.h>
#include <linux/of_platform.h>
+#include <linux/platform_data/dwc3-exynos.h>
+#include <linux/regulator/fixed.h>
+#include <linux/regulator/machine.h>
#include <linux/serial_core.h>
+#include <linux/smsc911x.h>
+#include <linux/delay.h>
+#include <linux/i2c.h>
+#include <linux/pwm_backlight.h>
+#include <linux/mfd/wm8994/pdata.h>
+#include <linux/regulator/machine.h>
+#include <linux/spi/spi.h>
#include <asm/mach/arch.h>
#include <asm/hardware/gic.h>
#include <mach/map.h>
+#include <mach/ohci.h>
+#include <mach/regs-pmu.h>
+#include <mach/sysmmu.h>
+#include <mach/ohci.h>
+#include <mach/regs-audss.h>
+#include <plat/audio.h>
#include <plat/cpu.h>
+#include <plat/dsim.h>
+#include <plat/fb.h>
+#include <plat/mipi_dsi.h>
+#include <plat/gpio-cfg.h>
+#include <plat/regs-fb.h>
#include <plat/regs-serial.h>
+#include <plat/regs-srom.h>
+#include <plat/backlight.h>
+#include <plat/devs.h>
+#include <plat/usb-phy.h>
+#include <plat/ehci.h>
+#include <plat/dp.h>
+#include <plat/s3c64xx-spi.h>
+#include <video/platform_lcd.h>
+
+#include "drm/exynos_drm.h"
#include "common.h"
+static void __init smsc911x_init(int ncs)
+{
+ u32 data;
+
+ /* configure the width of bank nCS<ncs> to 16 bits */
+ data = __raw_readl(S5P_SROM_BW) &
+ ~(S5P_SROM_BW__CS_MASK << (ncs * 4));
+ data |= ((1 << S5P_SROM_BW__DATAWIDTH__SHIFT) |
+ (1 << S5P_SROM_BW__WAITENABLE__SHIFT) |
+ (1 << S5P_SROM_BW__BYTEENABLE__SHIFT)) << (ncs * 4);
+ __raw_writel(data, S5P_SROM_BW);
+
+ /* set bank nCS<ncs> timings suitable for the ethernet chip */
+ __raw_writel((0x1 << S5P_SROM_BCX__PMC__SHIFT) |
+ (0x9 << S5P_SROM_BCX__TACP__SHIFT) |
+ (0xc << S5P_SROM_BCX__TCAH__SHIFT) |
+ (0x1 << S5P_SROM_BCX__TCOH__SHIFT) |
+ (0x6 << S5P_SROM_BCX__TACC__SHIFT) |
+ (0x1 << S5P_SROM_BCX__TCOS__SHIFT) |
+ (0x1 << S5P_SROM_BCX__TACS__SHIFT),
+ S5P_SROM_BC0 + (ncs * 4));
+}
+
+static struct s3c_fb_pd_win smdk5250_fb_win0 = {
+ .win_mode = {
+ .left_margin = 4,
+ .right_margin = 4,
+ .upper_margin = 4,
+ .lower_margin = 4,
+ .hsync_len = 4,
+ .vsync_len = 4,
+ .xres = 1280,
+ .yres = 800,
+ },
+ .virtual_x = 1280,
+ .virtual_y = 800 * 2,
+ .width = 223,
+ .height = 125,
+ .max_bpp = 32,
+ .default_bpp = 24,
+};
+
+static struct s3c_fb_pd_win smdk5250_fb_win1 = {
+ .win_mode = {
+ .left_margin = 4,
+ .right_margin = 4,
+ .upper_margin = 4,
+ .lower_margin = 4,
+ .hsync_len = 4,
+ .vsync_len = 4,
+ .xres = 1280,
+ .yres = 800,
+ },
+ .virtual_x = 1280,
+ .virtual_y = 800 * 2,
+ .width = 223,
+ .height = 125,
+ .max_bpp = 32,
+ .default_bpp = 24,
+};
+
+static struct s3c_fb_pd_win smdk5250_fb_win2 = {
+ .win_mode = {
+ .left_margin = 0x4,
+ .right_margin = 0x4,
+ .upper_margin = 4,
+ .lower_margin = 4,
+ .hsync_len = 4,
+ .vsync_len = 4,
+ .xres = 1280,
+ .yres = 800,
+ },
+ .virtual_x = 1280,
+ .virtual_y = 800 * 2,
+ .width = 223,
+ .height = 125,
+ .max_bpp = 32,
+ .default_bpp = 24,
+};
+
+static struct fb_videomode snow_fb_window = {
+ .left_margin = 0x80,
+ .right_margin = 0x48,
+ .upper_margin = 14,
+ .lower_margin = 3,
+ .hsync_len = 5,
+ .vsync_len = 32,
+ .xres = 1366,
+ .yres = 768,
+};
+
+static void exynos_fimd_gpio_setup_24bpp(void)
+{
+ unsigned int reg = 0;
+
+ /*
+ * Set DISP1BLK_CFG register for Display path selection
+ * FIMD of DISP1_BLK Bypass selection : DISP1BLK_CFG[15]
+ * ---------------------
+ * 0 | MIE/MDNIE
+ * 1 | FIMD : selected
+ */
+ reg = __raw_readl(S3C_VA_SYS + 0x0214);
+ reg &= ~(1 << 15); /* preserve the other bits' reset values */
+ reg |= (1 << 15);
+ __raw_writel(reg, S3C_VA_SYS + 0x0214);
+}
+
+static void exynos_dp_gpio_setup_24bpp(void)
+{
+ exynos_fimd_gpio_setup_24bpp();
+
+ /* Set Hotplug detect for DP */
+ gpio_request(EXYNOS5_GPX0(7), "DP hotplug");
+ s3c_gpio_cfgpin(EXYNOS5_GPX0(7), S3C_GPIO_SFN(3));
+}
+
+#ifdef CONFIG_DRM_EXYNOS_FIMD
+static struct exynos_drm_fimd_pdata smdk5250_lcd1_pdata = {
+ .panel.timing = {
+ .xres = 1280,
+ .yres = 800,
+ .hsync_len = 4,
+ .left_margin = 0x4,
+ .right_margin = 0x4,
+ .vsync_len = 4,
+ .upper_margin = 4,
+ .lower_margin = 4,
+ .refresh = 60,
+ },
+ .vidcon0 = VIDCON0_VIDOUT_RGB | VIDCON0_PNRMODE_RGB,
+ .vidcon1 = VIDCON1_INV_VCLK,
+ .default_win = 0,
+ .bpp = 32,
+ .clock_rate = 800 * 1000 * 1000,
+};
+#else
+static struct s3c_fb_platdata smdk5250_lcd1_pdata __initdata = {
+ .win[0] = &smdk5250_fb_win0,
+ .win[1] = &smdk5250_fb_win1,
+ .win[2] = &smdk5250_fb_win2,
+ .default_win = 0,
+ .vidcon0 = VIDCON0_VIDOUT_RGB | VIDCON0_PNRMODE_RGB,
+ .vidcon1 = VIDCON1_INV_VCLK,
+ .setup_gpio = exynos_fimd_gpio_setup_24bpp,
+ .clock_rate = 800 * 1000 * 1000,
+};
+#endif
+
+static struct mipi_dsim_config dsim_info = {
+ .e_interface = DSIM_VIDEO,
+ .e_pixel_format = DSIM_24BPP_888,
+ /* main frame fifo auto flush at VSYNC pulse */
+ .auto_flush = false,
+ .eot_disable = false,
+ .auto_vertical_cnt = false,
+ .hse = false,
+ .hfp = false,
+ .hbp = false,
+ .hsa = false,
+
+ .e_no_data_lane = DSIM_DATA_LANE_4,
+ .e_byte_clk = DSIM_PLL_OUT_DIV8,
+ .e_burst_mode = DSIM_BURST,
+
+ .p = 3,
+ .m = 115,
+ .s = 1,
+
+ /* D-PHY PLL stable time spec: min 200usec ~ max 400usec */
+ .pll_stable_time = 500,
+
+ .esc_clk = 0.4 * 1000000, /* escape clk: 0.4MHz */
+
+ /* stop state holding counter after bta change count 0 ~ 0xfff */
+ .stop_holding_cnt = 0x0f,
+ .bta_timeout = 0xff, /* bta timeout 0 ~ 0xff */
+ .rx_timeout = 0xffff, /* lp rx timeout 0 ~ 0xffff */
+
+ .dsim_ddi_pd = &tc358764_mipi_lcd_driver,
+};
+
+static struct mipi_dsim_lcd_config dsim_lcd_info = {
+ .rgb_timing.left_margin = 0x4,
+ .rgb_timing.right_margin = 0x4,
+ .rgb_timing.upper_margin = 0x4,
+ .rgb_timing.lower_margin = 0x4,
+ .rgb_timing.hsync_len = 0x4,
+ .rgb_timing.vsync_len = 0x4,
+ .cpu_timing.cs_setup = 0,
+ .cpu_timing.wr_setup = 1,
+ .cpu_timing.wr_act = 0,
+ .cpu_timing.wr_hold = 0,
+ .lcd_size.width = 1280,
+ .lcd_size.height = 800,
+};
+
+static struct s5p_platform_mipi_dsim dsim_platform_data = {
+ .clk_name = "dsim0",
+ .dsim_config = &dsim_info,
+ .dsim_lcd_config = &dsim_lcd_info,
+
+ .part_reset = s5p_dsim_part_reset,
+ .init_d_phy = s5p_dsim_init_d_phy,
+ .get_fb_frame_done = NULL,
+ .trigger = NULL,
+
+ /*
+ * Stabilization delay needed before writing data to the SFRs
+ * once the MIPI link enters LP mode.
+ */
+ .delay_for_stabilization = 600,
+};
+
+static struct platform_device exynos_drm_device = {
+ .name = "exynos-drm",
+ .dev = {
+ .dma_mask = &exynos_drm_device.dev.coherent_dma_mask,
+ .coherent_dma_mask = 0xffffffffUL,
+ }
+};
+
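+/* Power the panel on or off: switch the LCD regulator, pulse the panel
+ * reset line (not on snow) and gate the backlight supply. */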
+static void lcd_set_power(struct plat_lcd_data *pd,
+ unsigned int power)
+{
+ if (of_machine_is_compatible("google,daisy") ||
+ of_machine_is_compatible("google,snow")) {
+ struct regulator *lcd_fet;
+
+ lcd_fet = regulator_get(NULL, "lcd_vdd");
+ if (!IS_ERR(lcd_fet)) {
+ if (power)
+ regulator_enable(lcd_fet);
+ else
+ regulator_disable(lcd_fet);
+
+ regulator_put(lcd_fet);
+ }
+ }
+
+ if (!of_machine_is_compatible("google,snow")) {
+ /* reset */
+ gpio_request_one(EXYNOS5_GPX1(5), GPIOF_OUT_INIT_HIGH, "GPX1");
+ mdelay(20);
+ gpio_set_value(EXYNOS5_GPX1(5), 0);
+ mdelay(20);
+ gpio_set_value(EXYNOS5_GPX1(5), 1);
+ mdelay(20);
+ gpio_free(EXYNOS5_GPX1(5));
+ mdelay(20);
+ }
+
+
+ /* Turn on regulator for backlight */
+ if (of_machine_is_compatible("google,daisy") ||
+ of_machine_is_compatible("google,snow")) {
+ struct regulator *backlight_fet;
+
+ backlight_fet = regulator_get(NULL, "vcd_led");
+ if (!IS_ERR(backlight_fet)) {
+ if (power)
+ regulator_enable(backlight_fet);
+ else
+ regulator_disable(backlight_fet);
+
+ regulator_put(backlight_fet);
+ }
+ /* Wait 10 ms between regulator on and PWM start per spec */
+ mdelay(10);
+ }
+
+ /*
+ * Request lcd_bl_en GPIO for smdk5250_bl_notify().
+ * TODO: Fix this so we are not at risk of requesting the GPIO
+ * multiple times; this should be done with device tree, and
+ * likely integrated into the plat-samsung/dev-backlight.c init.
+ */
+ gpio_request_one(EXYNOS5_GPX3(0), GPIOF_OUT_INIT_LOW, "GPX3");
+ gpio_free(EXYNOS5_GPX3(0));
+}
+
+static int smdk5250_match_fb(struct plat_lcd_data *pd, struct fb_info *info)
+{
+ /* Don't call .set_power callback while unblanking */
+ return 0;
+}
+
+static struct plat_lcd_data smdk5250_lcd_data = {
+ .set_power = lcd_set_power,
+ .match_fb = smdk5250_match_fb,
+};
+
+static struct platform_device smdk5250_lcd = {
+ .name = "platform-lcd",
+ .dev.platform_data = &smdk5250_lcd_data,
+};
+
+static int smdk5250_bl_notify(struct device *unused, int brightness)
+{
+ /* manage lcd_bl_en signal */
+ if (brightness)
+ gpio_set_value(EXYNOS5_GPX3(0), 1);
+ else
+ gpio_set_value(EXYNOS5_GPX3(0), 0);
+
+ return brightness;
+}
+
+/* LCD Backlight data */
+static struct samsung_bl_gpio_info smdk5250_bl_gpio_info = {
+ .no = EXYNOS5_GPB2(0),
+ .func = S3C_GPIO_SFN(2),
+};
+
+static struct platform_pwm_backlight_data smdk5250_bl_data = {
+ .pwm_period_ns = 1000000,
+ .notify = smdk5250_bl_notify,
+};
+
+struct platform_device exynos_device_md0 = {
+ .name = "exynos-mdev",
+ .id = 0,
+};
+
+struct platform_device exynos_device_md1 = {
+ .name = "exynos-mdev",
+ .id = 1,
+};
+
+struct platform_device exynos_device_md2 = {
+ .name = "exynos-mdev",
+ .id = 2,
+};
+
+static struct regulator_consumer_supply wm8994_avdd1_supply =
+ REGULATOR_SUPPLY("AVDD1", "1-001a");
+
+static struct regulator_consumer_supply wm8994_dcvdd_supply =
+ REGULATOR_SUPPLY("DCVDD", "1-001a");
+
+static struct regulator_init_data wm8994_ldo1_data = {
+ .constraints = {
+ .name = "AVDD1",
+ },
+ .num_consumer_supplies = 1,
+ .consumer_supplies = &wm8994_avdd1_supply,
+};
+
+static struct regulator_init_data wm8994_ldo2_data = {
+ .constraints = {
+ .name = "DCVDD",
+ },
+ .num_consumer_supplies = 1,
+ .consumer_supplies = &wm8994_dcvdd_supply,
+};
+
+static struct wm8994_pdata wm8994_platform_data = {
+ /* configure gpio1 function: 0x0001(Logic level input/output) */
+ .gpio_defaults[0] = 0x0001,
+ /* If i2s0 and i2s2 are enabled simultaneously */
+ .gpio_defaults[7] = 0x8100, /* GPIO8 DACDAT3 in */
+ .gpio_defaults[8] = 0x0100, /* GPIO9 ADCDAT3 out */
+ .gpio_defaults[9] = 0x0100, /* GPIO10 LRCLK3 out */
+ .gpio_defaults[10] = 0x0100, /* GPIO11 BCLK3 out */
+ .ldo[0] = { 0, &wm8994_ldo1_data },
+ .ldo[1] = { 0, &wm8994_ldo2_data },
+};
+
+static struct i2c_board_info i2c_devs1[] __initdata = {
+ {
+ I2C_BOARD_INFO("wm8994", 0x1a),
+ .platform_data = &wm8994_platform_data,
+ },
+};
+
+static struct s3c64xx_spi_csinfo spi1_csi[] = {
+ [0] = {
+ .line = EXYNOS5_GPA2(5),
+ .fb_delay = 0x2,
+ },
+};
+
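+/* Generic spidev device on SPI bus 1, chip select 0 */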
+static struct spi_board_info spi1_board_info[] __initdata = {
+ {
+ .modalias = "spidev",
+ .platform_data = NULL,
+ .max_speed_hz = 10*1000*1000,
+ .bus_num = 1,
+ .chip_select = 0,
+ .mode = SPI_MODE_0,
+ .controller_data = spi1_csi,
+ }
+};
+
+struct sysmmu_platform_data platdata_sysmmu_mfc_l = {
+ .dbgname = "mfc_l",
+ .clockname = "sysmmu",
+};
+
+struct sysmmu_platform_data platdata_sysmmu_mfc_r = {
+ .dbgname = "mfc_r",
+ .clockname = "sysmmu",
+};
+
+struct sysmmu_platform_data platdata_sysmmu_gsc = {
+ .dbgname = "gsc",
+ .clockname = "sysmmu",
+};
+
+struct sysmmu_platform_data platdata_sysmmu_g2d = {
+ .dbgname = "g2d",
+ .clockname = "sysmmu",
+};
+
+struct sysmmu_platform_data platdata_sysmmu_fimd = {
+ .dbgname = "fimd",
+ .clockname = "sysmmu",
+};
+
+struct sysmmu_platform_data platdata_sysmmu_tv = {
+ .dbgname = "tv",
+ .clockname = "sysmmu",
+};
+
+#ifdef CONFIG_VIDEO_FIMG2D4X
+static struct fimg2d_platdata fimg2d_data __initdata = {
+ .hw_ver = 0x42,
+ .gate_clkname = "fimg2d",
+};
+#endif
+
+static struct exynos4_ohci_platdata smdk5250_ohci_pdata = {
+ .phy_init = s5p_usb_phy_init,
+ .phy_exit = s5p_usb_phy_exit,
+};
+
+static struct s5p_ehci_platdata smdk5250_ehci_pdata = {
+ .phy_init = s5p_usb_phy_init,
+ .phy_exit = s5p_usb_phy_exit,
+};
+
+static struct dwc3_exynos_data smdk5250_xhci_pdata = {
+ .phy_type = S5P_USB_PHY_DRD,
+ .phy_init = s5p_usb_phy_init,
+ .phy_exit = s5p_usb_phy_exit,
+};
+
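+/* GPIO bank start pin, number of consecutive pins and the special
+ * function value used to route a whole port. */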
+struct exynos_gpio_cfg {
+ unsigned int addr;
+ unsigned int num;
+ unsigned int bit;
+};
+
+static const char *rclksrc[] = {
+ [0] = "busclk",
+ [1] = "i2sclk",
+};
+
+static struct video_info smdk5250_dp_config = {
+ .name = "eDP-LVDS NXP PTN3460",
+
+ .h_sync_polarity = 0,
+ .v_sync_polarity = 0,
+ .interlaced = 0,
+
+ .color_space = COLOR_RGB,
+ .dynamic_range = VESA,
+ .ycbcr_coeff = COLOR_YCBCR601,
+ .color_depth = COLOR_8,
+
+ .link_rate = LINK_RATE_2_70GBPS,
+ .lane_count = LANE_COUNT2,
+};
+
+static struct exynos_dp_platdata smdk5250_dp_data __initdata = {
+ .video_info = &smdk5250_dp_config,
+ .training_type = HW_LINK_TRAINING,
+ .phy_init = s5p_dp_phy_init,
+ .phy_exit = s5p_dp_phy_exit,
+};
+
+#define S5P_PMU_DEBUG S5P_PMUREG(0x0A00)
+/* PMU_DEBUG bits [12:8] = 0b10000 (0x10) select the XXTI clock source */
+#define PMU_DEBUG_XXTI (0x10 << 8)
+/* Mask for bits [12:8], the clock-out source selection field */
+#define PMU_DEBUG_CLKOUT_SEL_MASK 0x1f00
+
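+/* Route the XXTI clock to the XCLKOUT pad via the PMU_DEBUG register */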
+static void __init enable_xclkout(void)
+{
+ unsigned int tmp;
+
+ tmp = readl(S5P_PMU_DEBUG);
+ tmp &= ~PMU_DEBUG_CLKOUT_SEL_MASK;
+ tmp |= PMU_DEBUG_XXTI;
+ writel(tmp, S5P_PMU_DEBUG);
+}
+
+static int exynos_cfg_i2s_gpio(struct platform_device *pdev)
+{
+ int id;
+ /* configure GPIO for i2s port */
+ struct exynos_gpio_cfg exynos5_cfg[3] = {
+ { EXYNOS5_GPZ(0), 7, S3C_GPIO_SFN(2) },
+ { EXYNOS5_GPB0(0), 5, S3C_GPIO_SFN(2) },
+ { EXYNOS5_GPB1(0), 5, S3C_GPIO_SFN(2) }
+ };
+
+ if (pdev->dev.of_node) {
+ id = of_alias_get_id(pdev->dev.of_node, "i2s");
+ if (id < 0)
+ dev_err(&pdev->dev, "failed to get alias id:%d\n", id);
+ } else {
+ id = pdev->id;
+ }
+
+ if (id < 0 || id > 2) {
+ printk(KERN_ERR "Invalid Device %d\n", id);
+ return -EINVAL;
+ }
+
+ s3c_gpio_cfgpin_range(exynos5_cfg[id].addr,
+ exynos5_cfg[id].num, exynos5_cfg[id].bit);
+
+ return 0;
+}
+
+static struct s3c_audio_pdata i2sv5_pdata = {
+ .cfg_gpio = exynos_cfg_i2s_gpio,
+ .type = {
+ .i2s = {
+ .quirks = QUIRK_PRI_6CHAN | QUIRK_SEC_DAI
+ | QUIRK_NEED_RSTCLR,
+ .src_clk = rclksrc,
+ .idma_addr = EXYNOS4_AUDSS_INT_MEM,
+ },
+ },
+};
/*
* The following lookup table is used to override device names when devices
* are registered from device tree. This is temporarily added to enable
"exynos4210-uart.2", NULL),
OF_DEV_AUXDATA("samsung,exynos4210-uart", EXYNOS5_PA_UART3,
"exynos4210-uart.3", NULL),
+ OF_DEV_AUXDATA("samsung,s3c2440-i2c", EXYNOS5_PA_IIC(0),
+ "s3c2440-i2c.0", NULL),
+ OF_DEV_AUXDATA("samsung,s3c2440-i2c", EXYNOS5_PA_IIC(1),
+ "s3c2440-i2c.1", NULL),
+ OF_DEV_AUXDATA("samsung,s3c2440-i2c", EXYNOS5_PA_IIC(2),
+ "s3c2440-i2c.2", NULL),
+ OF_DEV_AUXDATA("samsung,s3c2440-i2c", EXYNOS5_PA_IIC(3),
+ "s3c2440-i2c.3", NULL),
+ OF_DEV_AUXDATA("samsung,s3c2440-i2c", EXYNOS5_PA_IIC(4),
+ "s3c2440-i2c.4", NULL),
+ OF_DEV_AUXDATA("samsung,s3c2440-i2c", EXYNOS5_PA_IIC(5),
+ "s3c2440-i2c.5", NULL),
+ OF_DEV_AUXDATA("samsung,s3c2440-i2c", EXYNOS5_PA_IIC(6),
+ "s3c2440-i2c.6", NULL),
+ OF_DEV_AUXDATA("samsung,s3c2440-i2c", EXYNOS5_PA_IIC(7),
+ "s3c2440-i2c.7", NULL),
+ OF_DEV_AUXDATA("samsung,s3c2440-i2c", EXYNOS5_PA_IIC(8),
+ "s3c2440-hdmiphy-i2c", NULL),
+ OF_DEV_AUXDATA("samsung,exynos4210-spi", EXYNOS5_PA_SPI0,
+ "exynos4210-spi.0", NULL),
+ OF_DEV_AUXDATA("samsung,exynos4210-spi", EXYNOS5_PA_SPI1,
+ "exynos4210-spi.1", NULL),
+ OF_DEV_AUXDATA("samsung,exynos4210-spi", EXYNOS5_PA_SPI2,
+ "exynos4210-spi.2", NULL),
+ OF_DEV_AUXDATA("synopsis,dw-mshc-exynos5250", 0x12200000,
+ "dw_mmc.0", NULL),
+ OF_DEV_AUXDATA("synopsis,dw-mshc-exynos5250", 0x12210000,
+ "dw_mmc.1", NULL),
+ OF_DEV_AUXDATA("synopsis,dw-mshc-exynos5250", 0x12220000,
+ "dw_mmc.2", NULL),
+ OF_DEV_AUXDATA("synopsis,dw-mshc-exynos5250", 0x12230000,
+ "dw_mmc.3", NULL),
OF_DEV_AUXDATA("arm,pl330", EXYNOS5_PA_PDMA0, "dma-pl330.0", NULL),
OF_DEV_AUXDATA("arm,pl330", EXYNOS5_PA_PDMA1, "dma-pl330.1", NULL),
OF_DEV_AUXDATA("arm,pl330", EXYNOS5_PA_MDMA1, "dma-pl330.2", NULL),
+ OF_DEV_AUXDATA("samsung,s5p-sysmmu", 0x10A60000,
+ "s5p-sysmmu.2", &platdata_sysmmu_g2d),
+ OF_DEV_AUXDATA("samsung,s5p-sysmmu", 0x11210000,
+ "s5p-sysmmu.3", &platdata_sysmmu_mfc_l),
+ OF_DEV_AUXDATA("samsung,s5p-sysmmu", 0x11200000,
+ "s5p-sysmmu.4", &platdata_sysmmu_mfc_r),
+ OF_DEV_AUXDATA("samsung,s5p-sysmmu", 0x14640000,
+ "s5p-sysmmu.27", &platdata_sysmmu_fimd),
+ OF_DEV_AUXDATA("samsung,s5p-sysmmu", 0x14650000,
+ "s5p-sysmmu.28", &platdata_sysmmu_tv),
+ OF_DEV_AUXDATA("samsung,s5p-sysmmu", 0x13E80000,
+ "s5p-sysmmu.23", &platdata_sysmmu_gsc),
+ OF_DEV_AUXDATA("samsung,s5p-sysmmu", 0x13E90000,
+ "s5p-sysmmu.24", &platdata_sysmmu_gsc),
+ OF_DEV_AUXDATA("samsung,s5p-sysmmu", 0x13EA0000,
+ "s5p-sysmmu.25", &platdata_sysmmu_gsc),
+ OF_DEV_AUXDATA("samsung,s5p-sysmmu", 0x13EB0000,
+ "s5p-sysmmu.26", &platdata_sysmmu_gsc),
+ OF_DEV_AUXDATA("samsung,exynos5-fb", 0x14400000,
+ "exynos5-fb", &smdk5250_lcd1_pdata),
+ OF_DEV_AUXDATA("samsung,exynos5-mipi", 0x14500000,
+ "s5p-mipi-dsim", &dsim_platform_data),
+ OF_DEV_AUXDATA("samsung,exynos5-dp", 0x145B0000,
+ "s5p-dp", &smdk5250_dp_data),
+ OF_DEV_AUXDATA("samsung,s5p-mfc-v6", 0x11000000, "s5p-mfc-v6", NULL),
+ OF_DEV_AUXDATA("samsung,exynos-gsc", 0x13E00000,
+ "exynos-gsc.0", NULL),
+ OF_DEV_AUXDATA("samsung,exynos-gsc", 0x13E10000,
+ "exynos-gsc.1", NULL),
+ OF_DEV_AUXDATA("samsung,exynos-gsc", 0x13E20000,
+ "exynos-gsc.2", NULL),
+ OF_DEV_AUXDATA("samsung,exynos-gsc", 0x13E30000,
+ "exynos-gsc.3", NULL),
+#ifdef CONFIG_VIDEO_FIMG2D4X
+ OF_DEV_AUXDATA("samsung,s5p-g2d", 0x10850000,
+ "s5p-g2d", &fimg2d_data),
+#endif
+ OF_DEV_AUXDATA("samsung,exynos-ohci", 0x12120000,
+ "exynos-ohci", &smdk5250_ohci_pdata),
+ OF_DEV_AUXDATA("samsung,exynos-ehci", 0x12110000,
+ "s5p-ehci", &smdk5250_ehci_pdata),
+ OF_DEV_AUXDATA("samsung,exynos-xhci", 0x12000000,
+ "exynos-dwc3", &smdk5250_xhci_pdata),
+ OF_DEV_AUXDATA("samsung,i2s", 0x03830000,
+ "samsung-i2s.0", &i2sv5_pdata),
+ OF_DEV_AUXDATA("samsung,exynos5-hdmi", 0x14530000,
+ "exynos5-hdmi", NULL),
+ OF_DEV_AUXDATA("samsung,s5p-mixer", 0x14450000, "s5p-mixer", NULL),
{},
};
+static struct platform_device *smdk5250_devices[] __initdata = {
+ &smdk5250_lcd, /* for platform_lcd device */
+ &exynos_device_md0, /* for media device framework */
+ &exynos_device_md1, /* for media device framework */
+ &exynos_device_md2, /* for media device framework */
+ &samsung_asoc_dma, /* for audio dma interface device */
+ &exynos_drm_device,
+};
+
+static struct regulator_consumer_supply dummy_supplies[] = {
+ REGULATOR_SUPPLY("vddvario", "7000000.lan9215"),
+ REGULATOR_SUPPLY("vdd33a", "7000000.lan9215"),
+};
+
static void __init exynos5250_dt_map_io(void)
{
exynos_init_io(NULL, 0);
s3c24xx_init_clocks(24000000);
}
+static void __init exynos5_reserve(void)
+{
+ /*
+ * Required so there is enough address range to remap the
+ * IOMMU-allocated buffers.
+ */
+ init_consistent_dma_size(SZ_64M);
+}
+
+static void s5p_tv_setup(void)
+{
+ /* direct HPD to HDMI chip */
+ gpio_request(EXYNOS5_GPX3(7), "hpd-plug");
+
+ gpio_direction_input(EXYNOS5_GPX3(7));
+ s3c_gpio_cfgpin(EXYNOS5_GPX3(7), S3C_GPIO_SFN(0x3));
+ s3c_gpio_setpull(EXYNOS5_GPX3(7), S3C_GPIO_PULL_NONE);
+}
+
+static void exynos5_i2c_setup(void)
+{
+ /* Setup the low-speed i2c controller interrupts */
+ writel(0x0, EXYNOS5_SYS_I2C_CFG);
+}
+
static void __init exynos5250_dt_machine_init(void)
{
+ struct device_node *srom_np, *np;
+ int ret;
+
+ regulator_register_fixed(0, dummy_supplies, ARRAY_SIZE(dummy_supplies));
+
+ /* Setup pins for any SMSC 911x controller on the SROMC bus */
+ srom_np = of_find_node_by_path("/sromc-bus");
+ if (!srom_np) {
+ printk(KERN_ERR "No /sromc-bus property.\n");
+ goto out;
+ }
+ for_each_child_of_node(srom_np, np) {
+ if (of_device_is_compatible(np, "smsc,lan9115")) {
+ u32 reg;
+ of_property_read_u32(np, "reg", &reg);
+ smsc911x_init(reg);
+ }
+ }
+
+ samsung_bl_set(&smdk5250_bl_gpio_info, &smdk5250_bl_data);
+
+ /*
+ * HACK ALERT! TODO: FIXME!
+ *
+ * We're going to hack in Daisy LCD info here for bringup purposes.
+ * Lots of things wrong with what we're doing here, but it works for
+ * bringup purposes.
+ */
+
+ if (of_machine_is_compatible("google,daisy")) {
+#ifdef CONFIG_DRM_EXYNOS_FIMD
+ smdk5250_lcd1_pdata.panel.timing.xres = 1366;
+ smdk5250_lcd1_pdata.panel.timing.yres = 768;
+ smdk5250_lcd1_pdata.panel_type = MIPI_LCD;
+#else
+ smdk5250_fb_win0.win_mode.xres = 1366;
+ smdk5250_fb_win0.win_mode.yres = 768;
+ smdk5250_fb_win0.virtual_x = 1366;
+ smdk5250_fb_win0.virtual_y = 768 * 2;
+
+ smdk5250_fb_win1.win_mode.xres = 1366;
+ smdk5250_fb_win1.win_mode.yres = 768;
+ smdk5250_fb_win1.virtual_x = 1366;
+ smdk5250_fb_win1.virtual_y = 768 * 2;
+
+ smdk5250_fb_win2.win_mode.xres = 1366;
+ smdk5250_fb_win2.win_mode.yres = 768;
+ smdk5250_fb_win2.virtual_x = 1366;
+ smdk5250_fb_win2.virtual_y = 768 * 2;
+#endif
+ dsim_lcd_info.lcd_size.width = 1366;
+ dsim_lcd_info.lcd_size.height = 768;
+ } else if (of_machine_is_compatible("google,snow")) {
+#ifdef CONFIG_DRM_EXYNOS_FIMD
+ smdk5250_lcd1_pdata.panel.timing = snow_fb_window;
+ smdk5250_lcd1_pdata.panel_type = DP_LCD;
+ smdk5250_lcd1_pdata.clock_rate = 267 * 1000 * 1000;
+ smdk5250_lcd1_pdata.vidcon1 = 0;
+#endif
+ ret = gpio_request_one(EXYNOS5_GPX1(5), GPIOF_OUT_INIT_HIGH,
+ "DP_PD_N");
+ WARN_ON(ret);
+ }
+
+ if (gpio_request_one(EXYNOS5_GPX2(6), GPIOF_OUT_INIT_HIGH,
+ "HOST_VBUS_CONTROL")) {
+ printk(KERN_ERR "failed to request gpio_host_vbus\n");
+ } else {
+ s3c_gpio_setpull(EXYNOS5_GPX2(6), S3C_GPIO_PULL_NONE);
+ gpio_free(EXYNOS5_GPX2(6));
+ }
+
+ exynos5_i2c_setup();
+
+ /*
+ * BIG HACK: The wm8994 is not device tree enabled apparently, so
+ * needs to be added manually. ...but it's only on SMDK5250.
+ */
+ if (of_machine_is_compatible("samsung,smdk5250")) {
+ i2c_register_board_info(1, i2c_devs1, ARRAY_SIZE(i2c_devs1));
+ }
+
+ /* XCLKOUT needs to be moved over to the clock interface, but enable it
+ * here for now.
+ */
+ enable_xclkout();
+
+ if (gpio_request_one(EXYNOS5_GPA2(5), GPIOF_OUT_INIT_HIGH, "SPI1_CS")) {
+ printk(KERN_ERR "Spidev ChipSelect unavailable\n");
+ } else {
+ s3c_gpio_cfgpin(EXYNOS5_GPA2(5), S3C_GPIO_SFN(0x1));
+ s3c_gpio_setpull(EXYNOS5_GPA2(5), S3C_GPIO_PULL_NONE);
+ s5p_gpio_set_drvstr(EXYNOS5_GPA2(5), S5P_GPIO_DRVSTR_LV4);
+ spi_register_board_info(spi1_board_info,
+ ARRAY_SIZE(spi1_board_info));
+ }
+
of_platform_populate(NULL, of_default_bus_match_table,
exynos5250_auxdata_lookup, NULL);
+
+#ifdef CONFIG_DRM_EXYNOS_FIMD
+ if (of_machine_is_compatible("google,snow"))
+ exynos_dp_gpio_setup_24bpp();
+ else
+ exynos_fimd_gpio_setup_24bpp();
+#endif
+ s5p_tv_setup();
+
+ platform_add_devices(smdk5250_devices, ARRAY_SIZE(smdk5250_devices));
+out:
+ of_node_put(srom_np);
+ return;
}
static char const *exynos5250_dt_compat[] __initdata = {
DT_MACHINE_START(EXYNOS5_DT, "SAMSUNG EXYNOS5 (Flattened Device Tree)")
/* Maintainer: Kukjin Kim <kgene.kim@samsung.com> */
.init_irq = exynos5_init_irq,
+ .reserve = exynos5_reserve,
.map_io = exynos5250_dt_map_io,
.handle_irq = gic_handle_irq,
.init_machine = exynos5250_dt_machine_init,
{
struct mct_clock_event_device *mevt;
unsigned int cpu = smp_processor_id();
+ int mct_lx_irq;
mevt = this_cpu_ptr(&percpu_mct_tick);
mevt->evt = evt;
if (mct_int_type == MCT_INT_SPI) {
if (cpu == 0) {
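+ /* The local tick IRQ is a different SPI on EXYNOS4210 and
+ * EXYNOS5250, so select it at runtime. */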
+ mct_lx_irq = soc_is_exynos4210() ? EXYNOS4_IRQ_MCT_L0 :
+ EXYNOS5_IRQ_MCT_L0;
mct_tick0_event_irq.dev_id = mevt;
- evt->irq = EXYNOS4_IRQ_MCT_L0;
- setup_irq(EXYNOS4_IRQ_MCT_L0, &mct_tick0_event_irq);
+ evt->irq = mct_lx_irq;
+ setup_irq(mct_lx_irq, &mct_tick0_event_irq);
} else {
+ mct_lx_irq = soc_is_exynos4210() ? EXYNOS4_IRQ_MCT_L1 :
+ EXYNOS5_IRQ_MCT_L1;
mct_tick1_event_irq.dev_id = mevt;
- evt->irq = EXYNOS4_IRQ_MCT_L1;
- setup_irq(EXYNOS4_IRQ_MCT_L1, &mct_tick1_event_irq);
- irq_set_affinity(EXYNOS4_IRQ_MCT_L1, cpumask_of(1));
+ evt->irq = mct_lx_irq;
+ setup_irq(mct_lx_irq, &mct_tick1_event_irq);
+ irq_set_affinity(mct_lx_irq, cpumask_of(1));
}
} else {
enable_percpu_irq(EXYNOS_IRQ_MCT_LOCALTIMER, 0);
static void __init exynos4_timer_init(void)
{
- if (soc_is_exynos4210())
+ if (soc_is_exynos4210() || soc_is_exynos5250())
mct_int_type = MCT_INT_SPI;
else
mct_int_type = MCT_INT_PPI;
-/* linux/arch/arm/mach-exynos4/pm.c
- *
- * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+/*
+ * Copyright (c) 2011-2012 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
- * EXYNOS4210 - Power Management support
+ * EXYNOS - Power Management support
*
* Based on arch/arm/mach-s3c2410/pm.c
* Copyright (c) 2006 Simtec Electronics
SAVE_ITEM(EXYNOS4_VPLL_CON1),
};
-static struct sleep_save exynos4_core_save[] = {
- /* GIC side */
- SAVE_ITEM(S5P_VA_GIC_CPU + 0x000),
- SAVE_ITEM(S5P_VA_GIC_CPU + 0x004),
- SAVE_ITEM(S5P_VA_GIC_CPU + 0x008),
- SAVE_ITEM(S5P_VA_GIC_CPU + 0x00C),
- SAVE_ITEM(S5P_VA_GIC_CPU + 0x014),
- SAVE_ITEM(S5P_VA_GIC_CPU + 0x018),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x000),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x004),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x100),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x104),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x108),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x300),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x304),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x308),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x400),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x404),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x408),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x40C),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x410),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x414),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x418),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x41C),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x420),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x424),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x428),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x42C),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x430),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x434),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x438),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x43C),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x440),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x444),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x448),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x44C),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x450),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x454),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x458),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x45C),
-
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x800),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x804),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x808),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x80C),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x810),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x814),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x818),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x81C),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x820),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x824),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x828),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x82C),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x830),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x834),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x838),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x83C),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x840),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x844),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x848),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x84C),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x850),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x854),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x858),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0x85C),
-
- SAVE_ITEM(S5P_VA_GIC_DIST + 0xC00),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0xC04),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0xC08),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0xC0C),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0xC10),
- SAVE_ITEM(S5P_VA_GIC_DIST + 0xC14),
-
- SAVE_ITEM(S5P_VA_COMBINER_BASE + 0x000),
- SAVE_ITEM(S5P_VA_COMBINER_BASE + 0x010),
- SAVE_ITEM(S5P_VA_COMBINER_BASE + 0x020),
- SAVE_ITEM(S5P_VA_COMBINER_BASE + 0x030),
- SAVE_ITEM(S5P_VA_COMBINER_BASE + 0x040),
- SAVE_ITEM(S5P_VA_COMBINER_BASE + 0x050),
- SAVE_ITEM(S5P_VA_COMBINER_BASE + 0x060),
- SAVE_ITEM(S5P_VA_COMBINER_BASE + 0x070),
- SAVE_ITEM(S5P_VA_COMBINER_BASE + 0x080),
- SAVE_ITEM(S5P_VA_COMBINER_BASE + 0x090),
+static struct sleep_save exynos5_sys_save[] = {
+ SAVE_ITEM(EXYNOS5_SYS_I2C_CFG),
+};
+static struct sleep_save exynos_core_save[] = {
/* SROM side */
SAVE_ITEM(S5P_SROM_BW),
SAVE_ITEM(S5P_SROM_BC0),
/* For Cortex-A9 Diagnostic and Power control register */
static unsigned int save_arm_register[2];
-static int exynos4_cpu_suspend(unsigned long arg)
+static int exynos_cpu_suspend(unsigned long arg)
{
+#ifdef CONFIG_CACHE_L2X0
outer_flush_all();
+#endif
+	/*
+	 * To enter suspend mode, the GPS LPI register must be set.
+	 */
+ if (soc_is_exynos5250())
+ __raw_writel(0x10000, EXYNOS5_GPS_LPI);
/* issue the standby signal into the pm unit. */
cpu_do_idle();
panic("sleep resumed to originator?");
}
-static void exynos4_pm_prepare(void)
+static void exynos_pm_prepare(void)
{
- u32 tmp;
-
- s3c_pm_do_save(exynos4_core_save, ARRAY_SIZE(exynos4_core_save));
- s3c_pm_do_save(exynos4_epll_save, ARRAY_SIZE(exynos4_epll_save));
- s3c_pm_do_save(exynos4_vpll_save, ARRAY_SIZE(exynos4_vpll_save));
+ unsigned int tmp;
- tmp = __raw_readl(S5P_INFORM1);
+ s3c_pm_do_save(exynos_core_save, ARRAY_SIZE(exynos_core_save));
+
+ if (!soc_is_exynos5250()) {
+ s3c_pm_do_save(exynos4_epll_save,
+ ARRAY_SIZE(exynos4_epll_save));
+ s3c_pm_do_save(exynos4_vpll_save,
+ ARRAY_SIZE(exynos4_vpll_save));
+ } else {
+ s3c_pm_do_save(exynos5_sys_save, ARRAY_SIZE(exynos5_sys_save));
+ /* Disable USE_RETENTION of JPEG_MEM_OPTION */
+ tmp = __raw_readl(EXYNOS5_JPEG_MEM_OPTION);
+ tmp &= ~EXYNOS5_OPTION_USE_RETENTION;
+ __raw_writel(tmp, EXYNOS5_JPEG_MEM_OPTION);
+ }
/* Set value of power down register for sleep mode */
-
exynos4_sys_powerdown_conf(SYS_SLEEP);
__raw_writel(S5P_CHECK_SLEEP, S5P_INFORM1);
/* ensure at least INFORM0 has the resume address */
-
__raw_writel(virt_to_phys(s3c_cpu_resume), S5P_INFORM0);
- /* Before enter central sequence mode, clock src register have to set */
-
- s3c_pm_do_restore_core(exynos4_set_clksrc, ARRAY_SIZE(exynos4_set_clksrc));
-
+	/*
+	 * Before entering central sequence mode,
+	 * the clock source register has to be set.
+	 */
+ if (!soc_is_exynos5250())
+ s3c_pm_do_restore_core(exynos4_set_clksrc,
+ ARRAY_SIZE(exynos4_set_clksrc));
if (soc_is_exynos4210())
s3c_pm_do_restore_core(exynos4210_set_clksrc, ARRAY_SIZE(exynos4210_set_clksrc));
}
-static int exynos4_pm_add(struct device *dev, struct subsys_interface *sif)
+static int exynos_pm_add(struct device *dev, struct subsys_interface *sif)
{
- pm_cpu_prep = exynos4_pm_prepare;
- pm_cpu_sleep = exynos4_cpu_suspend;
+ pm_cpu_prep = exynos_pm_prepare;
+ pm_cpu_sleep = exynos_cpu_suspend;
return 0;
}
} while (epll_wait || vpll_wait);
}
-static struct subsys_interface exynos4_pm_interface = {
- .name = "exynos4_pm",
- .subsys = &exynos4_subsys,
- .add_dev = exynos4_pm_add,
+static struct subsys_interface exynos_pm_interface = {
+ .name = "exynos_pm",
+ .subsys = &exynos_subsys,
+ .add_dev = exynos_pm_add,
};
-static __init int exynos4_pm_drvinit(void)
+static __init int exynos_pm_drvinit(void)
{
struct clk *pll_base;
unsigned int tmp;
tmp |= ((0xFF << 8) | (0x1F << 1));
__raw_writel(tmp, S5P_WAKEUP_MASK);
- pll_base = clk_get(NULL, "xtal");
+ if (!soc_is_exynos5250()) {
+ pll_base = clk_get(NULL, "xtal");
- if (!IS_ERR(pll_base)) {
- pll_base_rate = clk_get_rate(pll_base);
- clk_put(pll_base);
+ if (!IS_ERR(pll_base)) {
+ pll_base_rate = clk_get_rate(pll_base);
+ clk_put(pll_base);
+ }
}
- return subsys_interface_register(&exynos4_pm_interface);
+ return subsys_interface_register(&exynos_pm_interface);
}
-arch_initcall(exynos4_pm_drvinit);
+arch_initcall(exynos_pm_drvinit);
-static int exynos4_pm_suspend(void)
+static int exynos_pm_suspend(void)
{
unsigned long tmp;
/* Setting Central Sequence Register for power down mode */
-
tmp = __raw_readl(S5P_CENTRAL_SEQ_CONFIGURATION);
tmp &= ~S5P_CENTRAL_LOWPWR_CFG;
__raw_writel(tmp, S5P_CENTRAL_SEQ_CONFIGURATION);
- if (soc_is_exynos4212()) {
- tmp = __raw_readl(S5P_CENTRAL_SEQ_OPTION);
- tmp &= ~(S5P_USE_STANDBYWFI_ISP_ARM |
- S5P_USE_STANDBYWFE_ISP_ARM);
- __raw_writel(tmp, S5P_CENTRAL_SEQ_OPTION);
- }
-
- /* Save Power control register */
- asm ("mrc p15, 0, %0, c15, c0, 0"
- : "=r" (tmp) : : "cc");
- save_arm_register[0] = tmp;
+ /* Setting SEQ_OPTION register */
+ tmp = (S5P_USE_STANDBY_WFI0 | S5P_USE_STANDBY_WFE0);
+ __raw_writel(tmp, S5P_CENTRAL_SEQ_OPTION);
- /* Save Diagnostic register */
- asm ("mrc p15, 0, %0, c15, c0, 1"
- : "=r" (tmp) : : "cc");
- save_arm_register[1] = tmp;
+ if (!soc_is_exynos5250()) {
+ /* Save Power control register */
+ asm ("mrc p15, 0, %0, c15, c0, 0"
+ : "=r" (tmp) : : "cc");
+ save_arm_register[0] = tmp;
+ /* Save Diagnostic register */
+ asm ("mrc p15, 0, %0, c15, c0, 1"
+ : "=r" (tmp) : : "cc");
+ save_arm_register[1] = tmp;
+ }
return 0;
}
-static void exynos4_pm_resume(void)
+static void exynos_pm_resume(void)
{
unsigned long tmp;
/* No need to perform below restore code */
goto early_wakeup;
}
- /* Restore Power control register */
- tmp = save_arm_register[0];
- asm volatile ("mcr p15, 0, %0, c15, c0, 0"
- : : "r" (tmp)
- : "cc");
-
- /* Restore Diagnostic register */
- tmp = save_arm_register[1];
- asm volatile ("mcr p15, 0, %0, c15, c0, 1"
- : : "r" (tmp)
- : "cc");
+ if (!soc_is_exynos5250()) {
+ /* Restore Power control register */
+ tmp = save_arm_register[0];
+ asm volatile ("mcr p15, 0, %0, c15, c0, 0"
+ : : "r" (tmp)
+ : "cc");
+
+ /* Restore Diagnostic register */
+ tmp = save_arm_register[1];
+ asm volatile ("mcr p15, 0, %0, c15, c0, 1"
+ : : "r" (tmp)
+ : "cc");
+ }
/* For release retention */
__raw_writel((1 << 28), S5P_PAD_RET_MAUDIO_OPTION);
__raw_writel((1 << 28), S5P_PAD_RET_EBIA_OPTION);
__raw_writel((1 << 28), S5P_PAD_RET_EBIB_OPTION);
- s3c_pm_do_restore_core(exynos4_core_save, ARRAY_SIZE(exynos4_core_save));
+ if (soc_is_exynos5250())
+ s3c_pm_do_restore(exynos5_sys_save, ARRAY_SIZE(exynos5_sys_save));
+
+ s3c_pm_do_restore_core(exynos_core_save, ARRAY_SIZE(exynos_core_save));
- exynos4_restore_pll();
+ if (!soc_is_exynos5250()) {
+ exynos4_restore_pll();
#ifdef CONFIG_SMP
scu_enable(S5P_VA_SCU);
#endif
-
+ }
early_wakeup:
+
+ /* Clear SLEEP mode set in INFORM1 */
+ __raw_writel(0x0, S5P_INFORM1);
+
return;
}
-static struct syscore_ops exynos4_pm_syscore_ops = {
- .suspend = exynos4_pm_suspend,
- .resume = exynos4_pm_resume,
+static struct syscore_ops exynos_pm_syscore_ops = {
+ .suspend = exynos_pm_suspend,
+ .resume = exynos_pm_resume,
};
-static __init int exynos4_pm_syscore_init(void)
+static __init int exynos_pm_syscore_init(void)
{
- register_syscore_ops(&exynos4_pm_syscore_ops);
+ register_syscore_ops(&exynos_pm_syscore_ops);
return 0;
}
-arch_initcall(exynos4_pm_syscore_init);
+arch_initcall(exynos_pm_syscore_init);
#include <linux/io.h>
#include <linux/kernel.h>
+#include <linux/bug.h>
+#include <linux/delay.h>
+#include <linux/pm.h>
#include <mach/regs-clock.h>
#include <mach/pmu.h>
{ PMU_TABLE_END,},
};
+static struct exynos4_pmu_conf exynos5250_pmu_config[] = {
+	/* { .reg = address, .val = { AFTR, LPA, SLEEP } } */
+ { EXYNOS5_ARM_CORE0_SYS_PWR_REG, { 0x0, 0x0, 0x2} },
+ { EXYNOS5_DIS_IRQ_ARM_CORE0_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+ { EXYNOS5_DIS_IRQ_ARM_CORE0_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+ { EXYNOS5_ARM_CORE1_SYS_PWR_REG, { 0x0, 0x0, 0x2} },
+ { EXYNOS5_DIS_IRQ_ARM_CORE1_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+ { EXYNOS5_DIS_IRQ_ARM_CORE1_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+ { EXYNOS5_FSYS_ARM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_DIS_IRQ_FSYS_ARM_CENTRAL_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+ { EXYNOS5_ISP_ARM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_DIS_IRQ_ISP_ARM_LOCAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+ { EXYNOS5_DIS_IRQ_ISP_ARM_CENTRAL_SYS_PWR_REG, { 0x0, 0x0, 0x0} },
+ { EXYNOS5_ARM_COMMON_SYS_PWR_REG, { 0x0, 0x0, 0x2} },
+ { EXYNOS5_ARM_L2_SYS_PWR_REG, { 0x3, 0x3, 0x3} },
+ { EXYNOS5_CMU_ACLKSTOP_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+ { EXYNOS5_CMU_SCLKSTOP_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+ { EXYNOS5_CMU_RESET_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+ { EXYNOS5_CMU_ACLKSTOP_SYSMEM_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+ { EXYNOS5_CMU_SCLKSTOP_SYSMEM_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+ { EXYNOS5_CMU_RESET_SYSMEM_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+ { EXYNOS5_DRAM_FREQ_DOWN_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+ { EXYNOS5_DDRPHY_DLLOFF_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+ { EXYNOS5_DDRPHY_DLLLOCK_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+ { EXYNOS5_APLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_MPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_VPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_EPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+ { EXYNOS5_BPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CPLL_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_MPLLUSER_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_BPLLUSER_SYSCLK_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_TOP_BUS_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_TOP_RETENTION_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+ { EXYNOS5_TOP_PWR_SYS_PWR_REG, { 0x3, 0x0, 0x3} },
+ { EXYNOS5_TOP_BUS_SYSMEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_TOP_RETENTION_SYSMEM_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+ { EXYNOS5_TOP_PWR_SYSMEM_SYS_PWR_REG, { 0x3, 0x0, 0x3} },
+ { EXYNOS5_LOGIC_RESET_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+ { EXYNOS5_OSCCLK_GATE_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+ { EXYNOS5_LOGIC_RESET_SYSMEM_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+ { EXYNOS5_OSCCLK_GATE_SYSMEM_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+ { EXYNOS5_USBOTG_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_G2D_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_USBDRD_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_SDMMC_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_CSSYS_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_SECSS_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_ROTATOR_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_INTRAM_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_INTROM_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_JPEG_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_HSI_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_MCUIOP_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_SATA_MEM_SYS_PWR_REG, { 0x3, 0x0, 0x0} },
+ { EXYNOS5_PAD_RETENTION_DRAM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_PAD_RETENTION_MAU_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+ { EXYNOS5_PAD_RETENTION_GPIO_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_PAD_RETENTION_UART_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_PAD_RETENTION_MMCA_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_PAD_RETENTION_MMCB_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_PAD_RETENTION_EBIA_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_PAD_RETENTION_EBIB_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_PAD_RETENTION_SPI_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_PAD_RETENTION_GPIO_SYSMEM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_PAD_ISOLATION_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_PAD_ISOLATION_SYSMEM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_PAD_ALV_SEL_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_XUSBXTI_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+ { EXYNOS5_XXTI_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+ { EXYNOS5_EXT_REGULATOR_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+ { EXYNOS5_GPIO_MODE_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_GPIO_MODE_SYSMEM_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_GPIO_MODE_MAU_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+ { EXYNOS5_TOP_ASB_RESET_SYS_PWR_REG, { 0x1, 0x1, 0x1} },
+ { EXYNOS5_TOP_ASB_ISOLATION_SYS_PWR_REG, { 0x1, 0x0, 0x1} },
+ { EXYNOS5_GSCL_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+ { EXYNOS5_ISP_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+ { EXYNOS5_MFC_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+ { EXYNOS5_G3D_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+ { EXYNOS5_DISP1_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+ { EXYNOS5_MAU_SYS_PWR_REG, { 0x7, 0x7, 0x0} },
+ { EXYNOS5_GPS_SYS_PWR_REG, { 0x7, 0x0, 0x0} },
+ { EXYNOS5_CMU_CLKSTOP_GSCL_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_CLKSTOP_ISP_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_CLKSTOP_MFC_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_CLKSTOP_G3D_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_CLKSTOP_DISP1_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_CLKSTOP_MAU_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+ { EXYNOS5_CMU_CLKSTOP_GPS_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_SYSCLK_GSCL_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_SYSCLK_ISP_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_SYSCLK_MFC_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_SYSCLK_G3D_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_SYSCLK_DISP1_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_SYSCLK_MAU_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+ { EXYNOS5_CMU_SYSCLK_GPS_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_RESET_GSCL_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_RESET_ISP_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_RESET_MFC_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_RESET_G3D_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_RESET_DISP1_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { EXYNOS5_CMU_RESET_MAU_SYS_PWR_REG, { 0x1, 0x1, 0x0} },
+ { EXYNOS5_CMU_RESET_GPS_SYS_PWR_REG, { 0x1, 0x0, 0x0} },
+ { PMU_TABLE_END,},
+};
+
+void __iomem *exynos5_list_both_cnt_feed[] = {
+ EXYNOS5_ARM_CORE0_OPTION,
+ EXYNOS5_ARM_CORE1_OPTION,
+ EXYNOS5_ARM_COMMON_OPTION,
+ EXYNOS5_GSCL_OPTION,
+ EXYNOS5_ISP_OPTION,
+ EXYNOS5_MFC_OPTION,
+ EXYNOS5_G3D_OPTION,
+ EXYNOS5_DISP1_OPTION,
+ EXYNOS5_MAU_OPTION,
+ EXYNOS5_TOP_PWR_OPTION,
+ EXYNOS5_TOP_PWR_SYSMEM_OPTION,
+};
+
+void __iomem *exynos5_list_disable_wfi_wfe[] = {
+ EXYNOS5_ARM_CORE1_OPTION,
+ EXYNOS5_FSYS_ARM_OPTION,
+ EXYNOS5_ISP_ARM_OPTION,
+};
+
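+/*
+ * Power the board off by clearing bit 8 of PS_HOLD_CONTROL, which drops
+ * the PS_HOLD signal that keeps the external power supply latched on.
+ */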
+static void exynos5_power_off(void)
+{
+ unsigned int tmp;
+
+ pr_info("Power down.\n");
+ tmp = __raw_readl(EXYNOS5_PS_HOLD_CONTROL);
+ tmp &= ~(1 << 8);
+ __raw_writel(tmp, EXYNOS5_PS_HOLD_CONTROL);
+
+ /* Wait a little so we don't give a false warning below */
+ mdelay(100);
+
+ pr_err("Power down failed, please power off system manually.\n");
+ while (1)
+ ;
+}
+
+static void exynos5_init_pmu(void)
+{
+ unsigned int i;
+ unsigned int tmp;
+
+ /*
+ * Enable both SC_FEEDBACK and SC_COUNTER
+ */
+	for (i = 0; i < ARRAY_SIZE(exynos5_list_both_cnt_feed); i++) {
+ tmp = __raw_readl(exynos5_list_both_cnt_feed[i]);
+ tmp |= (EXYNOS5_USE_SC_FEEDBACK |
+ EXYNOS5_USE_SC_COUNTER);
+ __raw_writel(tmp, exynos5_list_both_cnt_feed[i]);
+ }
+
+	/*
+	 * Enable the SKIP_DEACTIVATE_ACEACP_IN_PWDN and
+	 * MANUAL_L2RSTDISABLE_CONTROL bitfields.
+	 */
+ tmp = __raw_readl(EXYNOS5_ARM_COMMON_OPTION);
+ tmp |= (EXYNOS5_MANUAL_L2RSTDISABLE_CONTROL |
+ EXYNOS5_SKIP_DEACTIVATE_ACEACP_IN_PWDN);
+ __raw_writel(tmp, EXYNOS5_ARM_COMMON_OPTION);
+
+	/*
+	 * Disable WFI/WFE on the *_OPTION registers listed above
+	 */
+	for (i = 0; i < ARRAY_SIZE(exynos5_list_disable_wfi_wfe); i++) {
+		tmp = __raw_readl(exynos5_list_disable_wfi_wfe[i]);
+		tmp &= ~(EXYNOS5_OPTION_USE_STANDBYWFE |
+			 EXYNOS5_OPTION_USE_STANDBYWFI);
+		__raw_writel(tmp, exynos5_list_disable_wfi_wfe[i]);
+ }
+}
+
void exynos4_sys_powerdown_conf(enum sys_powerdown mode)
{
unsigned int i;
+ if (soc_is_exynos5250())
+ exynos5_init_pmu();
+
for (i = 0; (exynos4_pmu_config[i].reg != PMU_TABLE_END) ; i++)
__raw_writel(exynos4_pmu_config[i].val[mode],
exynos4_pmu_config[i].reg);
} else if (soc_is_exynos4212()) {
exynos4_pmu_config = exynos4212_pmu_config;
pr_info("EXYNOS4212 PMU Initialize\n");
+ } else if (soc_is_exynos5250()) {
+ exynos4_pmu_config = exynos5250_pmu_config;
+ pm_power_off = exynos5_power_off;
+ pr_info("EXYNOS5250 PMU Initialize\n");
} else {
pr_info("EXYNOS4: PMU not supported\n");
}
--- /dev/null
+/* linux/arch/arm/mach-exynos/setup-dp.c
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
+ *
+ * Base Samsung Exynos DP configuration
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#include <linux/io.h>
+#include <mach/regs-clock.h>
+
+void s5p_dp_phy_init(void)
+{
+ u32 reg;
+
+ /* Reference clock selection for DPTX_PHY: PAD_OSC_IN */
+ reg = __raw_readl(S3C_VA_SYS + 0x04d4);
+ reg &= ~(1 << 0);
+ __raw_writel(reg, S3C_VA_SYS + 0x04d4);
+
+ /* Select clock source for DPTX_PHY as XXTI */
+ reg = __raw_readl(S3C_VA_SYS + 0x04d8);
+ reg &= ~(1 << 3);
+ __raw_writel(reg, S3C_VA_SYS + 0x04d8);
+
+ reg = __raw_readl(S5P_DPTX_PHY_CONTROL);
+ reg |= S5P_DPTX_PHY_ENABLE;
+ __raw_writel(reg, S5P_DPTX_PHY_CONTROL);
+}
+
+void s5p_dp_phy_exit(void)
+{
+ u32 reg;
+
+ reg = __raw_readl(S5P_DPTX_PHY_CONTROL);
+ reg &= ~S5P_DPTX_PHY_ENABLE;
+ __raw_writel(reg, S5P_DPTX_PHY_CONTROL);
+}
--- /dev/null
+/* linux/arch/arm/mach-exynos/setup-fimd.c
+ *
+ * Copyright (c) 2009-2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * Base Exynos4 FIMD configuration
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/fb.h>
+#include <linux/gpio.h>
+#include <linux/clk.h>
+
+#include <plat/fb.h>
+#include <plat/gpio-cfg.h>
+#include <plat/clock.h>
+
+#include <mach/regs-clock.h>
+#include <mach/map.h>
+
+void exynos4_fimd_cfg_gpios(unsigned int base, unsigned int nr,
+ unsigned int cfg, s5p_gpio_drvstr_t drvstr)
+{
+ s3c_gpio_cfgrange_nopull(base, nr, cfg);
+
+ for (; nr > 0; nr--, base++)
+ s5p_gpio_set_drvstr(base, drvstr);
+}
+
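+/*
+ * Reparent the FIMD bus clock and set its rate; 134 MHz is used when no
+ * explicit rate is requested.
+ */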
+int __init exynos4_fimd_setup_clock(struct device *dev, const char *bus_clk,
+ const char *parent, unsigned long clk_rate)
+{
+ struct clk *clk_parent;
+ struct clk *sclk;
+
+ sclk = clk_get(dev, bus_clk);
+ if (IS_ERR(sclk))
+ return PTR_ERR(sclk);
+
+ clk_parent = clk_get(NULL, parent);
+ if (IS_ERR(clk_parent)) {
+ clk_put(sclk);
+ return PTR_ERR(clk_parent);
+ }
+
+ if (clk_set_parent(sclk, clk_parent)) {
+ pr_err("Unable to set parent %s of clock %s.\n",
+ clk_parent->name, sclk->name);
+ clk_put(sclk);
+ clk_put(clk_parent);
+ return PTR_ERR(sclk);
+ }
+
+ if (!clk_rate)
+ clk_rate = 134000000UL;
+
+ if (clk_set_rate(sclk, clk_rate)) {
+ pr_err("%s rate change failed: %lu\n", sclk->name, clk_rate);
+ clk_put(sclk);
+ clk_put(clk_parent);
+ return PTR_ERR(sclk);
+ }
+
+ clk_put(sclk);
+ clk_put(clk_parent);
+
+ return 0;
+}
--- /dev/null
+/* linux/arch/arm/mach-exynos/setup-mipidsim.c
+ *
+ * Copyright (c) 2011 Samsung Electronics
+ *
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of
+ * the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston,
+ * MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/io.h>
+#include <linux/err.h>
+#include <linux/platform_device.h>
+#include <linux/clk.h>
+#include <linux/regulator/consumer.h>
+
+#include <mach/map.h>
+#include <mach/regs-clock.h>
+
+#include <plat/dsim.h>
+#include <plat/clock.h>
+#include <plat/regs-mipidsim.h>
+
+#define S5P_MIPI_M_RESETN 4
+
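+/* Bit 0 of S5P_MIPI_DPHY_CONTROL enables the D-PHY, bit 2 the DSI master */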
+static int s5p_dsim_enable_d_phy(struct mipi_dsim_device *dsim,
+ unsigned int enable)
+{
+ unsigned int reg;
+#if defined(CONFIG_ARCH_EXYNOS5)
+ reg = readl(S5P_MIPI_DPHY_CONTROL(1)) & ~(1 << 0);
+ reg |= (enable << 0);
+ writel(reg, S5P_MIPI_DPHY_CONTROL(1));
+#else
+ reg = readl(S5P_MIPI_DPHY_CONTROL(0)) & ~(1 << 0);
+ reg |= (enable << 0);
+ writel(reg, S5P_MIPI_DPHY_CONTROL(0));
+#endif
+ return 0;
+}
+
+static int s5p_dsim_enable_dsi_master(struct mipi_dsim_device *dsim,
+ unsigned int enable)
+{
+ unsigned int reg;
+#if defined(CONFIG_ARCH_EXYNOS5)
+ reg = readl(S5P_MIPI_DPHY_CONTROL(1)) & ~(1 << 2);
+ reg |= (enable << 2);
+ writel(reg, S5P_MIPI_DPHY_CONTROL(1));
+#else
+ reg = readl(S5P_MIPI_DPHY_CONTROL(0)) & ~(1 << 2);
+ reg |= (enable << 2);
+ writel(reg, S5P_MIPI_DPHY_CONTROL(0));
+#endif
+ return 0;
+}
+
+int s5p_dsim_part_reset(struct mipi_dsim_device *dsim)
+{
+#if defined(CONFIG_ARCH_EXYNOS5)
+ if (dsim->id == 0)
+ writel(S5P_MIPI_M_RESETN, S5P_MIPI_DPHY_CONTROL(1));
+#else
+ if (dsim->id == 0)
+ writel(S5P_MIPI_M_RESETN, S5P_MIPI_DPHY_CONTROL(0));
+#endif
+ return 0;
+}
+
+int s5p_dsim_init_d_phy(struct mipi_dsim_device *dsim, unsigned int enable)
+{
+	/*
+	 * The D-PHY and the DSI master block must be enabled at the system
+	 * initialization step before data access from/to the D-PHY begins.
+	 */
+ s5p_dsim_enable_d_phy(dsim, enable);
+
+ s5p_dsim_enable_dsi_master(dsim, enable);
+ return 0;
+}
*/
#include <linux/gpio.h>
-#include <linux/platform_device.h>
-
#include <plat/gpio-cfg.h>
-#include <plat/s3c64xx-spi.h>
#ifdef CONFIG_S3C64XX_DEV_SPI0
-struct s3c64xx_spi_info s3c64xx_spi0_pdata __initdata = {
- .fifo_lvl_mask = 0x1ff,
- .rx_lvl_offset = 15,
- .high_speed = 1,
- .clk_from_cmu = true,
- .tx_st_done = 25,
-};
-
-int s3c64xx_spi0_cfg_gpio(struct platform_device *dev)
+int s3c64xx_spi0_cfg_gpio(void)
{
s3c_gpio_cfgpin(EXYNOS4_GPB(0), S3C_GPIO_SFN(2));
s3c_gpio_setpull(EXYNOS4_GPB(0), S3C_GPIO_PULL_UP);
#endif
#ifdef CONFIG_S3C64XX_DEV_SPI1
-struct s3c64xx_spi_info s3c64xx_spi1_pdata __initdata = {
- .fifo_lvl_mask = 0x7f,
- .rx_lvl_offset = 15,
- .high_speed = 1,
- .clk_from_cmu = true,
- .tx_st_done = 25,
-};
-
-int s3c64xx_spi1_cfg_gpio(struct platform_device *dev)
+int s3c64xx_spi1_cfg_gpio(void)
{
s3c_gpio_cfgpin(EXYNOS4_GPB(4), S3C_GPIO_SFN(2));
s3c_gpio_setpull(EXYNOS4_GPB(4), S3C_GPIO_PULL_UP);
#endif
#ifdef CONFIG_S3C64XX_DEV_SPI2
-struct s3c64xx_spi_info s3c64xx_spi2_pdata __initdata = {
- .fifo_lvl_mask = 0x7f,
- .rx_lvl_offset = 15,
- .high_speed = 1,
- .clk_from_cmu = true,
- .tx_st_done = 25,
-};
-
-int s3c64xx_spi2_cfg_gpio(struct platform_device *dev)
+int s3c64xx_spi2_cfg_gpio(void)
{
s3c_gpio_cfgpin(EXYNOS4_GPC1(1), S3C_GPIO_SFN(5));
s3c_gpio_setpull(EXYNOS4_GPC1(1), S3C_GPIO_PULL_UP);
--- /dev/null
+/* linux/arch/arm/mach-exynos/setup-tvout.c
+ *
+ * Copyright (c) 2010 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
+ *
+ * Base TVOUT gpio configuration
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/gpio.h>
+#include <linux/kernel.h>
+#include <linux/platform_device.h>
+#include <linux/types.h>
+#include <plat/clock.h>
+#include <plat/gpio-cfg.h>
+#include <mach/regs-clock.h>
+#include <mach/regs-gpio.h>
+#include <linux/io.h>
+#include <mach/map.h>
+#include <mach/gpio.h>
+#include <plat/tvout.h>
+#include <plat/cpu.h>
+
+#if defined(CONFIG_ARCH_EXYNOS4)
+#define HDMI_GPX(_nr) EXYNOS4_GPX3(_nr)
+#elif defined(CONFIG_ARCH_EXYNOS5)
+#define HDMI_GPX(_nr) EXYNOS5_GPX3(_nr)
+#endif
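+/* GPX3(7) carries the HDMI hot-plug-detect signal and GPX3(6) the CEC line */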
+
+struct platform_device; /* don't need the contents */
+
+void s5p_int_src_hdmi_hpd(struct platform_device *pdev)
+{
+ s3c_gpio_cfgpin(HDMI_GPX(7), S3C_GPIO_SFN(0x3));
+ s3c_gpio_setpull(HDMI_GPX(7), S3C_GPIO_PULL_DOWN);
+}
+
+void s5p_int_src_ext_hpd(struct platform_device *pdev)
+{
+ s3c_gpio_cfgpin(HDMI_GPX(7), S3C_GPIO_SFN(0xf));
+ s3c_gpio_setpull(HDMI_GPX(7), S3C_GPIO_PULL_DOWN);
+}
+
+int s5p_hpd_read_gpio(struct platform_device *pdev)
+{
+ return gpio_get_value(HDMI_GPX(7));
+}
+
+int s5p_v4l2_hpd_read_gpio(void)
+{
+ return gpio_get_value(HDMI_GPX(7));
+}
+
+void s5p_v4l2_int_src_hdmi_hpd(void)
+{
+ s3c_gpio_cfgpin(HDMI_GPX(7), S3C_GPIO_SFN(0x3));
+ s3c_gpio_setpull(HDMI_GPX(7), S3C_GPIO_PULL_DOWN);
+}
+
+void s5p_v4l2_int_src_ext_hpd(void)
+{
+ s3c_gpio_cfgpin(HDMI_GPX(7), S3C_GPIO_SFN(0xf));
+ s3c_gpio_setpull(HDMI_GPX(7), S3C_GPIO_PULL_DOWN);
+}
+
+void s5p_cec_cfg_gpio(struct platform_device *pdev)
+{
+ s3c_gpio_cfgpin(HDMI_GPX(6), S3C_GPIO_SFN(0x3));
+ s3c_gpio_setpull(HDMI_GPX(6), S3C_GPIO_PULL_NONE);
+}
+
+#ifdef CONFIG_VIDEO_EXYNOS_TV
+void s5p_tv_setup(void)
+{
+ int ret;
+
+ /* direct HPD to HDMI chip */
+ ret = gpio_request(HDMI_GPX(7), "hpd-plug");
+ if (ret)
+ printk(KERN_ERR "failed to request HPD-plug\n");
+ gpio_direction_input(HDMI_GPX(7));
+ s3c_gpio_cfgpin(HDMI_GPX(7), S3C_GPIO_SFN(0xf));
+ s3c_gpio_setpull(HDMI_GPX(7), S3C_GPIO_PULL_NONE);
+
+ /* HDMI CEC */
+ ret = gpio_request(HDMI_GPX(6), "hdmi-cec");
+ if (ret)
+ printk(KERN_ERR "failed to request HDMI-CEC\n");
+ gpio_direction_input(HDMI_GPX(6));
+ s3c_gpio_cfgpin(HDMI_GPX(6), S3C_GPIO_SFN(0x3));
+ s3c_gpio_setpull(HDMI_GPX(6), S3C_GPIO_PULL_NONE);
+}
+#endif
#include <mach/regs-usb-phy.h>
#include <plat/cpu.h>
#include <plat/usb-phy.h>
+#include <plat/regs-usb3-exynos-drd-phy.h>
+
+#define PHY_ENABLE 1
+#define PHY_DISABLE 0
+#define EXYNOS4_USB_CFG (S3C_VA_SYS + 0x21C)
+
+enum usb_phy_type {
+ USB_PHY = (0x1 << 0),
+ USB_PHY0 = (0x1 << 0),
+ USB_PHY1 = (0x1 << 1),
+ USB_PHY_HSIC0 = (0x1 << 1),
+ USB_PHY_HSIC1 = (0x1 << 2),
+};
+
+struct exynos_usb_phy {
+ u8 phy0_usage;
+ u8 phy1_usage;
+ u8 phy2_usage;
+ unsigned long flags;
+};
+
static atomic_t host_usage;
+static struct exynos_usb_phy usb_phy_control;
+static DEFINE_SPINLOCK(phy_lock);
+static struct clk *phy_clk;
static int exynos4_usb_host_phy_is_on(void)
{
return (readl(EXYNOS4_PHYPWR) & PHY1_STD_ANALOG_POWERDOWN) ? 0 : 1;
}
+static int exynos5_usb_host_phy20_is_on(void)
+{
+ return (readl(EXYNOS5_PHY_HOST_CTRL0) & HOST_CTRL0_PHYSWRSTALL) ? 0 : 1;
+}
+
+static int exynos_usb_phy_clock_enable(struct platform_device *pdev)
+{
+ int err;
+
+ if (!phy_clk) {
+ if (soc_is_exynos5250())
+ phy_clk = clk_get(&pdev->dev, "usbhost");
+
+ if (IS_ERR(phy_clk)) {
+ dev_err(&pdev->dev, "Failed to get phy clock\n");
+ return PTR_ERR(phy_clk);
+ }
+ }
+
+ err = clk_enable(phy_clk);
+
+ return err;
+}
+
+static void exynos_usb_mux_change(struct platform_device *pdev, int val)
+{
+	u32 is_host;
+
+	if (soc_is_exynos5250()) {
+		is_host = readl(EXYNOS5_USB_CFG);
+		writel(val, EXYNOS5_USB_CFG);
+
+		if (is_host != val)
+			dev_dbg(&pdev->dev, "Change USB MUX from %s to %s",
+				is_host ? "Host" : "Device",
+				val ? "Host" : "Device");
+	}
+}
+
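+/*
+ * Reference-counted PHY power control: the PHY CONTROL register is only
+ * touched when the first user enables or the last user disables the PHY.
+ */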
+static void exynos_usb_phy_control(enum usb_phy_type phy_type, int on)
+{
+ spin_lock(&phy_lock);
+ if (soc_is_exynos5250()) {
+ if (phy_type & USB_PHY0) {
+ if (on == PHY_ENABLE
+ && (usb_phy_control.phy0_usage++) == 0)
+ writel(S5P_USBDRD_PHY_ENABLE,
+ S5P_USBDRD_PHY_CONTROL);
+ else if (on == PHY_DISABLE
+ && (--usb_phy_control.phy0_usage) == 0)
+ writel(~S5P_USBDRD_PHY_ENABLE,
+ S5P_USBDRD_PHY_CONTROL);
+
+ } else if (phy_type & USB_PHY1) {
+ if (on == PHY_ENABLE
+ && (usb_phy_control.phy1_usage++) == 0)
+ writel(S5P_USBHOST_PHY_ENABLE,
+ S5P_USBHOST_PHY_CONTROL);
+ else if (on == PHY_DISABLE
+ && (--usb_phy_control.phy1_usage) == 0)
+ writel(~S5P_USBHOST_PHY_ENABLE,
+ S5P_USBHOST_PHY_CONTROL);
+ }
+ }
+ spin_unlock(&phy_lock);
+}
+
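+/*
+ * Translate the rate of the "ext_xtal" reference clock into the CLKSEL
+ * encoding expected by the EXYNOS5 PHY blocks; 24 MHz is the default.
+ */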
+static u32 exynos_usb_phy_set_clock(struct platform_device *pdev)
+{
+ struct clk *ref_clk;
+ u32 refclk_freq = 0;
+
+ ref_clk = clk_get(&pdev->dev, "ext_xtal");
+
+ if (IS_ERR(ref_clk)) {
+ dev_err(&pdev->dev, "Failed to get reference clock\n");
+ return PTR_ERR(ref_clk);
+ }
+ if (soc_is_exynos5250()) {
+ switch (clk_get_rate(ref_clk)) {
+ case 96 * 100000:
+ refclk_freq = EXYNOS5_CLKSEL_9600K;
+ break;
+ case 10 * MHZ:
+ refclk_freq = EXYNOS5_CLKSEL_10M;
+ break;
+ case 12 * MHZ:
+ refclk_freq = EXYNOS5_CLKSEL_12M;
+ break;
+ case 192 * 100000:
+ refclk_freq = EXYNOS5_CLKSEL_19200K;
+ break;
+ case 20 * MHZ:
+ refclk_freq = EXYNOS5_CLKSEL_20M;
+ break;
+ case 50 * MHZ:
+ refclk_freq = EXYNOS5_CLKSEL_50M;
+ break;
+ case 24 * MHZ:
+ default:
+ /* default reference clock */
+ refclk_freq = EXYNOS5_CLKSEL_24M;
+ break;
+ }
+ }
+ clk_put(ref_clk);
+
+ return refclk_freq;
+}
+
+static int exynos5_usb_phy30_init(struct platform_device *pdev)
+{
+ int ret;
+ u32 reg;
+ bool use_ext_clk = true;
+
+ ret = exynos_usb_phy_clock_enable(pdev);
+ if (ret)
+ return ret;
+
+ exynos_usb_phy_control(USB_PHY0, PHY_ENABLE);
+
+ /* Reset USB 3.0 PHY */
+ writel(0x00000000, EXYNOS_USB3_PHYREG0);
+ writel(0x24d4e6e4, EXYNOS_USB3_PHYPARAM0);
+ writel(0x03fff820, EXYNOS_USB3_PHYPARAM1);
+ writel(0x00000000, EXYNOS_USB3_PHYRESUME);
+
+ writel(0x08000000, EXYNOS_USB3_LINKSYSTEM);
+ writel(0x00000004, EXYNOS_USB3_PHYBATCHG);
+	/* REVISIT: use external clock 100MHz */
+ if (use_ext_clk)
+ writel(readl(EXYNOS_USB3_PHYPARAM0) | (0x1<<31),
+ EXYNOS_USB3_PHYPARAM0);
+ else
+ writel(readl(EXYNOS_USB3_PHYPARAM0) & ~(0x1<<31),
+ EXYNOS_USB3_PHYPARAM0);
+
+ /* UTMI Power Control */
+ writel(EXYNOS_USB3_PHYUTMI_OTGDISABLE, EXYNOS_USB3_PHYUTMI);
+
+ /* Set 100MHz external clock */
+ reg = EXYNOS_USB3_PHYCLKRST_PORTRESET |
+ /* HS PLL uses ref_pad_clk{p,m} or ref_alt_clk_{p,m}
+ * as reference */
+ EXYNOS_USB3_PHYCLKRST_REFCLKSEL(2) |
+ /* Digital power supply in normal operating mode */
+ EXYNOS_USB3_PHYCLKRST_RETENABLEN |
+ /* 0x27-100MHz, 0x2a-24MHz, 0x31-20MHz, 0x38-19.2MHz */
+ EXYNOS_USB3_PHYCLKRST_FSEL(0x27) |
+ /* 0x19-100MHz, 0x68-24MHz, 0x7d-20Mhz */
+ EXYNOS_USB3_PHYCLKRST_MPLL_MULTIPLIER(0x19) |
+ /* Enable ref clock for SS function */
+ EXYNOS_USB3_PHYCLKRST_REF_SSP_EN |
+ /* Enable spread spectrum */
+ EXYNOS_USB3_PHYCLKRST_SSC_EN |
+ EXYNOS_USB3_PHYCLKRST_COMMONONN;
+
+ writel(reg, EXYNOS_USB3_PHYCLKRST);
+
+ udelay(10);
+
+ reg &= ~(EXYNOS_USB3_PHYCLKRST_PORTRESET);
+ writel(reg, EXYNOS_USB3_PHYCLKRST);
+
+ return 0;
+}
+
+static int exynos5_usb_phy30_exit(struct platform_device *pdev)
+{
+ u32 reg;
+
+ reg = EXYNOS_USB3_PHYUTMI_OTGDISABLE |
+ EXYNOS_USB3_PHYUTMI_FORCESUSPEND |
+ EXYNOS_USB3_PHYUTMI_FORCESLEEP;
+ writel(reg, EXYNOS_USB3_PHYUTMI);
+
+ exynos_usb_phy_control(USB_PHY0, PHY_DISABLE);
+
+ return 0;
+}
+
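+/*
+ * Bring up the EXYNOS5 USB 2.0 host/OTG/HSIC PHYs: configure the reference
+ * clock, then pulse the software resets of each block in turn.
+ */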
+static int exynos5_usb_phy20_init(struct platform_device *pdev)
+{
+ int ret;
+ u32 refclk_freq;
+ u32 hostphy_ctrl0, otgphy_sys, hsic_ctrl, ehcictrl;
+
+ atomic_inc(&host_usage);
+ ret = exynos_usb_phy_clock_enable(pdev);
+ if (ret)
+ return ret;
+
+ if (exynos5_usb_host_phy20_is_on()) {
+ dev_err(&pdev->dev, "Already power on PHY\n");
+ return 0;
+ }
+
+ exynos_usb_mux_change(pdev, 1);
+
+ exynos_usb_phy_control(USB_PHY1, PHY_ENABLE);
+
+ /* Host and Device should be set at the same time */
+ hostphy_ctrl0 = readl(EXYNOS5_PHY_HOST_CTRL0);
+ hostphy_ctrl0 &= ~(HOST_CTRL0_FSEL_MASK);
+ otgphy_sys = readl(EXYNOS5_PHY_OTG_SYS);
+ otgphy_sys &= ~(OTG_SYS_CTRL0_FSEL_MASK);
+
+ /* 2.0 phy reference clock configuration */
+ refclk_freq = exynos_usb_phy_set_clock(pdev);
+ hostphy_ctrl0 |= (refclk_freq << HOST_CTRL0_CLKSEL_SHIFT);
+ otgphy_sys |= (refclk_freq << OTG_SYS_CLKSEL_SHIFT);
+
+ /* COMMON Block configuration during suspend */
+ hostphy_ctrl0 &= ~(HOST_CTRL0_COMMONON_N);
+ otgphy_sys |= (OTG_SYS_COMMON_ON);
+
+ /* otg phy reset */
+ otgphy_sys &= ~(OTG_SYS_FORCE_SUSPEND | OTG_SYS_SIDDQ_UOTG
+ | OTG_SYS_FORCE_SLEEP);
+ otgphy_sys &= ~(OTG_SYS_REF_CLK_SEL_MASK);
+ otgphy_sys |= (OTG_SYS_REF_CLK_SEL(0x2) | OTG_SYS_OTGDISABLE);
+ otgphy_sys |= (OTG_SYS_PHY0_SW_RST | OTG_SYS_LINK_SW_RST_UOTG
+ | OTG_SYS_PHYLINK_SW_RESET);
+ writel(otgphy_sys, EXYNOS5_PHY_OTG_SYS);
+ udelay(10);
+ otgphy_sys &= ~(OTG_SYS_PHY0_SW_RST | OTG_SYS_LINK_SW_RST_UOTG
+ | OTG_SYS_PHYLINK_SW_RESET);
+ writel(otgphy_sys, EXYNOS5_PHY_OTG_SYS);
+ /* host phy reset */
+ hostphy_ctrl0 &= ~(HOST_CTRL0_PHYSWRST | HOST_CTRL0_PHYSWRSTALL
+ | HOST_CTRL0_SIDDQ);
+ hostphy_ctrl0 &= ~(HOST_CTRL0_FORCESUSPEND | HOST_CTRL0_FORCESLEEP);
+ hostphy_ctrl0 |= (HOST_CTRL0_LINKSWRST | HOST_CTRL0_UTMISWRST);
+ writel(hostphy_ctrl0, EXYNOS5_PHY_HOST_CTRL0);
+ udelay(10);
+ hostphy_ctrl0 &= ~(HOST_CTRL0_LINKSWRST | HOST_CTRL0_UTMISWRST);
+ writel(hostphy_ctrl0, EXYNOS5_PHY_HOST_CTRL0);
+
+ /* HSIC phy reset */
+ hsic_ctrl = (HSIC_CTRL_REFCLKDIV(0x24) | HSIC_CTRL_REFCLKSEL(0x2)
+ | HSIC_CTRL_PHYSWRST);
+ writel(hsic_ctrl, EXYNOS5_PHY_HSIC_CTRL1);
+ writel(hsic_ctrl, EXYNOS5_PHY_HSIC_CTRL2);
+ udelay(10);
+ hsic_ctrl &= ~(HSIC_CTRL_PHYSWRST);
+ writel(hsic_ctrl, EXYNOS5_PHY_HSIC_CTRL1);
+ writel(hsic_ctrl, EXYNOS5_PHY_HSIC_CTRL2);
+
+ udelay(80);
+
+ ehcictrl = readl(EXYNOS5_PHY_HOST_EHCICTRL);
+ ehcictrl |= (EHCICTRL_ENAINCRXALIGN | EHCICTRL_ENAINCR4
+ | EHCICTRL_ENAINCR8 | EHCICTRL_ENAINCR16);
+ writel(ehcictrl, EXYNOS5_PHY_HOST_EHCICTRL);
+
+ return 0;
+}
+static int exynos5_usb_phy20_exit(struct platform_device *pdev)
+{
+ u32 hostphy_ctrl0, otgphy_sys, hsic_ctrl;
+
+ if (atomic_dec_return(&host_usage) > 0) {
+ dev_info(&pdev->dev, "still being used\n");
+ return -EBUSY;
+ }
+
+ hsic_ctrl = (HSIC_CTRL_REFCLKDIV(0x24) | HSIC_CTRL_REFCLKSEL(0x2)
+ | HSIC_CTRL_SIDDQ | HSIC_CTRL_FORCESLEEP
+ | HSIC_CTRL_FORCESUSPEND);
+ writel(hsic_ctrl, EXYNOS5_PHY_HSIC_CTRL1);
+ writel(hsic_ctrl, EXYNOS5_PHY_HSIC_CTRL2);
+
+ hostphy_ctrl0 = readl(EXYNOS5_PHY_HOST_CTRL0);
+ hostphy_ctrl0 |= (HOST_CTRL0_SIDDQ);
+ hostphy_ctrl0 |= (HOST_CTRL0_FORCESUSPEND | HOST_CTRL0_FORCESLEEP);
+ hostphy_ctrl0 |= (HOST_CTRL0_PHYSWRST | HOST_CTRL0_PHYSWRSTALL);
+ writel(hostphy_ctrl0, EXYNOS5_PHY_HOST_CTRL0);
+
+ otgphy_sys = readl(EXYNOS5_PHY_OTG_SYS);
+ otgphy_sys |= (OTG_SYS_FORCE_SUSPEND | OTG_SYS_SIDDQ_UOTG
+ | OTG_SYS_FORCE_SLEEP);
+ writel(otgphy_sys, EXYNOS5_PHY_OTG_SYS);
+
+ exynos_usb_phy_control(USB_PHY1, PHY_DISABLE);
+
+ return 0;
+}
+
+
static int exynos4_usb_phy1_init(struct platform_device *pdev)
{
struct clk *otg_clk;
int s5p_usb_phy_init(struct platform_device *pdev, int type)
{
- if (type == S5P_USB_PHY_HOST)
- return exynos4_usb_phy1_init(pdev);
-
+ if (type == S5P_USB_PHY_HOST) {
+ if (soc_is_exynos5250())
+ return exynos5_usb_phy20_init(pdev);
+ else
+ return exynos4_usb_phy1_init(pdev);
+ } else if (type == S5P_USB_PHY_DRD) {
+ if (soc_is_exynos5250())
+ return exynos5_usb_phy30_init(pdev);
+ else
+ dev_err(&pdev->dev, "USB 3.0 DRD not present\n");
+ }
return -EINVAL;
}
int s5p_usb_phy_exit(struct platform_device *pdev, int type)
{
- if (type == S5P_USB_PHY_HOST)
- return exynos4_usb_phy1_exit(pdev);
-
+ if (type == S5P_USB_PHY_HOST) {
+ if (soc_is_exynos5250())
+ return exynos5_usb_phy20_exit(pdev);
+ else
+ return exynos4_usb_phy1_exit(pdev);
+ } else if (type == S5P_USB_PHY_DRD) {
+ if (soc_is_exynos5250())
+ return exynos5_usb_phy30_exit(pdev);
+ else
+ dev_err(&pdev->dev, "USB 3.0 DRD not present\n");
+ }
return -EINVAL;
}
.ctrlbit = S3C_CLKCON_PCLK_KEYPAD,
}, {
.name = "spi",
- .devname = "s3c64xx-spi.0",
+ .devname = "s3c6410-spi.0",
.parent = &clk_p,
.enable = s3c64xx_pclk_ctrl,
.ctrlbit = S3C_CLKCON_PCLK_SPI0,
}, {
.name = "spi",
- .devname = "s3c64xx-spi.1",
+ .devname = "s3c6410-spi.1",
.parent = &clk_p,
.enable = s3c64xx_pclk_ctrl,
.ctrlbit = S3C_CLKCON_PCLK_SPI1,
static struct clk clk_48m_spi0 = {
.name = "spi_48m",
- .devname = "s3c64xx-spi.0",
+ .devname = "s3c6410-spi.0",
.parent = &clk_48m,
.enable = s3c64xx_sclk_ctrl,
.ctrlbit = S3C_CLKCON_SCLK_SPI0_48,
static struct clk clk_48m_spi1 = {
.name = "spi_48m",
- .devname = "s3c64xx-spi.1",
+ .devname = "s3c6410-spi.1",
.parent = &clk_48m,
.enable = s3c64xx_sclk_ctrl,
.ctrlbit = S3C_CLKCON_SCLK_SPI1_48,
static struct clksrc_clk clk_sclk_spi0 = {
.clk = {
.name = "spi-bus",
- .devname = "s3c64xx-spi.0",
+ .devname = "s3c6410-spi.0",
.ctrlbit = S3C_CLKCON_SCLK_SPI0,
.enable = s3c64xx_sclk_ctrl,
},
static struct clksrc_clk clk_sclk_spi1 = {
.clk = {
.name = "spi-bus",
- .devname = "s3c64xx-spi.1",
+ .devname = "s3c6410-spi.1",
.ctrlbit = S3C_CLKCON_SCLK_SPI1,
.enable = s3c64xx_sclk_ctrl,
},
CLKDEV_INIT("s3c-sdhci.1", "mmc_busclk.2", &clk_sclk_mmc1.clk),
CLKDEV_INIT("s3c-sdhci.2", "mmc_busclk.2", &clk_sclk_mmc2.clk),
CLKDEV_INIT(NULL, "spi_busclk0", &clk_p),
- CLKDEV_INIT("s3c64xx-spi.0", "spi_busclk1", &clk_sclk_spi0.clk),
- CLKDEV_INIT("s3c64xx-spi.0", "spi_busclk2", &clk_48m_spi0),
- CLKDEV_INIT("s3c64xx-spi.1", "spi_busclk1", &clk_sclk_spi1.clk),
- CLKDEV_INIT("s3c64xx-spi.1", "spi_busclk2", &clk_48m_spi1),
+ CLKDEV_INIT("s3c6410-spi.0", "spi_busclk1", &clk_sclk_spi0.clk),
+ CLKDEV_INIT("s3c6410-spi.0", "spi_busclk2", &clk_48m_spi0),
+ CLKDEV_INIT("s3c6410-spi.1", "spi_busclk1", &clk_sclk_spi1.clk),
+ CLKDEV_INIT("s3c6410-spi.1", "spi_busclk2", &clk_48m_spi1),
};
#define GET_DIV(clk, field) ((((clk) & field##_MASK) >> field##_SHIFT) + 1)
i2c_register_board_info(1, i2c_devs1, ARRAY_SIZE(i2c_devs1));
samsung_keypad_set_platdata(&crag6410_keypad_data);
- s3c64xx_spi0_set_platdata(&s3c64xx_spi0_pdata, 0, 1);
+ s3c64xx_spi0_set_platdata(NULL, 0, 1);
platform_add_devices(crag6410_devices, ARRAY_SIZE(crag6410_devices));
*/
#include <linux/gpio.h>
-#include <linux/platform_device.h>
-
#include <plat/gpio-cfg.h>
-#include <plat/s3c64xx-spi.h>
#ifdef CONFIG_S3C64XX_DEV_SPI0
-struct s3c64xx_spi_info s3c64xx_spi0_pdata __initdata = {
- .fifo_lvl_mask = 0x7f,
- .rx_lvl_offset = 13,
- .tx_st_done = 21,
-};
-
-int s3c64xx_spi0_cfg_gpio(struct platform_device *dev)
+int s3c64xx_spi0_cfg_gpio(void)
{
s3c_gpio_cfgall_range(S3C64XX_GPC(0), 3,
S3C_GPIO_SFN(2), S3C_GPIO_PULL_UP);
#endif
#ifdef CONFIG_S3C64XX_DEV_SPI1
-struct s3c64xx_spi_info s3c64xx_spi1_pdata __initdata = {
- .fifo_lvl_mask = 0x7f,
- .rx_lvl_offset = 13,
- .tx_st_done = 21,
-};
-
-int s3c64xx_spi1_cfg_gpio(struct platform_device *dev)
+int s3c64xx_spi1_cfg_gpio(void)
{
s3c_gpio_cfgall_range(S3C64XX_GPC(4), 3,
S3C_GPIO_SFN(2), S3C_GPIO_PULL_UP);
.ctrlbit = (1 << 17),
}, {
.name = "spi",
- .devname = "s3c64xx-spi.0",
+ .devname = "s5p64x0-spi.0",
.parent = &clk_pclk_low.clk,
.enable = s5p64x0_pclk_ctrl,
.ctrlbit = (1 << 21),
}, {
.name = "spi",
- .devname = "s3c64xx-spi.1",
+ .devname = "s5p64x0-spi.1",
.parent = &clk_pclk_low.clk,
.enable = s5p64x0_pclk_ctrl,
.ctrlbit = (1 << 22),
static struct clksrc_clk clk_sclk_spi0 = {
.clk = {
.name = "sclk_spi",
- .devname = "s3c64xx-spi.0",
+ .devname = "s5p64x0-spi.0",
.ctrlbit = (1 << 20),
.enable = s5p64x0_sclk_ctrl,
},
static struct clksrc_clk clk_sclk_spi1 = {
.clk = {
.name = "sclk_spi",
- .devname = "s3c64xx-spi.1",
+ .devname = "s5p64x0-spi.1",
.ctrlbit = (1 << 21),
.enable = s5p64x0_sclk_ctrl,
},
CLKDEV_INIT(NULL, "clk_uart_baud2", &clk_pclk_low.clk),
CLKDEV_INIT(NULL, "clk_uart_baud3", &clk_sclk_uclk.clk),
CLKDEV_INIT(NULL, "spi_busclk0", &clk_p),
- CLKDEV_INIT("s3c64xx-spi.0", "spi_busclk1", &clk_sclk_spi0.clk),
- CLKDEV_INIT("s3c64xx-spi.1", "spi_busclk1", &clk_sclk_spi1.clk),
+ CLKDEV_INIT("s5p64x0-spi.0", "spi_busclk1", &clk_sclk_spi0.clk),
+ CLKDEV_INIT("s5p64x0-spi.1", "spi_busclk1", &clk_sclk_spi1.clk),
CLKDEV_INIT("s3c-sdhci.0", "mmc_busclk.2", &clk_sclk_mmc0.clk),
CLKDEV_INIT("s3c-sdhci.1", "mmc_busclk.2", &clk_sclk_mmc1.clk),
CLKDEV_INIT("s3c-sdhci.2", "mmc_busclk.2", &clk_sclk_mmc2.clk),
.ctrlbit = (1 << 17),
}, {
.name = "spi",
- .devname = "s3c64xx-spi.0",
+ .devname = "s5p64x0-spi.0",
.parent = &clk_pclk_low.clk,
.enable = s5p64x0_pclk_ctrl,
.ctrlbit = (1 << 21),
}, {
.name = "spi",
- .devname = "s3c64xx-spi.1",
+ .devname = "s5p64x0-spi.1",
.parent = &clk_pclk_low.clk,
.enable = s5p64x0_pclk_ctrl,
.ctrlbit = (1 << 22),
static struct clksrc_clk clk_sclk_spi0 = {
.clk = {
.name = "sclk_spi",
- .devname = "s3c64xx-spi.0",
+ .devname = "s5p64x0-spi.0",
.ctrlbit = (1 << 20),
.enable = s5p64x0_sclk_ctrl,
},
static struct clksrc_clk clk_sclk_spi1 = {
.clk = {
.name = "sclk_spi",
- .devname = "s3c64xx-spi.1",
+ .devname = "s5p64x0-spi.1",
.ctrlbit = (1 << 21),
.enable = s5p64x0_sclk_ctrl,
},
CLKDEV_INIT(NULL, "clk_uart_baud2", &clk_pclk_low.clk),
CLKDEV_INIT(NULL, "clk_uart_baud3", &clk_sclk_uclk.clk),
CLKDEV_INIT(NULL, "spi_busclk0", &clk_p),
- CLKDEV_INIT("s3c64xx-spi.0", "spi_busclk1", &clk_sclk_spi0.clk),
- CLKDEV_INIT("s3c64xx-spi.1", "spi_busclk1", &clk_sclk_spi1.clk),
+ CLKDEV_INIT("s5p64x0-spi.0", "spi_busclk1", &clk_sclk_spi0.clk),
+ CLKDEV_INIT("s5p64x0-spi.1", "spi_busclk1", &clk_sclk_spi1.clk),
CLKDEV_INIT("s3c-sdhci.0", "mmc_busclk.2", &clk_sclk_mmc0.clk),
CLKDEV_INIT("s3c-sdhci.1", "mmc_busclk.2", &clk_sclk_mmc1.clk),
CLKDEV_INIT("s3c-sdhci.2", "mmc_busclk.2", &clk_sclk_mmc2.clk),
*/
#include <linux/gpio.h>
-#include <linux/platform_device.h>
-#include <linux/io.h>
-
#include <plat/gpio-cfg.h>
-#include <plat/cpu.h>
-#include <plat/s3c64xx-spi.h>
#ifdef CONFIG_S3C64XX_DEV_SPI0
-struct s3c64xx_spi_info s3c64xx_spi0_pdata __initdata = {
- .fifo_lvl_mask = 0x1ff,
- .rx_lvl_offset = 15,
- .tx_st_done = 25,
-};
-
-int s3c64xx_spi0_cfg_gpio(struct platform_device *dev)
+int s3c64xx_spi0_cfg_gpio(void)
{
if (soc_is_s5p6450())
s3c_gpio_cfgall_range(S5P6450_GPC(0), 3,
#endif
#ifdef CONFIG_S3C64XX_DEV_SPI1
-struct s3c64xx_spi_info s3c64xx_spi1_pdata __initdata = {
- .fifo_lvl_mask = 0x7f,
- .rx_lvl_offset = 15,
- .tx_st_done = 25,
-};
-
-int s3c64xx_spi1_cfg_gpio(struct platform_device *dev)
+int s3c64xx_spi1_cfg_gpio(void)
{
if (soc_is_s5p6450())
s3c_gpio_cfgall_range(S5P6450_GPC(4), 3,
.ctrlbit = (1 << 5),
}, {
.name = "spi",
- .devname = "s3c64xx-spi.0",
+ .devname = "s5pc100-spi.0",
.parent = &clk_div_d1_bus.clk,
.enable = s5pc100_d1_4_ctrl,
.ctrlbit = (1 << 6),
}, {
.name = "spi",
- .devname = "s3c64xx-spi.1",
+ .devname = "s5pc100-spi.1",
.parent = &clk_div_d1_bus.clk,
.enable = s5pc100_d1_4_ctrl,
.ctrlbit = (1 << 7),
}, {
.name = "spi",
- .devname = "s3c64xx-spi.2",
+ .devname = "s5pc100-spi.2",
.parent = &clk_div_d1_bus.clk,
.enable = s5pc100_d1_4_ctrl,
.ctrlbit = (1 << 8),
static struct clk clk_48m_spi0 = {
.name = "spi_48m",
- .devname = "s3c64xx-spi.0",
+ .devname = "s5pc100-spi.0",
.parent = &clk_mout_48m.clk,
.enable = s5pc100_sclk0_ctrl,
.ctrlbit = (1 << 7),
static struct clk clk_48m_spi1 = {
.name = "spi_48m",
- .devname = "s3c64xx-spi.1",
+ .devname = "s5pc100-spi.1",
.parent = &clk_mout_48m.clk,
.enable = s5pc100_sclk0_ctrl,
.ctrlbit = (1 << 8),
static struct clk clk_48m_spi2 = {
.name = "spi_48m",
- .devname = "s3c64xx-spi.2",
+ .devname = "s5pc100-spi.2",
.parent = &clk_mout_48m.clk,
.enable = s5pc100_sclk0_ctrl,
.ctrlbit = (1 << 9),
static struct clksrc_clk clk_sclk_spi0 = {
.clk = {
.name = "sclk_spi",
- .devname = "s3c64xx-spi.0",
+ .devname = "s5pc100-spi.0",
.ctrlbit = (1 << 4),
.enable = s5pc100_sclk0_ctrl,
},
static struct clksrc_clk clk_sclk_spi1 = {
.clk = {
.name = "sclk_spi",
- .devname = "s3c64xx-spi.1",
+ .devname = "s5pc100-spi.1",
.ctrlbit = (1 << 5),
.enable = s5pc100_sclk0_ctrl,
},
static struct clksrc_clk clk_sclk_spi2 = {
.clk = {
.name = "sclk_spi",
- .devname = "s3c64xx-spi.2",
+ .devname = "s5pc100-spi.2",
.ctrlbit = (1 << 6),
.enable = s5pc100_sclk0_ctrl,
},
CLKDEV_INIT("s3c-sdhci.1", "mmc_busclk.2", &clk_sclk_mmc1.clk),
CLKDEV_INIT("s3c-sdhci.2", "mmc_busclk.2", &clk_sclk_mmc2.clk),
CLKDEV_INIT(NULL, "spi_busclk0", &clk_p),
- CLKDEV_INIT("s3c64xx-spi.0", "spi_busclk1", &clk_48m_spi0),
- CLKDEV_INIT("s3c64xx-spi.0", "spi_busclk2", &clk_sclk_spi0.clk),
- CLKDEV_INIT("s3c64xx-spi.1", "spi_busclk1", &clk_48m_spi1),
- CLKDEV_INIT("s3c64xx-spi.1", "spi_busclk2", &clk_sclk_spi1.clk),
- CLKDEV_INIT("s3c64xx-spi.2", "spi_busclk1", &clk_48m_spi2),
- CLKDEV_INIT("s3c64xx-spi.2", "spi_busclk2", &clk_sclk_spi2.clk),
+ CLKDEV_INIT("s5pc100-spi.0", "spi_busclk1", &clk_48m_spi0),
+ CLKDEV_INIT("s5pc100-spi.0", "spi_busclk2", &clk_sclk_spi0.clk),
+ CLKDEV_INIT("s5pc100-spi.1", "spi_busclk1", &clk_48m_spi1),
+ CLKDEV_INIT("s5pc100-spi.1", "spi_busclk2", &clk_sclk_spi1.clk),
+ CLKDEV_INIT("s5pc100-spi.2", "spi_busclk1", &clk_48m_spi2),
+ CLKDEV_INIT("s5pc100-spi.2", "spi_busclk2", &clk_sclk_spi2.clk),
};
void __init s5pc100_register_clocks(void)
*/
#include <linux/gpio.h>
-#include <linux/platform_device.h>
-
#include <plat/gpio-cfg.h>
-#include <plat/s3c64xx-spi.h>
#ifdef CONFIG_S3C64XX_DEV_SPI0
-struct s3c64xx_spi_info s3c64xx_spi0_pdata __initdata = {
- .fifo_lvl_mask = 0x7f,
- .rx_lvl_offset = 13,
- .high_speed = 1,
- .tx_st_done = 21,
-};
-
-int s3c64xx_spi0_cfg_gpio(struct platform_device *dev)
+int s3c64xx_spi0_cfg_gpio(void)
{
s3c_gpio_cfgall_range(S5PC100_GPB(0), 3,
S3C_GPIO_SFN(2), S3C_GPIO_PULL_UP);
#endif
#ifdef CONFIG_S3C64XX_DEV_SPI1
-struct s3c64xx_spi_info s3c64xx_spi1_pdata __initdata = {
- .fifo_lvl_mask = 0x7f,
- .rx_lvl_offset = 13,
- .high_speed = 1,
- .tx_st_done = 21,
-};
-
-int s3c64xx_spi1_cfg_gpio(struct platform_device *dev)
+int s3c64xx_spi1_cfg_gpio(void)
{
s3c_gpio_cfgall_range(S5PC100_GPB(4), 3,
S3C_GPIO_SFN(2), S3C_GPIO_PULL_UP);
#endif
#ifdef CONFIG_S3C64XX_DEV_SPI2
-struct s3c64xx_spi_info s3c64xx_spi2_pdata __initdata = {
- .fifo_lvl_mask = 0x7f,
- .rx_lvl_offset = 13,
- .high_speed = 1,
- .tx_st_done = 21,
-};
-
-int s3c64xx_spi2_cfg_gpio(struct platform_device *dev)
+int s3c64xx_spi2_cfg_gpio(void)
{
s3c_gpio_cfgpin(S5PC100_GPG3(0), S3C_GPIO_SFN(3));
s3c_gpio_setpull(S5PC100_GPG3(0), S3C_GPIO_PULL_UP);
.ctrlbit = (1 << 11),
}, {
.name = "spi",
- .devname = "s3c64xx-spi.0",
+ .devname = "s5pv210-spi.0",
.parent = &clk_pclk_psys.clk,
.enable = s5pv210_clk_ip3_ctrl,
.ctrlbit = (1<<12),
}, {
.name = "spi",
- .devname = "s3c64xx-spi.1",
+ .devname = "s5pv210-spi.1",
.parent = &clk_pclk_psys.clk,
.enable = s5pv210_clk_ip3_ctrl,
.ctrlbit = (1<<13),
}, {
.name = "spi",
- .devname = "s3c64xx-spi.2",
+ .devname = "s5pv210-spi.2",
.parent = &clk_pclk_psys.clk,
.enable = s5pv210_clk_ip3_ctrl,
.ctrlbit = (1<<14),
static struct clksrc_clk clk_sclk_spi0 = {
.clk = {
.name = "sclk_spi",
- .devname = "s3c64xx-spi.0",
+ .devname = "s5pv210-spi.0",
.enable = s5pv210_clk_mask0_ctrl,
.ctrlbit = (1 << 16),
},
static struct clksrc_clk clk_sclk_spi1 = {
.clk = {
.name = "sclk_spi",
- .devname = "s3c64xx-spi.1",
+ .devname = "s5pv210-spi.1",
.enable = s5pv210_clk_mask0_ctrl,
.ctrlbit = (1 << 17),
},
CLKDEV_INIT("s3c-sdhci.2", "mmc_busclk.2", &clk_sclk_mmc2.clk),
CLKDEV_INIT("s3c-sdhci.3", "mmc_busclk.2", &clk_sclk_mmc3.clk),
CLKDEV_INIT(NULL, "spi_busclk0", &clk_p),
- CLKDEV_INIT("s3c64xx-spi.0", "spi_busclk1", &clk_sclk_spi0.clk),
- CLKDEV_INIT("s3c64xx-spi.1", "spi_busclk1", &clk_sclk_spi1.clk),
+ CLKDEV_INIT("s5pv210-spi.0", "spi_busclk1", &clk_sclk_spi0.clk),
+ CLKDEV_INIT("s5pv210-spi.1", "spi_busclk1", &clk_sclk_spi1.clk),
};
void __init s5pv210_register_clocks(void)
*/
#include <linux/gpio.h>
-#include <linux/platform_device.h>
-
#include <plat/gpio-cfg.h>
-#include <plat/s3c64xx-spi.h>
#ifdef CONFIG_S3C64XX_DEV_SPI0
-struct s3c64xx_spi_info s3c64xx_spi0_pdata = {
- .fifo_lvl_mask = 0x1ff,
- .rx_lvl_offset = 15,
- .high_speed = 1,
- .tx_st_done = 25,
-};
-
-int s3c64xx_spi0_cfg_gpio(struct platform_device *dev)
+int s3c64xx_spi0_cfg_gpio(void)
{
s3c_gpio_cfgpin(S5PV210_GPB(0), S3C_GPIO_SFN(2));
s3c_gpio_setpull(S5PV210_GPB(0), S3C_GPIO_PULL_UP);
#endif
#ifdef CONFIG_S3C64XX_DEV_SPI1
-struct s3c64xx_spi_info s3c64xx_spi1_pdata = {
- .fifo_lvl_mask = 0x7f,
- .rx_lvl_offset = 15,
- .high_speed = 1,
- .tx_st_done = 25,
-};
-
-int s3c64xx_spi1_cfg_gpio(struct platform_device *dev)
+int s3c64xx_spi1_cfg_gpio(void)
{
s3c_gpio_cfgpin(S5PV210_GPB(4), S3C_GPIO_SFN(2));
s3c_gpio_setpull(S5PV210_GPB(4), S3C_GPIO_PULL_UP);
* - end - virtual end address
*/
ENTRY(v3_coherent_user_range)
+ mov r0, #0
mov pc, lr
/*
* - end - virtual end address
*/
ENTRY(v4_coherent_user_range)
+ mov r0, #0
mov pc, lr
/*
add r0, r0, #CACHE_DLINESIZE
cmp r0, r1
blo 1b
- mov ip, #0
- mcr p15, 0, ip, c7, c5, 0 @ invalidate I cache
- mcr p15, 0, ip, c7, c10, 4 @ drain WB
+ mov r0, #0
+ mcr p15, 0, r0, c7, c5, 0 @ invalidate I cache
+ mcr p15, 0, r0, c7, c10, 4 @ drain WB
mov pc, lr
add r0, r0, #CACHE_DLINESIZE
cmp r0, r1
blo 1b
+ mov r0, #0
mov pc, lr
/*
#include <linux/linkage.h>
#include <linux/init.h>
#include <asm/assembler.h>
+#include <asm/errno.h>
#include <asm/unwind.h>
#include "proc-macros.S"
1:
USER( mcr p15, 0, r0, c7, c10, 1 ) @ clean D line
add r0, r0, #CACHE_LINE_SIZE
-2:
cmp r0, r1
blo 1b
#endif
/*
* Fault handling for the cache operation above. If the virtual address in r0
- * isn't mapped, just try the next page.
+ * isn't mapped, fail with -EFAULT.
*/
9001:
- mov r0, r0, lsr #12
- mov r0, r0, lsl #12
- add r0, r0, #4096
- b 2b
+ mov r0, #-EFAULT
+ mov pc, lr
UNWIND(.fnend )
ENDPROC(v6_coherent_user_range)
ENDPROC(v6_coherent_kern_range)
#include <linux/linkage.h>
#include <linux/init.h>
#include <asm/assembler.h>
+#include <asm/errno.h>
#include <asm/unwind.h>
#include "proc-macros.S"
add r12, r12, r2
cmp r12, r1
blo 2b
-3:
mov r0, #0
ALT_SMP(mcr p15, 0, r0, c7, c1, 6) @ invalidate BTB Inner Shareable
ALT_UP(mcr p15, 0, r0, c7, c5, 6) @ invalidate BTB
/*
* Fault handling for the cache operation above. If the virtual address in r0
- * isn't mapped, just try the next page.
+ * isn't mapped, fail with -EFAULT.
*/
9001:
- mov r12, r12, lsr #12
- mov r12, r12, lsl #12
- add r12, r12, #4096
- b 3b
+ mov r0, #-EFAULT
+ mov pc, lr
UNWIND(.fnend )
ENDPROC(v7_coherent_kern_range)
ENDPROC(v7_coherent_user_range)
#include <linux/init.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
+#include <linux/dma-contiguous.h>
#include <linux/highmem.h>
+#include <linux/memblock.h>
#include <linux/slab.h>
+#include <linux/iommu.h>
+#include <linux/io.h>
+#include <linux/vmalloc.h>
#include <asm/memory.h>
#include <asm/highmem.h>
#include <asm/tlbflush.h>
#include <asm/sizes.h>
#include <asm/mach/arch.h>
+#include <asm/dma-iommu.h>
+#include <asm/mach/map.h>
+#include <asm/system_info.h>
+#include <asm/dma-contiguous.h>
#include "mm.h"
+/*
+ * The DMA API is built upon the notion of "buffer ownership". A buffer
+ * is either exclusively owned by the CPU (and therefore may be accessed
+ * by it) or exclusively owned by the DMA device. These helper functions
+ * represent the transitions between these two ownership states.
+ *
+ * Note, however, that on later ARMs, this notion does not work due to
+ * speculative prefetches. We model our approach on the assumption that
+ * the CPU does do speculative prefetches, which means we clean caches
+ * before transfers and delay cache invalidation until transfer completion.
+ *
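+ * For illustration, a typical streaming mapping for a device-to-memory
+ * transfer looks like:
+ *
+ *	handle = dma_map_page(dev, page, 0, size, DMA_FROM_DEVICE);
+ *	... the device owns the buffer and performs the transfer ...
+ *	dma_unmap_page(dev, handle, size, DMA_FROM_DEVICE);
+ *	... ownership is back with the CPU, which may now read the data ...
+ *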
+ */
+static void __dma_page_cpu_to_dev(struct page *, unsigned long,
+ size_t, enum dma_data_direction);
+static void __dma_page_dev_to_cpu(struct page *, unsigned long,
+ size_t, enum dma_data_direction);
+
+/**
+ * arm_dma_map_page - map a portion of a page for streaming DMA
+ * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
+ * @page: page that buffer resides in
+ * @offset: offset into page for start of buffer
+ * @size: size of buffer to map
+ * @dir: DMA transfer direction
+ *
+ * Ensure that any data held in the cache is appropriately discarded
+ * or written back.
+ *
+ * The device owns this memory once this call has completed. The CPU
+ * can regain ownership by calling dma_unmap_page().
+ */
+static dma_addr_t arm_dma_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+{
+ if (!arch_is_coherent() && !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+ __dma_page_cpu_to_dev(page, offset, size, dir);
+ return pfn_to_dma(dev, page_to_pfn(page)) + offset;
+}
+
+/**
+ * arm_dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
+ * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
+ * @handle: DMA address of buffer
+ * @size: size of buffer (same as passed to dma_map_page)
+ * @dir: DMA transfer direction (same as passed to dma_map_page)
+ *
+ * Unmap a page streaming mode DMA translation. The handle and size
+ * must match what was provided in the previous dma_map_page() call.
+ * All other usages are undefined.
+ *
+ * After this call, reads by the CPU to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+static void arm_dma_unmap_page(struct device *dev, dma_addr_t handle,
+ size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+{
+ if (!arch_is_coherent() && !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+ __dma_page_dev_to_cpu(pfn_to_page(dma_to_pfn(dev, handle)),
+ handle & ~PAGE_MASK, size, dir);
+}
+
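+/*
+ * The sync_single helpers move ownership of an already-mapped buffer
+ * between the CPU and the device without tearing down the mapping.
+ */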
+static void arm_dma_sync_single_for_cpu(struct device *dev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+ unsigned int offset = handle & (PAGE_SIZE - 1);
+ struct page *page = pfn_to_page(dma_to_pfn(dev, handle-offset));
+ if (!arch_is_coherent())
+ __dma_page_dev_to_cpu(page, offset, size, dir);
+}
+
+static void arm_dma_sync_single_for_device(struct device *dev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+ unsigned int offset = handle & (PAGE_SIZE - 1);
+ struct page *page = pfn_to_page(dma_to_pfn(dev, handle-offset));
+ if (!arch_is_coherent())
+ __dma_page_cpu_to_dev(page, offset, size, dir);
+}
+
+static int arm_dma_set_mask(struct device *dev, u64 dma_mask);
+
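+/* Standard (non-IOMMU) set of DMA mapping operations for ARM */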
+struct dma_map_ops arm_dma_ops = {
+ .alloc = arm_dma_alloc,
+ .free = arm_dma_free,
+ .mmap = arm_dma_mmap,
+ .get_sgtable = arm_dma_get_sgtable,
+ .map_page = arm_dma_map_page,
+ .unmap_page = arm_dma_unmap_page,
+ .map_sg = arm_dma_map_sg,
+ .unmap_sg = arm_dma_unmap_sg,
+ .sync_single_for_cpu = arm_dma_sync_single_for_cpu,
+ .sync_single_for_device = arm_dma_sync_single_for_device,
+ .sync_sg_for_cpu = arm_dma_sync_sg_for_cpu,
+ .sync_sg_for_device = arm_dma_sync_sg_for_device,
+ .set_dma_mask = arm_dma_set_mask,
+};
+EXPORT_SYMBOL(arm_dma_ops);
+
static u64 get_coherent_dma_mask(struct device *dev)
{
u64 mask = (u64)arm_dma_limit;
return mask;
}
+static void __dma_clear_buffer(struct page *page, size_t size)
+{
+ void *ptr;
+ /*
+ * Ensure that the allocated pages are zeroed, and that any data
+ * lurking in the kernel direct-mapped region is invalidated.
+ */
+ ptr = page_address(page);
+ if (ptr) {
+ memset(ptr, 0, size);
+ dmac_flush_range(ptr, ptr + size);
+ outer_flush_range(__pa(ptr), __pa(ptr) + size);
+ }
+}
+
/*
* Allocate a DMA buffer for 'dev' of size 'size' using the
* specified gfp mask. Note that 'size' must be page aligned.
{
unsigned long order = get_order(size);
struct page *page, *p, *e;
- void *ptr;
- u64 mask = get_coherent_dma_mask(dev);
-
-#ifdef CONFIG_DMA_API_DEBUG
- u64 limit = (mask + 1) & ~mask;
- if (limit && size >= limit) {
- dev_warn(dev, "coherent allocation too big (requested %#x mask %#llx)\n",
- size, mask);
- return NULL;
- }
-#endif
-
- if (!mask)
- return NULL;
-
- if (mask < 0xffffffffULL)
- gfp |= GFP_DMA;
page = alloc_pages(gfp, order);
if (!page)
for (p = page + (size >> PAGE_SHIFT), e = page + (1 << order); p < e; p++)
__free_page(p);
- /*
- * Ensure that the allocated pages are zeroed, and that any data
- * lurking in the kernel direct-mapped region is invalidated.
- */
- ptr = page_address(page);
- memset(ptr, 0, size);
- dmac_flush_range(ptr, ptr + size);
- outer_flush_range(__pa(ptr), __pa(ptr) + size);
+ __dma_clear_buffer(page, size);
return page;
}
}
#ifdef CONFIG_MMU
+#ifdef CONFIG_HUGETLB_PAGE
+#error ARM Coherent DMA allocator does not (yet) support huge TLB
+#endif
-#define CONSISTENT_OFFSET(x) (((unsigned long)(x) - consistent_base) >> PAGE_SHIFT)
-#define CONSISTENT_PTE_INDEX(x) (((unsigned long)(x) - consistent_base) >> PMD_SHIFT)
-
-/*
- * These are the page tables (2MB each) covering uncached, DMA consistent allocations
- */
-static pte_t **consistent_pte;
-
-#define DEFAULT_CONSISTENT_DMA_SIZE SZ_2M
+static void *__alloc_from_contiguous(struct device *dev, size_t size,
+ pgprot_t prot, struct page **ret_page);
-unsigned long consistent_base = CONSISTENT_END - DEFAULT_CONSISTENT_DMA_SIZE;
+static void *__alloc_remap_buffer(struct device *dev, size_t size, gfp_t gfp,
+ pgprot_t prot, struct page **ret_page,
+ const void *caller);
-void __init init_consistent_dma_size(unsigned long size)
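+/*
+ * Map the given page range into vmalloc space with the requested page
+ * protection; the area is tagged VM_DMA so __dma_free_remap() can
+ * recognise it later.
+ */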
+static void *
+__dma_alloc_remap(struct page *page, size_t size, gfp_t gfp, pgprot_t prot,
+ const void *caller)
{
- unsigned long base = CONSISTENT_END - ALIGN(size, SZ_2M);
+ struct vm_struct *area;
+ unsigned long addr;
+
+ area = get_vm_area_caller(size, VM_DMA | VM_USERMAP, caller);
+ if (!area)
+ return NULL;
+ addr = (unsigned long)area->addr;
+ area->phys_addr = __pfn_to_phys(page_to_pfn(page));
- BUG_ON(consistent_pte); /* Check we're called before DMA region init */
- BUG_ON(base < VMALLOC_END);
+ if (ioremap_page_range(addr, addr + size, area->phys_addr, prot)) {
+ vunmap((void *)addr);
+ return NULL;
+ }
+ return (void *)addr;
+}
- /* Grow region to accommodate specified size */
- if (base < consistent_base)
- consistent_base = base;
+static void __dma_free_remap(void *cpu_addr, size_t size)
+{
+ struct vm_struct *area = find_vm_area(cpu_addr);
+ if (!area || !(area->flags & VM_DMA)) {
+ pr_err("%s: trying to free invalid coherent area: %p\n",
+ __func__, cpu_addr);
+ dump_stack();
+ return;
+ }
+ unmap_kernel_range((unsigned long)cpu_addr, size);
+ vunmap(cpu_addr);
}
-#include "vmregion.h"
+struct dma_pool {
+ size_t size;
+ spinlock_t lock;
+ unsigned long *bitmap;
+ unsigned long nr_pages;
+ void *vaddr;
+ struct page *page;
+};
-static struct arm_vmregion_head consistent_head = {
- .vm_lock = __SPIN_LOCK_UNLOCKED(&consistent_head.vm_lock),
- .vm_list = LIST_HEAD_INIT(consistent_head.vm_list),
- .vm_end = CONSISTENT_END,
+static struct dma_pool atomic_pool = {
+ .size = SZ_256K,
};
-#ifdef CONFIG_HUGETLB_PAGE
-#error ARM Coherent DMA allocator does not (yet) support huge TLB
-#endif
+static int __init early_coherent_pool(char *p)
+{
+ atomic_pool.size = memparse(p, &p);
+ return 0;
+}
+early_param("coherent_pool", early_coherent_pool);
/*
- * Initialise the consistent memory allocation.
+ * Initialise the coherent pool for atomic allocations.
*/
-static int __init consistent_init(void)
+static int __init atomic_pool_init(void)
{
- int ret = 0;
- pgd_t *pgd;
- pud_t *pud;
- pmd_t *pmd;
- pte_t *pte;
- int i = 0;
- unsigned long base = consistent_base;
- unsigned long num_ptes = (CONSISTENT_END - base) >> PMD_SHIFT;
+ struct dma_pool *pool = &atomic_pool;
+ pgprot_t prot = pgprot_dmacoherent(pgprot_kernel);
+ unsigned long nr_pages = pool->size >> PAGE_SHIFT;
+ unsigned long *bitmap;
+ struct page *page;
+ void *ptr;
+ int bitmap_size = BITS_TO_LONGS(nr_pages) * sizeof(long);
- consistent_pte = kmalloc(num_ptes * sizeof(pte_t), GFP_KERNEL);
- if (!consistent_pte) {
- pr_err("%s: no memory\n", __func__);
- return -ENOMEM;
+ bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+ if (!bitmap)
+ goto no_bitmap;
+
+ if (IS_ENABLED(CONFIG_CMA))
+ ptr = __alloc_from_contiguous(NULL, pool->size, prot, &page);
+ else
+ ptr = __alloc_remap_buffer(NULL, pool->size, GFP_KERNEL, prot,
+ &page, NULL);
+ if (ptr) {
+ spin_lock_init(&pool->lock);
+ pool->vaddr = ptr;
+ pool->page = page;
+ pool->bitmap = bitmap;
+ pool->nr_pages = nr_pages;
+ pr_info("DMA: preallocated %u KiB pool for atomic coherent allocations\n",
+ (unsigned)pool->size / 1024);
+ return 0;
}
+ kfree(bitmap);
+no_bitmap:
+ pr_err("DMA: failed to allocate %u KiB pool for atomic coherent allocation\n",
+ (unsigned)pool->size / 1024);
+ return -ENOMEM;
+}
+/*
+ * CMA is activated by core_initcall, so we must be called after it.
+ */
+postcore_initcall(atomic_pool_init);
- pr_debug("DMA memory: 0x%08lx - 0x%08lx:\n", base, CONSISTENT_END);
- consistent_head.vm_start = base;
+struct dma_contig_early_reserve {
+ phys_addr_t base;
+ unsigned long size;
+};
- do {
- pgd = pgd_offset(&init_mm, base);
+static struct dma_contig_early_reserve dma_mmu_remap[MAX_CMA_AREAS] __initdata;
- pud = pud_alloc(&init_mm, pgd, base);
- if (!pud) {
- printk(KERN_ERR "%s: no pud tables\n", __func__);
- ret = -ENOMEM;
- break;
- }
+static int dma_mmu_remap_num __initdata;
- pmd = pmd_alloc(&init_mm, pud, base);
- if (!pmd) {
- printk(KERN_ERR "%s: no pmd tables\n", __func__);
- ret = -ENOMEM;
- break;
- }
- WARN_ON(!pmd_none(*pmd));
+void __init dma_contiguous_early_fixup(phys_addr_t base, unsigned long size)
+{
+ dma_mmu_remap[dma_mmu_remap_num].base = base;
+ dma_mmu_remap[dma_mmu_remap_num].size = size;
+ dma_mmu_remap_num++;
+}
- pte = pte_alloc_kernel(pmd, base);
- if (!pte) {
- printk(KERN_ERR "%s: no pte tables\n", __func__);
- ret = -ENOMEM;
- break;
- }
+void __init dma_contiguous_remap(void)
+{
+ int i;
+ for (i = 0; i < dma_mmu_remap_num; i++) {
+ phys_addr_t start = dma_mmu_remap[i].base;
+ phys_addr_t end = start + dma_mmu_remap[i].size;
+ struct map_desc map;
+ unsigned long addr;
+
+ if (end > arm_lowmem_limit)
+ end = arm_lowmem_limit;
+ if (start >= end)
+ continue;
+
+ map.pfn = __phys_to_pfn(start);
+ map.virtual = __phys_to_virt(start);
+ map.length = end - start;
+ map.type = MT_MEMORY_DMA_READY;
- consistent_pte[i++] = pte;
- base += PMD_SIZE;
- } while (base < CONSISTENT_END);
+ /*
+ * Clear previous low-memory mapping
+ */
+ for (addr = __phys_to_virt(start); addr < __phys_to_virt(end);
+ addr += PMD_SIZE)
+ pmd_clear(pmd_off_k(addr));
- return ret;
+ iotable_init(&map, 1);
+ }
}
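As a usage sketch, the areas remapped above would have been reserved earlier by board code through the CMA API from this series; the device and size here are placeholders only:

	/* hypothetical reservation: 16 MiB dedicated to one device */
	dma_declare_contiguous(&example_pdev.dev, SZ_16M, 0, 0);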
-core_initcall(consistent_init);
+static int __dma_update_pte(pte_t *pte, pgtable_t token, unsigned long addr,
+ void *data)
+{
+ struct page *page = virt_to_page(addr);
+ pgprot_t prot = *(pgprot_t *)data;
-static void *
-__dma_alloc_remap(struct page *page, size_t size, gfp_t gfp, pgprot_t prot,
- const void *caller)
+ set_pte_ext(pte, mk_pte(page, prot), 0);
+ return 0;
+}
+
+static void __dma_remap(struct page *page, size_t size, pgprot_t prot)
+{
+ unsigned long start = (unsigned long) page_address(page);
+ unsigned long end = start + size;
+
+ apply_to_page_range(&init_mm, start, size, __dma_update_pte, &prot);
+ dsb();
+ flush_tlb_kernel_range(start, end);
+}
+
+static void *__alloc_remap_buffer(struct device *dev, size_t size, gfp_t gfp,
+ pgprot_t prot, struct page **ret_page,
+ const void *caller)
+{
+ struct page *page;
+ void *ptr;
+ page = __dma_alloc_buffer(dev, size, gfp);
+ if (!page)
+ return NULL;
+
+ ptr = __dma_alloc_remap(page, size, gfp, prot, caller);
+ if (!ptr) {
+ __dma_free_buffer(page, size);
+ return NULL;
+ }
+
+ *ret_page = page;
+ return ptr;
+}
+
+static void *__alloc_from_pool(size_t size, struct page **ret_page)
{
- struct arm_vmregion *c;
+ struct dma_pool *pool = &atomic_pool;
+ unsigned int count = size >> PAGE_SHIFT;
+ unsigned int pageno;
+ unsigned long flags;
+ void *ptr = NULL;
size_t align;
- int bit;
- if (!consistent_pte) {
- printk(KERN_ERR "%s: not initialised\n", __func__);
+ if (!pool->vaddr) {
+ pr_err("%s: coherent pool not initialised!\n", __func__);
dump_stack();
return NULL;
}
/*
- * Align the virtual region allocation - maximum alignment is
- * a section size, minimum is a page size. This helps reduce
- * fragmentation of the DMA space, and also prevents allocations
- * smaller than a section from crossing a section boundary.
- */
- bit = fls(size - 1);
- if (bit > SECTION_SHIFT)
- bit = SECTION_SHIFT;
- align = 1 << bit;
-
- /*
- * Allocate a virtual address in the consistent mapping region.
+ * Align the region allocation - allocations from pool are rather
+ * small, so align them to their order in pages, minimum is a page
+ * size. This helps reduce fragmentation of the DMA space.
*/
- c = arm_vmregion_alloc(&consistent_head, align, size,
- gfp & ~(__GFP_DMA | __GFP_HIGHMEM), caller);
- if (c) {
- pte_t *pte;
- int idx = CONSISTENT_PTE_INDEX(c->vm_start);
- u32 off = CONSISTENT_OFFSET(c->vm_start) & (PTRS_PER_PTE-1);
-
- pte = consistent_pte[idx] + off;
- c->vm_pages = page;
-
- do {
- BUG_ON(!pte_none(*pte));
-
- set_pte_ext(pte, mk_pte(page, prot), 0);
- page++;
- pte++;
- off++;
- if (off >= PTRS_PER_PTE) {
- off = 0;
- pte = consistent_pte[++idx];
- }
- } while (size -= PAGE_SIZE);
-
- dsb();
-
- return (void *)c->vm_start;
+ align = (1 << get_order(size)) - 1;
+
+ spin_lock_irqsave(&pool->lock, flags);
+ pageno = bitmap_find_next_zero_area(pool->bitmap, pool->nr_pages,
+ 0, count, align);
+ if (pageno < pool->nr_pages) {
+ bitmap_set(pool->bitmap, pageno, count);
+ ptr = pool->vaddr + PAGE_SIZE * pageno;
+ *ret_page = pool->page + pageno;
}
- return NULL;
+ spin_unlock_irqrestore(&pool->lock, flags);
+
+ return ptr;
}
-static void __dma_free_remap(void *cpu_addr, size_t size)
+static int __free_from_pool(void *start, size_t size)
{
- struct arm_vmregion *c;
- unsigned long addr;
- pte_t *ptep;
- int idx;
- u32 off;
+ struct dma_pool *pool = &atomic_pool;
+ unsigned long pageno, count;
+ unsigned long flags;
- c = arm_vmregion_find_remove(&consistent_head, (unsigned long)cpu_addr);
- if (!c) {
- printk(KERN_ERR "%s: trying to free invalid coherent area: %p\n",
- __func__, cpu_addr);
- dump_stack();
- return;
- }
+ if (start < pool->vaddr || start >= pool->vaddr + pool->size)
+ return 0;
- if ((c->vm_end - c->vm_start) != size) {
- printk(KERN_ERR "%s: freeing wrong coherent size (%ld != %d)\n",
- __func__, c->vm_end - c->vm_start, size);
+ if (start + size > pool->vaddr + pool->size) {
+ pr_err("%s: freeing wrong coherent size from pool\n", __func__);
dump_stack();
- size = c->vm_end - c->vm_start;
+ return 0;
}
- idx = CONSISTENT_PTE_INDEX(c->vm_start);
- off = CONSISTENT_OFFSET(c->vm_start) & (PTRS_PER_PTE-1);
- ptep = consistent_pte[idx] + off;
- addr = c->vm_start;
- do {
- pte_t pte = ptep_get_and_clear(&init_mm, addr, ptep);
-
- ptep++;
- addr += PAGE_SIZE;
- off++;
- if (off >= PTRS_PER_PTE) {
- off = 0;
- ptep = consistent_pte[++idx];
- }
+ pageno = (start - pool->vaddr) >> PAGE_SHIFT;
+ count = size >> PAGE_SHIFT;
- if (pte_none(pte) || !pte_present(pte))
- printk(KERN_CRIT "%s: bad page in kernel page table\n",
- __func__);
- } while (size -= PAGE_SIZE);
+ spin_lock_irqsave(&pool->lock, flags);
+ bitmap_clear(pool->bitmap, pageno, count);
+ spin_unlock_irqrestore(&pool->lock, flags);
- flush_tlb_kernel_range(c->vm_start, c->vm_end);
+ return 1;
+}
+
+static void *__alloc_from_contiguous(struct device *dev, size_t size,
+ pgprot_t prot, struct page **ret_page)
+{
+ unsigned long order = get_order(size);
+ size_t count = size >> PAGE_SHIFT;
+ struct page *page;
+
+ page = dma_alloc_from_contiguous(dev, count, order);
+ if (!page)
+ return NULL;
+
+ __dma_clear_buffer(page, size);
+ __dma_remap(page, size, prot);
+
+ *ret_page = page;
+ return page_address(page);
+}
- arm_vmregion_free(&consistent_head, c);
+static void __free_from_contiguous(struct device *dev, struct page *page,
+ size_t size)
+{
+ __dma_remap(page, size, pgprot_kernel);
+ dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT);
+}
+
+static inline pgprot_t __get_dma_pgprot(struct dma_attrs *attrs, pgprot_t prot)
+{
+ prot = dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs) ?
+ pgprot_writecombine(prot) :
+ pgprot_dmacoherent(prot);
+ return prot;
}
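As a driver-side sketch (assumed usage, not part of this patch), a write-combined buffer would be requested through the dma_attrs interface that this helper consumes; dev and SZ_64K are placeholders:

	DEFINE_DMA_ATTRS(attrs);
	dma_addr_t dma;
	void *vaddr;

	dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
	vaddr = dma_alloc_attrs(dev, SZ_64K, &dma, GFP_KERNEL, &attrs);
	if (!vaddr)
		return -ENOMEM;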
+#define nommu() 0
+
#else /* !CONFIG_MMU */
-#define __dma_alloc_remap(page, size, gfp, prot, c) page_address(page)
-#define __dma_free_remap(addr, size) do { } while (0)
+#define nommu() 1
+
+#define __get_dma_pgprot(attrs, prot) __pgprot(0)
+#define __alloc_remap_buffer(dev, size, gfp, prot, ret, c) NULL
+#define __alloc_from_pool(size, ret_page) NULL
+#define __alloc_from_contiguous(dev, size, prot, ret) NULL
+#define __free_from_pool(cpu_addr, size) 0
+#define __free_from_contiguous(dev, page, size) do { } while (0)
+#define __dma_free_remap(cpu_addr, size) do { } while (0)
#endif /* CONFIG_MMU */
-static void *
-__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp,
- pgprot_t prot, const void *caller)
+static void *__alloc_simple_buffer(struct device *dev, size_t size, gfp_t gfp,
+ struct page **ret_page)
+{
+ struct page *page;
+ page = __dma_alloc_buffer(dev, size, gfp);
+ if (!page)
+ return NULL;
+
+ *ret_page = page;
+ return page_address(page);
+}
+
+static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
+ gfp_t gfp, pgprot_t prot, const void *caller)
{
+ u64 mask = get_coherent_dma_mask(dev);
struct page *page;
void *addr;
+#ifdef CONFIG_DMA_API_DEBUG
+ u64 limit = (mask + 1) & ~mask;
+ if (limit && size >= limit) {
+ dev_warn(dev, "coherent allocation too big (requested %#x mask %#llx)\n",
+ size, mask);
+ return NULL;
+ }
+#endif
+
+ if (!mask)
+ return NULL;
+
+ if (mask < 0xffffffffULL)
+ gfp |= GFP_DMA;
+
/*
* Following is a work-around (a.k.a. hack) to prevent pages
* with __GFP_COMP being passed to split_page() which cannot
*/
gfp &= ~(__GFP_COMP);
- *handle = ~0;
+ *handle = DMA_ERROR_CODE;
size = PAGE_ALIGN(size);
- page = __dma_alloc_buffer(dev, size, gfp);
- if (!page)
- return NULL;
-
- if (!arch_is_coherent())
- addr = __dma_alloc_remap(page, size, gfp, prot, caller);
+ if (arch_is_coherent() || nommu())
+ addr = __alloc_simple_buffer(dev, size, gfp, &page);
+ else if (gfp & GFP_ATOMIC)
+ addr = __alloc_from_pool(size, &page);
+ else if (!IS_ENABLED(CONFIG_CMA))
+ addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, caller);
else
- addr = page_address(page);
+ addr = __alloc_from_contiguous(dev, size, prot, &page);
if (addr)
*handle = pfn_to_dma(dev, page_to_pfn(page));
- else
- __dma_free_buffer(page, size);
return addr;
}
* Allocate DMA-coherent memory space and return both the kernel remapped
* virtual and bus address for that space.
*/
-void *
-dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp)
+void *arm_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
+ gfp_t gfp, struct dma_attrs *attrs)
{
+ pgprot_t prot = __get_dma_pgprot(attrs, pgprot_kernel);
void *memory;
if (dma_alloc_from_coherent(dev, size, handle, &memory))
return memory;
- return __dma_alloc(dev, size, handle, gfp,
- pgprot_dmacoherent(pgprot_kernel),
+ return __dma_alloc(dev, size, handle, gfp, prot,
__builtin_return_address(0));
}
-EXPORT_SYMBOL(dma_alloc_coherent);
/*
- * Allocate a writecombining region, in much the same way as
- * dma_alloc_coherent above.
+ * Create userspace mapping for the DMA-coherent memory.
*/
-void *
-dma_alloc_writecombine(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp)
-{
- return __dma_alloc(dev, size, handle, gfp,
- pgprot_writecombine(pgprot_kernel),
- __builtin_return_address(0));
-}
-EXPORT_SYMBOL(dma_alloc_writecombine);
-
-static int dma_mmap(struct device *dev, struct vm_area_struct *vma,
- void *cpu_addr, dma_addr_t dma_addr, size_t size)
+int arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+ void *cpu_addr, dma_addr_t dma_addr, size_t size,
+ struct dma_attrs *attrs)
{
int ret = -ENXIO;
#ifdef CONFIG_MMU
- unsigned long user_size, kern_size;
- struct arm_vmregion *c;
+ unsigned long user_count = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+ unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ unsigned long pfn = dma_to_pfn(dev, dma_addr);
+ unsigned long off = vma->vm_pgoff;
- user_size = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+ vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot);
- c = arm_vmregion_find(&consistent_head, (unsigned long)cpu_addr);
- if (c) {
- unsigned long off = vma->vm_pgoff;
+ if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret))
+ return ret;
- kern_size = (c->vm_end - c->vm_start) >> PAGE_SHIFT;
-
- if (off < kern_size &&
- user_size <= (kern_size - off)) {
- ret = remap_pfn_range(vma, vma->vm_start,
- page_to_pfn(c->vm_pages) + off,
- user_size << PAGE_SHIFT,
- vma->vm_page_prot);
- }
+ if (off < count && user_count <= (count - off)) {
+ ret = remap_pfn_range(vma, vma->vm_start,
+ pfn + off,
+ user_count << PAGE_SHIFT,
+ vma->vm_page_prot);
}
#endif /* CONFIG_MMU */
return ret;
}
-int dma_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
- void *cpu_addr, dma_addr_t dma_addr, size_t size)
-{
- vma->vm_page_prot = pgprot_dmacoherent(vma->vm_page_prot);
- return dma_mmap(dev, vma, cpu_addr, dma_addr, size);
-}
-EXPORT_SYMBOL(dma_mmap_coherent);
-
-int dma_mmap_writecombine(struct device *dev, struct vm_area_struct *vma,
- void *cpu_addr, dma_addr_t dma_addr, size_t size)
-{
- vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
- return dma_mmap(dev, vma, cpu_addr, dma_addr, size);
-}
-EXPORT_SYMBOL(dma_mmap_writecombine);
-
/*
- * free a page as defined by the above mapping.
- * Must not be called with IRQs disabled.
+ * Free a buffer as defined by the above mapping.
*/
-void dma_free_coherent(struct device *dev, size_t size, void *cpu_addr, dma_addr_t handle)
+void arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
+ dma_addr_t handle, struct dma_attrs *attrs)
{
- WARN_ON(irqs_disabled());
+ struct page *page = pfn_to_page(dma_to_pfn(dev, handle));
if (dma_release_from_coherent(dev, get_order(size), cpu_addr))
return;
size = PAGE_ALIGN(size);
- if (!arch_is_coherent())
+ if (arch_is_coherent() || nommu()) {
+ __dma_free_buffer(page, size);
+ } else if (!IS_ENABLED(CONFIG_CMA)) {
__dma_free_remap(cpu_addr, size);
-
- __dma_free_buffer(pfn_to_page(dma_to_pfn(dev, handle)), size);
-}
-EXPORT_SYMBOL(dma_free_coherent);
-
-/*
- * Make an area consistent for devices.
- * Note: Drivers should NOT use this function directly, as it will break
- * platforms with CONFIG_DMABOUNCE.
- * Use the driver DMA support - see dma-mapping.h (dma_sync_*)
- */
-void ___dma_single_cpu_to_dev(const void *kaddr, size_t size,
- enum dma_data_direction dir)
-{
- unsigned long paddr;
-
- BUG_ON(!virt_addr_valid(kaddr) || !virt_addr_valid(kaddr + size - 1));
-
- dmac_map_area(kaddr, size, dir);
-
- paddr = __pa(kaddr);
- if (dir == DMA_FROM_DEVICE) {
- outer_inv_range(paddr, paddr + size);
+ __dma_free_buffer(page, size);
} else {
- outer_clean_range(paddr, paddr + size);
+ if (__free_from_pool(cpu_addr, size))
+ return;
+ /*
+ * Non-atomic allocations cannot be freed with IRQs disabled
+ */
+ WARN_ON(irqs_disabled());
+ __free_from_contiguous(dev, page, size);
}
- /* FIXME: non-speculating: flush on bidirectional mappings? */
}
-EXPORT_SYMBOL(___dma_single_cpu_to_dev);
-void ___dma_single_dev_to_cpu(const void *kaddr, size_t size,
- enum dma_data_direction dir)
+int arm_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t handle, size_t size,
+ struct dma_attrs *attrs)
{
- BUG_ON(!virt_addr_valid(kaddr) || !virt_addr_valid(kaddr + size - 1));
+ struct page *page = pfn_to_page(dma_to_pfn(dev, handle));
+ int ret;
- /* FIXME: non-speculating: not required */
- /* don't bother invalidating if DMA to device */
- if (dir != DMA_TO_DEVICE) {
- unsigned long paddr = __pa(kaddr);
- outer_inv_range(paddr, paddr + size);
- }
+ ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+ if (unlikely(ret))
+ return ret;
- dmac_unmap_area(kaddr, size, dir);
+ sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
+ return 0;
}
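A possible caller-side sketch, assuming the generic dma_get_sgtable() wrapper from this series (buf and its fields are hypothetical):

	struct sg_table sgt;
	int ret;

	ret = dma_get_sgtable(dev, &sgt, buf->vaddr, buf->dma_addr, buf->size);
	if (ret)
		return ret;
	/* hand sgt.sgl to the importing device, then sg_free_table(&sgt) */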
-EXPORT_SYMBOL(___dma_single_dev_to_cpu);
static void dma_cache_maint_page(struct page *page, unsigned long offset,
size_t size, enum dma_data_direction dir,
} while (left);
}
-void ___dma_page_cpu_to_dev(struct page *page, unsigned long off,
- size_t size, enum dma_data_direction dir)
-{
+/*
+ * Make an area consistent for devices.
+ * Note: Drivers should NOT use this function directly, as it will break
+ * platforms with CONFIG_DMABOUNCE.
+ * Use the driver DMA support - see dma-mapping.h (dma_sync_*)
+ */
+static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
+ size_t size, enum dma_data_direction dir)
+{
unsigned long paddr;
dma_cache_maint_page(page, off, size, dir, dmac_map_area);
}
/* FIXME: non-speculating: flush on bidirectional mappings? */
}
-EXPORT_SYMBOL(___dma_page_cpu_to_dev);
-void ___dma_page_dev_to_cpu(struct page *page, unsigned long off,
+static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
size_t size, enum dma_data_direction dir)
{
unsigned long paddr = page_to_phys(page) + off;
if (dir != DMA_TO_DEVICE && off == 0 && size >= PAGE_SIZE)
set_bit(PG_dcache_clean, &page->flags);
}
-EXPORT_SYMBOL(___dma_page_dev_to_cpu);
/**
- * dma_map_sg - map a set of SG buffers for streaming mode DMA
+ * arm_dma_map_sg - map a set of SG buffers for streaming mode DMA
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @sg: list of buffers
* @nents: number of buffers to map
* Device ownership issues as mentioned for dma_map_single are the same
* here.
*/
-int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
- enum dma_data_direction dir)
+int arm_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+ enum dma_data_direction dir, struct dma_attrs *attrs)
{
+ struct dma_map_ops *ops = get_dma_ops(dev);
struct scatterlist *s;
int i, j;
- BUG_ON(!valid_dma_direction(dir));
-
for_each_sg(sg, s, nents, i) {
- s->dma_address = __dma_map_page(dev, sg_page(s), s->offset,
- s->length, dir);
+#ifdef CONFIG_NEED_SG_DMA_LENGTH
+ s->dma_length = s->length;
+#endif
+ s->dma_address = ops->map_page(dev, sg_page(s), s->offset,
+ s->length, dir, attrs);
if (dma_mapping_error(dev, s->dma_address))
goto bad_mapping;
}
- debug_dma_map_sg(dev, sg, nents, nents, dir);
return nents;
bad_mapping:
for_each_sg(sg, s, i, j)
- __dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir);
+ ops->unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir, attrs);
return 0;
}
-EXPORT_SYMBOL(dma_map_sg);
/**
- * dma_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
+ * arm_dma_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @sg: list of buffers
* @nents: number of buffers to unmap (same as was passed to dma_map_sg)
* Unmap a set of streaming mode DMA translations. Again, CPU access
* rules concerning calls here are the same as for dma_unmap_single().
*/
-void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
- enum dma_data_direction dir)
+void arm_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
+ enum dma_data_direction dir, struct dma_attrs *attrs)
{
+ struct dma_map_ops *ops = get_dma_ops(dev);
struct scatterlist *s;
- int i;
- debug_dma_unmap_sg(dev, sg, nents, dir);
+ int i;
for_each_sg(sg, s, nents, i)
- __dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir);
+ ops->unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir, attrs);
}
-EXPORT_SYMBOL(dma_unmap_sg);
/**
- * dma_sync_sg_for_cpu
+ * arm_dma_sync_sg_for_cpu
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @sg: list of buffers
* @nents: number of buffers to map (returned from dma_map_sg)
* @dir: DMA transfer direction (same as was passed to dma_map_sg)
*/
-void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
+void arm_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir)
{
+ struct dma_map_ops *ops = get_dma_ops(dev);
struct scatterlist *s;
int i;
- for_each_sg(sg, s, nents, i) {
- if (!dmabounce_sync_for_cpu(dev, sg_dma_address(s), 0,
- sg_dma_len(s), dir))
- continue;
-
- __dma_page_dev_to_cpu(sg_page(s), s->offset,
- s->length, dir);
- }
-
- debug_dma_sync_sg_for_cpu(dev, sg, nents, dir);
+ for_each_sg(sg, s, nents, i)
+ ops->sync_single_for_cpu(dev, sg_dma_address(s), s->length,
+ dir);
}
-EXPORT_SYMBOL(dma_sync_sg_for_cpu);
/**
- * dma_sync_sg_for_device
+ * arm_dma_sync_sg_for_device
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
* @sg: list of buffers
* @nents: number of buffers to map (returned from dma_map_sg)
* @dir: DMA transfer direction (same as was passed to dma_map_sg)
*/
-void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
+void arm_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir)
{
+ struct dma_map_ops *ops = get_dma_ops(dev);
struct scatterlist *s;
int i;
- for_each_sg(sg, s, nents, i) {
- if (!dmabounce_sync_for_device(dev, sg_dma_address(s), 0,
- sg_dma_len(s), dir))
- continue;
-
- __dma_page_cpu_to_dev(sg_page(s), s->offset,
- s->length, dir);
- }
-
- debug_dma_sync_sg_for_device(dev, sg, nents, dir);
+ for_each_sg(sg, s, nents, i)
+ ops->sync_single_for_device(dev, sg_dma_address(s), s->length,
+ dir);
}
-EXPORT_SYMBOL(dma_sync_sg_for_device);
/*
* Return whether the given device DMA address mask can be supported
}
EXPORT_SYMBOL(dma_supported);
-int dma_set_mask(struct device *dev, u64 dma_mask)
+static int arm_dma_set_mask(struct device *dev, u64 dma_mask)
{
if (!dev->dma_mask || !dma_supported(dev, dma_mask))
return -EIO;
-#ifndef CONFIG_DMABOUNCE
*dev->dma_mask = dma_mask;
-#endif
return 0;
}
-EXPORT_SYMBOL(dma_set_mask);
#define PREALLOC_DMA_DEBUG_ENTRIES 4096
static int __init dma_debug_do_init(void)
{
-#ifdef CONFIG_MMU
- arm_vmregion_create_proc("dma-mappings", &consistent_head);
-#endif
dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
return 0;
}
fs_initcall(dma_debug_do_init);
+
+#ifdef CONFIG_ARM_DMA_USE_IOMMU
+
+/* IOMMU */
+
+static inline dma_addr_t __alloc_iova(struct dma_iommu_mapping *mapping,
+ size_t size)
+{
+ unsigned int order = get_order(size);
+ unsigned int align = 0;
+ unsigned int count, start;
+ unsigned long flags;
+
+ count = ((PAGE_ALIGN(size) >> PAGE_SHIFT) +
+ (1 << mapping->order) - 1) >> mapping->order;
+
+ if (order > mapping->order)
+ align = (1 << (order - mapping->order)) - 1;
+
+ spin_lock_irqsave(&mapping->lock, flags);
+ start = bitmap_find_next_zero_area(mapping->bitmap, mapping->bits, 0,
+ count, align);
+ if (start > mapping->bits) {
+ spin_unlock_irqrestore(&mapping->lock, flags);
+ return DMA_ERROR_CODE;
+ }
+
+ bitmap_set(mapping->bitmap, start, count);
+ spin_unlock_irqrestore(&mapping->lock, flags);
+
+ return mapping->base + (start << (mapping->order + PAGE_SHIFT));
+}
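To make the granularity arithmetic above concrete, here is a minimal sketch of the unit calculation; the helper name is invented and the numbers in the comment are illustrative only:

	/*
	 * With mapping->order == 1 each bitmap bit covers two pages, so a
	 * three-page request needs (3 + 1) >> 1 = 2 bits, i.e. four pages
	 * of IO virtual address space.
	 */
	static size_t iova_units(size_t size, unsigned int order)
	{
		return ((PAGE_ALIGN(size) >> PAGE_SHIFT) + (1 << order) - 1)
			>> order;
	}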
+
+static inline void __free_iova(struct dma_iommu_mapping *mapping,
+ dma_addr_t addr, size_t size)
+{
+ unsigned int start = (addr - mapping->base) >>
+ (mapping->order + PAGE_SHIFT);
+ unsigned int count = ((size >> PAGE_SHIFT) +
+ (1 << mapping->order) - 1) >> mapping->order;
+ unsigned long flags;
+
+ spin_lock_irqsave(&mapping->lock, flags);
+ bitmap_clear(mapping->bitmap, start, count);
+ spin_unlock_irqrestore(&mapping->lock, flags);
+}
+
+static struct page **__iommu_alloc_buffer(struct device *dev, size_t size, gfp_t gfp)
+{
+ struct page **pages;
+ int count = size >> PAGE_SHIFT;
+ int array_size = count * sizeof(struct page *);
+ int i = 0;
+
+ if (array_size <= PAGE_SIZE)
+ pages = kzalloc(array_size, gfp);
+ else
+ pages = vzalloc(array_size);
+ if (!pages)
+ return NULL;
+
+ while (count) {
+ int j, order = __ffs(count);
+
+ pages[i] = alloc_pages(gfp | __GFP_NOWARN, order);
+ while (!pages[i] && order)
+ pages[i] = alloc_pages(gfp | __GFP_NOWARN, --order);
+ if (!pages[i])
+ goto error;
+
+ if (order)
+ split_page(pages[i], order);
+ j = 1 << order;
+ while (--j)
+ pages[i + j] = pages[i] + j;
+
+ __dma_clear_buffer(pages[i], PAGE_SIZE << order);
+ i += 1 << order;
+ count -= 1 << order;
+ }
+
+ return pages;
+error:
+ while (i--)
+ if (pages[i])
+ __free_pages(pages[i], 0);
+ if (array_size <= PAGE_SIZE)
+ kfree(pages);
+ else
+ vfree(pages);
+ return NULL;
+}
+
+static int __iommu_free_buffer(struct device *dev, struct page **pages, size_t size)
+{
+ int count = size >> PAGE_SHIFT;
+ int array_size = count * sizeof(struct page *);
+ int i;
+ for (i = 0; i < count; i++)
+ if (pages[i])
+ __free_pages(pages[i], 0);
+ if (array_size <= PAGE_SIZE)
+ kfree(pages);
+ else
+ vfree(pages);
+ return 0;
+}
+
+/*
+ * Create a CPU mapping for the specified pages
+ */
+static void *
+__iommu_alloc_remap(struct page **pages, size_t size, gfp_t gfp, pgprot_t prot,
+ const void *caller)
+{
+ unsigned int i, nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ struct vm_struct *area;
+ unsigned long p;
+
+ area = get_vm_area_caller(size, VM_DMA | VM_USERMAP, caller);
+ if (!area)
+ return NULL;
+
+ area->pages = pages;
+ area->nr_pages = nr_pages;
+ p = (unsigned long)area->addr;
+
+ for (i = 0; i < nr_pages; i++) {
+ phys_addr_t phys = __pfn_to_phys(page_to_pfn(pages[i]));
+ if (ioremap_page_range(p, p + PAGE_SIZE, phys, prot))
+ goto err;
+ p += PAGE_SIZE;
+ }
+ return area->addr;
+err:
+ unmap_kernel_range((unsigned long)area->addr, size);
+ vunmap(area->addr);
+ return NULL;
+}
+
+/*
+ * Create a mapping in the device IO address space for the specified pages
+ */
+static dma_addr_t
+__iommu_create_mapping(struct device *dev, struct page **pages, size_t size)
+{
+ struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+ unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ dma_addr_t dma_addr, iova;
+ int i, ret = DMA_ERROR_CODE;
+
+ dma_addr = __alloc_iova(mapping, size);
+ if (dma_addr == DMA_ERROR_CODE)
+ return dma_addr;
+
+ iova = dma_addr;
+ for (i = 0; i < count; ) {
+ unsigned int next_pfn = page_to_pfn(pages[i]) + 1;
+ phys_addr_t phys = page_to_phys(pages[i]);
+ unsigned int len, j;
+
+ for (j = i + 1; j < count; j++, next_pfn++)
+ if (page_to_pfn(pages[j]) != next_pfn)
+ break;
+
+ len = (j - i) << PAGE_SHIFT;
+ ret = iommu_map(mapping->domain, iova, phys, len, 0);
+ if (ret < 0)
+ goto fail;
+ iova += len;
+ i = j;
+ }
+ return dma_addr;
+fail:
+ iommu_unmap(mapping->domain, dma_addr, iova-dma_addr);
+ __free_iova(mapping, dma_addr, size);
+ return DMA_ERROR_CODE;
+}
+
+static int __iommu_remove_mapping(struct device *dev, dma_addr_t iova, size_t size)
+{
+ struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+
+ /*
+ * add optional in-page offset from iova to size and align
+ * result to page size
+ */
+ size = PAGE_ALIGN((iova & ~PAGE_MASK) + size);
+ iova &= PAGE_MASK;
+
+ iommu_unmap(mapping->domain, iova, size);
+ __free_iova(mapping, iova, size);
+ return 0;
+}
+
+static struct page **__iommu_get_pages(void *cpu_addr, struct dma_attrs *attrs)
+{
+ struct vm_struct *area;
+
+ if (dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs))
+ return cpu_addr;
+
+ area = find_vm_area(cpu_addr);
+ if (area && (area->flags & VM_DMA))
+ return area->pages;
+ return NULL;
+}
+
+static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
+ dma_addr_t *handle, gfp_t gfp, struct dma_attrs *attrs)
+{
+ pgprot_t prot = __get_dma_pgprot(attrs, pgprot_kernel);
+ struct page **pages;
+ void *addr = NULL;
+
+ *handle = DMA_ERROR_CODE;
+ size = PAGE_ALIGN(size);
+
+ pages = __iommu_alloc_buffer(dev, size, gfp);
+ if (!pages)
+ return NULL;
+
+ *handle = __iommu_create_mapping(dev, pages, size);
+ if (*handle == DMA_ERROR_CODE)
+ goto err_buffer;
+
+ if (dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs))
+ return pages;
+
+ addr = __iommu_alloc_remap(pages, size, gfp, prot,
+ __builtin_return_address(0));
+ if (!addr)
+ goto err_mapping;
+
+ return addr;
+
+err_mapping:
+ __iommu_remove_mapping(dev, *handle, size);
+err_buffer:
+ __iommu_free_buffer(dev, pages, size);
+ return NULL;
+}
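A hedged driver-side sketch of the DMA_ATTR_NO_KERNEL_MAPPING path handled above (dev, vma and size are placeholders); the returned value is an opaque cookie that may only be passed back to dma_mmap_attrs() and dma_free_attrs():

	DEFINE_DMA_ATTRS(attrs);
	dma_addr_t dma;
	void *cookie;
	int ret;

	dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);
	cookie = dma_alloc_attrs(dev, size, &dma, GFP_KERNEL, &attrs);
	if (!cookie)
		return -ENOMEM;
	/* never dereference cookie */
	ret = dma_mmap_attrs(dev, vma, cookie, dma, size, &attrs);
	return ret;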
+
+static int arm_iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
+ void *cpu_addr, dma_addr_t dma_addr, size_t size,
+ struct dma_attrs *attrs)
+{
+ unsigned long uaddr = vma->vm_start;
+ unsigned long usize = vma->vm_end - vma->vm_start;
+ struct page **pages = __iommu_get_pages(cpu_addr, attrs);
+
+ vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot);
+
+ if (!pages)
+ return -ENXIO;
+
+ do {
+ int ret = vm_insert_page(vma, uaddr, *pages++);
+ if (ret) {
+ pr_err("Remapping memory failed: %d\n", ret);
+ return ret;
+ }
+ uaddr += PAGE_SIZE;
+ usize -= PAGE_SIZE;
+ } while (usize > 0);
+
+ return 0;
+}
+
+/*
+ * Free a buffer as defined by the above mapping.
+ * Must not be called with IRQs disabled.
+ */
+void arm_iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
+ dma_addr_t handle, struct dma_attrs *attrs)
+{
+ struct page **pages = __iommu_get_pages(cpu_addr, attrs);
+ size = PAGE_ALIGN(size);
+
+ if (!pages) {
+ pr_err("%s: trying to free invalid coherent area: %p\n",
+ __func__, cpu_addr);
+ dump_stack();
+ return;
+ }
+
+ if (!dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs)) {
+ unmap_kernel_range((unsigned long)cpu_addr, size);
+ vunmap(cpu_addr);
+ }
+
+ __iommu_remove_mapping(dev, handle, size);
+ __iommu_free_buffer(dev, pages, size);
+}
+
+static int arm_iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t dma_addr,
+ size_t size, struct dma_attrs *attrs)
+{
+ unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ struct page **pages = __iommu_get_pages(cpu_addr, attrs);
+
+ if (!pages)
+ return -ENXIO;
+
+ return sg_alloc_table_from_pages(sgt, pages, count, 0, size,
+ GFP_KERNEL);
+}
+
+/*
+ * Map a part of the scatter-gather list into contiguous io address space
+ */
+static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
+ size_t size, dma_addr_t *handle,
+ enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+ struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+ dma_addr_t iova, iova_base;
+ int ret = 0;
+ unsigned int count;
+ struct scatterlist *s;
+
+ size = PAGE_ALIGN(size);
+ *handle = DMA_ERROR_CODE;
+
+ iova_base = iova = __alloc_iova(mapping, size);
+ if (iova == DMA_ERROR_CODE)
+ return -ENOMEM;
+
+ for (count = 0, s = sg; count < (size >> PAGE_SHIFT); s = sg_next(s)) {
+ phys_addr_t phys = page_to_phys(sg_page(s));
+ unsigned int len = PAGE_ALIGN(s->offset + s->length);
+
+ if (!arch_is_coherent() &&
+ !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+ __dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
+
+ ret = iommu_map(mapping->domain, iova, phys, len, 0);
+ if (ret < 0)
+ goto fail;
+ count += len >> PAGE_SHIFT;
+ iova += len;
+ }
+ *handle = iova_base;
+
+ return 0;
+fail:
+ iommu_unmap(mapping->domain, iova_base, count * PAGE_SIZE);
+ __free_iova(mapping, iova_base, size);
+ return ret;
+}
+
+/**
+ * arm_iommu_map_sg - map a set of SG buffers for streaming mode DMA
+ * @dev: valid struct device pointer
+ * @sg: list of buffers
+ * @nents: number of buffers to map
+ * @dir: DMA transfer direction
+ *
+ * Map a set of buffers described by scatterlist in streaming mode for DMA.
+ * The scatter gather list elements are merged together (if possible) and
+ * tagged with the appropriate dma address and length. They are obtained via
+ * sg_dma_{address,length}.
+ */
+int arm_iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+ enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+ struct scatterlist *s = sg, *dma = sg, *start = sg;
+ int i, count = 0;
+ unsigned int offset = s->offset;
+ unsigned int size = s->offset + s->length;
+ unsigned int max = dma_get_max_seg_size(dev);
+
+ for (i = 1; i < nents; i++) {
+ s = sg_next(s);
+
+ s->dma_address = DMA_ERROR_CODE;
+ s->dma_length = 0;
+
+ if (s->offset || (size & ~PAGE_MASK) || size + s->length > max) {
+ if (__map_sg_chunk(dev, start, size, &dma->dma_address,
+ dir, attrs) < 0)
+ goto bad_mapping;
+
+ dma->dma_address += offset;
+ dma->dma_length = size - offset;
+
+ size = offset = s->offset;
+ start = s;
+ dma = sg_next(dma);
+ count += 1;
+ }
+ size += s->length;
+ }
+ if (__map_sg_chunk(dev, start, size, &dma->dma_address, dir, attrs) < 0)
+ goto bad_mapping;
+
+ dma->dma_address += offset;
+ dma->dma_length = size - offset;
+
+ return count+1;
+
+bad_mapping:
+ for_each_sg(sg, s, count, i)
+ __iommu_remove_mapping(dev, sg_dma_address(s), sg_dma_len(s));
+ return 0;
+}
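A sketch of how a driver would consume the merged list; note that the returned count may be smaller than nents, and program_hw_segment() is a hypothetical device-specific helper:

	struct scatterlist *s;
	int i, count;

	count = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
	for_each_sg(sgl, s, count, i)
		program_hw_segment(dev, sg_dma_address(s), sg_dma_len(s));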
+
+/**
+ * arm_iommu_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
+ * @dev: valid struct device pointer
+ * @sg: list of buffers
+ * @nents: number of buffers to unmap (same as was passed to dma_map_sg)
+ * @dir: DMA transfer direction (same as was passed to dma_map_sg)
+ *
+ * Unmap a set of streaming mode DMA translations. Again, CPU access
+ * rules concerning calls here are the same as for dma_unmap_single().
+ */
+void arm_iommu_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
+ enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+ struct scatterlist *s;
+ int i;
+
+ for_each_sg(sg, s, nents, i) {
+ if (sg_dma_len(s))
+ __iommu_remove_mapping(dev, sg_dma_address(s),
+ sg_dma_len(s));
+ if (!arch_is_coherent() &&
+ !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+ __dma_page_dev_to_cpu(sg_page(s), s->offset,
+ s->length, dir);
+ }
+}
+
+/**
+ * arm_iommu_sync_sg_for_cpu
+ * @dev: valid struct device pointer
+ * @sg: list of buffers
+ * @nents: number of buffers to map (returned from dma_map_sg)
+ * @dir: DMA transfer direction (same as was passed to dma_map_sg)
+ */
+void arm_iommu_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
+ int nents, enum dma_data_direction dir)
+{
+ struct scatterlist *s;
+ int i;
+
+ for_each_sg(sg, s, nents, i)
+ if (!arch_is_coherent())
+ __dma_page_dev_to_cpu(sg_page(s), s->offset, s->length, dir);
+}
+
+/**
+ * arm_iommu_sync_sg_for_device
+ * @dev: valid struct device pointer
+ * @sg: list of buffers
+ * @nents: number of buffers to map (returned from dma_map_sg)
+ * @dir: DMA transfer direction (same as was passed to dma_map_sg)
+ */
+void arm_iommu_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
+ int nents, enum dma_data_direction dir)
+{
+ struct scatterlist *s;
+ int i;
+
+ for_each_sg(sg, s, nents, i)
+ if (!arch_is_coherent())
+ __dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
+}
+
+/**
+ * arm_iommu_map_page
+ * @dev: valid struct device pointer
+ * @page: page that buffer resides in
+ * @offset: offset into page for start of buffer
+ * @size: size of buffer to map
+ * @dir: DMA transfer direction
+ *
+ * IOMMU aware version of arm_dma_map_page()
+ */
+static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+{
+ struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+ dma_addr_t dma_addr;
+ int ret, len = PAGE_ALIGN(size + offset);
+
+ if (!arch_is_coherent() && !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+ __dma_page_cpu_to_dev(page, offset, size, dir);
+
+ dma_addr = __alloc_iova(mapping, len);
+ if (dma_addr == DMA_ERROR_CODE)
+ return dma_addr;
+
+ ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len, 0);
+ if (ret < 0)
+ goto fail;
+
+ return dma_addr + offset;
+fail:
+ __free_iova(mapping, dma_addr, len);
+ return DMA_ERROR_CODE;
+}
+
+/**
+ * arm_iommu_unmap_page
+ * @dev: valid struct device pointer
+ * @handle: DMA address of buffer
+ * @size: size of buffer (same as passed to dma_map_page)
+ * @dir: DMA transfer direction (same as passed to dma_map_page)
+ *
+ * IOMMU aware version of arm_dma_unmap_page()
+ */
+static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
+ size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+{
+ struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+ dma_addr_t iova = handle & PAGE_MASK;
+ struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
+ int offset = handle & ~PAGE_MASK;
+ int len = PAGE_ALIGN(size + offset);
+
+ if (!iova)
+ return;
+
+ if (!arch_is_coherent() && !dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
+ __dma_page_dev_to_cpu(page, offset, size, dir);
+
+ iommu_unmap(mapping->domain, iova, len);
+ __free_iova(mapping, iova, len);
+}
+
+static void arm_iommu_sync_single_for_cpu(struct device *dev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+ struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+ dma_addr_t iova = handle & PAGE_MASK;
+ struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
+ unsigned int offset = handle & ~PAGE_MASK;
+
+ if (!iova)
+ return;
+
+ if (!arch_is_coherent())
+ __dma_page_dev_to_cpu(page, offset, size, dir);
+}
+
+static void arm_iommu_sync_single_for_device(struct device *dev,
+ dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+ struct dma_iommu_mapping *mapping = dev->archdata.mapping;
+ dma_addr_t iova = handle & PAGE_MASK;
+ struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
+ unsigned int offset = handle & ~PAGE_MASK;
+
+ if (!iova)
+ return;
+
+ __dma_page_cpu_to_dev(page, offset, size, dir);
+}
+
+struct dma_map_ops iommu_ops = {
+ .alloc = arm_iommu_alloc_attrs,
+ .free = arm_iommu_free_attrs,
+ .mmap = arm_iommu_mmap_attrs,
+ .get_sgtable = arm_iommu_get_sgtable,
+
+ .map_page = arm_iommu_map_page,
+ .unmap_page = arm_iommu_unmap_page,
+ .sync_single_for_cpu = arm_iommu_sync_single_for_cpu,
+ .sync_single_for_device = arm_iommu_sync_single_for_device,
+
+ .map_sg = arm_iommu_map_sg,
+ .unmap_sg = arm_iommu_unmap_sg,
+ .sync_sg_for_cpu = arm_iommu_sync_sg_for_cpu,
+ .sync_sg_for_device = arm_iommu_sync_sg_for_device,
+};
+
+/**
+ * arm_iommu_create_mapping
+ * @bus: pointer to the bus holding the client device (for IOMMU calls)
+ * @base: start address of the valid IO address space
+ * @size: size of the valid IO address space
+ * @order: granularity of the IO address allocations, as a page order
+ *
+ * Creates a mapping structure which holds information about used/unused
+ * IO address ranges, which is required to perform memory allocation and
+ * mapping with IOMMU aware functions.
+ *
+ * The client device needs to be attached to the mapping with the
+ * arm_iommu_attach_device() function.
+ */
+struct dma_iommu_mapping *
+arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size,
+ int order)
+{
+ unsigned int count = size >> (PAGE_SHIFT + order);
+ unsigned int bitmap_size = BITS_TO_LONGS(count) * sizeof(long);
+ struct dma_iommu_mapping *mapping;
+ int err = -ENOMEM;
+
+ if (!count)
+ return ERR_PTR(-EINVAL);
+
+ mapping = kzalloc(sizeof(struct dma_iommu_mapping), GFP_KERNEL);
+ if (!mapping)
+ goto err;
+
+ mapping->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+ if (!mapping->bitmap)
+ goto err2;
+
+ mapping->base = base;
+ mapping->bits = BITS_PER_BYTE * bitmap_size;
+ mapping->order = order;
+ spin_lock_init(&mapping->lock);
+
+ mapping->domain = iommu_domain_alloc(bus);
+ if (!mapping->domain)
+ goto err3;
+
+ kref_init(&mapping->kref);
+ return mapping;
+err3:
+ kfree(mapping->bitmap);
+err2:
+ kfree(mapping);
+err:
+ return ERR_PTR(err);
+}
+
+static void release_iommu_mapping(struct kref *kref)
+{
+ struct dma_iommu_mapping *mapping =
+ container_of(kref, struct dma_iommu_mapping, kref);
+
+ iommu_domain_free(mapping->domain);
+ kfree(mapping->bitmap);
+ kfree(mapping);
+}
+
+void arm_iommu_release_mapping(struct dma_iommu_mapping *mapping)
+{
+ if (mapping)
+ kref_put(&mapping->kref, release_iommu_mapping);
+}
+
+/**
+ * arm_iommu_attach_device
+ * @dev: valid struct device pointer
+ * @mapping: io address space mapping structure (returned from
+ * arm_iommu_create_mapping)
+ *
+ * Attaches specified io address space mapping to the provided device,
+ * this replaces the dma operations (dma_map_ops pointer) with the
+ * IOMMU aware version. More than one client might be attached to
+ * the same io address space mapping.
+ */
+int arm_iommu_attach_device(struct device *dev,
+ struct dma_iommu_mapping *mapping)
+{
+ int err;
+
+ err = iommu_attach_device(mapping->domain, dev);
+ if (err)
+ return err;
+
+ kref_get(&mapping->kref);
+ dev->archdata.mapping = mapping;
+ set_dma_ops(dev, &iommu_ops);
+
+ pr_info("Attached IOMMU controller to %s device.\n", dev_name(dev));
+ return 0;
+}
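For illustration, platform code could wire a device to its IOMMU roughly as follows; the base address, size and example_pdev are assumptions, not part of this patch:

	struct dma_iommu_mapping *mapping;

	/* 128 MiB of IO virtual address space at 0x80000000, 1-page units */
	mapping = arm_iommu_create_mapping(&platform_bus_type, 0x80000000,
					   SZ_128M, 0);
	if (IS_ERR(mapping))
		return PTR_ERR(mapping);

	if (arm_iommu_attach_device(&example_pdev.dev, mapping)) {
		arm_iommu_release_mapping(mapping);
		return -ENODEV;
	}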
+
+#endif
#include <linux/highmem.h>
#include <linux/gfp.h>
#include <linux/memblock.h>
+#include <linux/dma-contiguous.h>
#include <asm/mach-types.h>
#include <asm/memblock.h>
}
#endif
+void __init setup_dma_zone(struct machine_desc *mdesc)
+{
+#ifdef CONFIG_ZONE_DMA
+ if (mdesc->dma_zone_size) {
+ arm_dma_zone_size = mdesc->dma_zone_size;
+ arm_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
+ } else
+ arm_dma_limit = 0xffffffff;
+#endif
+}
+
static void __init arm_bootmem_free(unsigned long min, unsigned long max_low,
unsigned long max_high)
{
* Adjust the sizes according to any special requirements for
* this machine type.
*/
- if (arm_dma_zone_size) {
+ if (arm_dma_zone_size)
arm_adjust_dma_zone(zone_size, zhole_size,
arm_dma_zone_size >> PAGE_SHIFT);
- arm_dma_limit = PHYS_OFFSET + arm_dma_zone_size - 1;
- } else
- arm_dma_limit = 0xffffffff;
#endif
free_area_init_node(0, zone_size, min, zhole_size);
if (mdesc->reserve)
mdesc->reserve();
+ /*
+ * Reserve memory for DMA contiguous allocations; it must come
+ * from the DMA area inside low memory.
+ */
+ dma_contiguous_reserve(min(arm_dma_limit, arm_lowmem_limit));
+
arm_memblock_steal_permitted = false;
memblock_allow_resize();
memblock_dump_all();
#define arm_dma_limit ((u32)~0)
#endif
+extern phys_addr_t arm_lowmem_limit;
+
void __init bootmem_init(void);
void arm_mm_memblock_reserve(void);
+void dma_contiguous_remap(void);
PMD_SECT_UNCACHED | PMD_SECT_XN,
.domain = DOMAIN_KERNEL,
},
+ [MT_MEMORY_DMA_READY] = {
+ .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY,
+ .prot_l1 = PMD_TYPE_TABLE,
+ .domain = DOMAIN_KERNEL,
+ },
};
const struct mem_type *get_mem_type(unsigned int type)
if (arch_is_coherent() && cpu_is_xsc3()) {
mem_types[MT_MEMORY].prot_sect |= PMD_SECT_S;
mem_types[MT_MEMORY].prot_pte |= L_PTE_SHARED;
+ mem_types[MT_MEMORY_DMA_READY].prot_pte |= L_PTE_SHARED;
mem_types[MT_MEMORY_NONCACHED].prot_sect |= PMD_SECT_S;
mem_types[MT_MEMORY_NONCACHED].prot_pte |= L_PTE_SHARED;
}
mem_types[MT_DEVICE_CACHED].prot_pte |= L_PTE_SHARED;
mem_types[MT_MEMORY].prot_sect |= PMD_SECT_S;
mem_types[MT_MEMORY].prot_pte |= L_PTE_SHARED;
+ mem_types[MT_MEMORY_DMA_READY].prot_pte |= L_PTE_SHARED;
mem_types[MT_MEMORY_NONCACHED].prot_sect |= PMD_SECT_S;
mem_types[MT_MEMORY_NONCACHED].prot_pte |= L_PTE_SHARED;
}
mem_types[MT_HIGH_VECTORS].prot_l1 |= ecc_mask;
mem_types[MT_MEMORY].prot_sect |= ecc_mask | cp->pmd;
mem_types[MT_MEMORY].prot_pte |= kern_pgprot;
+ mem_types[MT_MEMORY_DMA_READY].prot_pte |= kern_pgprot;
mem_types[MT_MEMORY_NONCACHED].prot_sect |= ecc_mask;
mem_types[MT_ROM].prot_sect |= cp->pmd;
* L1 entries, whereas PGDs refer to a group of L1 entries making
* up one logical pointer to an L2 table.
*/
- if (((addr | end | phys) & ~SECTION_MASK) == 0) {
+ if (type->prot_sect && ((addr | end | phys) & ~SECTION_MASK) == 0) {
pmd_t *p = pmd;
#ifndef CONFIG_ARM_LPAE
}
early_param("vmalloc", early_vmalloc);
-static phys_addr_t lowmem_limit __initdata = 0;
+phys_addr_t arm_lowmem_limit __initdata = 0;
void __init sanity_check_meminfo(void)
{
bank->size = newsize;
}
#endif
- if (!bank->highmem && bank->start + bank->size > lowmem_limit)
- lowmem_limit = bank->start + bank->size;
+ if (!bank->highmem && bank->start + bank->size > arm_lowmem_limit)
+ arm_lowmem_limit = bank->start + bank->size;
j++;
}
}
#endif
meminfo.nr_banks = j;
- high_memory = __va(lowmem_limit - 1) + 1;
- memblock_set_current_limit(lowmem_limit);
+ high_memory = __va(arm_lowmem_limit - 1) + 1;
+ memblock_set_current_limit(arm_lowmem_limit);
}
static inline void prepare_page_table(void)
* Find the end of the first block of lowmem.
*/
end = memblock.memory.regions[0].base + memblock.memory.regions[0].size;
- if (end >= lowmem_limit)
- end = lowmem_limit;
+ if (end >= arm_lowmem_limit)
+ end = arm_lowmem_limit;
/*
* Clear out all the kernel space mappings, except for the first
phys_addr_t end = start + reg->size;
struct map_desc map;
- if (end > lowmem_limit)
- end = lowmem_limit;
+ if (end > arm_lowmem_limit)
+ end = arm_lowmem_limit;
if (start >= end)
break;
{
void *zero_page;
- memblock_set_current_limit(lowmem_limit);
+ memblock_set_current_limit(arm_lowmem_limit);
build_mem_type_table();
prepare_page_table();
map_lowmem();
+ dma_contiguous_remap();
devicemaps_init(mdesc);
kmap_init();
cmp r0, r1
blo 1b
mcr p15, 0, ip, c7, c10, 4 @ drain WB
+ mov r0, #0
mov pc, lr
/*
cmp r0, r1
blo 1b
mcr p15, 0, ip, c7, c10, 4 @ drain WB
+ mov r0, #0
mov pc, lr
/*
cmp r0, r1
blo 1b
mcr p15, 0, ip, c7, c10, 4 @ drain WB
+ mov r0, #0
mov pc, lr
/*
cmp r0, r1
blo 1b
mcr p15, 0, ip, c7, c10, 4 @ drain WB
+ mov r0, #0
mov pc, lr
/*
cmp r0, r1
blo 1b
mcr p15, 0, r0, c7, c10, 4 @ drain WB
+ mov r0, #0
mov pc, lr
/*
cmp r0, r1
blo 1b
mcr p15, 0, r0, c7, c10, 4 @ drain WB
+ mov r0, #0
mov pc, lr
/*
cmp r0, r1
blo 1b
mcr p15, 0, r0, c7, c10, 4 @ drain WB
+ mov r0, #0
mov pc, lr
/*
cmp r0, r1
blo 1b
mcr p15, 0, r0, c7, c10, 4 @ drain WB
+ mov r0, #0
mov pc, lr
/*
* - size - region size
*/
ENTRY(arm940_flush_kern_dcache_area)
- mov ip, #0
+ mov r0, #0
mov r1, #(CACHE_DSEGMENTS - 1) << 4 @ 4 segments
1: orr r3, r1, #(CACHE_DENTRIES - 1) << 26 @ 64 entries
2: mcr p15, 0, r3, c7, c14, 2 @ clean/flush D index
bcs 2b @ entries 63 to 0
subs r1, r1, #1 << 4
bcs 1b @ segments 7 to 0
- mcr p15, 0, ip, c7, c5, 0 @ invalidate I cache
- mcr p15, 0, ip, c7, c10, 4 @ drain WB
+ mcr p15, 0, r0, c7, c5, 0 @ invalidate I cache
+ mcr p15, 0, r0, c7, c10, 4 @ drain WB
mov pc, lr
/*
cmp r0, r1
blo 1b
mcr p15, 0, r0, c7, c10, 4 @ drain WB
+ mov r0, #0
mov pc, lr
/*
cmp r0, r1
blo 1b
mcr p15, 0, r0, c7, c10, 4 @ drain WB
+ mov r0, #0
mov pc, lr
/*
cmp r0, r1
blo 1b
mcr p15, 0, r0, c7, c10, 4 @ drain WB
+ mov r0, #0
mov pc, lr
/*
struct list_head vm_list;
unsigned long vm_start;
unsigned long vm_end;
- struct page *vm_pages;
+ void *priv;
int vm_active;
const void *caller;
};
Common code for power management support on S5P and newer SoCs
Note: Do not select this for S5P6440 and S5P6450.
-comment "System MMU"
-
-config S5P_SYSTEM_MMU
- bool "S5P SYSTEM MMU"
- depends on ARCH_EXYNOS4
- help
- Say Y here if you want to enable System MMU
-
config S5P_SLEEP
bool
help
obj-y += irq.o
obj-$(CONFIG_S5P_EXT_INT) += irq-eint.o
obj-$(CONFIG_S5P_GPIO_INT) += irq-gpioint.o
-obj-$(CONFIG_S5P_SYSTEM_MMU) += sysmmu.o
obj-$(CONFIG_S5P_PM) += pm.o irq-pm.o
obj-$(CONFIG_S5P_SLEEP) += sleep.o
obj-$(CONFIG_S5P_HRT) += s5p-time.o
.nr_sources = ARRAY_SIZE(clk_src_apll_list),
};
-/* Possible clock sources for BPLL Mux */
-static struct clk *clk_src_bpll_list[] = {
- [0] = &clk_fin_bpll,
- [1] = &clk_fout_bpll,
-};
-
-struct clksrc_sources clk_src_bpll = {
- .sources = clk_src_bpll_list,
- .nr_sources = ARRAY_SIZE(clk_src_bpll_list),
-};
-
/* Possible clock sources for CPLL Mux */
static struct clk *clk_src_cpll_list[] = {
[0] = &clk_fin_cpll,
--- /dev/null
+/*
+ * Copyright (C) 2011 Samsung Electronics Co., Ltd.
+ *
+ * Samsung S5P series DP device support
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef PLAT_S5P_DP_H_
+#define PLAT_S5P_DP_H_ __FILE__
+
+#include <video/exynos_dp.h>
+
+extern void s5p_dp_phy_init(void);
+extern void s5p_dp_phy_exit(void);
+
+#endif /* PLAT_S5P_DP_H_ */
--- /dev/null
+/* linux/arch/arm/plat-s5p/include/plat/fimg2d.h
+ *
+ * Copyright (c) 2010 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
+ *
+ * Platform Data Structure for Samsung Graphics 2D Hardware
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#ifndef __ASM_ARCH_FIMG2D_H
+#define __ASM_ARCH_FIMG2D_H __FILE__
+
+struct fimg2d_platdata {
+ int hw_ver;
+ const char *parent_clkname;
+ const char *clkname;
+ const char *gate_clkname;
+ unsigned long clkrate;
+};
+
+extern void __init s5p_fimg2d_set_platdata(struct fimg2d_platdata *pd);
+
+#endif /* __ASM_ARCH_FIMG2D_H */
--- /dev/null
+/* linux/arch/arm/plat-s5p/include/plat/tvout.h
+ *
+ * Copyright (c) 2010 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
+ *
+ * Platform Header file for Samsung TV driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#ifndef __ARM_PLAT_TVOUT_H
+#define __ARM_PLAT_TVOUT_H __FILE__
+
+struct platform_device;
+
+struct s5p_platform_hpd {
+ void (*int_src_hdmi_hpd)(struct platform_device *pdev);
+ void (*int_src_ext_hpd)(struct platform_device *pdev);
+ int (*read_gpio)(struct platform_device *pdev);
+};
+
+extern void s5p_hdmi_hpd_set_platdata(struct s5p_platform_hpd *pd);
+
+/* defined by architecture to configure gpio */
+extern void s5p_int_src_hdmi_hpd(struct platform_device *pdev);
+extern void s5p_int_src_ext_hpd(struct platform_device *pdev);
+extern void s5p_v4l2_int_src_hdmi_hpd(void);
+extern void s5p_v4l2_int_src_ext_hpd(void);
+extern int s5p_hpd_read_gpio(struct platform_device *pdev);
+extern int s5p_v4l2_hpd_read_gpio(void);
+
+struct s5p_platform_cec {
+ void (*cfg_gpio)(struct platform_device *pdev);
+};
+
+extern void s5p_hdmi_cec_set_platdata(struct s5p_platform_cec *pd);
+
+/* defined by architecture to configure gpio */
+extern void s5p_cec_cfg_gpio(struct platform_device *pdev);
+
+extern void s5p_tv_setup(void);
+
+#endif /* __ARM_PLAT_TVOUT_H */
config SAMSUNG_DMADEV
bool
select DMADEVICES
- select PL330_DMA if (CPU_EXYNOS4210 || CPU_S5PV210 || CPU_S5PC100 || \
+ select PL330_DMA if (ARCH_EXYNOS5 || ARCH_EXYNOS4 || CPU_S5PV210 || CPU_S5PC100 || \
CPU_S5P6450 || CPU_S5P6440)
select ARM_AMBA
help
default "0" if DEBUG_S3C_UART0
default "1" if DEBUG_S3C_UART1
default "2" if DEBUG_S3C_UART2
+ default "3" if DEBUG_S3C_UART3
endif
#include <plat/regs-serial.h>
#include <plat/regs-spi.h>
#include <plat/s3c64xx-spi.h>
+#include <plat/fimg2d.h>
static u64 samsung_device_dma_mask = DMA_BIT_MASK(32);
};
#endif /* CONFIG_S5P_DEV_FIMC3 */
-/* G2D */
-
-#ifdef CONFIG_S5P_DEV_G2D
-static struct resource s5p_g2d_resource[] = {
- [0] = {
- .start = S5P_PA_G2D,
- .end = S5P_PA_G2D + SZ_4K - 1,
- .flags = IORESOURCE_MEM,
- },
- [1] = {
- .start = IRQ_2D,
- .end = IRQ_2D,
- .flags = IORESOURCE_IRQ,
- },
-};
-
-struct platform_device s5p_device_g2d = {
- .name = "s5p-g2d",
- .id = 0,
- .num_resources = ARRAY_SIZE(s5p_g2d_resource),
- .resource = s5p_g2d_resource,
- .dev = {
- .dma_mask = &samsung_device_dma_mask,
- .coherent_dma_mask = DMA_BIT_MASK(32),
- },
-};
-#endif /* CONFIG_S5P_DEV_G2D */
-
#ifdef CONFIG_S5P_DEV_JPEG
static struct resource s5p_jpeg_resource[] = {
[0] = DEFINE_RES_MEM(S5P_PA_JPEG, SZ_4K),
}
#endif /* CONFIG_S5P_DEV_FIMD0 */
+/* G2D */
+
+#ifdef CONFIG_S5P_DEV_G2D
+static struct resource s5p_g2d_resource[] = {
+ [0] = DEFINE_RES_MEM(S5P_PA_G2D, SZ_4K),
+ [1] = DEFINE_RES_IRQ(IRQ_2D),
+};
+
+struct platform_device s5p_device_g2d = {
+ .name = "s5p-g2d",
+ .id = 0,
+ .num_resources = ARRAY_SIZE(s5p_g2d_resource),
+ .resource = s5p_g2d_resource,
+ .dev = {
+ .dma_mask = &samsung_device_dma_mask,
+ .coherent_dma_mask = DMA_BIT_MASK(32),
+ },
+};
+
+#ifdef CONFIG_VIDEO_FIMG2D4X
+static struct fimg2d_platdata default_g2d_data __initdata = {
+ .parent_clkname = "mout_g2d0",
+ .clkname = "sclk_fimg2d",
+ .gate_clkname = "fimg2d",
+ .clkrate = 250 * 1000000,
+};
+
+void __init s5p_fimg2d_set_platdata(struct fimg2d_platdata *pd)
+{
+ struct fimg2d_platdata *npd;
+
+ if (!pd)
+ pd = &default_g2d_data;
+
+ npd = kmemdup(pd, sizeof(*pd), GFP_KERNEL);
+ if (!npd)
+ printk(KERN_ERR "no memory for fimg2d platform data\n");
+ else
+ s5p_device_g2d.dev.platform_data = npd;
+}
+#endif /* CONFIG_VIDEO_FIMG2D4X */
+#endif /* CONFIG_S5P_DEV_G2D */
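A board file would either pass NULL to accept the defaults above or provide its own table; the clock rate below is an example value only:

	static struct fimg2d_platdata board_g2d_pd __initdata = {
		.parent_clkname	= "mout_g2d0",
		.clkname	= "sclk_fimg2d",
		.gate_clkname	= "fimg2d",
		.clkrate	= 200 * 1000000,	/* example rate */
	};

	s5p_fimg2d_set_platdata(&board_g2d_pd);	/* or NULL for defaults */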
+
/* HWMON */
#ifdef CONFIG_S3C_DEV_HWMON
/* PMU */
-#ifdef CONFIG_PLAT_S5P
+#if defined(CONFIG_PLAT_S5P) && !defined(CONFIG_OF)
static struct resource s5p_pmu_resource[] = {
DEFINE_RES_IRQ(IRQ_PMU)
};
};
struct platform_device s3c64xx_device_spi0 = {
- .name = "s3c64xx-spi",
+ .name = "s3c6410-spi",
.id = 0,
.num_resources = ARRAY_SIZE(s3c64xx_spi0_resource),
.resource = s3c64xx_spi0_resource,
},
};
-void __init s3c64xx_spi0_set_platdata(struct s3c64xx_spi_info *pd,
- int src_clk_nr, int num_cs)
+void __init s3c64xx_spi0_set_platdata(int (*cfg_gpio)(void), int src_clk_nr,
+ int num_cs)
{
- if (!pd) {
- pr_err("%s:Need to pass platform data\n", __func__);
- return;
- }
+ struct s3c64xx_spi_info pd;
/* Reject invalid configuration */
if (!num_cs || src_clk_nr < 0) {
return;
}
- pd->num_cs = num_cs;
- pd->src_clk_nr = src_clk_nr;
- if (!pd->cfg_gpio)
- pd->cfg_gpio = s3c64xx_spi0_cfg_gpio;
+ pd.num_cs = num_cs;
+ pd.src_clk_nr = src_clk_nr;
+ pd.cfg_gpio = (cfg_gpio) ? cfg_gpio : s3c64xx_spi0_cfg_gpio;
- s3c_set_platdata(pd, sizeof(*pd), &s3c64xx_device_spi0);
+ s3c_set_platdata(&pd, sizeof(pd), &s3c64xx_device_spi0);
}
#endif /* CONFIG_S3C64XX_DEV_SPI0 */
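With the new signature a board file no longer builds the platform-data structure itself; a typical call (values are an example) selecting clock source 0 and one chip select with the default GPIO setup would be:

	s3c64xx_spi0_set_platdata(NULL, 0, 1);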
};
struct platform_device s3c64xx_device_spi1 = {
- .name = "s3c64xx-spi",
+ .name = "s3c6410-spi",
.id = 1,
.num_resources = ARRAY_SIZE(s3c64xx_spi1_resource),
.resource = s3c64xx_spi1_resource,
},
};
-void __init s3c64xx_spi1_set_platdata(struct s3c64xx_spi_info *pd,
- int src_clk_nr, int num_cs)
+void __init s3c64xx_spi1_set_platdata(int (*cfg_gpio)(void), int src_clk_nr,
+ int num_cs)
{
+ struct s3c64xx_spi_info pd;
- if (!pd) {
- pr_err("%s:Need to pass platform data\n", __func__);
- return;
- }
-
/* Reject invalid configuration */
if (!num_cs || src_clk_nr < 0) {
pr_err("%s: Invalid SPI configuration\n", __func__);
return;
}
- pd->num_cs = num_cs;
- pd->src_clk_nr = src_clk_nr;
- if (!pd->cfg_gpio)
- pd->cfg_gpio = s3c64xx_spi1_cfg_gpio;
+ pd.num_cs = num_cs;
+ pd.src_clk_nr = src_clk_nr;
+ pd.cfg_gpio = (cfg_gpio) ? cfg_gpio : s3c64xx_spi1_cfg_gpio;
- s3c_set_platdata(pd, sizeof(*pd), &s3c64xx_device_spi1);
+ s3c_set_platdata(&pd, sizeof(pd), &s3c64xx_device_spi1);
}
#endif /* CONFIG_S3C64XX_DEV_SPI1 */
};
struct platform_device s3c64xx_device_spi2 = {
- .name = "s3c64xx-spi",
+ .name = "s3c6410-spi",
.id = 2,
.num_resources = ARRAY_SIZE(s3c64xx_spi2_resource),
.resource = s3c64xx_spi2_resource,
},
};
-void __init s3c64xx_spi2_set_platdata(struct s3c64xx_spi_info *pd,
- int src_clk_nr, int num_cs)
+void __init s3c64xx_spi2_set_platdata(int (*cfg_gpio)(void), int src_clk_nr,
+ int num_cs)
{
- if (!pd) {
- pr_err("%s:Need to pass platform data\n", __func__);
- return;
- }
+ struct s3c64xx_spi_info pd;
/* Reject invalid configuration */
if (!num_cs || src_clk_nr < 0) {
return;
}
- pd->num_cs = num_cs;
- pd->src_clk_nr = src_clk_nr;
- if (!pd->cfg_gpio)
- pd->cfg_gpio = s3c64xx_spi2_cfg_gpio;
+ pd.num_cs = num_cs;
+ pd.src_clk_nr = src_clk_nr;
+ pd.cfg_gpio = (cfg_gpio) ? cfg_gpio : s3c64xx_spi2_cfg_gpio;
- s3c_set_platdata(pd, sizeof(*pd), &s3c64xx_device_spi2);
+ s3c_set_platdata(&pd, sizeof(pd), &s3c64xx_device_spi2);
}
#endif /* CONFIG_S3C64XX_DEV_SPI2 */
extern struct bus_type s3c6410_subsys;
extern struct bus_type s5p64x0_subsys;
extern struct bus_type s5pv210_subsys;
-extern struct bus_type exynos4_subsys;
+extern struct bus_type exynos_subsys;
extern void (*s5pc1xx_idle)(void);
extern struct platform_device exynos4_device_pcm2;
extern struct platform_device exynos4_device_pd[];
extern struct platform_device exynos4_device_spdif;
-extern struct platform_device exynos4_device_sysmmu;
extern struct platform_device samsung_asoc_dma;
extern struct platform_device samsung_asoc_idma;
DMACH_MIPI_HSI5,
DMACH_MIPI_HSI6,
DMACH_MIPI_HSI7,
+ DMACH_DISP1,
DMACH_MTOM_0,
DMACH_MTOM_1,
DMACH_MTOM_2,
--- /dev/null
+/* linux/arch/arm/plat-s5p/include/plat/dsim.h
+ *
+ * Platform data header for Samsung SoC MIPI-DSIM.
+ *
+ * Copyright (c) 2011 Samsung Electronics
+ *
+ * InKi Dae <inki.dae@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#ifndef _DSIM_H
+#define _DSIM_H
+
+#include <linux/device.h>
+#include <linux/fb.h>
+#include <linux/notifier.h>
+
+#include <linux/regulator/consumer.h>
+
+#define to_dsim_plat(d) (to_platform_device(d)->dev.platform_data)
+
+enum mipi_dsim_interface_type {
+ DSIM_COMMAND,
+ DSIM_VIDEO
+};
+
+enum mipi_dsim_virtual_ch_no {
+ DSIM_VIRTUAL_CH_0,
+ DSIM_VIRTUAL_CH_1,
+ DSIM_VIRTUAL_CH_2,
+ DSIM_VIRTUAL_CH_3
+};
+
+enum mipi_dsim_burst_mode_type {
+ DSIM_NON_BURST_SYNC_EVENT,
+ DSIM_NON_BURST_SYNC_PULSE = 2,
+ DSIM_BURST = 1,
+ DSIM_NON_VIDEO_MODE = 4
+};
+
+enum mipi_dsim_no_of_data_lane {
+ DSIM_DATA_LANE_1,
+ DSIM_DATA_LANE_2,
+ DSIM_DATA_LANE_3,
+ DSIM_DATA_LANE_4
+};
+
+enum mipi_dsim_byte_clk_src {
+ DSIM_PLL_OUT_DIV8,
+ DSIM_EXT_CLK_DIV8,
+ DSIM_EXT_CLK_BYPASS
+};
+
+enum mipi_dsim_pixel_format {
+ DSIM_CMD_3BPP,
+ DSIM_CMD_8BPP,
+ DSIM_CMD_12BPP,
+ DSIM_CMD_16BPP,
+ DSIM_VID_16BPP_565,
+ DSIM_VID_18BPP_666PACKED,
+ DSIM_18BPP_666LOOSELYPACKED,
+ DSIM_24BPP_888
+};
+
+/**
+ * struct mipi_dsim_config - interface for configuring mipi-dsi controller.
+ *
+ * @auto_flush: enable or disable Auto flush of MD FIFO using VSYNC pulse.
+ * @eot_disable: enable or disable EoT packet in HS mode.
+ * @auto_vertical_cnt: specifies auto vertical count mode.
+ * in Video mode, the vertical line transition uses line counter
+ * configured by VSA, VBP, and Vertical resolution.
+ * If this bit is set to '1', the line counter does not use VSA and VBP
+ * registers.(in command mode, this variable is ignored)
+ * @hse: set horizontal sync event mode.
+ * In VSYNC pulse and Vporch area, MIPI DSI master transfers only HSYNC
+ * start packet to MIPI DSI slave at MIPI DSI spec1.1r02.
+ * this bit transfers HSYNC end packet in VSYNC pulse and Vporch area
+ * (in command mode, this variable is ignored)
+ * @hfp: specifies HFP disable mode.
+ * if this variable is set, DSI master ignores HFP area in VIDEO mode.
+ * (in command mode, this variable is ignored)
+ * @hbp: specifies HBP disable mode.
+ * if this variable is set, DSI master ignores HBP area in VIDEO mode.
+ * (in command mode, this variable is ignored)
+ * @hsa: specifies HSA disable mode.
+ * if this variable is set, DSI master ignores HSA area in VIDEO mode.
+ * (in command mode, this variable is ignored)
+ * @e_interface: specifies interface to be used.(CPU or RGB interface)
+ * @e_virtual_ch: specifies virtual channel number that main or
+ * sub display uses.
+ * @e_pixel_format: specifies pixel stream format for main or sub display.
+ * @e_burst_mode: selects Burst mode in Video mode.
+ * in Non-burst mode, RGB data area is filled with RGB data and NULL
+ * packets, according to input bandwidth of RGB interface.
+ * In Burst mode, RGB data area is filled with RGB data only.
+ * @e_no_data_lane: specifies data lane count to be used by Master.
+ * @e_byte_clk: select byte clock source. (it must be DSIM_PLL_OUT_DIV8)
+ * DSIM_EXT_CLK_DIV8 and DSIM_EXT_CLK_BYPASS are not supported.
+ * @pll_stable_time: specifies the PLL Timer for stability of the generated
+ * clock (System clock cycle base)
+ * if the timer value goes to 0x00000000, the clock stable bit of
+ * status and interrupt register is set.
+ * @esc_clk: specifies escape clock frequency for getting the escape clock
+ * prescaler value.
+ * @stop_holding_cnt: specifies the interval value between transmitting
+ * read packet(or write "set_tear_on" command) and BTA request.
+ * after transmitting read packet or write "set_tear_on" command,
+ * BTA requests to D-PHY automatically. this counter value specifies
+ * the interval between them.
+ * @bta_timeout: specifies the timer for BTA.
+ * this register specifies time out from BTA request to change
+ * the direction with respect to Tx escape clock.
+ * @rx_timeout: specifies the timer for LP Rx mode timeout.
+ * this register specifies time out on how long RxValid deasserts,
+ * after RxLpdt asserts with respect to Tx escape clock.
+ * - RxValid specifies Rx data valid indicator.
+ * - RxLpdt specifies an indicator that D-PHY is under RxLpdt mode.
+ * - RxValid and RxLpdt specifies signal from D-PHY.
+ * @lcd_panel_info: pointer for lcd panel specific structure.
+ * this structure specifies width, height, timing and polarity and so on.
+ * @dsim_ddi_pd: pointer to lcd panel platform data.
+ */
+struct mipi_dsim_config {
+ unsigned char auto_flush;
+ unsigned char eot_disable;
+
+ unsigned char auto_vertical_cnt;
+ unsigned char hse;
+ unsigned char hfp;
+ unsigned char hbp;
+ unsigned char hsa;
+
+ enum mipi_dsim_interface_type e_interface;
+ enum mipi_dsim_virtual_ch_no e_virtual_ch;
+ enum mipi_dsim_pixel_format e_pixel_format;
+ enum mipi_dsim_burst_mode_type e_burst_mode;
+ enum mipi_dsim_no_of_data_lane e_no_data_lane;
+ enum mipi_dsim_byte_clk_src e_byte_clk;
+
+ unsigned char p;
+ unsigned short m;
+ unsigned char s;
+
+ unsigned int pll_stable_time;
+ unsigned long esc_clk;
+
+ unsigned short stop_holding_cnt;
+ unsigned char bta_timeout;
+ unsigned short rx_timeout;
+
+ void *lcd_panel_info;
+ void *dsim_ddi_pd;
+};
+
+/* for RGB Interface */
+struct mipi_dsi_lcd_timing {
+ int left_margin;
+ int right_margin;
+ int upper_margin;
+ int lower_margin;
+ int hsync_len;
+ int vsync_len;
+};
+
+/* for CPU Interface */
+struct mipi_dsi_cpu_timing {
+ unsigned int cs_setup;
+ unsigned int wr_setup;
+ unsigned int wr_act;
+ unsigned int wr_hold;
+};
+
+struct mipi_dsi_lcd_size {
+ unsigned int width;
+ unsigned int height;
+};
+
+struct mipi_dsim_lcd_config {
+ enum mipi_dsim_interface_type e_interface;
+ unsigned int parameter[3];
+
+ /* lcd panel info */
+ struct mipi_dsi_lcd_timing rgb_timing;
+ struct mipi_dsi_cpu_timing cpu_timing;
+ struct mipi_dsi_lcd_size lcd_size;
+ /* platform data for lcd panel based on MIPI-DSI. */
+ void *mipi_ddi_pd;
+};
+
+/**
+ * struct mipi_dsim_device - global interface for mipi-dsi driver.
+ *
+ * @dev: driver model representation of the device.
+ * @clock: pointer to MIPI-DSI clock of clock framework.
+ * @irq: interrupt number to MIPI-DSI controller.
+ * @reg_base: base address to memory mapped SFR of MIPI-DSI controller.
+ * (virtual address)
+ * @pd: pointer to MIPI-DSI driver platform data.
+ * @dsim_config: information for configuring mipi-dsi controller.
+ * @master_ops: callbacks to mipi-dsi operations.
+ * @lcd_info: pointer to mipi_lcd_info structure.
+ * @state: specifies status of MIPI-DSI controller.
+ * the status could be RESET, INIT, STOP, HSCLKEN and ULPS.
+ * @data_lane: specifies enabled data lane number.
+ * this variable would be set by driver according to e_no_data_lane
+ * automatically.
+ * @e_clk_src: select byte clock source.
+ * this variable would be set by driver according to e_byte_clock
+ * automatically.
+ * @hs_clk: HS clock rate.
+ * this variable would be set by driver automatically.
+ * @byte_clk: Byte clock rate.
+ * this variable would be set by driver automatically.
+ * @escape_clk: ESCAPE clock rate.
+ * this variable would be set by driver automatically.
+ * @freq_band: indicates Bitclk frequency band for D-PHY global timing.
+ * Serial Clock(=ByteClk X 8) FreqBand[3:0]
+ * ~ 99.99 MHz 0000
+ * 100 ~ 119.99 MHz 0001
+ * 120 ~ 159.99 MHz 0010
+ * 160 ~ 199.99 MHz 0011
+ * 200 ~ 239.99 MHz 0100
+ * 140 ~ 319.99 MHz 0101
+ * 320 ~ 389.99 MHz 0110
+ * 390 ~ 449.99 MHz 0111
+ * 450 ~ 509.99 MHz 1000
+ * 510 ~ 559.99 MHz 1001
+ * 560 ~ 639.99 MHz 1010
+ * 640 ~ 689.99 MHz 1011
+ * 690 ~ 769.99 MHz 1100
+ * 770 ~ 869.99 MHz 1101
+ * 870 ~ 949.99 MHz 1110
+ * 950 ~ 1000 MHz 1111
+ * this variable would be calculated by driver automatically.
+ */
+struct mipi_dsim_device {
+ struct device *dev;
+ struct resource *res;
+ struct clk *clock;
+ unsigned int irq;
+ void __iomem *reg_base;
+
+ struct s5p_platform_mipi_dsim *pd;
+ struct mipi_dsim_config *dsim_config;
+
+ unsigned int state;
+ unsigned int data_lane;
+ enum mipi_dsim_byte_clk_src e_clk_src;
+ unsigned long hs_clk;
+ unsigned long byte_clk;
+ unsigned long escape_clk;
+ unsigned char freq_band;
+ unsigned char id;
+ struct notifier_block fb_notif;
+
+ struct mipi_dsim_lcd_driver *dsim_lcd_drv;
+};
+
+/**
+ * struct s5p_platform_mipi_dsim - interface to platform data
+ * for mipi-dsi driver.
+ *
+ * @mipi_dsim_config: pointer of structure for configuring mipi-dsi controller.
+ * @dsim_lcd_info: pointer to structure for configuring
+ * mipi-dsi based lcd panel.
+ * @mipi_power: callback pointer for enabling or disabling mipi power.
+ * @part_reset: callback pointer for resetting mipi phy.
+ * @init_d_phy: callback pointer for enabling d_phy of dsi master.
+ * @get_fb_frame_done: callback pointer for getting frame done status of
+ * the display controller(FIMD).
+ * @trigger: callback pointer for triggering display controller(FIMD)
+ * in case of CPU mode.
+ * @delay_for_stabilization: specifies stable time.
+ * this delay is needed when writing data to SFRs
+ * after the mipi mode has entered LP mode.
+ */
+struct s5p_platform_mipi_dsim {
+ const char clk_name[16];
+
+ struct mipi_dsim_config *dsim_config;
+ struct mipi_dsim_lcd_config *dsim_lcd_config;
+
+ unsigned int delay_for_stabilization;
+
+ int (*mipi_power) (struct mipi_dsim_device *dsim, unsigned int
+ enable);
+ int (*part_reset) (struct mipi_dsim_device *dsim);
+ int (*init_d_phy) (struct mipi_dsim_device *dsim, unsigned int enable);
+ int (*get_fb_frame_done) (struct fb_info *info);
+ void (*trigger) (struct fb_info *info);
+};
+
+/**
+ * driver structure for mipi-dsi based lcd panel.
+ *
+ * this structure should be registered by the lcd panel driver.
+ * the mipi-dsi driver looks up the lcd panel registered through the name field
+ * and calls these callback functions at the appropriate time.
+ */
+
+struct mipi_dsim_lcd_driver {
+ int (*probe)(struct mipi_dsim_device *dsim);
+ int (*suspend)(struct mipi_dsim_device *dsim);
+ int (*displayon)(struct mipi_dsim_device *dsim);
+ int (*resume)(struct mipi_dsim_device *dsim);
+};
+
+/**
+ * register mipi_dsim_lcd_driver object defined by lcd panel driver
+ * to mipi-dsi driver.
+ */
+extern int s5p_dsim_part_reset(struct mipi_dsim_device *dsim);
+extern int s5p_dsim_init_d_phy(struct mipi_dsim_device *dsim,
+ unsigned int enable);
+
+#endif /* _DSIM_H */
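As a rough example of how a board might wire this header up, the sketch below fills s5p_platform_mipi_dsim with the PHY helpers declared above; the clock name, delay value, and the two config tables are assumptions for illustration only:

	static struct s5p_platform_mipi_dsim smdk_mipi_dsim_pd = {
		.clk_name		 = "dsim0",		/* assumed clock name */
		.dsim_config		 = &smdk_dsim_config,	/* panel tables defined */
		.dsim_lcd_config	 = &smdk_dsim_lcd_config, /* elsewhere in board code */
		.delay_for_stabilization = 600,			/* illustrative value */
		.part_reset		 = s5p_dsim_part_reset,
		.init_d_phy		 = s5p_dsim_init_d_phy,
	};
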
*/
#define S3C_FB_MAX_WIN (5)
+/* IOCTL commands */
+#define S3CFB_WIN_POSITION _IOW('F', 203, \
+ struct s3c_fb_user_window)
+#define S3CFB_WIN_SET_PLANE_ALPHA _IOW('F', 204, \
+ struct s3c_fb_user_plane_alpha)
+#define S3CFB_WIN_SET_CHROMA _IOW('F', 205, \
+ struct s3c_fb_user_chroma)
+#define S3CFB_SET_VSYNC_INT _IOW('F', 206, u32)
+
+#define S3CFB_GET_ION_USER_HANDLE _IOWR('F', 208, \
+ struct s3c_fb_user_ion_client)
+
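Userspace reaches these through the standard framebuffer node. A minimal sketch, assuming /dev/fb0 and that the ioctl numbers above are exported through the same header:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>

	int main(void)
	{
		unsigned int enable = 1;	/* S3CFB_SET_VSYNC_INT takes a u32 flag */
		int fd = open("/dev/fb0", O_RDWR);

		if (fd < 0)
			return 1;
		if (ioctl(fd, S3CFB_SET_VSYNC_INT, &enable) < 0)
			perror("S3CFB_SET_VSYNC_INT");
		close(fd);
		return 0;
	}
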
/**
* struct s3c_fb_pd_win - per window setup data
* @win_mode: The display parameters to initialise (not for window 0)
unsigned short max_bpp;
unsigned short virtual_x;
unsigned short virtual_y;
+ unsigned short width;
+ unsigned short height;
};
/**
* the data from the display system to the connected display
* device.
* @default_win: default window layer number to be used for UI layer.
+ * @clock_rate: FIMD source clock rate to set up.
* @vidcon0: The base vidcon0 values to control the panel data format.
* @vidcon1: The base vidcon1 values to control the panel data output.
* @win: The setup data for each hardware window, or NULL for unused.
struct s3c_fb_pd_win *win[S3C_FB_MAX_WIN];
u32 default_win;
-
+ u32 clock_rate;
u32 vidcon0;
u32 vidcon1;
};
*/
extern void s5p64x0_fb_gpio_setup_24bpp(void);
+/**
+ * exynos4_fimd0_setup_clock() - Exynos4 setup function for parent clock.
+ * @dev: device pointer
+ * @parent: parent clock used for LCD pixel clock
+ * @clk_rate: clock rate for parent clock
+ */
+int __init exynos4_fimd0_setup_clock(struct device *dev, const char *parent,
+ unsigned long clk_rate);
+
+int __init exynos4_fimd_setup_clock(struct device *dev, const char *bus_clk,
+ const char *parent, unsigned long clk_rate);
#endif /* __PLAT_S3C_FB_H */
#define S3C_VA_USB_HSPHY S3C64XX_VA_USB_HSPHY
+#define S5P_VA_DRD_PHY S3C_ADDR_CPU(0x00300000)
/*
* ISA style IO, for each machine to sort out mappings for,
* if it implements it. We reserve two 16M regions for ISA.
#define S5P_VA_GIC_CPU S3C_ADDR(0x02810000)
#define S5P_VA_GIC_DIST S3C_ADDR(0x02820000)
+#define S5P_VA_AUDSS S3C_ADDR(0x02910000)
+
#define VA_VIC(x) (S3C_VA_IRQ + ((x) * 0x10000))
#define VA_VIC0 VA_VIC(0)
#define VA_VIC1 VA_VIC(1)
--- /dev/null
+/* linux/arch/arm/mach-s5pc110/include/plat/mipi_dsi.h
+ *
+ *
+ * Copyright (c) 2011 Samsung Electronics
+ * InKi Dae <inki.dae@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#ifndef _MIPI_DSI_H
+#define _MIPI_DSI_H
+
+#if defined(CONFIG_LCD_MIPI_TC358764)
+extern struct mipi_dsim_lcd_driver tc358764_mipi_lcd_driver;
+#endif
+
+extern int s5p_mipi_dsi_wr_data(struct mipi_dsim_device *dsim,
+ unsigned int data_id, unsigned int data0, unsigned int data1);
+
+enum mipi_ddi_interface {
+ RGB_IF = 0x4000,
+ I80_IF = 0x8000,
+ YUV_601 = 0x10000,
+ YUV_656 = 0x20000,
+ MIPI_VIDEO = 0x1000,
+ MIPI_COMMAND = 0x2000,
+};
+
+enum mipi_ddi_panel_select {
+ DDI_MAIN_LCD = 0,
+ DDI_SUB_LCD = 1,
+};
+
+enum mipi_ddi_model {
+ S6DR117 = 0,
+};
+
+enum mipi_ddi_parameter {
+ /* DSIM video interface parameter */
+ DSI_VIRTUAL_CH_ID = 0,
+ DSI_FORMAT = 1,
+ DSI_VIDEO_MODE_SEL = 2,
+};
+
+#endif /* _MIPI_DSI_H */
extern void s3c_pm_do_restore_core(struct sleep_save *ptr, int count);
#ifdef CONFIG_PM
+extern int s3c_irq_wake(struct irq_data *data, unsigned int state);
extern int s3c_irqext_wake(struct irq_data *data, unsigned int state);
extern int s3c24xx_irq_suspend(void);
extern void s3c24xx_irq_resume(void);
#else
+#define s3c_irq_wake NULL
#define s3c_irqext_wake NULL
#define s3c24xx_irq_suspend NULL
#define s3c24xx_irq_resume NULL
#define VIDCON1_FSTATUS_EVEN (1 << 15)
/* Video timing controls */
+#ifdef CONFIG_FB_EXYNOS_FIMD_V8
+#define VIDTCON0 (0x20010)
+#define VIDTCON1 (0x20014)
+#define VIDTCON3 (0x2001C)
+#else
#define VIDTCON0 (0x10)
#define VIDTCON1 (0x14)
-#define VIDTCON2 (0x18)
+#define VIDTCON3 (0x1C)
+#endif
/* Window position controls */
#define VIDOSD_BASE (0x40)
#define VIDINTCON0 (0x130)
+#define VIDINTCON1 (0x134)
/* WINCONx */
+#define WINCONx_CSC_CON_EQ709 (1 << 28)
+#define WINCONx_CSC_CON_EQ601 (0 << 28)
#define WINCONx_CSCWIDTH_MASK (0x3 << 26)
#define WINCONx_CSCWIDTH_SHIFT (26)
#define WINCONx_CSCWIDTH_WIDE (0x0 << 26)
#define VIDCON0 (0x00)
#define VIDCON0_INTERLACE (1 << 29)
+
+#ifdef CONFIG_FB_EXYNOS_FIMD_V8
+#define VIDOUT_CON (0x20000)
+#define VIDOUT_CON_VIDOUT_UP_MASK (0x1 << 16)
+#define VIDOUT_CON_VIDOUT_UP_SHIFT (16)
+#define VIDOUT_CON_VIDOUT_UP_ALWAYS (0x0 << 16)
+#define VIDOUT_CON_VIDOUT_UP_START_FRAME (0x1 << 16)
+#define VIDOUT_CON_VIDOUT_F_MASK (0x7 << 8)
+#define VIDOUT_CON_VIDOUT_F_SHIFT (8)
+#define VIDOUT_CON_VIDOUT_F_RGB (0x0 << 8)
+#define VIDOUT_CON_VIDOUT_F_I80_LDI0 (0x2 << 8)
+#define VIDOUT_CON_VIDOUT_F_I80_LDI1 (0x3 << 8)
+#define VIDOUT_CON_VIDOUT_F_WB (0x4 << 8)
+#endif
+
#define VIDCON0_VIDOUT_MASK (0x3 << 26)
#define VIDCON0_VIDOUT_SHIFT (26)
#define VIDCON0_VIDOUT_RGB (0x0 << 26)
#define VIDCON0_VIDOUT_TV (0x1 << 26)
#define VIDCON0_VIDOUT_I80_LDI0 (0x2 << 26)
#define VIDCON0_VIDOUT_I80_LDI1 (0x3 << 26)
+#define VIDCON0_VIDOUT_WB (0x4 << 26)
#define VIDCON0_L1_DATA_MASK (0x7 << 23)
#define VIDCON0_L1_DATA_SHIFT (23)
#define VIDCON0_ENVID (1 << 1)
#define VIDCON0_ENVID_F (1 << 0)
+#ifdef CONFIG_FB_EXYNOS_FIMD_V8
+#define VIDOUT_CON (0x20000)
+#define VIDCON1 (0x20004)
+#else
#define VIDCON1 (0x04)
+#endif
+
#define VIDCON1_LINECNT_MASK (0x7ff << 16)
#define VIDCON1_LINECNT_SHIFT (16)
#define VIDCON1_LINECNT_GET(_v) (((_v) >> 16) & 0x7ff)
#define VIDCON2_TVFMTSEL1_RGB (0x0 << 12)
#define VIDCON2_TVFMTSEL1_YUV422 (0x1 << 12)
#define VIDCON2_TVFMTSEL1_YUV444 (0x2 << 12)
+#define VIDCON2_TVFMTSEL1_SHIFT (12)
+#define VIDCON2_TVFMTSEL_SW (1 << 14)
+#define VIDCON2_TVFORMATSEL_YUV444 (0x2 << 12)
+
+#define VIDCON2_TVFMTSEL1_MASK (0x3 << 12)
#define VIDCON2_ORGYCbCr (1 << 8)
#define VIDCON2_YUVORDCrCb (1 << 7)
#define VIDTCON1_HSPW_SHIFT (0)
#define VIDTCON1_HSPW_LIMIT (0xff)
#define VIDTCON1_HSPW(_x) ((_x) << 0)
+#define VIDCON1_VCLK_MASK (0x3 << 9)
+#define VIDCON1_VCLK_HOLD (0x0 << 9)
+#define VIDCON1_VCLK_RUN (0x1 << 9)
+#ifdef CONFIG_FB_EXYNOS_FIMD_V8
+#define VIDTCON2 (0x20018)
+#else
#define VIDTCON2 (0x18)
+#endif
#define VIDTCON2_LINEVAL_E(_x) ((((_x) & 0x800) >> 11) << 23)
#define VIDTCON2_LINEVAL_MASK (0x7ff << 11)
#define VIDTCON2_LINEVAL_SHIFT (11)
#define WINCONx_BYTSWP (1 << 17)
#define WINCONx_HAWSWP (1 << 16)
#define WINCONx_WSWP (1 << 15)
+#define WINCONx_ENLOCAL_MASK (0xf << 15)
+#define WINCONx_INRGB_RGB (0 << 13)
+#define WINCONx_INRGB_YCBCR (1 << 13)
#define WINCONx_BURSTLEN_MASK (0x3 << 9)
#define WINCONx_BURSTLEN_SHIFT (9)
#define WINCONx_BURSTLEN_16WORD (0x0 << 9)
#define WINCON0_BPPMODE_24BPP_888 (0xb << 2)
#define WINCON1_BLD_PIX (1 << 6)
+#define WINCON1_BLD_PLANE (0 << 6)
#define WINCON1_ALPHA_SEL (1 << 1)
#define WINCON1_BPPMODE_MASK (0xf << 2)
#define WPALCON_W0PAL_16BPP_A555 (0x5 << 0)
#define WPALCON_W0PAL_16BPP_565 (0x6 << 0)
+/* Clock gate mode control */
+#define REG_CLKGATE_MODE (0x1b0)
+#define REG_CLKGATE_MODE_AUTO_CLOCK_GATE (0 << 0)
+#define REG_CLKGATE_MODE_NON_CLOCK_GATE (1 << 0)
+
/* Blending equation control */
#define BLENDCON (0x260)
#define BLENDCON_NEW_MASK (1 << 0)
#define BLENDCON_NEW_8BIT_ALPHA_VALUE (1 << 0)
#define BLENDCON_NEW_4BIT_ALPHA_VALUE (0 << 0)
+/* Window alpha control */
+#define VIDW0ALPHA0 (0x200)
+#define VIDW0ALPHA1 (0x204)
+#define DPCLKCON (0x27c)
+#define DPCLKCON_ENABLE (1 << 1)
--- /dev/null
+/* linux/arch/arm/plat-s5p/include/plat/regs-mipidsim.h
+ *
+ * Register definition file for Samsung MIPI-DSIM driver
+ *
+ * Copyright (c) 2011 Samsung Electronics
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#ifndef _REGS_MIPIDSIM_H
+#define _REGS_MIPIDSIM_H
+
+#define S5P_DSIM_STATUS (0x0) /* Status register */
+#define S5P_DSIM_SWRST (0x4) /* Software reset register */
+#define S5P_DSIM_CLKCTRL (0x8) /* Clock control register */
+#define S5P_DSIM_TIMEOUT (0xc) /* Time out register */
+#define S5P_DSIM_CONFIG (0x10) /* Configuration register */
+#define S5P_DSIM_ESCMODE (0x14) /* Escape mode register */
+
+/* Main display image resolution register */
+#define S5P_DSIM_MDRESOL (0x18)
+#define S5P_DSIM_MVPORCH (0x1c) /* Main display Vporch register */
+#define S5P_DSIM_MHPORCH (0x20) /* Main display Hporch register */
+#define S5P_DSIM_MSYNC (0x24) /* Main display sync area register */
+
+/* Sub display image resolution register */
+#define S5P_DSIM_SDRESOL (0x28)
+#define S5P_DSIM_INTSRC (0x2c) /* Interrupt source register */
+#define S5P_DSIM_INTMSK (0x30) /* Interrupt mask register */
+#define S5P_DSIM_PKTHDR (0x34) /* Packet Header FIFO register */
+#define S5P_DSIM_PAYLOAD (0x38) /* Payload FIFO register */
+#define S5P_DSIM_RXFIFO (0x3c) /* Read FIFO register */
+#define S5P_DSIM_FIFOTHLD (0x40) /* FIFO threshold level register */
+#define S5P_DSIM_FIFOCTRL (0x44) /* FIFO status and control register */
+
+/* FIFO memory AC characteristic register */
+#define S5P_DSIM_PLLCTRL (0x4c) /* PLL control register */
+#define S5P_DSIM_PLLTMR (0x50) /* PLL timer register */
+#define S5P_DSIM_PHYACCHR (0x54) /* D-PHY AC characteristic register */
+#define S5P_DSIM_PHYACCHR1 (0x58) /* D-PHY AC characteristic register1 */
+
+/* DSIM_STATUS */
+#define DSIM_STOP_STATE_DAT(x) (((x) & 0xf) << 0)
+#define DSIM_STOP_STATE_CLK (1 << 8)
+#define DSIM_TX_READY_HS_CLK (1 << 10)
+
+/* DSIM_SWRST */
+#define DSIM_FUNCRST (1 << 16)
+#define DSIM_SWRST (1 << 0)
+
+/* S5P_DSIM_TIMEOUT */
+#define DSIM_LPDR_TOUT_SHIFT (0)
+#define DSIM_BTA_TOUT_SHIFT (16)
+
+/* S5P_DSIM_CLKCTRL */
+#define DSIM_LANE_ESC_CLKEN_SHIFT (19)
+#define DSIM_BYTE_CLKEN_SHIFT (24)
+#define DSIM_BYTE_CLK_SRC_SHIFT (25)
+#define DSIM_PLL_BYPASS_SHIFT (27)
+#define DSIM_ESC_CLKEN_SHIFT (28)
+#define DSIM_TX_REQUEST_HSCLK_SHIFT (31)
+#define DSIM_LANE_ESC_CLKEN(x) (((x) & 0x1f) << \
+ DSIM_LANE_ESC_CLKEN_SHIFT)
+#define DSIM_BYTE_CLK_ENABLE (1 << DSIM_BYTE_CLKEN_SHIFT)
+#define DSIM_BYTE_CLK_DISABLE (0 << DSIM_BYTE_CLKEN_SHIFT)
+#define DSIM_PLL_BYPASS_EXTERNAL (1 << DSIM_PLL_BYPASS_SHIFT)
+#define DSIM_ESC_CLKEN_ENABLE (1 << DSIM_ESC_CLKEN_SHIFT)
+#define DSIM_ESC_CLKEN_DISABLE (0 << DSIM_ESC_CLKEN_SHIFT)
+
+/* S5P_DSIM_CONFIG */
+#define DSIM_NUM_OF_DATALANE_SHIFT (5)
+#define DSIM_HSA_MODE_SHIFT (20)
+#define DSIM_HBP_MODE_SHIFT (21)
+#define DSIM_HFP_MODE_SHIFT (22)
+#define DSIM_HSE_MODE_SHIFT (23)
+#define DSIM_AUTO_MODE_SHIFT (24)
+#define DSIM_LANE_ENx(x) (((x) & 0x1f) << 0)
+
+#define DSIM_NUM_OF_DATA_LANE(x) ((x) << DSIM_NUM_OF_DATALANE_SHIFT)
+
+/* S5P_DSIM_ESCMODE */
+#define DSIM_TX_LPDT_SHIFT (6)
+#define DSIM_CMD_LPDT_SHIFT (7)
+#define DSIM_TX_LPDT_LP (1 << DSIM_TX_LPDT_SHIFT)
+#define DSIM_CMD_LPDT_LP (1 << DSIM_CMD_LPDT_SHIFT)
+#define DSIM_STOP_STATE_CNT_SHIFT (21)
+#define DSIM_FORCE_STOP_STATE_SHIFT (20)
+
+/* S5P_DSIM_MDRESOL */
+#define DSIM_MAIN_STAND_BY (1 << 31)
+#define DSIM_MAIN_VRESOL(x) (((x) & 0x7ff) << 16)
+#define DSIM_MAIN_HRESOL(x) (((x) & 0x7ff) << 0)
+
+/* S5P_DSIM_MVPORCH */
+#define DSIM_CMD_ALLOW_SHIFT (28)
+#define DSIM_STABLE_VFP_SHIFT (16)
+#define DSIM_MAIN_VBP_SHIFT (0)
+#define DSIM_CMD_ALLOW_MASK (0xf << DSIM_CMD_ALLOW_SHIFT)
+#define DSIM_STABLE_VFP_MASK (0x7ff << DSIM_STABLE_VFP_SHIFT)
+#define DSIM_MAIN_VBP_MASK (0x7ff << DSIM_MAIN_VBP_SHIFT)
+
+/* S5P_DSIM_MHPORCH */
+#define DSIM_MAIN_HFP_SHIFT (16)
+#define DSIM_MAIN_HBP_SHIFT (0)
+#define DSIM_MAIN_HFP_MASK ((0xffff) << DSIM_MAIN_HFP_SHIFT)
+#define DSIM_MAIN_HBP_MASK ((0xffff) << DSIM_MAIN_HBP_SHIFT)
+
+/* S5P_DSIM_MSYNC */
+#define DSIM_MAIN_VSA_SHIFT (22)
+#define DSIM_MAIN_HSA_SHIFT (0)
+#define DSIM_MAIN_VSA_MASK ((0x3ff) << DSIM_MAIN_VSA_SHIFT)
+#define DSIM_MAIN_HSA_MASK ((0xffff) << DSIM_MAIN_HSA_SHIFT)
+
+/* S5P_DSIM_SDRESOL */
+#define DSIM_SUB_STANDY_SHIFT (31)
+#define DSIM_SUB_VRESOL_SHIFT (16)
+#define DSIM_SUB_HRESOL_SHIFT (0)
+#define DSIM_SUB_STANDY_MASK ((0x1) << DSIM_SUB_STANDY_SHIFT)
+#define DSIM_SUB_VRESOL_MASK ((0x7ff) << DSIM_SUB_VRESOL_SHIFT)
+#define DSIM_SUB_HRESOL_MASK ((0x7ff) << DSIM_SUB_HRESOL_SHIFT)
+
+/* S5P_DSIM_INTSRC */
+#define INTSRC_FRAME_DONE (1 << 24)
+#define INTSRC_PLL_STABLE (1 << 31)
+#define INTSRC_SFR_FIFO_EMPTY (1 << 29)
+
+/* S5P_DSIM_INTMSK */
+#define INTMSK_FRAME_DONE (1 << 24)
+
+/* S5P_DSIM_FIFOCTRL */
+#define SFR_HEADER_EMPTY (1 << 22)
+
+/* S5P_DSIM_PHYACCHR */
+#define DSIM_AFC_CTL(x) (((x) & 0x7) << 5)
+
+/* S5P_DSIM_PLLCTRL */
+#define DSIM_PLL_EN_SHIFT (23)
+#define DSIM_FREQ_BAND_SHIFT (24)
+
+#endif /* _REGS_MIPIDSIM_H */
--- /dev/null
+/* arch/arm/plat-samsung/include/plat/regs-usb3-exynos-drd-phy.h
+ *
+ * Copyright (c) 2011 Samsung Electronics Co. Ltd
+ * Author: Anton Tikhomirov <av.tikhomirov@samsung.com>
+ *
+ * Exynos SuperSpeed USB 3.0 DRD Controller PHY registers
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __PLAT_SAMSUNG_REGS_USB3_EXYNOS_DRD_PHY_H
+#define __PLAT_SAMSUNG_REGS_USB3_EXYNOS_DRD_PHY_H __FILE__
+
+#define EXYNOS_USB3_PHYREG(x) ((x) + S5P_VA_DRD_PHY)
+
+#define EXYNOS_USB3_LINKSYSTEM EXYNOS_USB3_PHYREG(0x04)
+#define EXYNOS_USB3_PHYUTMI EXYNOS_USB3_PHYREG(0x08)
+
+#define EXYNOS_USB3_PHYUTMI_OTGDISABLE (1 << 6)
+#define EXYNOS_USB3_PHYUTMI_FORCESUSPEND (1 << 1)
+#define EXYNOS_USB3_PHYUTMI_FORCESLEEP (1 << 0)
+
+#define EXYNOS_USB3_PHYPIPE EXYNOS_USB3_PHYREG(0x0C)
+#define EXYNOS_USB3_PHYCLKRST EXYNOS_USB3_PHYREG(0x10)
+
+#define EXYNOS_USB3_PHYCLKRST_SSC_REF_CLK_SEL_MASK (0xff << 23)
+#define EXYNOS_USB3_PHYCLKRST_SSC_REF_CLK_SEL_SHIFT (23)
+#define EXYNOS_USB3_PHYCLKRST_SSC_REF_CLK_SEL_LIMIT (0xff)
+#define EXYNOS_USB3_PHYCLKRST_SSC_REF_CLK_SEL(_x) ((_x) << 23)
+
+#define EXYNOS_USB3_PHYCLKRST_SSC_RANGE_MASK (0x03 << 21)
+#define EXYNOS_USB3_PHYCLKRST_SSC_RANGE_SHIFT (21)
+#define EXYNOS_USB3_PHYCLKRST_SSC_RANGE_LIMIT (0x03)
+#define EXYNOS_USB3_PHYCLKRST_SSC_RANGE(_x) ((_x) << 21)
+
+#define EXYNOS_USB3_PHYCLKRST_SSC_EN (1 << 20)
+#define EXYNOS_USB3_PHYCLKRST_REF_SSP_EN (1 << 19)
+#define EXYNOS_USB3_PHYCLKRST_REF_CLKDIV2 (1 << 18)
+
+#define EXYNOS_USB3_PHYCLKRST_MPLL_MULTIPLIER_MASK (0x7f << 11)
+#define EXYNOS_USB3_PHYCLKRST_MPLL_MULTIPLIER_SHIFT (11)
+#define EXYNOS_USB3_PHYCLKRST_MPLL_MULTIPLIER_LIMIT (0x7f)
+#define EXYNOS_USB3_PHYCLKRST_MPLL_MULTIPLIER(_x) ((_x) << 11)
+
+#define EXYNOS_USB3_PHYCLKRST_FSEL_MASK (0x3f << 5)
+#define EXYNOS_USB3_PHYCLKRST_FSEL_SHIFT (5)
+#define EXYNOS_USB3_PHYCLKRST_FSEL_LIMIT (0x3f)
+#define EXYNOS_USB3_PHYCLKRST_FSEL(_x) ((_x) << 5)
+
+#define EXYNOS_USB3_PHYCLKRST_RETENABLEN (1 << 4)
+
+#define EXYNOS_USB3_PHYCLKRST_REFCLKSEL_MASK (0x03 << 2)
+#define EXYNOS_USB3_PHYCLKRST_REFCLKSEL_SHIFT (2)
+#define EXYNOS_USB3_PHYCLKRST_REFCLKSEL_LIMIT (0x03)
+#define EXYNOS_USB3_PHYCLKRST_REFCLKSEL(_x) ((_x) << 2)
+
+#define EXYNOS_USB3_PHYCLKRST_PORTRESET (1 << 1)
+#define EXYNOS_USB3_PHYCLKRST_COMMONONN (1 << 0)
+
+#define EXYNOS_USB3_PHYREG0 EXYNOS_USB3_PHYREG(0x14)
+#define EXYNOS_USB3_PHYREG1 EXYNOS_USB3_PHYREG(0x18)
+#define EXYNOS_USB3_PHYPARAM0 EXYNOS_USB3_PHYREG(0x1C)
+#define EXYNOS_USB3_PHYPARAM1 EXYNOS_USB3_PHYREG(0x20)
+#define EXYNOS_USB3_PHYTERM EXYNOS_USB3_PHYREG(0x24)
+#define EXYNOS_USB3_PHYTEST EXYNOS_USB3_PHYREG(0x28)
+#define EXYNOS_USB3_PHYADP EXYNOS_USB3_PHYREG(0x2C)
+#define EXYNOS_USB3_PHYBATCHG EXYNOS_USB3_PHYREG(0x30)
+#define EXYNOS_USB3_PHYRESUME EXYNOS_USB3_PHYREG(0x34)
+#define EXYNOS_USB3_LINKPORT EXYNOS_USB3_PHYREG(0x44)
+#endif /* __PLAT_SAMSUNG_REGS_USB3_EXYNOS_DRD_PHY_H */
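A hedged sketch of how PHY setup code might use these accessors; the field values below are placeholders, not validated settings for any particular reference clock:

	u32 reg = readl(EXYNOS_USB3_PHYCLKRST);

	reg &= ~(EXYNOS_USB3_PHYCLKRST_REFCLKSEL_MASK |
		 EXYNOS_USB3_PHYCLKRST_FSEL_MASK);
	reg |= EXYNOS_USB3_PHYCLKRST_REFCLKSEL(2) |	/* placeholder value */
	       EXYNOS_USB3_PHYCLKRST_FSEL(0x5) |	/* placeholder value */
	       EXYNOS_USB3_PHYCLKRST_REF_SSP_EN;
	writel(reg, EXYNOS_USB3_PHYCLKRST);
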
* @fb_delay: Slave specific feedback delay.
* Refer to FB_CLK_SEL register definition in SPI chapter.
* @line: Custom 'identity' of the CS line.
- * @set_level: CS line control.
*
* This is per SPI-Slave Chipselect information.
* Allocate and initialize one in machine init code and make the
struct s3c64xx_spi_csinfo {
u8 fb_delay;
unsigned line;
- void (*set_level)(unsigned line_id, int lvl);
};
/**
* struct s3c64xx_spi_info - SPI Controller defining structure
* @src_clk_nr: Clock source index for the CLK_CFG[SPI_CLKSEL] field.
- * @clk_from_cmu: If the SPI clock/prescalar control block is present
- * by the platform's clock-management-unit and not in SPI controller.
* @num_cs: Number of CS this controller emulates.
* @cfg_gpio: Configure pins for this SPI controller.
- * @fifo_lvl_mask: All tx fifo_lvl fields start at offset-6
- * @rx_lvl_offset: Depends on tx fifo_lvl field and bus number
- * @high_speed: If the controller supports HIGH_SPEED_EN bit
- * @tx_st_done: Depends on tx fifo_lvl field
*/
struct s3c64xx_spi_info {
int src_clk_nr;
- bool clk_from_cmu;
-
int num_cs;
-
- int (*cfg_gpio)(struct platform_device *pdev);
-
- /* Following two fields are for future compatibility */
- int fifo_lvl_mask;
- int rx_lvl_offset;
- int high_speed;
- int tx_st_done;
+ int (*cfg_gpio)(void);
};
/**
* s3c64xx_spi_set_platdata - SPI Controller configure callback by the board
* initialization code.
- * @pd: SPI platform data to set.
+ * @cfg_gpio: Pointer to gpio setup function.
* @src_clk_nr: Clock the SPI controller is to use to generate SPI clocks.
* @num_cs: Number of elements in the 'cs' array.
*
* Call this from machine init code for each SPI Controller that
* has some chips attached to it.
*/
-extern void s3c64xx_spi0_set_platdata(struct s3c64xx_spi_info *pd,
- int src_clk_nr, int num_cs);
-extern void s3c64xx_spi1_set_platdata(struct s3c64xx_spi_info *pd,
- int src_clk_nr, int num_cs);
-extern void s3c64xx_spi2_set_platdata(struct s3c64xx_spi_info *pd,
- int src_clk_nr, int num_cs);
+extern void s3c64xx_spi0_set_platdata(int (*cfg_gpio)(void), int src_clk_nr,
+ int num_cs);
+extern void s3c64xx_spi1_set_platdata(int (*cfg_gpio)(void), int src_clk_nr,
+ int num_cs);
+extern void s3c64xx_spi2_set_platdata(int (*cfg_gpio)(void), int src_clk_nr,
+ int num_cs);
/* defined by architecture to configure gpio */
-extern int s3c64xx_spi0_cfg_gpio(struct platform_device *dev);
-extern int s3c64xx_spi1_cfg_gpio(struct platform_device *dev);
-extern int s3c64xx_spi2_cfg_gpio(struct platform_device *dev);
+extern int s3c64xx_spi0_cfg_gpio(void);
+extern int s3c64xx_spi1_cfg_gpio(void);
+extern int s3c64xx_spi2_cfg_gpio(void);
extern struct s3c64xx_spi_info s3c64xx_spi0_pdata;
extern struct s3c64xx_spi_info s3c64xx_spi1_pdata;
+++ /dev/null
-/* linux/arch/arm/plat-samsung/include/plat/sysmmu.h
- *
- * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
- * http://www.samsung.com
- *
- * Samsung System MMU driver for S5P platform
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
-*/
-
-#ifndef __PLAT_SAMSUNG_SYSMMU_H
-#define __PLAT_SAMSUNG_SYSMMU_H __FILE__
-
-enum S5P_SYSMMU_INTERRUPT_TYPE {
- SYSMMU_PAGEFAULT,
- SYSMMU_AR_MULTIHIT,
- SYSMMU_AW_MULTIHIT,
- SYSMMU_BUSERROR,
- SYSMMU_AR_SECURITY,
- SYSMMU_AR_ACCESS,
- SYSMMU_AW_SECURITY,
- SYSMMU_AW_PROTECTION, /* 7 */
- SYSMMU_FAULTS_NUM
-};
-
-#ifdef CONFIG_S5P_SYSTEM_MMU
-
-#include <mach/sysmmu.h>
-
-/**
- * s5p_sysmmu_enable() - enable system mmu of ip
- * @ips: The ip connected system mmu.
- * #pgd: Base physical address of the 1st level page table
- *
- * This function enable system mmu to transfer address
- * from virtual address to physical address
- */
-void s5p_sysmmu_enable(sysmmu_ips ips, unsigned long pgd);
-
-/**
- * s5p_sysmmu_disable() - disable sysmmu mmu of ip
- * @ips: The ip connected system mmu.
- *
- * This function disable system mmu to transfer address
- * from virtual address to physical address
- */
-void s5p_sysmmu_disable(sysmmu_ips ips);
-
-/**
- * s5p_sysmmu_set_tablebase_pgd() - set page table base address to refer page table
- * @ips: The ip connected system mmu.
- * @pgd: The page table base address.
- *
- * This function set page table base address
- * When system mmu transfer address from virtaul address to physical address,
- * system mmu refer address information from page table
- */
-void s5p_sysmmu_set_tablebase_pgd(sysmmu_ips ips, unsigned long pgd);
-
-/**
- * s5p_sysmmu_tlb_invalidate() - flush all TLB entry in system mmu
- * @ips: The ip connected system mmu.
- *
- * This function flush all TLB entry in system mmu
- */
-void s5p_sysmmu_tlb_invalidate(sysmmu_ips ips);
-
-/** s5p_sysmmu_set_fault_handler() - Fault handler for System MMUs
- * @itype: type of fault.
- * @pgtable_base: the physical address of page table base. This is 0 if @ips is
- * SYSMMU_BUSERROR.
- * @fault_addr: the device (virtual) address that the System MMU tried to
- * translated. This is 0 if @ips is SYSMMU_BUSERROR.
- * Called when interrupt occurred by the System MMUs
- * The device drivers of peripheral devices that has a System MMU can implement
- * a fault handler to resolve address translation fault by System MMU.
- * The meanings of return value and parameters are described below.
-
- * return value: non-zero if the fault is correctly resolved.
- * zero if the fault is not handled.
- */
-void s5p_sysmmu_set_fault_handler(sysmmu_ips ips,
- int (*handler)(enum S5P_SYSMMU_INTERRUPT_TYPE itype,
- unsigned long pgtable_base,
- unsigned long fault_addr));
-#else
-#define s5p_sysmmu_enable(ips, pgd) do { } while (0)
-#define s5p_sysmmu_disable(ips) do { } while (0)
-#define s5p_sysmmu_set_tablebase_pgd(ips, pgd) do { } while (0)
-#define s5p_sysmmu_tlb_invalidate(ips) do { } while (0)
-#define s5p_sysmmu_set_fault_handler(ips, handler) do { } while (0)
-#endif
-#endif /* __ASM_PLAT_SYSMMU_H */
#endif
}
+extern void s5p_v4l2_int_src_hdmi_hpd(void);
+extern void s5p_int_src_hdmi_hpd(struct platform_device *pdev);
+extern int s5p_v4l2_hpd_read_gpio(void);
+extern void s5p_v4l2_int_src_ext_hpd(void);
+extern void s5p_cec_cfg_gpio(struct platform_device *pdev);
+extern void s5p_v4l2_int_src_hdmi_hpd(void);
#endif /* __SAMSUNG_PLAT_TV_H */
enum s5p_usb_phy_type {
S5P_USB_PHY_DEVICE,
S5P_USB_PHY_HOST,
+ S5P_USB_PHY_DRD,
};
extern int s5p_usb_phy_init(struct platform_device *pdev, int type);
return tin_parent_rate / 16;
}
-#define NS_IN_HZ (1000000000UL)
+#define NS_IN_SEC (1000000000UL)
int pwm_config(struct pwm_device *pwm, int duty_ns, int period_ns)
{
unsigned long tin_rate;
unsigned long tin_ns;
- unsigned long period;
+ unsigned long frequency;
unsigned long flags;
unsigned long tcon;
unsigned long tcnt;
long tcmp;
+ int pwm_was_enabled;
/* We currently avoid using 64bit arithmetic by using the
* fact that anything faster than 1Hz is easily representable
* by 32bits. */
- if (period_ns > NS_IN_HZ || duty_ns > NS_IN_HZ)
+ if (period_ns > NS_IN_SEC || duty_ns > NS_IN_SEC)
return -ERANGE;
if (duty_ns > period_ns)
tcmp = __raw_readl(S3C2410_TCMPB(pwm->pwm_id));
tcnt = __raw_readl(S3C2410_TCNTB(pwm->pwm_id));
- period = NS_IN_HZ / period_ns;
+ frequency = NS_IN_SEC / period_ns;
pwm_dbg(pwm, "duty_ns=%d, period_ns=%d (%lu)\n",
- duty_ns, period_ns, period);
+ duty_ns, period_ns, frequency);
/* Check to see if we are changing the clock rate of the PWM */
if (pwm->period_ns != period_ns) {
if (pwm_is_tdiv(pwm)) {
- tin_rate = pwm_calc_tin(pwm, period);
+ tin_rate = pwm_calc_tin(pwm, frequency);
clk_set_rate(pwm->clk_div, tin_rate);
} else
tin_rate = clk_get_rate(pwm->clk);
pwm_dbg(pwm, "tin_rate=%lu\n", tin_rate);
- tin_ns = NS_IN_HZ / tin_rate;
+ tin_ns = NS_IN_SEC / tin_rate;
tcnt = period_ns / tin_ns;
} else
- tin_ns = NS_IN_HZ / clk_get_rate(pwm->clk);
+ tin_ns = NS_IN_SEC / clk_get_rate(pwm->clk);
/* Note, counters count down */
__raw_writel(tcnt, S3C2410_TCNTB(pwm->pwm_id));
tcon = __raw_readl(S3C2410_TCON);
+ pwm_was_enabled = (tcon & pwm_tcon_start(pwm)) != 0;
+
+ /* Ensure manual update is off before turning it on. */
+ tcon &= ~pwm_tcon_manulupdate(pwm);
+ tcon &= ~pwm_tcon_start(pwm);
+ __raw_writel(tcon, S3C2410_TCON);
+
tcon |= pwm_tcon_manulupdate(pwm);
tcon |= pwm_tcon_autoreload(pwm);
__raw_writel(tcon, S3C2410_TCON);
tcon &= ~pwm_tcon_manulupdate(pwm);
__raw_writel(tcon, S3C2410_TCON);
+ if (pwm_was_enabled) {
+ tcon |= pwm_tcon_start(pwm);
+ __raw_writel(tcon, S3C2410_TCON);
+ }
+
local_irq_restore(flags);
return 0;
select ARCH_WANT_OPTIONAL_GPIOLIB
select ARCH_WANT_FRAME_POINTERS
select HAVE_DMA_ATTRS
+ select HAVE_DMA_CONTIGUOUS if !SWIOTLB
select HAVE_KRETPROBES
select HAVE_OPTPROBES
select HAVE_FTRACE_MCOUNT_RECORD
--- /dev/null
+#ifndef ASMX86_DMA_CONTIGUOUS_H
+#define ASMX86_DMA_CONTIGUOUS_H
+
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <asm-generic/dma-contiguous.h>
+
+static inline void
+dma_contiguous_early_fixup(phys_addr_t base, unsigned long size) { }
+
+#endif
+#endif
#include <asm/io.h>
#include <asm/swiotlb.h>
#include <asm-generic/dma-coherent.h>
+#include <linux/dma-contiguous.h>
#ifdef CONFIG_ISA
# define ISA_DMA_BIT_MASK DMA_BIT_MASK(24)
dma_addr_t *dma_addr, gfp_t flag,
struct dma_attrs *attrs);
+extern void dma_generic_free_coherent(struct device *dev, size_t size,
+ void *vaddr, dma_addr_t dma_addr,
+ struct dma_attrs *attrs);
+
static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
{
if (!dev->dma_mask)
struct dma_attrs *attrs)
{
unsigned long dma_mask;
- struct page *page;
+ struct page *page = NULL;
+ unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
dma_addr_t addr;
dma_mask = dma_alloc_coherent_mask(dev, flag);
flag |= __GFP_ZERO;
again:
- page = alloc_pages_node(dev_to_node(dev), flag, get_order(size));
+ if (!(flag & GFP_ATOMIC))
+ page = dma_alloc_from_contiguous(dev, count, get_order(size));
+ if (!page)
+ page = alloc_pages_node(dev_to_node(dev), flag, get_order(size));
if (!page)
return NULL;
return page_address(page);
}
+void dma_generic_free_coherent(struct device *dev, size_t size, void *vaddr,
+ dma_addr_t dma_addr, struct dma_attrs *attrs)
+{
+ unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ struct page *page = virt_to_page(vaddr);
+
+ if (!dma_release_from_contiguous(dev, page, count))
+ free_pages((unsigned long)vaddr, get_order(size));
+}
+
/*
* See <Documentation/x86/x86_64/boot-options.txt> for the iommu kernel
* parameter documentation.
return nents;
}
-static void nommu_free_coherent(struct device *dev, size_t size, void *vaddr,
- dma_addr_t dma_addr, struct dma_attrs *attrs)
-{
- free_pages((unsigned long)vaddr, get_order(size));
-}
-
static void nommu_sync_single_for_device(struct device *dev,
dma_addr_t addr, size_t size,
enum dma_data_direction dir)
struct dma_map_ops nommu_dma_ops = {
.alloc = dma_generic_alloc_coherent,
- .free = nommu_free_coherent,
+ .free = dma_generic_free_coherent,
.map_sg = nommu_map_sg,
.map_page = nommu_map_page,
.sync_single_for_device = nommu_sync_single_for_device,
#include <asm/pci-direct.h>
#include <linux/init_ohci1394_dma.h>
#include <linux/kvm_para.h>
+#include <linux/dma-contiguous.h>
#include <linux/errno.h>
#include <linux/kernel.h>
}
#endif
memblock.current_limit = get_max_mapped();
+ dma_contiguous_reserve(0);
/*
* NOTE: On x86-32, only from this point on, fixmaps are ready for use.
source "drivers/devfreq/Kconfig"
+source "drivers/gpu/vithar/Kconfig"
+
endmenu
APIs extension; the file's descriptor can then be passed on to other
driver.
+config CMA
+ bool "Contiguous Memory Allocator (EXPERIMENTAL)"
+ depends on HAVE_DMA_CONTIGUOUS && HAVE_MEMBLOCK && EXPERIMENTAL
+ select MIGRATION
+ help
+ This enables the Contiguous Memory Allocator which allows drivers
+ to allocate big physically-contiguous blocks of memory for use with
+ hardware components that support neither I/O mapping nor scatter-gather.
+
+ For more information see <include/linux/dma-contiguous.h>.
+ If unsure, say "n".
+
+if CMA
+
+config CMA_DEBUG
+ bool "CMA debug messages (DEVELOPMENT)"
+ depends on DEBUG_KERNEL
+ help
+ Turns on debug messages in CMA. This produces KERN_DEBUG
+ messages for every CMA call as well as various messages while
+ processing calls such as dma_alloc_from_contiguous().
+ This option does not affect warning and error messages.
+
+comment "Default contiguous memory area size:"
+
+config CMA_SIZE_MBYTES
+ int "Size in Mega Bytes"
+ depends on !CMA_SIZE_SEL_PERCENTAGE
+ default 16
+ help
+ Defines the size (in MiB) of the default memory area for Contiguous
+ Memory Allocator.
+
+config CMA_SIZE_PERCENTAGE
+ int "Percentage of total memory"
+ depends on !CMA_SIZE_SEL_MBYTES
+ default 10
+ help
+ Defines the size of the default memory area for Contiguous Memory
+ Allocator as a percentage of the total memory in the system.
+
+choice
+ prompt "Selected region size"
+ default CMA_SIZE_SEL_ABSOLUTE
+
+config CMA_SIZE_SEL_MBYTES
+ bool "Use mega bytes value only"
+
+config CMA_SIZE_SEL_PERCENTAGE
+ bool "Use percentage value only"
+
+config CMA_SIZE_SEL_MIN
+ bool "Use lower value (minimum)"
+
+config CMA_SIZE_SEL_MAX
+ bool "Use higher value (maximum)"
+
+endchoice
+
+config CMA_ALIGNMENT
+ int "Maximum PAGE_SIZE order of alignment for contiguous buffers"
+ range 4 9
+ default 8
+ help
+ DMA mapping framework by default aligns all buffers to the smallest
+ PAGE_SIZE order which is greater than or equal to the requested buffer
+ size. This works well for buffers up to a few hundred kilobytes, but
+ for larger buffers it is just a memory waste. With this parameter you can
+ specify the maximum PAGE_SIZE order for contiguous buffers. Larger
+ buffers will be aligned only to this specified order. The order is
+ expressed as a power of two multiplied by the PAGE_SIZE.
+
+ For example, if your system defaults to 4KiB pages, the order value
+ of 8 means that the buffers will be aligned up to 1MiB only.
+
+ If unsure, leave the default value "8".
+
+config CMA_AREAS
+ int "Maximum count of the CMA device-private areas"
+ default 7
+ help
+ CMA allows creating CMA areas for particular devices. This parameter
+ sets the maximum number of such device private CMA areas in the
+ system.
+
+ If unsure, leave the default value "7".
+
+endif
+
endmenu
attribute_container.o transport_class.o \
topology.o
obj-$(CONFIG_DEVTMPFS) += devtmpfs.o
+obj-$(CONFIG_CMA) += dma-contiguous.o
obj-y += power/
obj-$(CONFIG_HAS_DMA) += dma-mapping.o
obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
return 0;
}
+static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
+{
+ struct dma_buf *dmabuf;
+
+ if (!is_dma_buf_file(file))
+ return -EINVAL;
+
+ dmabuf = file->private_data;
+
+ /* check for overflowing the buffer's size */
+ if (vma->vm_pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) >
+ dmabuf->size >> PAGE_SHIFT)
+ return -EINVAL;
+
+ return dmabuf->ops->mmap(dmabuf, vma);
+}
+
static const struct file_operations dma_buf_fops = {
.release = dma_buf_release,
+ .mmap = dma_buf_mmap_internal,
};
/*
|| !ops->unmap_dma_buf
|| !ops->release
|| !ops->kmap_atomic
- || !ops->kmap)) {
+ || !ops->kmap
+ || !ops->mmap)) {
return ERR_PTR(-EINVAL);
}
dmabuf->ops->kunmap(dmabuf, page_num, vaddr);
}
EXPORT_SYMBOL_GPL(dma_buf_kunmap);
+
+
+/**
+ * dma_buf_mmap - Setup up a userspace mmap with the given vma
+ * @dmabuf: [in] buffer that should back the vma
+ * @vma: [in] vma for the mmap
+ * @pgoff: [in] offset in pages where this mmap should start within the
+ * dma-buf buffer.
+ *
+ * This function adjusts the passed in vma so that it points at the file of the
+ * dma_buf operation. It also adjusts the starting pgoff and does bounds
+ * checking on the size of the vma. Then it calls the exporter's mmap function to
+ * set up the mapping.
+ *
+ * Can return negative error values, returns 0 on success.
+ */
+int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
+ unsigned long pgoff)
+{
+ if (WARN_ON(!dmabuf || !vma))
+ return -EINVAL;
+
+ /* check for offset overflow */
+ if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) < pgoff)
+ return -EOVERFLOW;
+
+ /* check for overflowing the buffer's size */
+ if (pgoff + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) >
+ dmabuf->size >> PAGE_SHIFT)
+ return -EINVAL;
+
+ /* readjust the vma */
+ if (vma->vm_file)
+ fput(vma->vm_file);
+
+ vma->vm_file = dmabuf->file;
+ get_file(vma->vm_file);
+
+ vma->vm_pgoff = pgoff;
+
+ return dmabuf->ops->mmap(dmabuf, vma);
+}
+EXPORT_SYMBOL_GPL(dma_buf_mmap);
+
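An importer-side sketch: a driver forwards its own mmap handler to the dma-buf backing the buffer. mydrv_buffer and its dmabuf member are hypothetical driver state:

	static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct mydrv_buffer *buf = file->private_data;

		/* map the whole buffer, starting at page offset 0 */
		return dma_buf_mmap(buf->dmabuf, vma, 0);
	}
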
+/**
+ * dma_buf_vmap - Create virtual mapping for the buffer object into kernel
+ * address space. Same restrictions as for vmap and friends apply.
+ * @dmabuf: [in] buffer to vmap
+ *
+ * This call may fail due to lack of virtual mapping address space.
+ * These calls are optional in drivers. The intended use for them
+ * is for mapping objects linear in kernel space for high use objects.
+ * Please attempt to use kmap/kunmap before thinking about these interfaces.
+ */
+void *dma_buf_vmap(struct dma_buf *dmabuf)
+{
+ if (WARN_ON(!dmabuf))
+ return NULL;
+
+ if (dmabuf->ops->vmap)
+ return dmabuf->ops->vmap(dmabuf);
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(dma_buf_vmap);
+
+/**
+ * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap.
+ * @dmabuf: [in] buffer to vunmap
+ */
+void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+ if (WARN_ON(!dmabuf))
+ return;
+
+ if (dmabuf->ops->vunmap)
+ dmabuf->ops->vunmap(dmabuf, vaddr);
+}
+EXPORT_SYMBOL_GPL(dma_buf_vunmap);
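Since vmap support is optional for exporters, callers must handle a NULL return. A minimal sketch (mydrv_cpu_access is a hypothetical caller):

	static int mydrv_cpu_access(struct dma_buf *dmabuf)
	{
		void *vaddr = dma_buf_vmap(dmabuf);

		if (!vaddr)
			return -ENOMEM;	/* exporter provides no vmap op */
		/* ... CPU access through the linear kernel mapping ... */
		dma_buf_vunmap(dmabuf, vaddr);
		return 0;
	}
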
struct dma_coherent_mem {
void *virt_base;
dma_addr_t device_base;
+ phys_addr_t pfn_base;
int size;
int flags;
unsigned long *bitmap;
dev->dma_mem->virt_base = mem_base;
dev->dma_mem->device_base = device_addr;
+ dev->dma_mem->pfn_base = PFN_DOWN(bus_addr);
dev->dma_mem->size = pages;
dev->dma_mem->flags = flags;
return 0;
}
EXPORT_SYMBOL(dma_release_from_coherent);
+
+/**
+ * dma_mmap_from_coherent() - try to mmap the memory allocated from
+ * per-device coherent memory pool to userspace
+ * @dev: device from which the memory was allocated
+ * @vma: vm_area for the userspace memory
+ * @vaddr: cpu address returned by dma_alloc_from_coherent
+ * @size: size of the memory buffer allocated by dma_alloc_from_coherent
+ *
+ * This checks whether the memory was allocated from the per-device
+ * coherent memory pool and if so, maps that memory to the provided vma.
+ *
+ * Returns 1 if we correctly mapped the memory, or 0 if the caller
+ * should proceed with mapping memory from generic pools.
+ */
+int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
+ void *vaddr, size_t size, int *ret)
+{
+ struct dma_coherent_mem *mem = dev ? dev->dma_mem : NULL;
+
+ if (mem && vaddr >= mem->virt_base && vaddr + size <=
+ (mem->virt_base + (mem->size << PAGE_SHIFT))) {
+ unsigned long off = vma->vm_pgoff;
+ int start = (vaddr - mem->virt_base) >> PAGE_SHIFT;
+ int user_count = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+ int count = size >> PAGE_SHIFT;
+
+ *ret = -ENXIO;
+ if (off < count && user_count <= count - off) {
+ unsigned pfn = mem->pfn_base + start + off;
+ *ret = remap_pfn_range(vma, vma->vm_start, pfn,
+ user_count << PAGE_SHIFT,
+ vma->vm_page_prot);
+ }
+ return 1;
+ }
+ return 0;
+}
+EXPORT_SYMBOL(dma_mmap_from_coherent);
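An architecture's dma_mmap implementation would typically try the per-device pool first and fall back to its generic path only when this helper returns 0. A sketch under that assumption (the function name and fallback are illustrative):

	static int my_arch_dma_mmap(struct device *dev, struct vm_area_struct *vma,
				    void *cpu_addr, dma_addr_t handle, size_t size)
	{
		int ret;

		if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret))
			return ret;	/* claimed by the device's coherent pool */

		/* ... otherwise remap the generic allocation, e.g. with
		 * remap_pfn_range() on its pages ... */
		return -ENXIO;
	}
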
--- /dev/null
+/*
+ * Contiguous Memory Allocator for DMA mapping framework
+ * Copyright (c) 2010-2011 by Samsung Electronics.
+ * Written by:
+ * Marek Szyprowski <m.szyprowski@samsung.com>
+ * Michal Nazarewicz <mina86@mina86.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your option) any later version of the license.
+ */
+
+#define pr_fmt(fmt) "cma: " fmt
+
+#ifdef CONFIG_CMA_DEBUG
+#ifndef DEBUG
+# define DEBUG
+#endif
+#endif
+
+#include <asm/page.h>
+#include <asm/dma-contiguous.h>
+
+#include <linux/memblock.h>
+#include <linux/err.h>
+#include <linux/mm.h>
+#include <linux/mutex.h>
+#include <linux/page-isolation.h>
+#include <linux/slab.h>
+#include <linux/swap.h>
+#include <linux/mm_types.h>
+#include <linux/dma-contiguous.h>
+
+#ifndef SZ_1M
+#define SZ_1M (1 << 20)
+#endif
+
+struct cma {
+ unsigned long base_pfn;
+ unsigned long count;
+ unsigned long *bitmap;
+};
+
+struct cma *dma_contiguous_default_area;
+
+#ifdef CONFIG_CMA_SIZE_MBYTES
+#define CMA_SIZE_MBYTES CONFIG_CMA_SIZE_MBYTES
+#else
+#define CMA_SIZE_MBYTES 0
+#endif
+
+/*
+ * Default global CMA area size can be defined in kernel's .config.
+ * This is useful mainly for distro maintainers to create a kernel
+ * that works correctly for most supported systems.
+ * The size can be set in bytes or as a percentage of the total memory
+ * in the system.
+ *
+ * Users, who want to set the size of global CMA area for their system
+ * should use cma= kernel parameter.
+ */
+static const unsigned long size_bytes = CMA_SIZE_MBYTES * SZ_1M;
+static long size_cmdline = -1;
+
+static int __init early_cma(char *p)
+{
+ pr_debug("%s(%s)\n", __func__, p);
+ size_cmdline = memparse(p, &p);
+ return 0;
+}
+early_param("cma", early_cma);
+
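For example, booting with cma=64M on the kernel command line reserves a 64 MiB global area regardless of the CMA_SIZE_* Kconfig selection, since a parsed size_cmdline takes precedence in dma_contiguous_reserve() below.
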
+#ifdef CONFIG_CMA_SIZE_PERCENTAGE
+
+static unsigned long __init __maybe_unused cma_early_percent_memory(void)
+{
+ struct memblock_region *reg;
+ unsigned long total_pages = 0;
+
+ /*
+ * We cannot use memblock_phys_mem_size() here, because
+ * memblock_analyze() has not been called yet.
+ */
+ for_each_memblock(memory, reg)
+ total_pages += memblock_region_memory_end_pfn(reg) -
+ memblock_region_memory_base_pfn(reg);
+
+ return (total_pages * CONFIG_CMA_SIZE_PERCENTAGE / 100) << PAGE_SHIFT;
+}
+
+#else
+
+static inline __maybe_unused unsigned long cma_early_percent_memory(void)
+{
+ return 0;
+}
+
+#endif
+
+/**
+ * dma_contiguous_reserve() - reserve area for contiguous memory handling
+ * @limit: End address of the reserved memory (optional, 0 for any).
+ *
+ * This function reserves memory from early allocator. It should be
+ * called by arch specific code once the early allocator (memblock or bootmem)
+ * has been activated and all other subsystems have already allocated/reserved
+ * memory.
+ */
+void __init dma_contiguous_reserve(phys_addr_t limit)
+{
+ unsigned long selected_size = 0;
+
+ pr_debug("%s(limit %08lx)\n", __func__, (unsigned long)limit);
+
+ if (size_cmdline != -1) {
+ selected_size = size_cmdline;
+ } else {
+#ifdef CONFIG_CMA_SIZE_SEL_MBYTES
+ selected_size = size_bytes;
+#elif defined(CONFIG_CMA_SIZE_SEL_PERCENTAGE)
+ selected_size = cma_early_percent_memory();
+#elif defined(CONFIG_CMA_SIZE_SEL_MIN)
+ selected_size = min(size_bytes, cma_early_percent_memory());
+#elif defined(CONFIG_CMA_SIZE_SEL_MAX)
+ selected_size = max(size_bytes, cma_early_percent_memory());
+#endif
+ }
+
+ if (selected_size) {
+ pr_debug("%s: reserving %ld MiB for global area\n", __func__,
+ selected_size / SZ_1M);
+
+ dma_declare_contiguous(NULL, selected_size, 0, limit);
+ }
+}
+
+static DEFINE_MUTEX(cma_mutex);
+
+static __init int cma_activate_area(unsigned long base_pfn, unsigned long count)
+{
+ unsigned long pfn = base_pfn;
+ unsigned i = count >> pageblock_order;
+ struct zone *zone;
+
+ WARN_ON_ONCE(!pfn_valid(pfn));
+ zone = page_zone(pfn_to_page(pfn));
+
+ do {
+ unsigned j;
+ base_pfn = pfn;
+ for (j = pageblock_nr_pages; j; --j, pfn++) {
+ WARN_ON_ONCE(!pfn_valid(pfn));
+ if (page_zone(pfn_to_page(pfn)) != zone)
+ return -EINVAL;
+ }
+ init_cma_reserved_pageblock(pfn_to_page(base_pfn));
+ } while (--i);
+ return 0;
+}
+
+static __init struct cma *cma_create_area(unsigned long base_pfn,
+ unsigned long count)
+{
+ int bitmap_size = BITS_TO_LONGS(count) * sizeof(long);
+ struct cma *cma;
+ int ret = -ENOMEM;
+
+ pr_debug("%s(base %08lx, count %lx)\n", __func__, base_pfn, count);
+
+ cma = kmalloc(sizeof *cma, GFP_KERNEL);
+ if (!cma)
+ return ERR_PTR(-ENOMEM);
+
+ cma->base_pfn = base_pfn;
+ cma->count = count;
+ cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+
+ if (!cma->bitmap)
+ goto no_mem;
+
+ ret = cma_activate_area(base_pfn, count);
+ if (ret)
+ goto error;
+
+ pr_debug("%s: returned %p\n", __func__, (void *)cma);
+ return cma;
+
+error:
+ kfree(cma->bitmap);
+no_mem:
+ kfree(cma);
+ return ERR_PTR(ret);
+}
+
+static struct cma_reserved {
+ phys_addr_t start;
+ unsigned long size;
+ struct device *dev;
+} cma_reserved[MAX_CMA_AREAS] __initdata;
+static unsigned cma_reserved_count __initdata;
+
+static int __init cma_init_reserved_areas(void)
+{
+ struct cma_reserved *r = cma_reserved;
+ unsigned i = cma_reserved_count;
+
+ pr_debug("%s()\n", __func__);
+
+ for (; i; --i, ++r) {
+ struct cma *cma;
+ cma = cma_create_area(PFN_DOWN(r->start),
+ r->size >> PAGE_SHIFT);
+ if (!IS_ERR(cma))
+ dev_set_cma_area(r->dev, cma);
+ }
+ return 0;
+}
+core_initcall(cma_init_reserved_areas);
+
+/**
+ * dma_declare_contiguous() - reserve area for contiguous memory handling
+ * for particular device
+ * @dev: Pointer to device structure.
+ * @size: Size of the reserved memory.
+ * @base: Start address of the reserved memory (optional, 0 for any).
+ * @limit: End address of the reserved memory (optional, 0 for any).
+ *
+ * This function reserves memory for specified device. It should be
+ * called by board specific code when early allocator (memblock or bootmem)
+ * is still active.
+ */
+int __init dma_declare_contiguous(struct device *dev, unsigned long size,
+ phys_addr_t base, phys_addr_t limit)
+{
+ struct cma_reserved *r = &cma_reserved[cma_reserved_count];
+ unsigned long alignment;
+
+ pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
+ (unsigned long)size, (unsigned long)base,
+ (unsigned long)limit);
+
+ /* Sanity checks */
+ if (cma_reserved_count == ARRAY_SIZE(cma_reserved)) {
+ pr_err("Not enough slots for CMA reserved regions!\n");
+ return -ENOSPC;
+ }
+
+ if (!size)
+ return -EINVAL;
+
+ /* Sanitise input arguments */
+ alignment = PAGE_SIZE << max(MAX_ORDER, pageblock_order);
+ base = ALIGN(base, alignment);
+ size = ALIGN(size, alignment);
+ limit &= ~(alignment - 1);
+
+ /* Reserve memory */
+ if (base) {
+ if (memblock_is_region_reserved(base, size) ||
+ memblock_reserve(base, size) < 0) {
+ base = -EBUSY;
+ goto err;
+ }
+ } else {
+ /*
+ * Use __memblock_alloc_base() since
+ * memblock_alloc_base() panic()s.
+ */
+ phys_addr_t addr = __memblock_alloc_base(size, alignment, limit);
+ if (!addr) {
+ base = -ENOMEM;
+ goto err;
+ } else if (addr + size > ~(unsigned long)0) {
+ memblock_free(addr, size);
+ base = -EINVAL;
+ goto err;
+ } else {
+ base = addr;
+ }
+ }
+
+ /*
+ * Each reserved area must be initialised later, when more kernel
+ * subsystems (like slab allocator) are available.
+ */
+ r->start = base;
+ r->size = size;
+ r->dev = dev;
+ cma_reserved_count++;
+ pr_info("CMA: reserved %ld MiB at %08lx\n", size / SZ_1M,
+ (unsigned long)base);
+
+ /* Architecture specific contiguous memory fixup. */
+ dma_contiguous_early_fixup(base, size);
+ return 0;
+err:
+ pr_err("CMA: failed to reserve %ld MiB\n", size / SZ_1M);
+ return base;
+}
+
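Board code would call this while memblock is still active, typically from its machine reserve callback. A sketch with an illustrative device and size (the device pointer is hypothetical):

	static void __init smdk_reserve(void)
	{
		/* 32 MiB private area for a multimedia device,
		 * no base or limit constraint */
		dma_declare_contiguous(&s5p_device_mfc.dev, 32 * SZ_1M, 0, 0);
	}
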
+/**
+ * dma_alloc_from_contiguous() - allocate pages from contiguous area
+ * @dev: Pointer to device for which the allocation is performed.
+ * @count: Requested number of pages.
+ * @align: Requested alignment of pages (in PAGE_SIZE order).
+ *
+ * This function allocates memory buffer for specified device. It uses
+ * device specific contiguous memory area if available or the default
+ * global one. Requires architecture specific dev_get_cma_area() helper
+ * function.
+ */
+struct page *dma_alloc_from_contiguous(struct device *dev, int count,
+ unsigned int align)
+{
+ unsigned long mask, pfn, pageno, start = 0;
+ struct cma *cma = dev_get_cma_area(dev);
+ int ret;
+
+ if (!cma || !cma->count)
+ return NULL;
+
+ if (align > CONFIG_CMA_ALIGNMENT)
+ align = CONFIG_CMA_ALIGNMENT;
+
+ pr_debug("%s(cma %p, count %d, align %d)\n", __func__, (void *)cma,
+ count, align);
+
+ if (!count)
+ return NULL;
+
+ mask = (1 << align) - 1;
+
+ mutex_lock(&cma_mutex);
+
+ for (;;) {
+ pageno = bitmap_find_next_zero_area(cma->bitmap, cma->count,
+ start, count, mask);
+ if (pageno >= cma->count) {
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ pfn = cma->base_pfn + pageno;
+ ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
+ if (ret == 0) {
+ bitmap_set(cma->bitmap, pageno, count);
+ break;
+ } else if (ret != -EBUSY) {
+ goto error;
+ }
+ pr_debug("%s(): memory range at %p is busy, retrying\n",
+ __func__, pfn_to_page(pfn));
+ /* try again with a bit different memory target */
+ start = pageno + mask + 1;
+ }
+
+ mutex_unlock(&cma_mutex);
+
+ pr_debug("%s(): returned %p\n", __func__, pfn_to_page(pfn));
+ return pfn_to_page(pfn);
+error:
+ mutex_unlock(&cma_mutex);
+ return NULL;
+}
+
+/**
+ * dma_release_from_contiguous() - release allocated pages
+ * @dev: Pointer to device for which the pages were allocated.
+ * @pages: Allocated pages.
+ * @count: Number of allocated pages.
+ *
+ * This function releases memory allocated by dma_alloc_from_contiguous().
+ * It returns false when provided pages do not belong to contiguous area and
+ * true otherwise.
+ */
+bool dma_release_from_contiguous(struct device *dev, struct page *pages,
+ int count)
+{
+ struct cma *cma = dev_get_cma_area(dev);
+ unsigned long pfn;
+
+ if (!cma || !pages)
+ return false;
+
+ pr_debug("%s(page %p)\n", __func__, (void *)pages);
+
+ pfn = page_to_pfn(pages);
+
+ if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
+ return false;
+
+ VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
+
+ mutex_lock(&cma_mutex);
+ bitmap_clear(cma->bitmap, pfn - cma->base_pfn, count);
+ free_contig_range(pfn, count);
+ mutex_unlock(&cma_mutex);
+
+ return true;
+}
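A minimal usage sketch for the two helpers above, assuming dev already has a
CMA area registered (via dma_declare_contiguous() or the global area); the
page count and alignment order are illustrative:

static int cma_alloc_example(struct device *dev)
{
	struct page *pages;

	/* 16 pages, aligned to an order-2 (4-page) boundary */
	pages = dma_alloc_from_contiguous(dev, 16, 2);
	if (!pages)
		return -ENOMEM;

	/* ... program the device with page_to_phys(pages) ... */

	if (!dma_release_from_contiguous(dev, pages, 16))
		pr_warn("pages did not come from the CMA area\n");

	return 0;
}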
}
EXPORT_SYMBOL(dmam_release_declared_memory);
+/*
+ * Create scatter-list for the already allocated DMA buffer.
+ */
+int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t handle, size_t size)
+{
+ struct page *page = virt_to_page(cpu_addr);
+ int ret;
+
+ ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+ if (unlikely(ret))
+ return ret;
+
+ sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
+ return 0;
+}
+EXPORT_SYMBOL(dma_common_get_sgtable);
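As a usage sketch, an architecture can forward its dma_map_ops ->get_sgtable()
method to this helper; the op name below is illustrative and the signature
assumes the struct dma_attrs based interface of this kernel generation:

static int my_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
			      void *cpu_addr, dma_addr_t dma_addr,
			      size_t size, struct dma_attrs *attrs)
{
	return dma_common_get_sgtable(dev, sgt, cpu_addr, dma_addr, size);
}

static struct dma_map_ops my_dma_ops = {
	/* ... alloc/map/unmap callbacks ... */
	.get_sgtable	= my_dma_get_sgtable,
};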
+
#endif
}
EXPORT_SYMBOL_GPL(driver_remove_file);
+/**
+ * put_driver - decrement driver's refcount.
+ * @drv: driver.
+ */
+void put_driver(struct device_driver *drv)
+{
+ kobject_put(&drv->p->kobj);
+}
+EXPORT_SYMBOL_GPL(put_driver);
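A sketch of the intended reference pairing; get_driver() is assumed here as
the ref-taking counterpart (it is not part of this hunk), and
my_platform_driver is illustrative:

static void driver_ref_example(void)
{
	struct device_driver *drv;

	/* get_driver() is assumed to take the reference we drop below */
	drv = get_driver(&my_platform_driver.driver);
	if (drv) {
		/* ... safely inspect or use the driver ... */
		put_driver(drv);
	}
}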
+
static int driver_add_groups(struct device_driver *drv,
const struct attribute_group **groups)
{
goto out;
}
- if (cpufreq_frequency_table_target(policy, freq_table,
- freqs.old, relation, &old_index)) {
+ /*
+ * The policy may have been changed so that we cannot get proper
+ * old_index with cpufreq_frequency_table_target(). Thus, ignore
+ * policy and get the index from the raw frequency table.
+ */
+ for (old_index = 0;
+ freq_table[old_index].frequency != CPUFREQ_TABLE_END;
+ old_index++)
+ if (freq_table[old_index].frequency == freqs.old)
+ break;
+ if (freq_table[old_index].frequency == CPUFREQ_TABLE_END) {
ret = -EINVAL;
goto out;
}
static int exynos_cpufreq_cpu_init(struct cpufreq_policy *policy)
{
+ int ret;
+
policy->cur = policy->min = policy->max = exynos_getspeed(policy->cpu);
cpufreq_frequency_table_get_attr(exynos_info->freq_table, policy->cpu);
cpumask_setall(policy->cpus);
}
- return cpufreq_frequency_table_cpuinfo(policy, exynos_info->freq_table);
+ ret = cpufreq_frequency_table_cpuinfo(policy, exynos_info->freq_table);
+ if (ret)
+ return ret;
+
+ cpufreq_frequency_table_get_attr(exynos_info->freq_table, policy->cpu);
+ return 0;
}
+static int exynos_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+{
+ cpufreq_frequency_table_put_attr(policy->cpu);
+ return 0;
+}
+
+static struct freq_attr *exynos_cpufreq_attr[] = {
+ &cpufreq_freq_attr_scaling_available_freqs,
+ NULL,
+};
+
static struct cpufreq_driver exynos_driver = {
.flags = CPUFREQ_STICKY,
.verify = exynos_verify_speed,
.target = exynos_target,
.get = exynos_getspeed,
.init = exynos_cpufreq_cpu_init,
+ .exit = exynos_cpufreq_cpu_exit,
.name = "exynos_cpufreq",
+ .attr = exynos_cpufreq_attr,
#ifdef CONFIG_PM
.suspend = exynos_cpufreq_suspend,
.resume = exynos_cpufreq_resume,
/*
- * Copyright (c) 2010-20122Samsung Electronics Co., Ltd.
+ * Copyright (c) 2010-2012 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* EXYNOS5250 - CPU frequency scaling support
* Clock divider value for following
* { ARM, CPUD, ACP, PERIPH, ATB, PCLK_DBG, APLL, ARM2 }
*/
- { 0, 3, 7, 7, 6, 1, 3, 0 }, /* 1700 MHz - N/A */
- { 0, 3, 7, 7, 6, 1, 3, 0 }, /* 1600 MHz - N/A */
- { 0, 3, 7, 7, 5, 1, 3, 0 }, /* 1500 MHz - N/A */
- { 0, 3, 7, 7, 6, 1, 3, 0 }, /* 1400 MHz */
- { 0, 3, 7, 7, 6, 1, 3, 0 }, /* 1300 MHz */
- { 0, 3, 7, 7, 5, 1, 3, 0 }, /* 1200 MHz */
- { 0, 2, 7, 7, 5, 1, 2, 0 }, /* 1100 MHz */
- { 0, 2, 7, 7, 4, 1, 2, 0 }, /* 1000 MHz */
- { 0, 2, 7, 7, 4, 1, 2, 0 }, /* 900 MHz */
- { 0, 2, 7, 7, 3, 1, 1, 0 }, /* 800 MHz */
+ { 0, 3, 7, 7, 7, 2, 5, 0 }, /* 1700 MHz */
+ { 0, 3, 7, 7, 7, 1, 4, 0 }, /* 1600 MHz */
+ { 0, 2, 7, 7, 7, 1, 4, 0 }, /* 1500 MHz */
+ { 0, 2, 7, 7, 6, 1, 4, 0 }, /* 1400 MHz */
+ { 0, 2, 7, 7, 6, 1, 3, 0 }, /* 1300 MHz */
+ { 0, 2, 7, 7, 5, 1, 3, 0 }, /* 1200 MHz */
+ { 0, 3, 7, 7, 5, 1, 3, 0 }, /* 1100 MHz */
+ { 0, 1, 7, 7, 4, 1, 2, 0 }, /* 1000 MHz */
+ { 0, 1, 7, 7, 4, 1, 2, 0 }, /* 900 MHz */
+ { 0, 1, 7, 7, 4, 1, 2, 0 }, /* 800 MHz */
{ 0, 1, 7, 7, 3, 1, 1, 0 }, /* 700 MHz */
- { 0, 1, 7, 7, 2, 1, 1, 0 }, /* 600 MHz */
+ { 0, 1, 7, 7, 3, 1, 1, 0 }, /* 600 MHz */
{ 0, 1, 7, 7, 2, 1, 1, 0 }, /* 500 MHz */
- { 0, 1, 7, 7, 1, 1, 1, 0 }, /* 400 MHz */
+ { 0, 1, 7, 7, 2, 1, 1, 0 }, /* 400 MHz */
{ 0, 1, 7, 7, 1, 1, 1, 0 }, /* 300 MHz */
{ 0, 1, 7, 7, 1, 1, 1, 0 }, /* 200 MHz */
};
/* Clock divider value for following
* { COPY, HPM }
*/
- { 0, 2 }, /* 1700 MHz - N/A */
- { 0, 2 }, /* 1600 MHz - N/A */
- { 0, 2 }, /* 1500 MHz - N/A */
+ { 0, 2 }, /* 1700 MHz */
+ { 0, 2 }, /* 1600 MHz */
+ { 0, 2 }, /* 1500 MHz */
{ 0, 2 }, /* 1400 MHz */
{ 0, 2 }, /* 1300 MHz */
{ 0, 2 }, /* 1200 MHz */
};
static unsigned int exynos5_apll_pms_table[CPUFREQ_LEVEL_END] = {
- (0), /* 1700 MHz - N/A */
- (0), /* 1600 MHz - N/A */
- (0), /* 1500 MHz - N/A */
- (0), /* 1400 MHz */
+ ((425 << 16) | (6 << 8) | 0), /* 1700 MHz */
+ ((200 << 16) | (3 << 8) | 0), /* 1600 MHz */
+ ((250 << 16) | (4 << 8) | 0), /* 1500 MHz */
+ ((175 << 16) | (3 << 8) | 0), /* 1400 MHz */
((325 << 16) | (6 << 8) | 0), /* 1300 MHz */
((200 << 16) | (4 << 8) | 0), /* 1200 MHz */
((275 << 16) | (6 << 8) | 0), /* 1100 MHz */
/* ASV group voltage table */
static const unsigned int asv_voltage_5250[CPUFREQ_LEVEL_END] = {
- 0, 0, 0, 0, 0, 0, 0, /* 1700 MHz ~ 1100 MHz Not supported */
- 1175000, 1125000, 1075000, 1050000, 1000000,
- 950000, 925000, 925000, 900000
+ 1300000, /* L0 */
+ 1250000, /* L1 */
+ 1225000, /* L2 */
+ 1200000, /* L3 */
+ 1150000, /* L4 */
+ 1125000, /* L5 */
+ 1100000, /* L6 */
+ 1075000, /* L7 */
+ 1050000, /* L8 */
+ 1025000, /* L9 */
+ 1012500, /* L10 */
+ 1000000, /* L11 */
+ 975000, /* L12 */
+ 950000, /* L13 */
+ 937500, /* L14 */
+ 925000, /* L15 */
};
static void set_clkdiv(unsigned int div_index)
static void __init set_volt_table(void)
{
unsigned int i;
-
- exynos5250_freq_table[L0].frequency = CPUFREQ_ENTRY_INVALID;
- exynos5250_freq_table[L1].frequency = CPUFREQ_ENTRY_INVALID;
- exynos5250_freq_table[L2].frequency = CPUFREQ_ENTRY_INVALID;
- exynos5250_freq_table[L3].frequency = CPUFREQ_ENTRY_INVALID;
- exynos5250_freq_table[L4].frequency = CPUFREQ_ENTRY_INVALID;
- exynos5250_freq_table[L5].frequency = CPUFREQ_ENTRY_INVALID;
- exynos5250_freq_table[L6].frequency = CPUFREQ_ENTRY_INVALID;
-
- max_support_idx = L7;
+ max_support_idx = L0;
for (i = 0 ; i < CPUFREQ_LEVEL_END ; i++)
exynos5250_volt_table[i] = asv_voltage_5250[i];
dma_async_tx_callback callback;
/* Change status to reload it */
+ dma_cookie_assign(&desc->txd);
desc->status = PREP;
pch = desc->pchan;
callback = desc->txd.callback;
.ngpio = EXYNOS5_GPIO_C3_NR,
.label = "GPC3",
},
+ }, {
+ .chip = {
+ .base = EXYNOS5_GPC4(0),
+ .ngpio = EXYNOS5_GPIO_C4_NR,
+ .label = "GPC4",
+ },
}, {
.chip = {
.base = EXYNOS5_GPD0(0),
goto err_ioremap1;
}
+ /* need to set base address for gpc4 */
+ exynos5_gpios_1[11].base = gpio_base1 + 0x2E0;
+
/* need to set base address for gpx */
- chip = &exynos5_gpios_1[20];
+ chip = &exynos5_gpios_1[21];
gpx_base = gpio_base1 + 0xC00;
for (i = 0; i < 4; i++, chip++, gpx_base += 0x20)
chip->base = gpx_base;
-obj-y += drm/ vga/ stub/
+obj-y += drm/ vga/ stub/ vithar/
Choose this option if you have a Samsung SoC EXYNOS chipset.
If M is selected the module will be called exynosdrm.
+config DRM_EXYNOS_DMABUF
+ bool "EXYNOS DRM DMABUF"
+ depends on DRM_EXYNOS
+ help
+ Choose this option if you want to use DMABUF feature for DRM.
+
config DRM_EXYNOS_FIMD
bool "Exynos DRM FIMD"
depends on DRM_EXYNOS && !FB_S3C
exynos_drm_buf.o exynos_drm_gem.o exynos_drm_core.o \
exynos_drm_plane.o
+exynosdrm-$(CONFIG_DRM_EXYNOS_DMABUF) += exynos_drm_dmabuf.o
exynosdrm-$(CONFIG_DRM_EXYNOS_FIMD) += exynos_drm_fimd.o
exynosdrm-$(CONFIG_DRM_EXYNOS_HDMI) += exynos_hdmi.o exynos_mixer.o \
exynos_ddc.o exynos_hdmiphy.o \
unsigned int flags, struct exynos_drm_gem_buf *buf)
{
dma_addr_t start_addr;
- unsigned int npages, page_size, i = 0;
+ unsigned int npages, i = 0;
struct scatterlist *sgl;
int ret = 0;
if (buf->size >= SZ_1M) {
npages = buf->size >> SECTION_SHIFT;
- page_size = SECTION_SIZE;
+ buf->page_size = SECTION_SIZE;
} else if (buf->size >= SZ_64K) {
npages = buf->size >> 16;
- page_size = SZ_64K;
+ buf->page_size = SZ_64K;
} else {
npages = buf->size >> PAGE_SHIFT;
- page_size = PAGE_SIZE;
+ buf->page_size = PAGE_SIZE;
}
buf->sgt = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
while (i < npages) {
buf->pages[i] = phys_to_page(start_addr);
- sg_set_page(sgl, buf->pages[i], page_size, 0);
+ sg_set_page(sgl, buf->pages[i], buf->page_size, 0);
sg_dma_address(sgl) = start_addr;
- start_addr += page_size;
+ start_addr += buf->page_size;
sgl = sg_next(sgl);
i++;
}
static LIST_HEAD(exynos_drm_subdrv_list);
static struct drm_device *drm_dev;
+#ifdef CONFIG_EXYNOS_IOMMU
+struct dma_iommu_mapping *exynos_drm_common_mapping;
+#endif
static int exynos_drm_subdrv_probe(struct drm_device *dev,
struct exynos_drm_subdrv *subdrv)
--- /dev/null
+/* exynos_drm_dmabuf.c
+ *
+ * Copyright (c) 2012 Samsung Electronics Co., Ltd.
+ * Author: Inki Dae <inki.dae@samsung.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+#include "drm.h"
+#include "exynos_drm_drv.h"
+#include "exynos_drm_gem.h"
+
+#include "exynos_drm.h"
+#include <linux/dma-buf.h>
+
+static struct sg_table *exynos_pages_to_sg(struct page **pages, int nr_pages,
+ unsigned int page_size)
+{
+ struct sg_table *sgt = NULL;
+ struct scatterlist *sgl;
+ int i, ret;
+
+ sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
+ if (!sgt)
+ goto out;
+
+ ret = sg_alloc_table(sgt, nr_pages, GFP_KERNEL);
+ if (ret)
+ goto err_free_sgt;
+
+ if (page_size < PAGE_SIZE)
+ page_size = PAGE_SIZE;
+
+ for_each_sg(sgt->sgl, sgl, nr_pages, i)
+ sg_set_page(sgl, pages[i], page_size, 0);
+
+ return sgt;
+
+err_free_sgt:
+ kfree(sgt);
+ sgt = NULL;
+out:
+ return NULL;
+}
+
+static struct sg_table *
+ exynos_gem_map_dma_buf(struct dma_buf_attachment *attach,
+ enum dma_data_direction dir)
+{
+ struct exynos_drm_gem_obj *gem_obj = attach->dmabuf->priv;
+ struct drm_device *dev = gem_obj->base.dev;
+ struct exynos_drm_gem_buf *buf;
+ struct sg_table *sgt = NULL;
+ unsigned int npages;
+ int nents;
+
+ DRM_DEBUG_PRIME("%s\n", __FILE__);
+
+ mutex_lock(&dev->struct_mutex);
+
+ buf = gem_obj->buffer;
+
+ /* there should always be pages allocated. */
+ if (!buf->pages) {
+ DRM_ERROR("pages is null.\n");
+ goto err_unlock;
+ }
+
+ npages = buf->size / buf->page_size;
+
+ sgt = exynos_pages_to_sg(buf->pages, npages, buf->page_size);
+ if (!sgt) {
+ DRM_DEBUG_PRIME("exynos_pages_to_sg returned NULL!\n");
+ goto err_unlock;
+ }
+ nents = dma_map_sg(attach->dev, sgt->sgl, sgt->nents, dir);
+
+ DRM_DEBUG_PRIME("npages = %d buffer size = 0x%lx page_size = 0x%lx\n",
+ npages, buf->size, buf->page_size);
+
+err_unlock:
+ mutex_unlock(&dev->struct_mutex);
+ return sgt;
+}
+
+static void exynos_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
+ struct sg_table *sgt,
+ enum dma_data_direction dir)
+{
+ dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
+ sg_free_table(sgt);
+ kfree(sgt);
+ sgt = NULL;
+}
+
+static void exynos_dmabuf_release(struct dma_buf *dmabuf)
+{
+ struct exynos_drm_gem_obj *exynos_gem_obj = dmabuf->priv;
+
+ DRM_DEBUG_PRIME("%s\n", __FILE__);
+
+	/*
+	 * A call to exynos_dmabuf_release() means that the file object's
+	 * f_count has reached 0, so drop the references that were taken
+	 * at drm_prime_handle_to_fd() time.
+	 */
+ if (exynos_gem_obj->base.export_dma_buf == dmabuf) {
+ exynos_gem_obj->base.export_dma_buf = NULL;
+
+ /*
+ * drop this gem object refcount to release allocated buffer
+ * and resources.
+ */
+ drm_gem_object_unreference_unlocked(&exynos_gem_obj->base);
+ }
+}
+
+static void *exynos_gem_dmabuf_kmap_atomic(struct dma_buf *dma_buf,
+ unsigned long page_num)
+{
+ /* TODO */
+
+ return NULL;
+}
+
+static void exynos_gem_dmabuf_kunmap_atomic(struct dma_buf *dma_buf,
+ unsigned long page_num,
+ void *addr)
+{
+ /* TODO */
+}
+
+static void *exynos_gem_dmabuf_kmap(struct dma_buf *dma_buf,
+ unsigned long page_num)
+{
+ /* TODO */
+
+ return NULL;
+}
+
+static void exynos_gem_dmabuf_kunmap(struct dma_buf *dma_buf,
+ unsigned long page_num, void *addr)
+{
+ /* TODO */
+}
+
+static int exynos_drm_gem_dmabuf_mmap(struct dma_buf *dmabuf,
+ struct vm_area_struct *vma)
+{
+ struct drm_gem_object *obj = dmabuf->priv;
+ struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
+ struct exynos_drm_gem_buf *buffer = exynos_gem_obj->buffer;
+ unsigned long uaddr = vma->vm_start;
+ int ret;
+
+ if (exynos_gem_obj->flags & EXYNOS_BO_NONCONTIG) {
+		unsigned long usize = buffer->size, i = 0;
+
+ if (!buffer->pages)
+ return -EINVAL;
+
+ do {
+ ret = vm_insert_page(vma, uaddr, buffer->pages[i++]);
+ if (ret) {
+ DRM_ERROR("failed to remap user space.\n");
+ return ret;
+ }
+
+ uaddr += PAGE_SIZE;
+ usize -= PAGE_SIZE;
+ } while (usize > 0);
+ }
+ return 0;
+}
+
+static struct dma_buf_ops exynos_dmabuf_ops = {
+ .mmap = exynos_drm_gem_dmabuf_mmap,
+ .map_dma_buf = exynos_gem_map_dma_buf,
+ .unmap_dma_buf = exynos_gem_unmap_dma_buf,
+ .kmap = exynos_gem_dmabuf_kmap,
+ .kmap_atomic = exynos_gem_dmabuf_kmap_atomic,
+ .kunmap = exynos_gem_dmabuf_kunmap,
+ .kunmap_atomic = exynos_gem_dmabuf_kunmap_atomic,
+ .release = exynos_dmabuf_release,
+};
+
+struct dma_buf *exynos_dmabuf_prime_export(struct drm_device *drm_dev,
+ struct drm_gem_object *obj, int flags)
+{
+ struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
+
+ return dma_buf_export(exynos_gem_obj, &exynos_dmabuf_ops,
+ exynos_gem_obj->base.size, 0666);
+}
+
+struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev,
+ struct dma_buf *dma_buf)
+{
+ struct dma_buf_attachment *attach;
+ struct sg_table *sgt;
+ struct scatterlist *sgl;
+ struct exynos_drm_gem_obj *exynos_gem_obj;
+ struct exynos_drm_gem_buf *buffer;
+ struct page *page;
+ int ret, i = 0;
+
+ DRM_DEBUG_PRIME("%s\n", __FILE__);
+
+	/* is this one of our own objects? */
+ if (dma_buf->ops == &exynos_dmabuf_ops) {
+ struct drm_gem_object *obj;
+
+ exynos_gem_obj = dma_buf->priv;
+ obj = &exynos_gem_obj->base;
+
+ /* is it from our device? */
+ if (obj->dev == drm_dev) {
+ drm_gem_object_reference(obj);
+ return obj;
+ }
+ }
+
+ attach = dma_buf_attach(dma_buf, drm_dev->dev);
+ if (IS_ERR(attach))
+ return ERR_PTR(-EINVAL);
+
+
+ sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+ if (IS_ERR_OR_NULL(sgt)) {
+ ret = PTR_ERR(sgt);
+ goto err_buf_detach;
+ }
+
+ buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+ if (!buffer) {
+ DRM_ERROR("failed to allocate exynos_drm_gem_buf.\n");
+ ret = -ENOMEM;
+ goto err_unmap_attach;
+ }
+
+ buffer->pages = kzalloc(sizeof(*page) * sgt->nents, GFP_KERNEL);
+ if (!buffer->pages) {
+ DRM_ERROR("failed to allocate pages.\n");
+ ret = -ENOMEM;
+ goto err_free_buffer;
+ }
+
+ exynos_gem_obj = exynos_drm_gem_init(drm_dev, dma_buf->size);
+ if (!exynos_gem_obj) {
+ ret = -ENOMEM;
+ goto err_free_pages;
+ }
+
+ sgl = sgt->sgl;
+ buffer->dma_addr = sg_dma_address(sgl);
+
+ while (i < sgt->nents) {
+ buffer->pages[i] = sg_page(sgl);
+ buffer->size += sg_dma_len(sgl);
+ sgl = sg_next(sgl);
+ i++;
+ }
+
+ exynos_gem_obj->buffer = buffer;
+ buffer->sgt = sgt;
+ exynos_gem_obj->base.import_attach = attach;
+
+ DRM_DEBUG_PRIME("dma_addr = 0x%x, size = 0x%lx\n", buffer->dma_addr,
+ buffer->size);
+
+ return &exynos_gem_obj->base;
+
+err_free_pages:
+ kfree(buffer->pages);
+ buffer->pages = NULL;
+err_free_buffer:
+ kfree(buffer);
+ buffer = NULL;
+err_unmap_attach:
+ dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
+err_buf_detach:
+ dma_buf_detach(dma_buf, attach);
+ return ERR_PTR(ret);
+}
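From userspace this import path is reached through the PRIME ioctls; a hedged
sketch using the libdrm wrappers, assuming two open DRM file descriptors
fd1/fd2 and a GEM handle handle1 that exists on fd1:

#include <stdint.h>
#include <xf86drm.h>

static int share_buffer(int fd1, uint32_t handle1, int fd2)
{
	int prime_fd;
	uint32_t handle2;

	/* exporter role: wrap handle1 in a dma-buf file descriptor */
	if (drmPrimeHandleToFD(fd1, handle1, DRM_CLOEXEC, &prime_fd))
		return -1;

	/* importer role: exynos_dmabuf_prime_import() runs behind this */
	if (drmPrimeFDToHandle(fd2, prime_fd, &handle2))
		return -1;

	return 0;
}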
+
+MODULE_AUTHOR("Inki Dae <inki.dae@samsung.com>");
+MODULE_DESCRIPTION("Samsung SoC DRM DMABUF Module");
+MODULE_LICENSE("GPL");
--- /dev/null
+/* exynos_drm_dmabuf.h
+ *
+ * Copyright (c) 2012 Samsung Electronics Co., Ltd.
+ * Author: Inki Dae <inki.dae@samsung.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef _EXYNOS_DRM_DMABUF_H_
+#define _EXYNOS_DRM_DMABUF_H_
+
+#ifdef CONFIG_DRM_EXYNOS_DMABUF
+struct dma_buf *exynos_dmabuf_prime_export(struct drm_device *drm_dev,
+ struct drm_gem_object *obj, int flags);
+
+struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev,
+ struct dma_buf *dma_buf);
+#else
+#define exynos_dmabuf_prime_export NULL
+#define exynos_dmabuf_prime_import NULL
+#endif
+#endif
#include "exynos_drm_gem.h"
#include "exynos_drm_plane.h"
#include "exynos_drm_vidi.h"
+#include "exynos_drm_dmabuf.h"
#define DRIVER_NAME "exynos"
#define DRIVER_DESC "Samsung SoC DRM"
{
DRM_DEBUG_DRIVER("%s\n", __FILE__);
+ drm_prime_init_file_private(&file->prime);
+
return exynos_drm_subdrv_open(dev, file);
}
e->base.destroy(&e->base);
}
}
+ drm_prime_destroy_file_private(&file->prime);
spin_unlock_irqrestore(&dev->event_lock, flags);
exynos_drm_subdrv_close(dev, file);
static struct drm_driver exynos_drm_driver = {
.driver_features = DRIVER_HAVE_IRQ | DRIVER_BUS_PLATFORM |
- DRIVER_MODESET | DRIVER_GEM,
+ DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME,
.load = exynos_drm_load,
.unload = exynos_drm_unload,
.open = exynos_drm_open,
.dumb_create = exynos_drm_gem_dumb_create,
.dumb_map_offset = exynos_drm_gem_dumb_map_offset,
.dumb_destroy = exynos_drm_gem_dumb_destroy,
+ .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+ .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+ .gem_prime_export = exynos_dmabuf_prime_export,
+ .gem_prime_import = exynos_dmabuf_prime_import,
.ioctls = exynos_ioctls,
.fops = &exynos_drm_driver_fops,
.name = DRIVER_NAME,
.minor = DRIVER_MINOR,
};
+#ifdef CONFIG_EXYNOS_IOMMU
+static int iommu_init(struct platform_device *pdev)
+{
+	/* The DRM device expects an IOMMU mapping to have already been
+	 * created in FIMD; otherwise this function must return an
+	 * error.
+	 */
+	if (exynos_drm_common_mapping == NULL) {
+ printk(KERN_ERR "exynos drm common mapping is invalid\n");
+ return -1;
+ }
+
+ if (!s5p_create_iommu_mapping(&pdev->dev, 0,
+ 0, 0, exynos_drm_common_mapping)) {
+ printk(KERN_ERR "failed to create IOMMU mapping\n");
+ return -1;
+ }
+
+ return 0;
+}
+#endif
+
static int exynos_drm_platform_probe(struct platform_device *pdev)
{
DRM_DEBUG_DRIVER("%s\n", __FILE__);
+#ifdef CONFIG_EXYNOS_IOMMU
+ if (iommu_init(pdev)) {
+ DRM_ERROR("failed to initialize IOMMU\n");
+ return -ENODEV;
+ }
+#endif
+
exynos_drm_driver.num_ioctls = DRM_ARRAY_SIZE(exynos_ioctls);
return drm_platform_init(&exynos_drm_driver, pdev);
#include <linux/module.h>
#include "drm.h"
+#ifdef CONFIG_EXYNOS_IOMMU
+#include <mach/sysmmu.h>
+#include <linux/of_platform.h>
+#endif
#define MAX_CRTC 3
#define MAX_PLANE 5
extern struct platform_driver mixer_driver;
extern struct platform_driver exynos_drm_common_hdmi_driver;
extern struct platform_driver vidi_driver;
+#ifdef CONFIG_EXYNOS_IOMMU
+extern struct dma_iommu_mapping *exynos_drm_common_mapping;
+#endif
#endif
struct exynos_drm_gem_obj *exynos_gem_obj;
};
+static int
+exynos_drm_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+{
+ int ret;
+
+ vma->vm_pgoff = 0;
+ ret = dma_mmap_writecombine(info->device, vma, info->screen_base,
+ info->fix.smem_start, vma->vm_end - vma->vm_start);
+ if (ret)
+ printk(KERN_ERR "Remapping memory failed, error: %d\n", ret);
+
+ return ret;
+}
+
static struct fb_ops exynos_drm_fb_ops = {
.owner = THIS_MODULE,
+ .fb_mmap = exynos_drm_fb_mmap,
.fb_fillrect = cfb_fillrect,
.fb_copyarea = cfb_copyarea,
.fb_imageblit = cfb_imageblit,
return 0;
}
+#ifdef CONFIG_EXYNOS_IOMMU
+static int iommu_init(struct platform_device *pdev)
+{
+ struct platform_device *pds;
+
+ pds = find_sysmmu_dt(pdev, "sysmmu");
+	if (pds == NULL) {
+ printk(KERN_ERR "No sysmmu found\n");
+ return -1;
+ }
+
+ platform_set_sysmmu(&pds->dev, &pdev->dev);
+ exynos_drm_common_mapping = s5p_create_iommu_mapping(&pdev->dev,
+ 0x20000000, SZ_128M, 4,
+ exynos_drm_common_mapping);
+
+ if (!exynos_drm_common_mapping) {
+ printk(KERN_ERR "IOMMU mapping not created\n");
+ return -1;
+ }
+
+ return 0;
+}
+#endif
static int __devinit fimd_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct exynos_drm_fimd_pdata *pdata;
struct exynos_drm_panel_info *panel;
struct resource *res;
+ struct clk *clk_parent;
int win;
int ret = -EINVAL;
+#ifdef CONFIG_EXYNOS_IOMMU
+ ret = iommu_init(pdev);
+ if (ret < 0) {
+ dev_err(dev, "failed to initialize IOMMU\n");
+ return -ENODEV;
+ }
+#endif
DRM_DEBUG_KMS("%s\n", __FILE__);
pdata = pdev->dev.platform_data;
goto err_bus_clk;
}
+ clk_parent = clk_get(NULL, "mout_mpll_user");
+ if (IS_ERR(clk_parent)) {
+ ret = PTR_ERR(clk_parent);
+ goto err_clk;
+ }
+
+ if (clk_set_parent(ctx->lcd_clk, clk_parent)) {
+ ret = PTR_ERR(ctx->lcd_clk);
+ goto err_clk;
+ }
+
+ if (clk_set_rate(ctx->lcd_clk, pdata->clock_rate)) {
+ ret = PTR_ERR(ctx->lcd_clk);
+ goto err_clk;
+ }
+
+ clk_put(clk_parent);
+
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(dev, "failed to find registers\n");
goto err_req_region_io;
}
- res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ res = platform_get_resource(pdev, IORESOURCE_IRQ, 1);
if (!res) {
dev_err(dev, "irq request failed.\n");
goto err_req_region_irq;
for (win = 0; win < WINDOWS_NR; win++)
fimd_clear_win(ctx, win);
+ if (pdata->panel_type == DP_LCD)
+ writel(DPCLKCON_ENABLE, ctx->regs + DPCLKCON);
+
exynos_drm_subdrv_register(subdrv);
return 0;
}
#endif
+static struct platform_device_id exynos_drm_driver_ids[] = {
+ {
+ .name = "exynos4-fb",
+ }, {
+ .name = "exynos5-fb",
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(platform, exynos_drm_driver_ids);
+
static const struct dev_pm_ops fimd_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(fimd_suspend, fimd_resume)
SET_RUNTIME_PM_OPS(fimd_runtime_suspend, fimd_runtime_resume, NULL)
struct platform_driver fimd_driver = {
.probe = fimd_probe,
.remove = __devexit_p(fimd_remove),
+ .id_table = exynos_drm_driver_ids,
.driver = {
- .name = "exynos4-fb",
+ .name = "exynos-drm-fimd",
.owner = THIS_MODULE,
.pm = &fimd_pm_ops,
},
return roundup(size, PAGE_SIZE);
}
-static struct page **exynos_gem_get_pages(struct drm_gem_object *obj,
+struct page **exynos_gem_get_pages(struct drm_gem_object *obj,
gfp_t gfpmask)
{
struct inode *inode;
}
npages = obj->size >> PAGE_SHIFT;
+ buf->page_size = PAGE_SIZE;
buf->sgt = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
if (!buf->sgt) {
sgl = sg_next(sgl);
}
+	/* Map the SGT to create an IOMMU mapping for this buffer */
+	ret = dma_map_sg(obj->dev->dev, buf->sgt->sgl, buf->sgt->orig_nents,
+			 DMA_BIDIRECTIONAL);
+ if (!ret) {
+ DRM_ERROR("failed to map sg\n");
+ ret = -ENOMEM;
+ goto err1;
+ }
+ buf->dma_addr = buf->sgt->sgl->dma_address;
+
/* add some codes for UNCACHED type here. TODO */
buf->pages = pages;
struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj);
struct exynos_drm_gem_buf *buf = exynos_gem_obj->buffer;
+ /* Unmap the SGT to remove the IOMMU mapping created for this buffer */
+ dma_unmap_sg(obj->dev->dev, buf->sgt->sgl, buf->sgt->orig_nents, DMA_BIDIRECTIONAL);
+
/*
	 * if buffer type is EXYNOS_BO_NONCONTIG then release all pages
* allocated at gem fault handler.
exynos_gem_obj = NULL;
}
-static struct exynos_drm_gem_obj *exynos_drm_gem_init(struct drm_device *dev,
+struct exynos_drm_gem_obj *exynos_drm_gem_init(struct drm_device *dev,
unsigned long size)
{
struct exynos_drm_gem_obj *exynos_gem_obj;
void exynos_drm_gem_free_object(struct drm_gem_object *obj)
{
+ struct exynos_drm_gem_obj *exynos_gem_obj;
+ struct exynos_drm_gem_buf *buf;
+
DRM_DEBUG_KMS("%s\n", __FILE__);
+ exynos_gem_obj = to_exynos_gem_obj(obj);
+ buf = exynos_gem_obj->buffer;
+
+ if (obj->import_attach)
+ drm_prime_gem_destroy(obj, buf->sgt);
+
exynos_drm_gem_destroy(to_exynos_gem_obj(obj));
}
* with DRM_IOCTL_MODE_CREATE_DUMB command.
*/
- args->pitch = args->width * args->bpp >> 3;
+ args->pitch = args->width * ALIGN(args->bpp, 8) >> 3;
+
args->size = PAGE_ALIGN(args->pitch * args->height);
exynos_gem_obj = exynos_drm_gem_create(dev, args->flags, args->size);
* device address with IOMMU.
* @sgt: sg table to transfer page data.
* @pages: contain all pages to allocated memory region.
+ * @page_size: could be 4K, 64K or 1MB.
* @size: size of allocated memory region.
*/
struct exynos_drm_gem_buf {
dma_addr_t dma_addr;
struct sg_table *sgt;
struct page **pages;
+ unsigned long page_size;
unsigned long size;
};
unsigned int flags;
};
+struct page **exynos_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
+
/* destroy a buffer with gem object */
void exynos_drm_gem_destroy(struct exynos_drm_gem_obj *exynos_gem_obj);
+/* create a private gem object and initialize it. */
+struct exynos_drm_gem_obj *exynos_drm_gem_init(struct drm_device *dev,
+ unsigned long size);
+
/* create a new buffer with gem object */
struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev,
unsigned int flags,
--- /dev/null
+menuconfig VITHAR
+ tristate "Enable Vithar DDK"
+ help
+	  Choose this option to enable 3D rendering with the Vithar DDK.
+
+config VITHAR_DEVICE_NODE_CREATION_IN_RUNTIME
+ bool "Enable runtime device file creation by using UDEV"
+ depends on VITHAR
+ default y
+ help
+	  Choose this option to create the device file under /dev at runtime. Must be yes for Android.
+
+config VITHAR_RT_PM
+ bool "Enable Runtime power management"
+ depends on VITHAR
+ help
+ Choose this option to enable runtime power management on vithar DDK.
+
+config VITHAR_DVFS
+	bool "Enable dynamic frequency and voltage scaling"
+ depends on VITHAR
+ help
+	  Choose this option to enable dynamic frequency and voltage scaling.
--- /dev/null
+obj-$(CONFIG_VITHAR) += kbase/ osk/ uk/
--- /dev/null
+obj-y += src/
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Software workarounds configuration for Hardware issues.
+ */
+
+#ifndef _BASE_HWCONFIG_H_
+#define _BASE_HWCONFIG_H_
+
+#include <malisw/mali_malisw.h>
+
+/**
+ * List of all workarounds.
+ *
+ */
+
+typedef enum base_hw_issue {
+
+ /* Tiler triggers a fault if the scissor rectangle is empty. */
+ BASE_HW_ISSUE_5699,
+
+ /* The current version of the model doesn't support Soft-Stop */
+ BASE_HW_ISSUE_5736,
+
+	/* Need a way to guarantee that all previously-translated memory accesses are committed */
+ BASE_HW_ISSUE_6367,
+
+ /* Unaligned load stores crossing 128 bit boundaries will fail */
+ BASE_HW_ISSUE_6402,
+
+ /* On job complete with non-done the cache is not flushed */
+ BASE_HW_ISSUE_6787,
+
+ /* The clamp integer coordinate flag bit of the sampler descriptor is reserved */
+ BASE_HW_ISSUE_7144,
+
+	/* Write of PRFCNT_CONFIG_MODE_MANUAL to PRFCNT_CONFIG causes an instrumentation dump if
+ PRFCNT_TILER_EN is enabled */
+ BASE_HW_ISSUE_8186,
+
+ /* TIB: Reports faults from a vtile which has not yet been allocated */
+ BASE_HW_ISSUE_8245,
+
+ /* WLMA memory goes wrong when run on shader cores other than core 0. */
+ BASE_HW_ISSUE_8250,
+
+ /* Hierz doesn't work when stenciling is enabled */
+ BASE_HW_ISSUE_8260,
+
+ /* Livelock in L0 icache */
+ BASE_HW_ISSUE_8280,
+
+ /* uTLB deadlock could occur when writing to an invalid page at the same time as
+ * access to a valid page in the same uTLB cache line ( == 4 PTEs == 16K block of mapping) */
+ BASE_HW_ISSUE_8316,
+
+ /* HT: TERMINATE for RUN command ignored if previous LOAD_DESCRIPTOR is still executing */
+ BASE_HW_ISSUE_8394,
+
+ /* CSE : Sends a TERMINATED response for a task that should not be terminated */
+ /* (Note that PRLAM-8379 also uses this workaround) */
+ BASE_HW_ISSUE_8401,
+
+ /* Repeatedly Soft-stopping a job chain consisting of (Vertex Shader, Cache Flush, Tiler)
+ * jobs causes 0x58 error on tiler job. */
+ BASE_HW_ISSUE_8408,
+
+ /* Disable the Pause Buffer in the LS pipe. */
+ BASE_HW_ISSUE_8443,
+
+ /* Stencil test enable 1->0 sticks */
+ BASE_HW_ISSUE_8456,
+
+ /* Tiler heap issue using FBOs or multiple processes using the tiler simultaneously */
+ BASE_HW_ISSUE_8564,
+
+ /* Livelock issue using atomic instructions (particularly when using atomic_cmpxchg as a spinlock) */
+ BASE_HW_ISSUE_8791,
+
+ /* Fused jobs are not supported (for various reasons) */
+ /* Jobs with relaxed dependencies do not support soft-stop */
+ /* (Note that PRLAM-8803, PRLAM-8393, PRLAM-8559, PRLAM-8601 & PRLAM-8607 all use this work-around) */
+ BASE_HW_ISSUE_8803,
+
+ /* Blend shader output is wrong for certain formats */
+ BASE_HW_ISSUE_8833,
+
+ /* Occlusion queries can create false 0 result in boolean and counter modes */
+ BASE_HW_ISSUE_8879,
+
+ /* Output has half intensity with blend shaders enabled on 8xMSAA. */
+ BASE_HW_ISSUE_8896,
+
+ /* 8xMSAA does not work with CRC */
+ BASE_HW_ISSUE_8975,
+
+ /* Boolean occlusion queries don't work properly due to sdc issue. */
+ BASE_HW_ISSUE_8986,
+
+ /* Change in RMUs in use causes problems related with the core's SDC */
+ BASE_HW_ISSUE_8987,
+
+ /* Occlusion query result is not updated if color writes are disabled. */
+ BASE_HW_ISSUE_9010,
+
+ /* Problem with number of work registers in the RSD if set to 0 */
+ BASE_HW_ISSUE_9275,
+
+ /* Compute endpoint has a 4-deep queue of tasks, meaning a soft stop won't complete until all 4 tasks have completed */
+ BASE_HW_ISSUE_9435,
+
+ /* HT: Tiler returns TERMINATED for command that hasn't been terminated */
+ BASE_HW_ISSUE_9510,
+
+ /* Occasionally the GPU will issue multiple page faults for the same address before the MMU page table has been read by the GPU */
+ BASE_HW_ISSUE_9630,
+
+ /* The BASE_HW_ISSUE_END value must be the last issue listed in this enumeration
+ * and must be the last value in each array that contains the list of workarounds
+ * for a particular HW version.
+ */
+ BASE_HW_ISSUE_END
+} base_hw_issue;
+
+
+/**
+ * Workarounds configuration for each HW revision
+ */
+
+/* Mali T60x/T65x r0p0-15dev0 - 2011-W39-stable-9 */
+static const base_hw_issue base_hw_issues_t60x_t65x_r0p0_15dev0[] =
+{
+ BASE_HW_ISSUE_5699,
+ BASE_HW_ISSUE_6367,
+ BASE_HW_ISSUE_6402,
+ BASE_HW_ISSUE_6787,
+ BASE_HW_ISSUE_7144,
+ BASE_HW_ISSUE_8186,
+ BASE_HW_ISSUE_8245,
+ BASE_HW_ISSUE_8250,
+ BASE_HW_ISSUE_8260,
+ BASE_HW_ISSUE_8280,
+ BASE_HW_ISSUE_8316,
+ BASE_HW_ISSUE_8394,
+ BASE_HW_ISSUE_8401,
+ BASE_HW_ISSUE_8408,
+ BASE_HW_ISSUE_8443,
+ BASE_HW_ISSUE_8456,
+ BASE_HW_ISSUE_8564,
+ BASE_HW_ISSUE_8791,
+ BASE_HW_ISSUE_8803,
+ BASE_HW_ISSUE_8833,
+ BASE_HW_ISSUE_8896,
+ BASE_HW_ISSUE_8975,
+ BASE_HW_ISSUE_8986,
+ BASE_HW_ISSUE_8987,
+ BASE_HW_ISSUE_9010,
+ BASE_HW_ISSUE_9275,
+ BASE_HW_ISSUE_9435,
+ BASE_HW_ISSUE_9510,
+ BASE_HW_ISSUE_9630,
+ /* List of hardware issues must end with BASE_HW_ISSUE_END */
+ BASE_HW_ISSUE_END
+};
+
+/* Mali T60x/T65x r0p0-00rel0 - 2011-W46-stable-13c */
+static const base_hw_issue base_hw_issues_t60x_t65x_r0p0_eac[] =
+{
+ BASE_HW_ISSUE_5699,
+ BASE_HW_ISSUE_6367,
+ BASE_HW_ISSUE_6402,
+ BASE_HW_ISSUE_6787,
+ BASE_HW_ISSUE_8186,
+ BASE_HW_ISSUE_8245,
+ BASE_HW_ISSUE_8260,
+ BASE_HW_ISSUE_8280,
+ BASE_HW_ISSUE_8316,
+ BASE_HW_ISSUE_8564,
+ BASE_HW_ISSUE_8803,
+ BASE_HW_ISSUE_9010,
+ BASE_HW_ISSUE_9275,
+ BASE_HW_ISSUE_9435,
+ BASE_HW_ISSUE_9510,
+ /* List of hardware issues must end with BASE_HW_ISSUE_END */
+ BASE_HW_ISSUE_END
+};
+
+/* Mali T65x r0p1 */
+static const base_hw_issue base_hw_issues_t65x_r0p1[] =
+{
+ BASE_HW_ISSUE_5699,
+ BASE_HW_ISSUE_6367,
+ BASE_HW_ISSUE_6402,
+ BASE_HW_ISSUE_6787,
+ BASE_HW_ISSUE_8186,
+ BASE_HW_ISSUE_8245,
+ BASE_HW_ISSUE_8260,
+ BASE_HW_ISSUE_8280,
+ BASE_HW_ISSUE_8316,
+ BASE_HW_ISSUE_8564,
+ BASE_HW_ISSUE_8803,
+ BASE_HW_ISSUE_9010,
+ BASE_HW_ISSUE_9275,
+ BASE_HW_ISSUE_9435,
+ BASE_HW_ISSUE_9510,
+ /* List of hardware issues must end with BASE_HW_ISSUE_END */
+ BASE_HW_ISSUE_END
+};
+
+/* Mali T60x/T65x r1p0-00rel0 */
+static const base_hw_issue base_hw_issues_t60x_t65x_r1p0[] =
+{
+ BASE_HW_ISSUE_6367,
+ BASE_HW_ISSUE_6402,
+ BASE_HW_ISSUE_8803,
+ BASE_HW_ISSUE_9435,
+ BASE_HW_ISSUE_9510,
+ /* List of hardware issues must end with BASE_HW_ISSUE_END */
+ BASE_HW_ISSUE_END
+};
+
+/* Mali T62x r0p0 */
+static const base_hw_issue base_hw_issues_t62x_r0p0[] =
+{
+ BASE_HW_ISSUE_6367,
+ BASE_HW_ISSUE_6402,
+ BASE_HW_ISSUE_8803,
+ BASE_HW_ISSUE_9435,
+ BASE_HW_ISSUE_9510,
+ /* List of hardware issues must end with BASE_HW_ISSUE_END */
+ BASE_HW_ISSUE_END
+};
+
+/* Mali T67x r0p0 */
+static const base_hw_issue base_hw_issues_t67x_r0p0[] =
+{
+ BASE_HW_ISSUE_6367,
+ BASE_HW_ISSUE_6402,
+ BASE_HW_ISSUE_8803,
+ BASE_HW_ISSUE_9435,
+ BASE_HW_ISSUE_9510,
+ /* List of hardware issues must end with BASE_HW_ISSUE_END */
+ BASE_HW_ISSUE_END
+};
+
+#if !MALI_BACKEND_KERNEL
+
+/* Model configuration
+ *
+ * note: We can only know that the model is used at compile-time
+ */
+
+static const base_hw_issue base_hw_issues_model[] =
+{
+ BASE_HW_ISSUE_5736,
+ BASE_HW_ISSUE_8260,
+ BASE_HW_ISSUE_8316,
+ BASE_HW_ISSUE_8394,
+ BASE_HW_ISSUE_8803,
+ /* NOTE: Model is fixed for BASE_HW_ISSUE_8975, but EGL is currently broken, see MIDEGL-868 */
+ BASE_HW_ISSUE_8975,
+ BASE_HW_ISSUE_8987,
+ BASE_HW_ISSUE_9010,
+ BASE_HW_ISSUE_9275,
+ BASE_HW_ISSUE_9435,
+ BASE_HW_ISSUE_9510, /* TODO: Review - should be disabled for model? */
+ /* List of hardware issues must end with BASE_HW_ISSUE_END */
+ BASE_HW_ISSUE_END
+};
+
+#endif /* !MALI_BACKEND_KERNEL */
+
+#endif /* _BASE_HWCONFIG_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Base structures shared with the kernel.
+ */
+
+#ifndef _BASE_KERNEL_H_
+#define _BASE_KERNEL_H_
+
+#include <kbase/src/mali_base_mem_priv.h>
+
+/*
+ * Dependency stuff, keep it private for now. May want to expose it if
+ * we decide to make the number of semaphores a configurable
+ * option.
+ */
+#define BASEP_JD_SEM_PER_WORD_LOG2 5
+#define BASEP_JD_SEM_PER_WORD (1 << BASEP_JD_SEM_PER_WORD_LOG2)
+#define BASEP_JD_SEM_WORD_NR(x) ((x) >> BASEP_JD_SEM_PER_WORD_LOG2)
+#define BASEP_JD_SEM_MASK_IN_WORD(x) (1 << ((x) & (BASEP_JD_SEM_PER_WORD - 1)))
+#define BASEP_JD_SEM_ARRAY_SIZE BASEP_JD_SEM_WORD_NR(256)
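As a worked example of the arithmetic behind these macros, dependency slot 37
lands in word 37 >> 5 == 1, bit 37 & 31 == 5 of the semaphore array; a
hypothetical helper:

static INLINE void basep_jd_sem_mark(u32 *sem, u8 slot)
{
	/* e.g. slot 37: sem[1] |= (1 << 5) */
	sem[BASEP_JD_SEM_WORD_NR(slot)] |= BASEP_JD_SEM_MASK_IN_WORD(slot);
}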
+
+/* Size of the ring buffer */
+#define BASEP_JCTX_RB_NRPAGES 16
+
+#define BASE_GPU_NUM_TEXTURE_FEATURES_REGISTERS 3
+
+#define BASE_MAX_COHERENT_GROUPS 16
+
+#if defined CDBG_ASSERT
+#define LOCAL_ASSERT CDBG_ASSERT
+#elif defined OSK_ASSERT
+#define LOCAL_ASSERT OSK_ASSERT
+#else
+#error assert macro not defined!
+#endif
+
+#if defined OSK_PAGE_MASK
+ #define LOCAL_PAGE_LSB ~OSK_PAGE_MASK
+#else
+ #include <osu/mali_osu.h>
+
+ #if defined CONFIG_CPU_PAGE_SIZE_LOG2
+ #define LOCAL_PAGE_LSB ((1ul << CONFIG_CPU_PAGE_SIZE_LOG2) - 1)
+ #else
+ #error Failed to find page size
+ #endif
+#endif
+
+
+/**
+ * @addtogroup base_user_api User-side Base APIs
+ * @{
+ */
+
+/**
+ * @addtogroup base_user_api_memory User-side Base Memory APIs
+ * @{
+ */
+
+/**
+ * @brief Memory allocation, access/hint flags
+ *
+ * A combination of MEM_PROT/MEM_HINT flags must be passed to each allocator
+ * in order to determine the best cache policy. Some combinations are
+ * of course invalid (e.g. @c MEM_PROT_CPU_WR | @c MEM_HINT_CPU_RD,
+ * which would define a @a write-only region on the CPU side that is
+ * nevertheless heavily read by the CPU).
+ * Other flags are only meaningful to a particular allocator.
+ * More flags can be added to this list, as long as they don't clash
+ * (see ::BASE_MEM_FLAGS_NR_BITS for the number of the first free bit).
+ */
+typedef u32 base_mem_alloc_flags;
+
+
+/**
+ * @brief Memory allocation, access/hint flags
+ *
+ * See ::base_mem_alloc_flags.
+ *
+ */
+enum
+{
+ BASE_MEM_PROT_CPU_RD = (1U << 0), /**< Read access CPU side */
+ BASE_MEM_PROT_CPU_WR = (1U << 1), /**< Write access CPU side */
+ BASE_MEM_PROT_GPU_RD = (1U << 2), /**< Read access GPU side */
+ BASE_MEM_PROT_GPU_WR = (1U << 3), /**< Write access GPU side */
+ BASE_MEM_PROT_GPU_EX = (1U << 4), /**< Execute allowed on the GPU side */
+
+ BASE_MEM_HINT_CPU_RD = (1U << 5), /**< Heavily read CPU side */
+ BASE_MEM_HINT_CPU_WR = (1U << 6), /**< Heavily written CPU side */
+ BASE_MEM_HINT_GPU_RD = (1U << 7), /**< Heavily read GPU side */
+ BASE_MEM_HINT_GPU_WR = (1U << 8), /**< Heavily written GPU side */
+
+ BASEP_MEM_GROWABLE = (1U << 9), /**< Growable memory. This is a private flag that is set automatically. Not valid for PMEM. */
+ BASE_MEM_GROW_ON_GPF = (1U << 10), /**< Grow backing store on GPU Page Fault */
+
+ BASE_MEM_COHERENT_SYSTEM = (1U << 11),/**< Page coherence Outer shareable */
+ BASE_MEM_COHERENT_LOCAL = (1U << 12) /**< Page coherence Inner shareable */
+};
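For instance, a buffer written by the GPU and read back heavily by the CPU
could be described with the following combination (illustrative, not
prescriptive):

static const base_mem_alloc_flags example_flags =
	BASE_MEM_PROT_CPU_RD | BASE_MEM_PROT_GPU_WR | BASE_MEM_HINT_CPU_RD;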
+
+/**
+ * @brief Memory types supported by @a base_tmem_import
+ *
+ * Each type defines what the supported handle type is.
+ *
+ * If any new type is added here ARM must be contacted
+ * to allocate a numeric value for it.
+ * Do not just add a new type without synchronizing with ARM
+ * as future releases from ARM might include other new types
+ * which could clash with your custom types.
+ */
+typedef enum base_tmem_import_type
+{
+ BASE_TMEM_IMPORT_TYPE_INVALID = 0,
+ /** UMP import. Handle type is ump_secure_id. */
+ BASE_TMEM_IMPORT_TYPE_UMP = 1,
+ /** UMM import. Handle type is a file descriptor (int) */
+ BASE_TMEM_IMPORT_TYPE_UMM = 2
+} base_tmem_import_type;
+
+/**
+ * Bits we can tag into a memory handle.
+ * We use the lower 12 bits as our handles are page-multiples, thus not using the 12 LSBs
+ */
+enum
+{
+ BASE_MEM_TAGS_MASK = ((1U << 12) - 1), /**< Mask to get hold of the tag bits/see if there are tag bits */
+ BASE_MEM_TAG_IMPORTED = (1U << 0) /**< Tagged as imported */
+ /* max 1u << 11 supported */
+};
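Since handles are page-multiples, a tag is recovered with a simple mask; a
hypothetical helper:

static INLINE int basep_handle_is_imported(mali_addr64 handle)
{
	/* the 12 LSBs of a page-aligned handle carry the tag bits */
	return (handle & BASE_MEM_TAGS_MASK & BASE_MEM_TAG_IMPORTED) ? 1 : 0;
}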
+
+
+/**
+ * @brief Number of bits used as flags for base memory management
+ *
+ * Must be kept in sync with the ::base_mem_alloc_flags flags
+ */
+#define BASE_MEM_FLAGS_NR_BITS 13
+
+/**
+ * @brief Result codes of changing the size of the backing store allocated to a tmem region
+ */
+typedef enum base_backing_threshold_status
+{
+ BASE_BACKING_THRESHOLD_OK = 0, /**< Resize successful */
+ BASE_BACKING_THRESHOLD_ERROR_NOT_GROWABLE = -1, /**< Not a growable tmem object */
+ BASE_BACKING_THRESHOLD_ERROR_OOM = -2, /**< Increase failed due to an out-of-memory condition */
+ BASE_BACKING_THRESHOLD_ERROR_MAPPED = -3, /**< Resize attempted on buffer while it was mapped, which is not permitted */
+ BASE_BACKING_THRESHOLD_ERROR_INVALID_ARGUMENTS = -4 /**< Invalid arguments (not tmem, illegal size request, etc.) */
+} base_backing_threshold_status;
+
+/**
+ * @addtogroup base_user_api_memory_defered User-side Base Deferred Memory Coherency APIs
+ * @{
+ */
+
+/**
+ * @brief a basic memory operation (sync-set).
+ *
+ * The content of this structure is private, and should only be used
+ * by the accessors.
+ */
+typedef struct base_syncset
+{
+ basep_syncset basep_sset;
+} base_syncset;
+
+/** @} end group base_user_api_memory_defered */
+
+/**
+ * Handle representing an imported memory object.
+ * A simple opaque handle to imported memory; it can't be used
+ * with anything but base_external_resource_init to bind to an atom.
+ */
+typedef struct base_import_handle
+{
+ struct
+ {
+ mali_addr64 handle;
+ } basep;
+} base_import_handle;
+
+
+/** @} end group base_user_api_memory */
+
+/**
+ * @addtogroup base_user_api_job_dispatch User-side Base Job Dispatcher APIs
+ * @{
+ */
+
+/**
+ * @brief A pre- or post- dual dependency.
+ *
+ * This structure is used to express either
+ * @li a single or dual pre-dependency (a job depending on one or two
+ * other jobs),
+ * @li a single or dual post-dependency (a job resolving a dependency
+ * for one or two other jobs).
+ *
+ * The dependency itself is specified as a u8, where 0 indicates no
+ * dependency. A single dependency is expressed by having one of the
+ * dependencies set to 0.
+ */
+typedef struct base_jd_dep {
+ u8 dep[2]; /**< pre/post dependencies */
+} base_jd_dep;
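For instance, a single pre-dependency on job 3 leaves the second slot at 0
(no dependency); the initializer below is illustrative:

static const base_jd_dep example_pre_dep = { { 3, 0 } };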
+
+/**
+ * @brief Per-job data
+ *
+ * This structure is used to store per-job data, and is completely unused
+ * by the Base driver. It can be used to store things such as a callback
+ * function pointer or data needed to handle job completion. It is
+ * guaranteed to be untouched by the Base driver.
+ */
+typedef struct base_jd_udata
+{
+ u64 blob[2]; /**< per-job data array */
+} base_jd_udata;
+
+/**
+ * @brief Job chain hardware requirements.
+ *
+ * A job chain must specify what GPU features it needs so that the
+ * driver can schedule the job correctly. Failing to specify the
+ * correct settings can (and will) cause early job termination. Multiple
+ * values can be ORed together to specify multiple requirements.
+ * Special case is ::BASE_JD_REQ_DEP, which is used to express complex
+ * dependencies, and that doesn't execute anything on the hardware.
+ */
+typedef u16 base_jd_core_req;
+
+/* Requirements that come from the HW */
+#define BASE_JD_REQ_DEP 0 /**< No requirement, dependency only */
+#define BASE_JD_REQ_FS (1U << 0) /**< Requires fragment shaders */
+/**
+ * Requires compute shaders
+ * This covers any of the following Midgard Job types:
+ * - Vertex Shader Job
+ * - Geometry Shader Job
+ * - An actual Compute Shader Job
+ *
+ * Compare this with @ref BASE_JD_REQ_ONLY_COMPUTE, which specifies that the
+ * job is specifically just the "Compute Shader" job type, and not the "Vertex
+ * Shader" nor the "Geometry Shader" job type.
+ */
+#define BASE_JD_REQ_CS (1U << 1)
+#define BASE_JD_REQ_T (1U << 2) /**< Requires tiling */
+#define BASE_JD_REQ_CF (1U << 3) /**< Requires cache flushes */
+#define BASE_JD_REQ_V (1U << 4) /**< Requires value writeback */
+
+/* SW-only requirements - the HW does not expose these as part of the job slot capabilities */
+/**
+ * SW Only requirement: this job chain might not be soft-stoppable (Non-Soft
+ * Stoppable), and so must be scheduled separately from all other job-chains
+ * that are soft-stoppable.
+ *
+ * In absence of this requirement, then the job-chain is assumed to be
+ * soft-stoppable. That is, if it does not release the GPU "soon after" it is
+ * soft-stopped, then it will be killed. In contrast, NSS job chains can
+ * release the GPU "a long time after" they are soft-stopped.
+ *
+ * "soon after" and "a long time after" are implementation defined, and
+ * configurable in the device driver by the system integrator.
+ */
+#define BASE_JD_REQ_NSS (1U << 5)
+
+/**
+ * SW Only requirement: the job chain requires a coherent core group. We don't
+ * mind which coherent core group is used.
+ */
+#define BASE_JD_REQ_COHERENT_GROUP (1U << 6)
+
+/**
+ * SW Only requirement: The performance counters should be enabled only when
+ * they are needed, to reduce power consumption.
+ */
+
+#define BASE_JD_REQ_PERMON (1U << 7)
+
+/**
+ * SW Only requirement: External resources are referenced by this atom.
+ * When external resources are referenced, no syncsets can be bundled with the
+ * atom; they should instead be part of NULL jobs inserted into the dependency
+ * tree. The first pre_dep object must be configured for the external resources
+ * to use; the second pre_dep object can be used to create other dependencies.
+ */
+#define BASE_JD_REQ_EXTERNAL_RESOURCES (1U << 8)
+
+/**
+ * SW Only requirement: Software defined job. Jobs with this bit set will not be submitted
+ * to the hardware but will cause some action to happen within the driver
+ */
+#define BASE_JD_REQ_SOFT_JOB (1U << 9)
+
+#define BASE_JD_REQ_SOFT_DUMP_CPU_GPU_TIME (BASE_JD_REQ_SOFT_JOB | 0x1)
+
+/**
+ * HW Requirement: Requires Compute shaders (but not Vertex or Geometry Shaders)
+ *
+ * This indicates that the Job Chain contains Midgard Jobs of the 'Compute Shaders' type.
+ *
+ * In contrast to @ref BASE_JD_REQ_CS, this does \b not indicate that the Job
+ * Chain contains 'Geometry Shader' or 'Vertex Shader' jobs.
+ *
+ * @note This is a more flexible variant of the @ref BASE_CONTEXT_HINT_ONLY_COMPUTE flag,
+ * allowing specific jobs to be marked as 'Only Compute' instead of the entire context
+ */
+#define BASE_JD_REQ_ONLY_COMPUTE (1U << 10)
+
+/**
+ * HW Requirement: Use the base_jd_atom::device_nr field to specify a
+ * particular core group
+ *
+ * If both BASE_JD_REQ_COHERENT_GROUP and this flag are set, this flag takes priority
+ *
+ * This is only guaranteed to work for BASE_JD_REQ_ONLY_COMPUTE atoms.
+ */
+#define BASE_JD_REQ_SPECIFIC_COHERENT_GROUP ( 1U << 11 )
+
+/**
+* These requirement bits are currently unused in base_jd_core_req (currently a u16)
+*/
+
+#define BASEP_JD_REQ_RESERVED_BIT12 ( 1U << 12 )
+#define BASEP_JD_REQ_RESERVED_BIT13 ( 1U << 13 )
+#define BASEP_JD_REQ_RESERVED_BIT14 ( 1U << 14 )
+#define BASEP_JD_REQ_RESERVED_BIT15 ( 1U << 15 )
+
+/**
+* Mask of all the currently unused requirement bits in base_jd_core_req.
+*/
+
+#define BASEP_JD_REQ_RESERVED ( BASEP_JD_REQ_RESERVED_BIT12 | BASEP_JD_REQ_RESERVED_BIT13 |\
+ BASEP_JD_REQ_RESERVED_BIT14 | BASEP_JD_REQ_RESERVED_BIT15 )
+
+
+/**
+ * @brief A single job chain, with pre/post dependencies and mem ops
+ *
+ * This structure is used to describe a single job-chain to be submitted
+ * as part of a bag.
+ * It contains all the necessary information for Base to take care of this
+ * job-chain, including core requirements, priority, syncsets and
+ * dependencies.
+ */
+typedef struct base_jd_atom
+{
+ mali_addr64 jc; /**< job-chain GPU address */
+ base_jd_udata udata; /**< user data */
+ base_jd_dep pre_dep; /**< pre-dependencies */
+ base_jd_dep post_dep; /**< post-dependencies */
+ base_jd_core_req core_req; /**< core requirements */
+ u16 nr_syncsets; /**< nr of syncsets following the atom */
+ u16 nr_extres; /**< nr of external resources following the atom */
+
+ /** @brief Relative priority.
+ *
+ * A positive value requests a lower priority, whilst a negative value
+ * requests a higher priority. Only privileged processes may request a
+ * higher priority. For unprivileged processes, a negative priority will
+ * be interpreted as zero.
+ */
+ s8 prio;
+
+ /**
+ * @brief Device number to use, depending on @ref base_jd_core_req flags set.
+ *
+ * When BASE_JD_REQ_SPECIFIC_COHERENT_GROUP is set, a 'device' is one of
+ * the coherent core groups, and so this targets a particular coherent
+ * core-group. They are numbered from 0 to (mali_base_gpu_coherent_group_info::num_groups - 1),
+ * and the cores targeted by this device_nr will usually be those specified by
+ * (mali_base_gpu_coherent_group_info::group[device_nr].core_mask).
+ * Further, two atoms from different processes using the same \a device_nr
+ * at the same time will always target the same coherent core-group.
+ *
+ * There are exceptions to when the device_nr is ignored:
+ * - when any process in the system uses a BASE_JD_REQ_CS or
+ * BASE_JD_REQ_ONLY_COMPUTE atom that can run on all cores across all
+ * coherency groups (i.e. also does \b not have the
+ * BASE_JD_REQ_COHERENT_GROUP or BASE_JD_REQ_SPECIFIC_COHERENT_GROUP flags
+ * set). In this case, such atoms would block device_nr==1 being used due
+ * to restrictions on affinity, perhaps indefinitely. To ensure progress is
+ * made, the atoms targeted for device_nr 1 will instead be redirected to
+ * device_nr 0
+ * - When any process in the system is using 'NSS' (BASE_JD_REQ_NSS) atoms,
+ * because there'd be very high latency on atoms targeting a coregroup
+ * that is also in use by NSS atoms. To ensure progress is
+ * made, the atoms targeted for device_nr 1 will instead be redirected to
+ * device_nr 0
+ * - During certain HW workarounds, such as BASE_HW_ISSUE_8987, where
+ * BASE_JD_REQ_ONLY_COMPUTE atoms must not use the same cores as other
+ * atoms. In this case, all atoms are targeted to device_nr == min( num_groups, 1 )
+ *
+ * Note that the 'device' number for a coherent coregroup cannot exceed
+ * (BASE_MAX_COHERENT_GROUPS - 1).
+ */
+ u8 device_nr;
+} base_jd_atom;
+
+/* Structure definition works around the fact that C89 doesn't allow arrays of size 0 */
+typedef struct basep_jd_atom_ss
+{
+ base_jd_atom atom;
+ base_syncset syncsets[1];
+} basep_jd_atom_ss;
+
+typedef enum base_external_resource_access
+{
+ BASE_EXT_RES_ACCESS_SHARED,
+ BASE_EXT_RES_ACCESS_EXCLUSIVE
+} base_external_resource_access;
+
+typedef struct base_external_resource
+{
+ u64 ext_resource;
+} base_external_resource;
+
+/* Structure definition works around the fact that C89 doesn't allow arrays of size 0 */
+typedef struct basep_jd_atom_ext_res
+{
+ base_jd_atom atom;
+ base_external_resource resources[1];
+} basep_jd_atom_ext_res;
+
+static INLINE size_t base_jd_atom_size_ex(u32 syncset_count, u32 external_res_count)
+{
+ LOCAL_ASSERT( 0 == syncset_count || 0 == external_res_count );
+
+ return syncset_count ? offsetof(basep_jd_atom_ss, syncsets[0]) + (sizeof(base_syncset) * syncset_count) :
+ external_res_count ? offsetof(basep_jd_atom_ext_res, resources[0]) + (sizeof(base_external_resource) * external_res_count) :
+ sizeof(base_jd_atom);
+}
+
+/**
+ * @brief Atom size evaluator
+ *
+ * This function returns the size in bytes of a ::base_jd_atom
+ * containing @a nr syncsets. It must be used to compute the size of a
+ * bag before allocation.
+ *
+ * @param nr the number of syncsets for this atom
+ * @return the atom size in bytes
+ */
+static INLINE size_t base_jd_atom_size(u32 nr)
+{
+ return base_jd_atom_size_ex(nr, 0);
+}
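So a bag of four atoms, each carrying two syncsets, occupies
4 * base_jd_atom_size(2) bytes; a sketch with illustrative counts:

static INLINE size_t example_bag_size(void)
{
	return 4 * base_jd_atom_size(2);
}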
+
+/**
+ * @brief Atom syncset accessor
+ *
+ * This function returns a pointer to the nth syncset allocated
+ * together with an atom.
+ *
+ * @param[in] atom The allocated atom
+ * @param n The number of the syncset to be returned
+ * @return a pointer to the nth syncset.
+ */
+static INLINE base_syncset *base_jd_get_atom_syncset(base_jd_atom *atom, int n)
+{
+ LOCAL_ASSERT(atom != NULL);
+ LOCAL_ASSERT(0 == (atom->core_req & BASE_JD_REQ_EXTERNAL_RESOURCES));
+ LOCAL_ASSERT( (n >= 0) && (n <= atom->nr_syncsets) );
+ return &((basep_jd_atom_ss *)atom)->syncsets[n];
+}
+
+/**
+ * @brief Atom external resource accessor
+ *
+ * This function returns a pointer to the nth external resource tracked by the atom.
+ *
+ * @param[in] atom The allocated atom
+ * @param n The number of the external resource to return a pointer to
+ * @return a pointer to the nth external resource
+ */
+static INLINE base_external_resource *base_jd_get_external_resource(base_jd_atom *atom, int n)
+{
+ LOCAL_ASSERT(atom != NULL);
+ LOCAL_ASSERT(BASE_JD_REQ_EXTERNAL_RESOURCES == (atom->core_req & BASE_JD_REQ_EXTERNAL_RESOURCES));
+ LOCAL_ASSERT( (n >= 0) && (n <= atom->nr_extres) );
+ return &((basep_jd_atom_ext_res*)atom)->resources[n];
+}
+
+/**
+ * @brief External resource info initialization.
+ *
+ * Sets up an external resource object to reference
+ * a memory allocation and the type of access requested.
+ *
+ * @param[in] res The resource object to initialize
+ * @param handle Handle of the imported memory to reference
+ * @param access The type of access requested
+ */
+static INLINE void base_external_resource_init(base_external_resource * res, base_import_handle handle, base_external_resource_access access)
+{
+ mali_addr64 address;
+ address = handle.basep.handle;
+
+ LOCAL_ASSERT(res != NULL);
+ LOCAL_ASSERT(0 == (address & LOCAL_PAGE_LSB));
+ LOCAL_ASSERT(access == BASE_EXT_RES_ACCESS_SHARED || access == BASE_EXT_RES_ACCESS_EXCLUSIVE);
+
+ res->ext_resource = address | (access & LOCAL_PAGE_LSB);
+}
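A sketch of binding an imported buffer to an atom for shared access, assuming
the handle came from base_tmem_import and the atom has
BASE_JD_REQ_EXTERNAL_RESOURCES set with nr_extres >= 1:

static INLINE void example_bind_import(base_jd_atom *atom,
				       base_import_handle handle)
{
	base_external_resource *res = base_jd_get_external_resource(atom, 0);

	base_external_resource_init(res, handle, BASE_EXT_RES_ACCESS_SHARED);
}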
+
+/**
+ * @brief Next atom accessor
+ *
+ * This function returns a pointer to the next allocated atom. It
+ * relies on the current atom having been correctly initialized
+ * (in particular on the base_jd_atom::nr_syncsets field).
+ *
+ * @param[in] atom The allocated atom
+ * @return a pointer to the next atom.
+ */
+static INLINE base_jd_atom *base_jd_get_next_atom(base_jd_atom *atom)
+{
+ LOCAL_ASSERT(atom != NULL);
+ return (atom->core_req & BASE_JD_REQ_EXTERNAL_RESOURCES) ? (base_jd_atom *)base_jd_get_external_resource(atom, atom->nr_extres) :
+ (base_jd_atom *)base_jd_get_atom_syncset(atom, atom->nr_syncsets);
+}
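Putting the accessors together, a hedged sketch of walking every atom in a
bag buffer; the atom count is assumed to be tracked by the caller:

static INLINE void example_walk_bag(base_jd_atom *first, u32 nr_atoms)
{
	base_jd_atom *atom = first;
	u32 i;

	for (i = 0; i < nr_atoms; i++) {
		/* ... inspect atom->jc, atom->core_req, syncsets ... */
		atom = base_jd_get_next_atom(atom);
	}
}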
+
+/**
+ * @brief Job chain event code bits
+ * Defines the bits used to create ::base_jd_event_code
+ */
+enum
+{
+ BASE_JD_SW_EVENT_KERNEL = (1u << 15), /**< Kernel side event */
+ BASE_JD_SW_EVENT = (1u << 14), /**< SW defined event */
+	BASE_JD_SW_EVENT_SUCCESS = (1u << 13), /**< Event indicates success (SW events only) */
+ BASE_JD_SW_EVENT_JOB = (0u << 11), /**< Job related event */
+ BASE_JD_SW_EVENT_BAG = (1u << 11), /**< Bag related event */
+ BASE_JD_SW_EVENT_INFO = (2u << 11), /**< Misc/info event */
+ BASE_JD_SW_EVENT_RESERVED = (3u << 11), /**< Reserved event type */
+ BASE_JD_SW_EVENT_TYPE_MASK = (3u << 11) /**< Mask to extract the type from an event code */
+};
+
+/**
+ * @brief Job chain event codes
+ *
+ * HW and low-level SW events are represented by event codes.
+ * The status of jobs which succeeded are also represented by
+ * an event code (see ::BASE_JD_EVENT_DONE).
+ * Events are usually reported as part of a ::base_jd_event.
+ *
+ * The event codes are encoded in the following way:
+ * @li 10:0 - subtype
+ * @li 12:11 - type
+ * @li 13 - SW success (only valid if the SW bit is set)
+ * @li 14 - SW event (HW event if not set)
+ * @li 15 - Kernel event (should never be seen in userspace)
+ *
+ * Events are split up into ranges as follows:
+ * - BASE_JD_EVENT_RANGE_\<description\>_START
+ * - BASE_JD_EVENT_RANGE_\<description\>_END
+ *
+ * \a code is in \<description\>'s range when:
+ * - <tt>BASE_JD_EVENT_RANGE_\<description\>_START <= code < BASE_JD_EVENT_RANGE_\<description\>_END </tt>
+ *
+ * Ranges can be asserted for adjacency by testing that the END of the previous
+ * is equal to the START of the next. This is useful for optimizing some tests
+ * for range.
+ *
+ * A limitation is that the last member of this enum must explicitly be handled
+ * (with an assert-unreachable statement) in switch statements that use
+ * variables of this type. Otherwise, the compiler warns that we have not
+ * handled that enum value.
+ */
+typedef enum base_jd_event_code
+{
+ /* HW defined exceptions */
+
+ /** Start of HW Non-fault status codes
+ *
+ * @note Obscurely, BASE_JD_EVENT_TERMINATED indicates a real fault,
+ * because the job was hard-stopped
+ */
+ BASE_JD_EVENT_RANGE_HW_NONFAULT_START = 0,
+
+ /* non-fatal exceptions */
+ BASE_JD_EVENT_NOT_STARTED = 0x00, /**< Can't be seen by userspace, treated as 'previous job done' */
+ BASE_JD_EVENT_DONE = 0x01,
+ BASE_JD_EVENT_STOPPED = 0x03, /**< Can't be seen by userspace, becomes TERMINATED, DONE or JOB_CANCELLED */
+ BASE_JD_EVENT_TERMINATED = 0x04, /**< This is actually a fault status code - the job was hard stopped */
+ BASE_JD_EVENT_ACTIVE = 0x08, /**< Can't be seen by userspace, jobs only returned on complete/fail/cancel */
+
+ /** End of HW Non-fault status codes
+ *
+ * @note Obscurely, BASE_JD_EVENT_TERMINATED indicates a real fault,
+ * because the job was hard-stopped
+ */
+ BASE_JD_EVENT_RANGE_HW_NONFAULT_END = 0x40,
+
+ /** Start of HW fault and SW Error status codes */
+ BASE_JD_EVENT_RANGE_HW_FAULT_OR_SW_ERROR_START = 0x40,
+
+ /* job exceptions */
+ BASE_JD_EVENT_JOB_CONFIG_FAULT = 0x40,
+ BASE_JD_EVENT_JOB_POWER_FAULT = 0x41,
+ BASE_JD_EVENT_JOB_READ_FAULT = 0x42,
+ BASE_JD_EVENT_JOB_WRITE_FAULT = 0x43,
+ BASE_JD_EVENT_JOB_AFFINITY_FAULT = 0x44,
+ BASE_JD_EVENT_JOB_BUS_FAULT = 0x48,
+ BASE_JD_EVENT_INSTR_INVALID_PC = 0x50,
+ BASE_JD_EVENT_INSTR_INVALID_ENC = 0x51,
+ BASE_JD_EVENT_INSTR_TYPE_MISMATCH = 0x52,
+ BASE_JD_EVENT_INSTR_OPERAND_FAULT = 0x53,
+ BASE_JD_EVENT_INSTR_TLS_FAULT = 0x54,
+ BASE_JD_EVENT_INSTR_BARRIER_FAULT = 0x55,
+ BASE_JD_EVENT_INSTR_ALIGN_FAULT = 0x56,
+ BASE_JD_EVENT_DATA_INVALID_FAULT = 0x58,
+ BASE_JD_EVENT_TILE_RANGE_FAULT = 0x59,
+ BASE_JD_EVENT_STATE_FAULT = 0x5A,
+ BASE_JD_EVENT_OUT_OF_MEMORY = 0x60,
+ BASE_JD_EVENT_UNKNOWN = 0x7F,
+
+ /* GPU exceptions */
+ BASE_JD_EVENT_DELAYED_BUS_FAULT = 0x80,
+ BASE_JD_EVENT_SHAREABILITY_FAULT = 0x88,
+
+ /* MMU exceptions */
+ BASE_JD_EVENT_TRANSLATION_FAULT_LEVEL1 = 0xC1,
+ BASE_JD_EVENT_TRANSLATION_FAULT_LEVEL2 = 0xC2,
+ BASE_JD_EVENT_TRANSLATION_FAULT_LEVEL3 = 0xC3,
+ BASE_JD_EVENT_TRANSLATION_FAULT_LEVEL4 = 0xC4,
+ BASE_JD_EVENT_PERMISSION_FAULT = 0xC8,
+ BASE_JD_EVENT_TRANSTAB_BUS_FAULT_LEVEL1 = 0xD1,
+ BASE_JD_EVENT_TRANSTAB_BUS_FAULT_LEVEL2 = 0xD2,
+ BASE_JD_EVENT_TRANSTAB_BUS_FAULT_LEVEL3 = 0xD3,
+ BASE_JD_EVENT_TRANSTAB_BUS_FAULT_LEVEL4 = 0xD4,
+ BASE_JD_EVENT_ACCESS_FLAG = 0xD8,
+
+ /* SW defined exceptions */
+ BASE_JD_EVENT_MEM_GROWTH_FAILED = BASE_JD_SW_EVENT | BASE_JD_SW_EVENT_JOB | 0x000,
+ BASE_JD_EVENT_TIMED_OUT = BASE_JD_SW_EVENT | BASE_JD_SW_EVENT_JOB | 0x001,
+ BASE_JD_EVENT_JOB_CANCELLED = BASE_JD_SW_EVENT | BASE_JD_SW_EVENT_JOB | 0x002,
+ BASE_JD_EVENT_BAG_INVALID = BASE_JD_SW_EVENT | BASE_JD_SW_EVENT_BAG | 0x003,
+
+ /** End of HW fault and SW Error status codes */
+ BASE_JD_EVENT_RANGE_HW_FAULT_OR_SW_ERROR_END = BASE_JD_SW_EVENT | BASE_JD_SW_EVENT_RESERVED | 0x3FF,
+
+ /** Start of SW Success status codes */
+ BASE_JD_EVENT_RANGE_SW_SUCCESS_START = BASE_JD_SW_EVENT | BASE_JD_SW_EVENT_SUCCESS | 0x000,
+
+ BASE_JD_EVENT_PROGRESS_REPORT = BASE_JD_SW_EVENT | BASE_JD_SW_EVENT_SUCCESS | BASE_JD_SW_EVENT_JOB | 0x000,
+ BASE_JD_EVENT_BAG_DONE = BASE_JD_SW_EVENT | BASE_JD_SW_EVENT_SUCCESS | BASE_JD_SW_EVENT_BAG | 0x000,
+ BASE_JD_EVENT_DRV_TERMINATED = BASE_JD_SW_EVENT | BASE_JD_SW_EVENT_SUCCESS | BASE_JD_SW_EVENT_INFO | 0x000,
+
+ /** End of SW Success status codes */
+ BASE_JD_EVENT_RANGE_SW_SUCCESS_END = BASE_JD_SW_EVENT | BASE_JD_SW_EVENT_SUCCESS | BASE_JD_SW_EVENT_RESERVED | 0x3FF,
+
+ /** Start of Kernel-only status codes. Such codes are never returned to user-space */
+ BASE_JD_EVENT_RANGE_KERNEL_ONLY_START = BASE_JD_SW_EVENT | BASE_JD_SW_EVENT_KERNEL | 0x000,
+ BASE_JD_EVENT_REMOVED_FROM_NEXT = BASE_JD_SW_EVENT | BASE_JD_SW_EVENT_KERNEL | BASE_JD_SW_EVENT_JOB | 0x000,
+
+ /** End of Kernel-only status codes. */
+ BASE_JD_EVENT_RANGE_KERNEL_ONLY_END = BASE_JD_SW_EVENT | BASE_JD_SW_EVENT_KERNEL | BASE_JD_SW_EVENT_RESERVED | 0x3FF
+} base_jd_event_code;
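+
+/* Range membership test (a sketch following the START <= code < END
+ * convention documented above):
+ * @code
+ * static INLINE mali_bool event_is_sw_success( base_jd_event_code code )
+ * {
+ *     return (mali_bool)( BASE_JD_EVENT_RANGE_SW_SUCCESS_START <= code &&
+ *                         code < BASE_JD_EVENT_RANGE_SW_SUCCESS_END );
+ * }
+ * @endcode
+ */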
+
+/**
+ * @brief Event reporting structure
+ *
+ * This structure is used by the kernel driver to report information
+ * about GPU events. These can be either HW-specific events or low-level
+ * SW events, such as job-chain completion.
+ *
+ * The event code contains an event type field which can be extracted
+ * by ANDing with ::BASE_JD_SW_EVENT_TYPE_MASK.
+ *
+ * Depending on the event type, base_jd_event::data holds:
+ * @li ::BASE_JD_SW_EVENT_JOB : the offset in the ring-buffer for the completed
+ * job-chain
+ * @li ::BASE_JD_SW_EVENT_BAG : The address of the ::base_jd_bag that has
+ * been completed (ie all contained job-chains have been completed).
+ * @li ::BASE_JD_SW_EVENT_INFO : base_jd_event::data not used
+ */
+typedef struct base_jd_event
+{
+ base_jd_event_code event_code; /**< event code */
+ void * data; /**< event specific data */
+} base_jd_event;
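+
+/* Decoding an event (a sketch; @c ev is assumed to have been reported by the
+ * kernel driver as described above):
+ * @code
+ * u32 type = ev.event_code & BASE_JD_SW_EVENT_TYPE_MASK;
+ * if ( type == BASE_JD_SW_EVENT_BAG )
+ * {
+ *     base_jd_bag *bag = (base_jd_bag *)ev.data; // completed bag
+ * }
+ * @endcode
+ */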
+
+/**
+ * @brief Structure for BASE_JD_REQ_SOFT_DUMP_CPU_GPU_COUNTERS jobs.
+ *
+ * This structure is stored into the memory pointed to by the @c jc field of @ref base_jd_atom.
+ */
+typedef struct base_dump_cpu_gpu_counters {
+ u64 system_time;
+ u64 cycle_counter;
+ u64 sec;
+ u32 usec;
+} base_dump_cpu_gpu_counters;
+
+/** @} end group base_user_api_job_dispatch */
+
+
+#ifdef __KERNEL__
+/*
+ * The following typedefs should be removed when a midg types header is added.
+ * See MIDCOM-1657 for details.
+ */
+typedef u32 midg_product_id;
+typedef u32 midg_cache_features;
+typedef u32 midg_tiler_features;
+typedef u32 midg_mem_features;
+typedef u32 midg_mmu_features;
+typedef u32 midg_js_features;
+typedef u32 midg_as_present;
+typedef u32 midg_js_present;
+
+#define MIDG_MAX_JOB_SLOTS 16
+
+#else
+#include <midg/mali_midg.h>
+#endif
+
+/**
+ * @page page_base_user_api_gpuprops User-side Base GPU Property Query API
+ *
+ * The User-side Base GPU Property Query API encapsulates two
+ * sub-modules:
+ *
+ * - @ref base_user_api_gpuprops_dyn "Dynamic GPU Properties"
+ * - @ref base_plat_config_gpuprops "Base Platform Config GPU Properties"
+ *
+ * There is a related third module outside of Base, which is owned by the MIDG
+ * module:
+ * - @ref midg_gpuprops_static "Midgard Compile-time GPU Properties"
+ *
+ * Base only deals with properties that vary between different Midgard
+ * implementations - the Dynamic GPU properties and the Platform Config
+ * properties.
+ *
+ * For properties that are constant for the Midgard Architecture, refer to the
+ * MIDG module. However, we will discuss their relevance here <b>just to
+ * provide background information.</b>
+ *
+ * @section sec_base_user_api_gpuprops_about About the GPU Properties in Base and MIDG modules
+ *
+ * The compile-time properties (Platform Config, Midgard Compile-time
+ * properties) are exposed as pre-processor macros.
+ *
+ * Complementing the compile-time properties are the Dynamic GPU
+ * Properties, which act as a conduit for the Midgard Configuration
+ * Discovery.
+ *
+ * In general, the dynamic properties are present to verify that the platform
+ * has been configured correctly with the right set of Platform Config
+ * Compile-time Properties.
+ *
+ * As a consistent guide across the entire DDK, the choice for dynamic or
+ * compile-time should consider the following, in order:
+ * -# Can the code be written so that it doesn't need to know the
+ * implementation limits at all?
+ * -# If you need the limits, get the information from the Dynamic Property
+ * lookup. This should be done once as you fetch the context, and then cached
+ * as part of the context data structure, so it's cheap to access.
+ * -# If there's a clear and arguable inefficiency in using Dynamic Properties,
+ * then use a Compile-Time Property (Platform Config, or Midgard Compile-time
+ * property). Examples of where this might be sensible follow:
+ * - Part of a critical inner-loop
+ * - Frequent re-use throughout the driver, causing significant extra load
+ * instructions or control flow that would be worthwhile optimizing out.
+ *
+ * We cannot provide an exhaustive set of examples, neither can we provide a
+ * rule for every possible situation. Use common sense, and think about: what
+ * the rest of the driver will be doing; how the compiler might represent the
+ * value if it is a compile-time constant; whether an OEM shipping multiple
+ * devices would benefit much more from a single DDK binary, instead of
+ * insignificant micro-optimizations.
+ *
+ * @section sec_base_user_api_gpuprops_dyn Dynamic GPU Properties
+ *
+ * Dynamic GPU properties are presented in two sets:
+ * -# the commonly used properties in @ref base_gpu_props, which have been
+ * unpacked from GPU register bitfields.
+ * -# The full set of raw, unprocessed properties in @ref midg_raw_gpu_props
+ * (also a member of @ref base_gpu_props). All of these are presented in
+ * the packed form, as presented by the GPU registers themselves.
+ *
+ * @usecase The raw properties in @ref midg_raw_gpu_props are necessary to
+ * allow a user of the Mali Tools (e.g. PAT) to determine "Why is this device
+ * behaving differently?". In this case, all information about the
+ * configuration is potentially useful, but it <b>does not need to be processed
+ * by the driver</b>. Instead, the raw registers can be processed by the Mali
+ * Tools software on the host PC.
+ *
+ * The properties returned extend the Midgard Configuration Discovery
+ * registers. For example, GPU clock speed is not specified in the Midgard
+ * Architecture, but is <b>necessary for OpenCL's clGetDeviceInfo() function</b>.
+ *
+ * The GPU properties are obtained by a call to
+ * _mali_base_get_gpu_props(). This simply returns a pointer to a const
+ * base_gpu_props structure. It is constant for the life of a base
+ * context. Multiple calls to _mali_base_get_gpu_props() on the same base context
+ * return the same pointer to a constant structure. This avoids cache pollution
+ * of the common data.
+ *
+ * This pointer must not be freed, because it does not point to the start of a
+ * region allocated by the memory allocator; instead, just close the @ref
+ * base_context.
+ *
+ *
+ * @section sec_base_user_api_gpuprops_config Platform Config Compile-time Properties
+ *
+ * The Platform Config File sets up gpu properties that are specific to a
+ * certain platform. Properties that are 'Implementation Defined' in the
+ * Midgard Architecture spec are placed here.
+ *
+ * @note Reference configurations are provided for Midgard Implementations, such as
+ * the Mali-T600 family. The customer need not repeat this information, and can select one of
+ * these reference configurations. For example, VA_BITS, PA_BITS and the
+ * maximum number of samples per pixel might vary between Midgard Implementations, but
+ * \b not for platforms using the Mali-T604. This information is placed in
+ * the reference configuration files.
+ *
+ * The System Integrator creates the following structure:
+ * - platform_XYZ
+ * - platform_XYZ/plat
+ * - platform_XYZ/plat/plat_config.h
+ *
+ * They then edit plat_config.h, using the example plat_config.h files as a
+ * guide.
+ *
+ * At the very least, the customer must set @ref CONFIG_GPU_CORE_TYPE, and will
+ * receive a helpful \#error message if they do not do this correctly. This
+ * selects the Reference Configuration for the Midgard Implementation. The rationale
+ * behind this decision (against asking the customer to write \#include
+ * <gpus/mali_t600.h> in their plat_config.h) is as follows:
+ * - This mechanism 'looks' like a regular config file (such as Linux's
+ * .config)
+ * - It is difficult to get wrong in a way that will produce strange build
+ * errors:
+ * - They need not know where the mali_t600.h, other_midg_gpu.h etc. files are stored - and
+ * so they won't accidentally pick another file with 'mali_t600' in its name
+ * - When the build doesn't work, the System Integrator may think the DDK
+ * doesn't work, and attempt to fix it themselves:
+ * - For the @ref CONFIG_GPU_CORE_TYPE mechanism, the only way to get past the
+ * error is to set @ref CONFIG_GPU_CORE_TYPE, and this is what the \#error tells
+ * you.
+ * - For a \#include mechanism, checks must still be made elsewhere, which the
+ * System Integrator may try working around by setting \#defines (such as
+ * VA_BITS) themselves in their plat_config.h. In the worst case, they may
+ * set the prevention-mechanism \#define of
+ * "A_CORRECT_MIDGARD_CORE_WAS_CHOSEN".
+ * - In this case, they would believe they are on the right track, because
+ * the build progresses with their fix, but with errors elsewhere.
+ *
+ * However, there is nothing to prevent the customer using \#include to organize
+ * their own configurations files hierarchically.
+ *
+ * The mechanism for the header file processing is as follows:
+ *
+ * @dot
+ digraph plat_config_mechanism {
+ rankdir=BT
+ size="6,6"
+
+ "mali_base.h";
+ "midg/midg.h";
+
+ node [ shape=box ];
+ {
+ rank = same; ordering = out;
+
+ "midg/midg_gpu_props.h";
+ "base/midg_gpus/mali_t600.h";
+ "base/midg_gpus/other_midg_gpu.h";
+ }
+ { rank = same; "plat/plat_config.h"; }
+ {
+ rank = same;
+ "midg/midg.h" [ shape=box ];
+ gpu_chooser [ label="" style="invisible" width=0 height=0 fixedsize=true ];
+ select_gpu [ label="Mali-T600 | Other\n(select_gpu.h)" shape=polygon,sides=4,distortion=0.25 width=3.3 height=0.99 fixedsize=true ] ;
+ }
+ node [ shape=box ];
+ { rank = same; "plat/plat_config.h"; }
+ { rank = same; "mali_base.h"; }
+
+
+
+ "mali_base.h" -> "midg/midg.h" -> "midg/midg_gpu_props.h";
+ "mali_base.h" -> "plat/plat_config.h" ;
+ "mali_base.h" -> select_gpu ;
+
+ "plat/plat_config.h" -> gpu_chooser [style="dotted,bold" dir=none weight=4] ;
+ gpu_chooser -> select_gpu [style="dotted,bold"] ;
+
+ select_gpu -> "base/midg_gpus/mali_t600.h" ;
+ select_gpu -> "base/midg_gpus/other_midg_gpu.h" ;
+ }
+ @enddot
+ *
+ *
+ * @section sec_base_user_api_gpuprops_kernel Kernel Operation
+ *
+ * During Base Context Create time, user-side makes a single kernel call:
+ * - A call to fill user memory with GPU information structures
+ *
+ * The kernel-side will fill in the entire processed @ref base_gpu_props
+ * structure provided, because this information is required on both the
+ * user and kernel side; it does not make sense to decode it twice.
+ *
+ * Coherency groups must be derived from the bitmasks, but this can be done
+ * kernel side, and just once at kernel startup: Coherency groups must already
+ * be known kernel-side, to support chains that specify a 'Only Coherent Group'
+ * SW requirement, or 'Only Coherent Group with Tiler' SW requirement.
+ *
+ * @section sec_base_user_api_gpuprops_cocalc Coherency Group calculation
+ * Creation of the coherent group data is done at device-driver startup, and so
+ * is one-time. This will most likely involve a loop with CLZ, shifting, and
+ * bit clearing on the L2_PRESENT or L3_PRESENT masks, depending on whether the
+ * system is L2 or L2+L3 Coherent. The number of shader cores is done by a
+ * population count, since faulty cores may be disabled during production,
+ * producing a non-contiguous mask.
+ *
+ * The memory requirements for this algorithm can be determined either by a u64
+ * population count on the L2/L3_PRESENT masks (a LUT helper is already
+ * required for the above), or by simply assuming that there can be no more than
+ * 16 coherent groups, since core groups are typically 4 cores.
+ */
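+
+/* Coherent-group derivation (a minimal sketch of the one-time startup loop
+ * described above; it uses lowest-set-bit clearing rather than an explicit CLZ):
+ * @code
+ * u64 remaining = l2_present; // or l3_present on L2+L3 coherent systems
+ * u32 num_groups = 0;
+ * while ( remaining != 0 && num_groups < BASE_MAX_COHERENT_GROUPS )
+ * {
+ *     u64 bit = remaining & -remaining; // isolate the next group's bit
+ *     remaining &= ~bit;                // clear it and continue
+ *     num_groups++;
+ * }
+ * @endcode
+ */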
+
+
+/**
+ * @addtogroup base_user_api_gpuprops User-side Base GPU Property Query APIs
+ * @{
+ */
+
+
+/**
+ * @addtogroup base_user_api_gpuprops_dyn Dynamic HW Properties
+ * @{
+ */
+
+
+#define BASE_GPU_NUM_TEXTURE_FEATURES_REGISTERS 3
+
+#define BASE_MAX_COHERENT_GROUPS 16
+
+struct mali_base_gpu_core_props
+{
+ /**
+ * Product specific value.
+ */
+ midg_product_id product_id;
+
+ /**
+ * Status of the GPU release.
+ * No defined values, but starts at 0 and increases by one for each release
+ * status (alpha, beta, EAC, etc.).
+ * 4 bit values (0-15).
+ */
+ u16 version_status;
+
+ /**
+ * Minor release number of the GPU. "P" part of an "RnPn" release number.
+ * 8 bit values (0-255).
+ */
+ u16 minor_revision;
+
+ /**
+ * Major release number of the GPU. "R" part of an "RnPn" release number.
+ * 4 bit values (0-15).
+ */
+ u16 major_revision;
+
+ /**
+ * @usecase GPU clock speed is not specified in the Midgard Architecture, but is
+ * <b>necessary for OpenCL's clGetDeviceInfo() function</b>.
+ */
+ u32 gpu_speed_mhz;
+
+ /**
+ * @usecase GPU clock max/min speed is required for computing best/worst case
+ * in tasks such as job scheduling and IRQ throttling. (It is not specified in the
+ * Midgard Architecture).
+ */
+ u32 gpu_freq_khz_max;
+ u32 gpu_freq_khz_min;
+
+ /**
+ * Size of the shader program counter, in bits.
+ */
+ u32 log2_program_counter_size;
+
+ /**
+ * TEXTURE_FEATURES_x registers, as exposed by the GPU. This is a
+ * bitpattern where a set bit indicates that the format is supported.
+ *
+ * Before using a texture format, it is recommended that the corresponding
+ * bit be checked.
+ */
+ u32 texture_features[BASE_GPU_NUM_TEXTURE_FEATURES_REGISTERS];
+
+ /**
+ * Theoretical maximum memory available to the GPU. It is unlikely that a
+ * client will be able to allocate all of this memory for their own
+ * purposes, but this at least provides an upper bound on the memory
+ * available to the GPU.
+ *
+ * This is required for OpenCL's clGetDeviceInfo() call when
+ * CL_DEVICE_GLOBAL_MEM_SIZE is requested, for OpenCL GPU devices. The
+ * client will not be expecting to allocate anywhere near this value.
+ */
+ u64 gpu_available_memory_size;
+
+ /**
+ * @usecase Version string: For use by glGetString( GL_RENDERER ); (and similar
+ * for other APIs)
+ */
+ const char * version_string;
+};
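+
+/* Checking texture format support (a sketch; @c ctx is the base context and
+ * @c fmt_bit is a hypothetical bit index within TEXTURE_FEATURES_0):
+ * @code
+ * const base_gpu_props *props = _mali_base_get_gpu_props( ctx );
+ * if ( props->core_props.texture_features[0] & (1u << fmt_bit) )
+ * {
+ *     // format is supported by this GPU
+ * }
+ * @endcode
+ */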
+
+/**
+ * More information is possible - but associativity and bus width are not
+ * required by upper-level APIs.
+ */
+struct mali_base_gpu_cache_props
+{
+ u32 log2_line_size;
+ u32 log2_cache_size;
+};
+
+struct mali_base_gpu_tiler_props
+{
+ u32 bin_size_bytes; /* Max is 4*2^15 */
+ u32 max_active_levels; /* Max is 2^15 */
+};
+
+/**
+ * @brief descriptor for a coherent group
+ *
+ * \c core_mask exposes all cores in that coherent group, and \c num_cores
+ * provides a cached population-count for that mask.
+ *
+ * @note Whilst all cores are exposed in the mask, not all may be available to
+ * the application, depending on the Kernel Job Scheduler policy. Therefore,
+ * the application should not further restrict the core mask itself, as it may
+ * result in an empty core mask. However, it is guaranteed that there will be
+ * at least one core available for each core group exposed.
+ *
+ * @usecase Chains marked at certain user-side priorities (e.g. the Long-running
+ * (batch) priority ) can be prevented from running on entire core groups by the
+ * Kernel Chain Scheduler policy.
+ *
+ * @note If u64s must be 8-byte aligned, then this structure has 32 bits of wastage.
+ */
+struct mali_base_gpu_coherent_group
+{
+ u64 core_mask; /**< Core restriction mask required for the group */
+ u16 num_cores; /**< Number of cores in the group */
+};
+
+/**
+ * @brief Coherency group information
+ *
+ * Note that the sizes of the members could be reduced. However, the \c group
+ * member might be 8-byte aligned to ensure the u64 core_mask is 8-byte
+ * aligned, thus leading to wastage if the other members sizes were reduced.
+ *
+ * The groups are sorted by core mask. The core masks are non-repeating and do
+ * not intersect.
+ */
+struct mali_base_gpu_coherent_group_info
+{
+ u32 num_groups;
+
+ /**
+ * Number of core groups (coherent or not) in the GPU. Equivalent to the number of L2 Caches.
+ *
+ * The GPU Counter dumping writes 2048 bytes per core group, regardless of
+ * whether the core groups are coherent or not. Hence this member is needed
+ * to calculate how much memory is required for dumping.
+ *
+ * @note Do not use it to work out how many valid elements are in the
+ * group[] member. Use num_groups instead.
+ */
+ u32 num_core_groups;
+
+ /**
+ * Coherency features of the memory, accessed by @ref midg_mem_features
+ * methods
+ */
+ midg_mem_features coherency;
+
+ /**
+ * Descriptors of coherent groups
+ */
+ struct mali_base_gpu_coherent_group group[BASE_MAX_COHERENT_GROUPS];
+};
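+
+/* Sizing a counter-dump buffer (a sketch; follows the 2048-bytes-per-core-group
+ * rule noted above for @c num_core_groups):
+ * @code
+ * u32 dump_size = info->num_core_groups * 2048;
+ * @endcode
+ */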
+
+
+/**
+ * A complete description of the GPU's Hardware Configuration Discovery
+ * registers.
+ *
+ * The information is presented inefficiently for access. For frequent access,
+ * the values should be better expressed in an unpacked form in the
+ * base_gpu_props structure.
+ *
+ * @usecase The raw properties in @ref midg_raw_gpu_props are necessary to
+ * allow a user of the Mali Tools (e.g. PAT) to determine "Why is this device
+ * behaving differently?". In this case, all information about the
+ * configuration is potentially useful, but it <b>does not need to be processed
+ * by the driver</b>. Instead, the raw registers can be processed by the Mali
+ * Tools software on the host PC.
+ *
+ */
+struct midg_raw_gpu_props
+{
+ u64 shader_present;
+ u64 tiler_present;
+ u64 l2_present;
+ u64 l3_present;
+
+ midg_cache_features l2_features;
+ midg_cache_features l3_features;
+ midg_mem_features mem_features;
+ midg_mmu_features mmu_features;
+
+ midg_as_present as_present;
+
+ u32 js_present;
+ midg_js_features js_features[MIDG_MAX_JOB_SLOTS];
+ midg_tiler_features tiler_features;
+
+ u32 gpu_id;
+};
+
+
+
+/**
+ * Return structure for _mali_base_get_gpu_props().
+ *
+ */
+typedef struct mali_base_gpu_props
+{
+ struct mali_base_gpu_core_props core_props;
+ struct mali_base_gpu_cache_props l2_props;
+ struct mali_base_gpu_cache_props l3_props;
+ struct mali_base_gpu_tiler_props tiler_props;
+
+ /** This member is large, likely to be 128 bytes */
+ struct midg_raw_gpu_props raw_props;
+
+ /** This must be last member of the structure */
+ struct mali_base_gpu_coherent_group_info coherency_info;
+} base_gpu_props;
+
+/** @} end group base_user_api_gpuprops_dyn */
+
+/** @} end group base_user_api_gpuprops */
+
+/**
+ * @addtogroup base_user_api_core User-side Base core APIs
+ * @{
+ */
+
+/**
+ * \enum base_context_create_flags
+ *
+ * Flags to pass to ::base_context_init.
+ * Flags can be ORed together to enable multiple things.
+ *
+ * These share the same space as @ref basep_context_private_flags, and so must
+ * not collide with them.
+ */
+enum base_context_create_flags
+{
+ /** No flags set */
+ BASE_CONTEXT_CREATE_FLAG_NONE = 0,
+
+ /** Base context is embedded in a cctx object (flag used for CINSTR software counter macros) */
+ BASE_CONTEXT_CCTX_EMBEDDED = (1u << 0),
+
+ /** Base context is a 'System Monitor' context for Hardware counters.
+ *
+ * One important side effect of this is that job submission is disabled. */
+ BASE_CONTEXT_SYSTEM_MONITOR_SUBMIT_DISABLED = (1u << 1),
+
+ /** Base context flag indicating a 'hint' that this context uses Compute
+ * Jobs only.
+ *
+ * Specifially, this means that it only sends atoms that <b>do not</b>
+ * contain the following @ref base_jd_corereq :
+ * - BASE_JD_REQ_FS
+ * - BASE_JD_REQ_T
+ *
+ * Violation of these requirements will cause the Job-Chains to be rejected.
+ *
+ * In addition, it is inadvisable for the atom's Job-Chains to contain Jobs
+ * of the following @ref midg_job_type (whilst it may work now, it may not
+ * work in future) :
+ * - @ref MIDG_JOB_VERTEX
+ * - @ref MIDG_JOB_GEOMETRY
+ *
+ * @note An alternative to using this is to specify the BASE_JD_REQ_ONLY_COMPUTE
+ * requirement in atoms.
+ */
+ BASE_CONTEXT_HINT_ONLY_COMPUTE = (1u << 2)
+};
+
+/**
+ * Bitpattern describing the ::base_context_create_flags that can be passed to base_context_init()
+ */
+#define BASE_CONTEXT_CREATE_ALLOWED_FLAGS \
+ ( ((u32)BASE_CONTEXT_CCTX_EMBEDDED) | \
+ ((u32)BASE_CONTEXT_SYSTEM_MONITOR_SUBMIT_DISABLED) | \
+ ((u32)BASE_CONTEXT_HINT_ONLY_COMPUTE) )
+
+/**
+ * Bitpattern describing the ::base_context_create_flags that can be passed to the kernel
+ */
+#define BASE_CONTEXT_CREATE_KERNEL_FLAGS \
+ ( ((u32)BASE_CONTEXT_SYSTEM_MONITOR_SUBMIT_DISABLED) | \
+ ((u32)BASE_CONTEXT_HINT_ONLY_COMPUTE) )
+
+
+/**
+ * Private flags used on the base context
+ *
+ * These start at bit 31, and run down to zero.
+ *
+ * They share the same space as @ref base_context_create_flags, and so must
+ * not collide with them.
+ */
+enum basep_context_private_flags
+{
+ /** Private flag tracking whether job descriptor dumping is disabled */
+ BASEP_CONTEXT_FLAG_JOB_DUMP_DISABLED = (1 << 31)
+};
+
+/** @} end group base_user_api_core */
+
+/** @} end group base_user_api */
+
+/**
+ * @addtogroup base_plat_config_gpuprops Base Platform Config GPU Properties
+ * @{
+ *
+ * C Pre-processor macros are exposed here to do with Platform
+ * Config.
+ *
+ * These include:
+ * - GPU Properties that are constant on a particular Midgard Family
+ * Implementation e.g. Maximum samples per pixel on Mali-T600.
+ * - General platform config for the GPU, such as the GPU major and minor
+ * revison.
+ */
+
+/** @} end group base_plat_config_gpuprops */
+
+/**
+ @addtogroup basecpuprops
+ * @{
+ */
+
+/**
+ * @brief CPU Property Flag for base_cpu_props::cpu_flags, indicating a
+ * Little Endian System. If not set in base_cpu_props::cpu_flags, then the
+ * system is Big Endian.
+ *
+ * The compile-time equivalent is @ref CONFIG_CPU_LITTLE_ENDIAN.
+ */
+#define BASE_CPU_PROPERTY_FLAG_LITTLE_ENDIAN F_BIT_0
+
+/** @brief Platform Dynamic CPU properties structure */
+typedef struct base_cpu_props {
+ u32 nr_cores; /**< Number of CPU cores */
+
+ /**
+ * CPU page size as a Logarithm to Base 2. The compile-time
+ * equivalent is @ref CONFIG_CPU_PAGE_SIZE_LOG2
+ */
+ u32 cpu_page_size_log2;
+
+ /**
+ * CPU L1 Data cache line size as a Logarithm to Base 2. The compile-time
+ * equivalent is @ref CONFIG_CPU_L1_DCACHE_LINE_SIZE_LOG2.
+ */
+ u32 cpu_l1_dcache_line_size_log2;
+
+ /**
+ * CPU L1 Data cache size, in bytes. The compile-time equivalent is
+ * @ref CONFIG_CPU_L1_DCACHE_SIZE.
+ *
+ * This CPU Property is mainly provided to implement OpenCL's
+ * clGetDeviceInfo(), which allows the CL_DEVICE_GLOBAL_MEM_CACHE_SIZE
+ * hint to be queried.
+ */
+ u32 cpu_l1_dcache_size;
+
+ /**
+ * CPU Property Flags bitpattern.
+ *
+ * This is a combination of bits as specified by the macros prefixed with
+ * 'BASE_CPU_PROPERTY_FLAG_'.
+ */
+ u32 cpu_flags;
+
+ /**
+ * Maximum clock speed in MHz.
+ * @usecase 'Maximum' CPU Clock Speed information is required by OpenCL's
+ * clGetDeviceInfo() function for the CL_DEVICE_MAX_CLOCK_FREQUENCY hint.
+ */
+ u32 max_cpu_clock_speed_mhz;
+
+ /**
+ * @brief Total memory, in bytes.
+ *
+ * This is the theoretical maximum memory available to the CPU. It is
+ * unlikely that a client will be able to allocate all of this memory for
+ * their own purposes, but this at least provides an upper bound on the
+ * memory available to the CPU.
+ *
+ * This is required for OpenCL's clGetDeviceInfo() call when
+ * CL_DEVICE_GLOBAL_MEM_SIZE is requested, for OpenCL CPU devices.
+ */
+ u64 available_memory_size;
+} base_cpu_props;
+/** @} end group basecpuprops */
+
+/** @} end group base_api */
+
+#endif /* _BASE_KERNEL_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_config.h
+ * Configuration API and Attributes for KBase
+ */
+
+#ifndef _KBASE_CONFIG_H_
+#define _KBASE_CONFIG_H_
+
+#include <malisw/mali_stdtypes.h>
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_kbase_api
+ * @{
+ */
+
+/**
+ * @addtogroup kbase_config Configuration API and Attributes
+ * @{
+ */
+
+#if MALI_CUSTOMER_RELEASE == 0
+/* This flag is set for internal builds so we can run tests without credentials. */
+#define KBASE_HWCNT_DUMP_BYPASS_ROOT 1
+#else
+#define KBASE_HWCNT_DUMP_BYPASS_ROOT 0
+#endif
+
+/**
+ * Relative memory performance indicators. Enum elements should always be defined in slowest to fastest order.
+ */
+typedef enum kbase_memory_performance
+{
+ KBASE_MEM_PERF_SLOW,
+ KBASE_MEM_PERF_NORMAL,
+ KBASE_MEM_PERF_FAST,
+
+ KBASE_MEM_PERF_MAX_VALUE = KBASE_MEM_PERF_FAST
+} kbase_memory_performance;
+
+/**
+ * Device wide configuration
+ */
+enum
+{
+ /**
+ * Invalid attribute ID (reserve 0).
+ *
+ * Attached value: Ignored
+ * Default value: NA
+ * */
+ KBASE_CONFIG_ATTR_INVALID,
+
+ /**
+ * Memory resource object.
+ * Multiple resources can be listed.
+ * The resources will be used in the order listed
+ * in the configuration attribute list if they have no other
+ * preferred order based on the memory resource property list
+ * (see ::kbase_memory_attribute).
+ *
+ * Attached value: Pointer to a kbase_memory_resource object.
+ * Default value: No resources
+ * */
+
+ KBASE_CONFIG_ATTR_MEMORY_RESOURCE,
+ /**
+ * Maximum amount of memory which can be allocated from the OS
+ * to be used by the GPU (shared memory).
+ * This must be greater than 0 as the GPU page tables
+ * are currently stored in a shared memory allocation.
+ *
+ * Attached value: number in bytes
+ * Default value: Limited by available memory
+ */
+ KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_MAX,
+
+ /**
+ * Relative performance for the GPU to access
+ * OS shared memory.
+ *
+ * Attached value: ::kbase_memory_performance member
+ * Default value: ::KBASE_MEM_PERF_NORMAL
+ */
+ KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_PERF_GPU,
+
+ /**
+ * Limit (in bytes) the amount of memory a single process
+ * can allocate across all memory banks (including OS shared memory)
+ * for use by the GPU.
+ *
+ * Attached value: number in bytes
+ * Default value: Limited by available memory
+ */
+ KBASE_CONFIG_ATTR_MEMORY_PER_PROCESS_LIMIT,
+
+ /**
+ * UMP device mapping.
+ * Which UMP device this GPU should be mapped to.
+ *
+ * Attached value: UMP_DEVICE_<device>_SHIFT
+ * Default value: UMP_DEVICE_W_SHIFT
+ */
+ KBASE_CONFIG_ATTR_UMP_DEVICE,
+
+ /**
+ * Maximum frequency GPU will be clocked at. Given in kHz.
+ * This must be specified as there is no default value.
+ *
+ * Attached value: number in kHz
+ * Default value: NA
+ */
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MAX,
+
+ /**
+ * Minimum frequency GPU will be clocked at. Given in kHz.
+ * This must be specified as there is no default value.
+ *
+ * Attached value: number in kHz
+ * Default value: NA
+ */
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MIN,
+
+ /**
+ * IRQ throttle. This is the minimum desired time between two
+ * consecutive GPU interrupts (given in 'us'). The IRQ throttle
+ * GPU register will be configured after this, taking into
+ * account the configured max frequency.
+ *
+ * Attached value: number in micro seconds
+ * Default value: see DEFAULT_IRQ_THROTTLE_TIME_US
+ */
+ KBASE_CONFIG_ATTR_GPU_IRQ_THROTTLE_TIME_US,
+
+ /*** Begin Job Scheduling Configs ***/
+ /**
+ * Job Scheduler scheduling tick granularity. This is in nanoseconds to
+ * allow HR timer support.
+ *
+ * On each scheduling tick, the scheduler may decide to:
+ * -# soft stop a job (the job will be re-run later, and other jobs will
+ * be able to run on the GPU now). This effectively controls the
+ * 'timeslice' given to a job.
+ * -# hard stop a job (to kill a job if it has spent too long on the GPU
+ * and didn't soft-stop).
+ *
+ * The numbers of ticks for these events are controlled by:
+ * - @ref KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS
+ * - @ref KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS
+ * - @ref KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS
+ *
+ * A soft-stopped job will later be resumed, allowing it to use more GPU
+ * time <em>in total</em> than that defined by any of the above. However,
+ * the scheduling policy attempts to limit the amount of \em uninterrupted
+ * time spent on the GPU using the above values (that is, the 'timeslice'
+ * of a job)
+ *
+ * This value is supported by the following scheduling policies:
+ * - The Completely Fair Share (CFS) policy
+ *
+ * Attached value: unsigned 32-bit kbasep_js_device_data::scheduling_tick_ns.
+ * The value might be rounded down to lower precision. Must be non-zero
+ * after rounding.<br>
+ * Default value: @ref DEFAULT_JS_SCHEDULING_TICK_NS
+ *
+ * @note this value is allowed to be greater than
+ * @ref KBASE_CONFIG_ATTR_JS_CTX_TIMESLICE_NS. This allows jobs to run on (much)
+ * longer than the job-timeslice, but once this happens, the context gets
+ * scheduled in (much) less frequently than others that stay within the
+ * ctx-timeslice.
+ */
+ KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS,
+
+ /**
+ * Job Scheduler minimum number of scheduling ticks before jobs are soft-stopped.
+ *
+ * This defines the amount of time a job is allowed to stay on the GPU,
+ * before it is soft-stopped to allow other jobs to run.
+ *
+ * That is, this defines the 'timeslice' of the job. It is separate from the
+ * timeslice of the context that contains the job (see
+ * @ref KBASE_CONFIG_ATTR_JS_CTX_TIMESLICE_NS).
+ *
+ * This value is supported by the following scheduling policies:
+ * - The Completely Fair Share (CFS) policy
+ *
+ * Attached value: unsigned 32-bit kbasep_js_device_data::soft_stop_ticks<br>
+ * Default value: @ref DEFAULT_JS_SOFT_STOP_TICKS
+ *
+ * @note a value of zero means "the quickest time to soft-stop a job",
+ * which is somewhere between instant and one tick later.
+ *
+ * @note this value is allowed to be greater than
+ * @ref KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS or
+ * @ref KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS. This effectively disables
+ * soft-stop, and just uses hard-stop instead. In this case, this value
+ * should be much greater than any of the hard stop values (to avoid
+ * soft-stop-after-hard-stop)
+ *
+ * @see KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS
+ */
+ KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS,
+
+ /**
+ * Job Scheduler minimum number of scheduling ticks before Soft-Stoppable
+ * (BASE_JD_REQ_NSS bit \b clear) jobs are hard-stopped.
+ *
+ * This defines the amount of time a Soft-Stoppable job is allowed to spend
+ * on the GPU before it is killed. Such jobs won't be resumed if killed.
+ *
+ * This value is supported by the following scheduling policies:
+ * - The Completely Fair Share (CFS) policy
+ *
+ * Attached value: unsigned 32-bit kbasep_js_device_data::hard_stop_ticks_ss<br>
+ * Default value: @ref DEFAULT_JS_HARD_STOP_TICKS_SS
+ *
+ * @note a value of zero means "the quickest time to hard-stop a job",
+ * which is somewhere between instant and one tick later.
+ *
+ * @see KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS
+ */
+ KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS,
+
+ /**
+ * Job Scheduler minimum number of scheduling ticks before Non-Soft-Stoppable
+ * (BASE_JD_REQ_NSS bit \b set) jobs are hard-stopped.
+ *
+ * This defines the amount of time a Non-Soft-Stoppable job is allowed to spend
+ * on the GPU before it is killed. Such jobs won't be resumed if killed.
+ *
+ * This value is supported by the following scheduling policies:
+ * - The Completely Fair Share (CFS) policy
+ *
+ * Attached value: unsigned 32-bit kbasep_js_device_data::hard_stop_ticks_nss<br>
+ * Default value: @ref DEFAULT_JS_HARD_STOP_TICKS_NSS
+ *
+ * @note a value of zero means "the quickest time to hard-stop a job",
+ * which is somewhere between instant and one tick later.
+ *
+ * @see KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS
+ */
+ KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS,
+
+ /**
+ * Job Scheduler timeslice that a context is scheduled in for, in nanoseconds.
+ *
+ * When a context has used up this amount of time across its jobs, it is
+ * scheduled out to let another run.
+ *
+ * @note the resolution is nanoseconds (ns) here, because that's the format
+ * often used by the OS.
+ *
+ * This value affects the actual time defined by the following
+ * config values:
+ * - @ref KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_INIT_SLICES
+ * - @ref KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_MIN_SLICES
+ *
+ * This value is supported by the following scheduling policies:
+ * - The Completely Fair Share (CFS) policy
+ *
+ * Attached value: unsigned 32-bit kbasep_js_device_data::ctx_timeslice_ns.
+ * The value might be rounded down to lower precision.<br>
+ * Default value: @ref DEFAULT_JS_CTX_TIMESLICE_NS
+ *
+ * @note a value of zero models a "Round Robin" scheduling policy, and
+ * disables @ref KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_INIT_SLICES
+ * (initially causing LIFO scheduling) and
+ * @ref KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_MIN_SLICES (allowing
+ * not-run-often contexts to get scheduled in quickly, but to only use
+ * a single timeslice when they get scheduled in).
+ */
+ KBASE_CONFIG_ATTR_JS_CTX_TIMESLICE_NS,
+
+ /**
+ * Job Scheduler initial runtime of a context for the CFS Policy, in time-slices.
+ *
+ * This value is relative to that of the least-run context, and defines
+ * where in the CFS queue a new context is added. A value of 1 means 'after
+ * the least-run context has used its timeslice'. Therefore, when all
+ * contexts consistently use the same amount of time, a value of 1 models a
+ * FIFO. A value of 0 would model a LIFO.
+ *
+ * The value is represented in "numbers of time slices". Multiply this
+ * value by that defined in @ref KBASE_CONFIG_ATTR_JS_CTX_TIMESLICE_NS to get
+ * the time value for this in nanoseconds.
+ *
+ * Attached value: unsigned 32-bit kbasep_js_device_data::cfs_ctx_runtime_init_slices<br>
+ * Default value: @ref DEFAULT_JS_CFS_CTX_RUNTIME_INIT_SLICES
+ */
+ KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_INIT_SLICES,
+
+ /**
+ * Job Scheduler minimum runtime value of a context for CFS, in time_slices
+ * relative to that of the least-run context.
+ *
+ * This is a measure of how much preferential treatment is given to a
+ * context that is not run very often.
+ *
+ * Specifically, this value defines how many timeslices such a context is
+ * (initially) allowed to use at once. Such contexts (e.g. 'interactive'
+ * processes) will appear near the front of the CFS queue, and can initially
+ * use more time than contexts that run continuously (e.g. 'batch'
+ * processes).
+ *
+ * This limit \b prevents a "stored-up timeslices" DoS attack, where a ctx
+ * not run for a long time attacks the system by using a very large initial
+ * number of timeslices when it finally does run.
+ *
+ * Attached value: unsigned 32-bit kbasep_js_device_data::cfs_ctx_runtime_min_slices<br>
+ * Default value: @ref DEFAULT_JS_CFS_CTX_RUNTIME_MIN_SLICES
+ *
+ * @note A value of zero allows not-run-often contexts to get scheduled in
+ * quickly, but to only use a single timeslice when they get scheduled in.
+ */
+ KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_MIN_SLICES,
+
+ /**
+ * Job Scheduler minimum number of scheduling ticks before Soft-Stoppable
+ * (BASE_JD_REQ_NSS bit \b clear) jobs cause the GPU to be reset.
+ *
+ * This defines the amount of time a Soft-Stoppable job is allowed to spend
+ * on the GPU before it is assumed that the GPU has hung and needs to be reset.
+ * This assumes that the job has been hard-stopped already, and so the presence of
+ * a job that has remained on the GPU for so long indicates that the GPU has in some
+ * way hung.
+ *
+ * This value is supported by the following scheduling policies:
+ * - The Completely Fair Share (CFS) policy
+ *
+ * Attached value: unsigned 32-bit kbasep_js_device_data::gpu_reset_ticks_ss<br>
+ * Default value: @ref DEFAULT_JS_RESET_TICKS_SS
+ *
+ * @see KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS
+ */
+ KBASE_CONFIG_ATTR_JS_RESET_TICKS_SS,
+
+ /**
+ * Job Scheduler minimum number of scheduling ticks before Non-Soft-Stoppable
+ * (BASE_JD_REQ_NSS bit \b set) jobs cause the GPU to be reset.
+ *
+ * This defines the amount of time a Non-Soft-Stoppable job is allowed to spend
+ * on the GPU before it is assumed that the GPU has hung and needs to be reset.
+ * This assumes that the job has been hard-stopped already, and so the presence of
+ * a job that has remained on the GPU for so long indicates that the GPU has in some
+ * way hung.
+ *
+ * This value is supported by the following scheduling policies:
+ * - The Completely Fair Share (CFS) policy
+ *
+ * Attached value: unsigned 32-bit kbasep_js_device_data::gpu_reset_ticks_nss<br>
+ * Default value: @ref DEFAULT_JS_RESET_TICKS_NSS
+ *
+ * @see KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS
+ */
+ KBASE_CONFIG_ATTR_JS_RESET_TICKS_NSS,
+
+ /**
+ * Number of milliseconds given for other jobs on the GPU to be
+ * soft-stopped when the GPU needs to be reset.
+ *
+ * Attached value: number in milliseconds
+ * Default value: @ref DEFAULT_JS_RESET_TIMEOUT_MS
+ */
+ KBASE_CONFIG_ATTR_JS_RESET_TIMEOUT_MS,
+ /*** End Job Scheduling Configs ***/
+
+ /** Power management configuration
+ *
+ * Attached value: pointer to @ref kbase_pm_callback_conf
+ * Default value: See @ref kbase_pm_callback_conf
+ */
+ KBASE_CONFIG_ATTR_POWER_MANAGEMENT_CALLBACKS,
+
+ /**
+ * Boolean indicating whether the driver is configured to be secure at
+ * a potential loss of performance.
+ *
+ * This currently affects only r0p0-15dev0 HW and earlier.
+ *
+ * On r0p0-15dev0 HW and earlier, there are tradeoffs between security and
+ * performance:
+ *
+ * - When this is set to MALI_TRUE, the driver remains fully secure,
+ * but potentially loses performance compared with setting this to
+ * MALI_FALSE.
+ * - When set to MALI_FALSE, the driver is open to certain security
+ * attacks.
+ *
+ * From r0p0-00rel0 and onwards, there is no security loss by setting
+ * this to MALI_FALSE, and no performance loss by setting it to
+ * MALI_TRUE.
+ *
+ * Attached value: mali_bool value
+ * Default value: @ref DEFAULT_SECURE_BUT_LOSS_OF_PERFORMANCE
+ */
+ KBASE_CONFIG_ATTR_SECURE_BUT_LOSS_OF_PERFORMANCE,
+
+ /**
+ * A pointer to a function that calculates the CPU clock
+ * speed of the platform in MHz - see
+ * @ref kbase_cpuprops_clock_speed_function for the function
+ * prototype.
+ *
+ * Attached value: A @ref kbase_cpuprops_clock_speed_function.
+ * Default Value: Pointer to @ref DEFAULT_CPU_SPEED_FUNC -
+ * returns a clock speed of 100 MHz.
+ */
+ KBASE_CONFIG_ATTR_CPU_SPEED_FUNC,
+
+ /**
+ * Platform specific configuration functions
+ *
+ * Attached value: pointer to @ref kbase_platform_funcs_conf
+ * Default value: See @ref kbase_platform_funcs_conf
+ */
+ KBASE_CONFIG_ATTR_PLATFORM_FUNCS,
+
+ /**
+ * End of attribute list indicator.
+ * The configuration loader will stop processing any more elements
+ * when it encounters this attribute.
+ *
+ * Attached value: Ignored
+ * Default value: NA
+ */
+ KBASE_CONFIG_ATTR_END = 0x1FFFUL
+};
+
+enum
+{
+ /**
+ * Invalid attribute ID (reserve 0).
+ *
+ * Attached value: Ignored
+ * Default value: NA
+ */
+ KBASE_MEM_ATTR_INVALID,
+
+ /**
+ * Relative performance for the CPU to access
+ * the memory resource.
+ *
+ * Attached value: ::kbase_memory_performance member
+ * Default value: ::KBASE_MEM_PERF_NORMAL
+ */
+ KBASE_MEM_ATTR_PERF_CPU,
+
+ /**
+ * Relative performance for the GPU to access
+ * the memory resource.
+ *
+ * Attached value: ::kbase_memory_performance member
+ * Default value: ::KBASE_MEM_PERF_NORMAL
+ */
+ KBASE_MEM_ATTR_PERF_GPU,
+
+ /**
+ * End of attribute list indicator.
+ * The memory resource loader will stop processing any more
+ * elements when it encounters this attribute.
+ *
+ * Attached value: Ignored
+ * Default value: NA
+ */
+ KBASE_MEM_ATTR_END = 0x1FFFUL
+};
+
+
+/*
+ * @brief specifies a single attribute
+ *
+ * An attribute is identified by the id field. Data is either an integer or a pointer to an attribute-specific structure.
+ */
+typedef struct kbase_attribute
+{
+ int id;
+ uintptr_t data;
+} kbase_attribute;
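+
+/* Example attribute list (a sketch; the frequency values are illustrative.
+ * The two GPU_FREQ attributes are mandatory and the list must be terminated
+ * with KBASE_CONFIG_ATTR_END):
+ * @code
+ * static kbase_attribute config_attributes[] = {
+ *     { KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MAX, 500000 },
+ *     { KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MIN, 100000 },
+ *     { KBASE_CONFIG_ATTR_END, 0 }
+ * };
+ * @endcode
+ */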
+
+/*
+ * @brief Specifies dedicated memory bank
+ *
+ * Specifies base, size and attributes of a memory bank
+ */
+typedef struct kbase_memory_resource
+{
+ u64 base;
+ u64 size;
+ struct kbase_attribute * attributes;
+ const char * name;
+} kbase_memory_resource;
+
+/* Forward declaration of kbase_device */
+struct kbase_device;
+
+/*
+ * @brief Specifies the functions for platform specific initialization and termination
+ *
+ * By default no functions are required. No additional platform specific control is necessary.
+ */
+typedef struct kbase_platform_funcs_conf
+{
+ /**
+ * Function pointer for platform specific initialization or NULL if no initialization function is required.
+ * This function will be called \em before any other callbacks listed in the kbase_attribute struct (such as
+ * Power Management callbacks).
+ * The platform specific private pointer kbase_device::platform_context can be accessed (and possibly initialized) in here.
+ */
+ mali_bool (*platform_init_func)(struct kbase_device *kbdev);
+ /**
+ * Function pointer for platform specific termination or NULL if no termination function is required.
+ * This function will be called \em after any other callbacks listed in the kbase_attribute struct (such as
+ * Power Management callbacks).
+ * The platform specific private pointer kbase_device::platform_context can be accessed (and possibly terminated) in here.
+ */
+ void (*platform_term_func)(struct kbase_device *kbdev);
+
+} kbase_platform_funcs_conf;
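+
+/* Attaching platform functions (a sketch; @c my_plat_init and @c my_plat_term
+ * are hypothetical platform routines):
+ * @code
+ * static mali_bool my_plat_init( struct kbase_device *kbdev ) { return MALI_TRUE; }
+ * static void my_plat_term( struct kbase_device *kbdev ) { }
+ *
+ * static kbase_platform_funcs_conf platform_funcs = {
+ *     my_plat_init, // platform_init_func
+ *     my_plat_term  // platform_term_func
+ * };
+ * // listed as { KBASE_CONFIG_ATTR_PLATFORM_FUNCS, (uintptr_t)&platform_funcs }
+ * @endcode
+ */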
+
+/*
+ * @brief Specifies the callbacks for power management
+ *
+ * By default no callbacks will be made and the GPU must not be powered off.
+ */
+typedef struct kbase_pm_callback_conf
+{
+ /** Callback for when the GPU is idle and the power to it can be switched off.
+ *
+ * The system integrator can decide whether to either do nothing, just switch off
+ * the clocks to the GPU, or to completely power down the GPU.
+ * The platform specific private pointer kbase_device::platform_context can be accessed and modified in here. It is the
+ * platform \em callbacks' responsibility to initialize and terminate this pointer if used (see @ref kbase_platform_funcs_conf).
+ */
+ void (*power_off_callback)(struct kbase_device *kbdev);
+
+ /** Callback for when the GPU is about to become active and power must be supplied.
+ *
+ * This function must not return until the GPU is powered and clocked sufficiently for register access to
+ * succeed. The return value specifies whether the GPU was powered down since the call to power_off_callback.
+ * If the GPU state has been lost then this function must return 1, otherwise it should return 0.
+ * The platform specific private pointer kbase_device::platform_context can be accessed and modified in here. It is the
+ * platform \em callbacks' responsibility to initialize and terminate this pointer if used (see @ref kbase_platform_funcs_conf).
+ *
+ * The return value of the first call to this function is ignored.
+ *
+ * @return 1 if the GPU state may have been lost, 0 otherwise.
+ */
+ int (*power_on_callback)(struct kbase_device *kbdev);
+} kbase_pm_callback_conf;
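+
+/* Minimal power-management callbacks (a sketch; actual clock or regulator
+ * handling is platform specific and omitted):
+ * @code
+ * static void my_power_off( struct kbase_device *kbdev )
+ * {
+ *     // gate GPU clocks here
+ * }
+ *
+ * static int my_power_on( struct kbase_device *kbdev )
+ * {
+ *     // ungate clocks; GPU state was retained, so report "not lost"
+ *     return 0;
+ * }
+ *
+ * static kbase_pm_callback_conf pm_callbacks = { my_power_off, my_power_on };
+ * @endcode
+ */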
+
+/**
+ * Type of the function pointer for KBASE_CONFIG_ATTR_CPU_SPEED_FUNC.
+ *
+ * @param[out] clock_speed On return, contains the current CPU clock speed in MHz.
+ * This is mainly used to implement OpenCL's clGetDeviceInfo().
+ *
+ * @return 0 on success, 1 on error.
+ */
+typedef int (*kbase_cpuprops_clock_speed_function)(u32 *clock_speed);
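+
+/* Example speed function (a sketch; mirrors the 100 MHz behaviour described
+ * for DEFAULT_CPU_SPEED_FUNC above):
+ * @code
+ * static int my_cpu_speed( u32 *clock_speed )
+ * {
+ *     *clock_speed = 100; // MHz
+ *     return 0;           // success
+ * }
+ * @endcode
+ */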
+
+#if !MALI_LICENSE_IS_GPL || (defined(MALI_FAKE_PLATFORM_DEVICE) && MALI_FAKE_PLATFORM_DEVICE)
+/*
+ * @brief Specifies start and end of I/O memory region.
+ */
+typedef struct kbase_io_memory_region
+{
+ u64 start;
+ u64 end;
+} kbase_io_memory_region;
+
+/*
+ * @brief Specifies I/O related resources like IRQs and memory region for I/O operations.
+ */
+typedef struct kbase_io_resources
+{
+ u32 job_irq_number;
+ u32 mmu_irq_number;
+ u32 gpu_irq_number;
+ kbase_io_memory_region io_memory_region;
+} kbase_io_resources;
+
+typedef struct kbase_platform_config
+{
+ const kbase_attribute *attributes;
+ const kbase_io_resources *io_resources;
+ u32 midgard_type;
+} kbase_platform_config;
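+
+/* Example platform config (a sketch; the IRQ numbers, register window and
+ * midgard_type value are illustrative only):
+ * @code
+ * static kbase_io_resources io_resources = {
+ *     68, // job_irq_number
+ *     69, // mmu_irq_number
+ *     70, // gpu_irq_number
+ *     { 0xFC010000, 0xFC010000 + (4096 * 4) - 1 } // io_memory_region
+ * };
+ *
+ * static kbase_platform_config platform_config = {
+ *     config_attributes, // see the attribute list sketch above
+ *     &io_resources,
+ *     0 // midgard_type, platform specific
+ * };
+ * @endcode
+ */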
+
+#endif /* !MALI_LICENSE_IS_GPL || (defined(MALI_FAKE_PLATFORM_DEVICE) && MALI_FAKE_PLATFORM_DEVICE) */
+/**
+ * @brief Return character string associated with the given midgard type.
+ *
+ * @param[in] midgard_type - ID of midgard type
+ *
+ * @return Pointer to NULL-terminated character array associated with the given midgard type
+ */
+const char *kbasep_midgard_type_to_string(u32 midgard_type);
+
+/**
+ * @brief Gets the count of attributes in array
+ *
+ * Function gets the count of attributes in the array. Note that the end-of-list indicator is also included.
+ *
+ * @param[in] attributes Array of attributes
+ *
+ * @return Number of attributes in the array including end of list indicator.
+ */
+int kbasep_get_config_attribute_count(const kbase_attribute *attributes);
+
+/**
+ * @brief Gets the count of attributes with specified id
+ *
+ * Function gets the count of attributes with the specified id in the given attribute array.
+ *
+ * @param[in] attributes Array of attributes
+ * @param[in] attribute_id Id of attributes to count
+ *
+ * @return Number of attributes in the array that have specified id
+ */
+int kbasep_get_config_attribute_count_by_id(const kbase_attribute *attributes, int attribute_id);
+
+/**
+ * @brief Gets the next config attribute with the specified ID from the array of attributes.
+ *
+ * Function gets the next attribute with the specified attribute id within the given array. If no such attribute is found,
+ * NULL is returned.
+ *
+ * @param[in] attributes Array of attributes in which lookup is performed
+ * @param[in] attribute_id ID of attribute
+ *
+ * @return Pointer to the first attribute matching id or NULL if none is found.
+ */
+const kbase_attribute *kbasep_get_next_attribute(const kbase_attribute *attributes, int attribute_id);
+
+/**
+ * @brief Gets the value of a single config attribute.
+ *
+ * Function gets the value of the attribute specified as a parameter. If no such attribute is found in the array of
+ * attributes, the default value is used.
+ *
+ * @param[in] kbdev Kbase device pointer
+ * @param[in] attributes Array of attributes in which lookup is performed
+ * @param[in] attribute_id ID of attribute
+ *
+ * @return Value of attribute with the given id
+ */
+uintptr_t kbasep_get_config_value(struct kbase_device *kbdev, const kbase_attribute *attributes, int attribute_id);
+
+/**
+ * @brief Obtain memory performance values from kbase_memory_resource structure.
+ *
+ * Function gets the CPU and GPU memory performance values from the memory resource structure and puts them in the variables
+ * provided as parameters. If the performance of the memory bank is not in the resource attributes, the default value is used.
+ *
+ * @param[in] resource Structure containing information about memory bank to use
+ * @param[out] cpu_performance Pointer to variable which will hold CPU performance value
+ * @param[out] gpu_performance Pointer to variable which will hold GPU performance value
+ */
+void kbasep_get_memory_performance(const kbase_memory_resource *resource,
+ kbase_memory_performance *cpu_performance, kbase_memory_performance *gpu_performance);
+
+/**
+ * @brief Validates configuration attributes
+ *
+ * Function checks the validity of the given configuration attributes. It will fail on any attribute with an unknown id, an
+ * attribute with an invalid value, or an attribute list that is not correctly terminated. It will also fail if
+ * KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MIN or KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MAX are not specified.
+ *
+ * @param[in] kbdev Kbase device pointer
+ * @param[in] attributes Array of attributes to validate
+ *
+ * @return MALI_TRUE if no errors have been found in the config. MALI_FALSE otherwise.
+ */
+mali_bool kbasep_validate_configuration_attributes(struct kbase_device *kbdev, const kbase_attribute *attributes);
+
+#if !MALI_LICENSE_IS_GPL || (defined(MALI_FAKE_PLATFORM_DEVICE) && MALI_FAKE_PLATFORM_DEVICE)
+/**
+ * @brief Gets the pointer to platform config.
+ *
+ * @return Pointer to the platform config
+ */
+kbase_platform_config *kbasep_get_platform_config(void);
+#endif /* !MALI_LICENSE_IS_GPL || (defined(MALI_FAKE_PLATFORM_DEVICE) && MALI_FAKE_PLATFORM_DEVICE) */
+
+/**
+ * @brief Platform specific call to initialize hardware
+ *
+ * Function calls a platform defined routine if specified in the configuration attributes.
+ * The routine can initialize any hardware and context state that is required for the GPU block to function.
+ *
+ * @param[in] kbdev Kbase device pointer
+ *
+ * @return MALI_TRUE if no errors have been found in the config. MALI_FALSE otherwise.
+ */
+mali_bool kbasep_platform_device_init(struct kbase_device *kbdev);
+
+/**
+ * @brief Platform specific call to terminate hardware
+ *
+ * Function calls a platform defined routine if specified in the configuration attributes.
+ * The routine can destroy any platform specific context state and shut down any hardware functionality that is
+ * outside of the Power Management callbacks.
+ *
+ * @param[in] kbdev Kbase device pointer
+ *
+ */
+void kbasep_platform_device_term(struct kbase_device *kbdev);
+
+
+/** @} */ /* end group kbase_config */
+/** @} */ /* end group base_kbase_api */
+/** @} */ /* end group base_api */
+
+#endif /* _KBASE_CONFIG_H_ */
--- /dev/null
+/*
+ * Copyright:
+ * ----------------------------------------------------------------------------
+ * This confidential and proprietary software may be used only as authorized
+ * by a licensing agreement from ARM Limited.
+ * (C) COPYRIGHT 2007-2011 ARM Limited , ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorized copies and
+ * copies may only be made to the extent permitted by a licensing agreement
+ * from ARM Limited.
+ * ----------------------------------------------------------------------------
+ */
+
+/**
+ * @addtogroup malisw
+ * @{
+ */
+
+/* ============================================================================
+ Description
+============================================================================ */
+/**
+ * @defgroup arm_cstd_coding_standard ARM C standard types and constants
+ * The common files are a set of standard headers which are used by all parts
+ * of this development, describing types, and generic constants.
+ *
+ * Files in group:
+ * - arm_cstd.h
+ * - arm_cstd_compilers.h
+ * - arm_cstd_types.h
+ * - arm_cstd_types_rvct.h
+ * - arm_cstd_types_gcc.h
+ * - arm_cstd_types_msvc.h
+ * - arm_cstd_pack_push.h
+ * - arm_cstd_pack_pop.h
+ */
+
+/**
+ * @addtogroup arm_cstd_coding_standard
+ * @{
+ */
+
+#ifndef _ARM_CSTD_
+#define _ARM_CSTD_
+
+/* ============================================================================
+ Import standard C99 types
+============================================================================ */
+#include "arm_cstd_compilers.h"
+#include "arm_cstd_types.h"
+
+/* ============================================================================
+ Min and Max Values
+============================================================================ */
+#if !defined(INT8_MAX)
+ #define INT8_MAX ((int8_t) 0x7F)
+#endif
+#if !defined(INT8_MIN)
+ #define INT8_MIN (-INT8_MAX - 1)
+#endif
+
+#if !defined(INT16_MAX)
+ #define INT16_MAX ((int16_t)0x7FFF)
+#endif
+#if !defined(INT16_MIN)
+ #define INT16_MIN (-INT16_MAX - 1)
+#endif
+
+#if !defined(INT32_MAX)
+ #define INT32_MAX ((int32_t)0x7FFFFFFF)
+#endif
+#if !defined(INT32_MIN)
+ #define INT32_MIN (-INT32_MAX - 1)
+#endif
+
+#if !defined(INT64_MAX)
+ #define INT64_MAX ((int64_t)0x7FFFFFFFFFFFFFFFLL)
+#endif
+#if !defined(INT64_MIN)
+ #define INT64_MIN (-INT64_MAX - 1)
+#endif
+
+#if !defined(UINT8_MAX)
+ #define UINT8_MAX ((uint8_t) 0xFF)
+#endif
+
+#if !defined(UINT16_MAX)
+ #define UINT16_MAX ((uint16_t)0xFFFF)
+#endif
+
+#if !defined(UINT32_MAX)
+ #define UINT32_MAX ((uint32_t)0xFFFFFFFF)
+#endif
+
+#if !defined(UINT64_MAX)
+ #define UINT64_MAX ((uint64_t)0xFFFFFFFFFFFFFFFFULL)
+#endif
+
+/* fallbacks if limits.h wasn't available */
+#if !defined(UCHAR_MAX)
+ #define UCHAR_MAX ((unsigned char)~0U)
+#endif
+
+#if !defined(SCHAR_MAX)
+ #define SCHAR_MAX ((signed char)(UCHAR_MAX >> 1))
+#endif
+#if !defined(SCHAR_MIN)
+ #define SCHAR_MIN ((signed char)(-SCHAR_MAX - 1))
+#endif
+
+#if !defined(USHRT_MAX)
+ #define USHRT_MAX ((unsigned short)~0U)
+#endif
+
+#if !defined(SHRT_MAX)
+ #define SHRT_MAX ((signed short)(USHRT_MAX >> 1))
+#endif
+#if !defined(SHRT_MIN)
+ #define SHRT_MIN ((signed short)(-SHRT_MAX - 1))
+#endif
+
+#if !defined(UINT_MAX)
+ #define UINT_MAX ((unsigned int)~0U)
+#endif
+
+#if !defined(INT_MAX)
+ #define INT_MAX ((signed int)(UINT_MAX >> 1))
+#endif
+#if !defined(INT_MIN)
+ #define INT_MIN ((signed int)(-INT_MAX - 1))
+#endif
+
+#if !defined(ULONG_MAX)
+ #define ULONG_MAX ((unsigned long)~0UL)
+#endif
+
+#if !defined(LONG_MAX)
+ #define LONG_MAX ((signed long)(ULONG_MAX >> 1))
+#endif
+#if !defined(LONG_MIN)
+ #define LONG_MIN ((signed long)(-LONG_MAX - 1))
+#endif
+
+#if !defined(ULLONG_MAX)
+ #define ULLONG_MAX ((unsigned long long)~0ULL)
+#endif
+
+#if !defined(LLONG_MAX)
+ #define LLONG_MAX ((signed long long)(ULLONG_MAX >> 1))
+#endif
+#if !defined(LLONG_MIN)
+ #define LLONG_MIN ((signed long long)(-LLONG_MAX - 1))
+#endif
+
+#if !defined(SIZE_MAX)
+ #if 1 == CSTD_CPU_32BIT
+ #define SIZE_MAX UINT32_MAX
+ #elif 1 == CSTD_CPU_64BIT
+ #define SIZE_MAX UINT64_MAX
+ #endif
+#endif
+
+/* ============================================================================
+ Keywords
+============================================================================ */
+/* Portable keywords. */
+
+#if !defined(CONST)
+/**
+ * @hideinitializer
+ * Variable is a C @c const, which can be made non-const for testing purposes.
+ */
+ #define CONST const
+#endif
+
+#if !defined(STATIC)
+/**
+ * @hideinitializer
+ * Variable is a C @c static, which can be made non-static for testing
+ * purposes.
+ */
+ #define STATIC static
+#endif
+
+/**
+ * Specifies a function as being exported outside of a logical module.
+ */
+#define PUBLIC
+
+/**
+ * @def PROTECTED
+ * Specifies a function which is internal to a logical module, but which
+ * should not be used outside of that module. This cannot be enforced by the
+ * compiler, as a module is typically more than one translation unit.
+ */
+#define PROTECTED
+
+/**
+ * Specifies a function as being internal to a translation unit. Private
+ * functions would typically be declared as STATIC, unless they are being
+ * exported for unit test purposes.
+ */
+#define PRIVATE STATIC
+
+/**
+ * Specifies an assertion which is evaluated at compile time. Recommended
+ * usage is specification of a @c static @c INLINE function containing all of
+ * the assertions thus:
+ *
+ * @code
+ * static INLINE void [module]_compile_time_assertions( void )
+ * {
+ * COMPILE_TIME_ASSERT( sizeof(uintptr_t) == sizeof(intptr_t) );
+ * }
+ * @endcode
+ *
+ * @note Use @c static not @c STATIC. We never want to turn off this @c static
+ * specification for testing purposes.
+ */
+#define CSTD_COMPILE_TIME_ASSERT( expr ) \
+ do { switch(0){case 0: case (expr):;} } while( FALSE )
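+
+/* Implementation note: the switch is only valid C when the 'case 0' and
+ * 'case (expr)' labels differ, so any expression which evaluates to zero at
+ * compile time produces a duplicate-case-label error. For example:
+ *
+ *     CSTD_COMPILE_TIME_ASSERT( sizeof(uint32_t) == 4 );  (compiles)
+ *     CSTD_COMPILE_TIME_ASSERT( sizeof(uint32_t) == 8 );  (fails to compile)
+ */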
+
+/**
+ * @hideinitializer
+ * @deprecated The preferred form is @c CSTD_UNUSED.
+ * Function-like macro for suppressing unused variable warnings. Where possible
+ * such variables should be removed; this macro is present for cases where we
+ * must support API backwards compatibility.
+ */
+#define UNUSED( x ) ((void)(x))
+
+/**
+ * @hideinitializer
+ * Function-like macro for suppressing unused variable warnings. Where possible
+ * such variables should be removed; this macro is present for cases where we
+ * must support API backwards compatibility.
+ */
+#define CSTD_UNUSED( x ) ((void)(x))
+
+/**
+ * @hideinitializer
+ * Function-like macro for use where "no behavior" is desired. This is useful
+ * when compile time macros turn a function-like macro into a no-op, but
+ * where having no statement is otherwise invalid.
+ */
+#define CSTD_NOP( ... ) ((void)#__VA_ARGS__)
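+
+/* Usage sketch (illustrative; DBG_PRINT is a hypothetical macro, not part
+ * of this interface): because CSTD_NOP only stringizes its arguments into
+ * an unused literal, the arguments are never evaluated at run time.
+ *
+ *     #if 1 == MALI_DEBUG
+ *         #define DBG_PRINT( ... ) printf( __VA_ARGS__ )
+ *     #else
+ *         #define DBG_PRINT( ... ) CSTD_NOP( __VA_ARGS__ )
+ *     #endif
+ */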
+
+/**
+ * @hideinitializer
+ * Function-like macro for converting a pointer into a u64 for storing in
+ * an external data structure. This is commonly used when pairing a 32-bit
+ * CPU with a 64-bit peripheral, such as a Midgard GPU. C's type promotion
+ * is complex and a straight cast does not work reliably as pointers are
+ * often considered as signed.
+ */
+#define CSTD_PTR_TO_U64( x ) ((uint64_t)((uintptr_t)(x)))
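+
+/* Usage sketch (illustrative; the descriptor type is hypothetical). The
+ * inner cast goes via uintptr_t so the value is zero-extended rather than
+ * sign-extended when a 32-bit pointer is widened to 64 bits.
+ *
+ *     struct gpu_desc { uint64_t ring_base; };
+ *
+ *     void set_ring_base( struct gpu_desc *d, void *cpu_ptr )
+ *     {
+ *         d->ring_base = CSTD_PTR_TO_U64( cpu_ptr );
+ *     }
+ */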
+
+/**
+ * @hideinitializer
+ * Function-like macro for stringizing a single level macro.
+ * @code
+ * #define MY_MACRO 32
+ * CSTD_STR1( MY_MACRO )
+ * > "MY_MACRO"
+ * @endcode
+ */
+#define CSTD_STR1( x ) #x
+
+/**
+ * @hideinitializer
+ * Function-like macro for stringizing a macro's value. This should not be used
+ * if the macro is defined in a way which may leave it with no value; the
+ * alternative @c CSTD_STR2N macro should be used instead.
+ * @code
+ * #define MY_MACRO 32
+ * CSTD_STR2( MY_MACRO )
+ * > "32"
+ * @endcode
+ */
+#define CSTD_STR2( x ) CSTD_STR1( x )
+
+/**
+ * @hideinitializer
+ * Utility function for stripping the first character off a string.
+ */
+static INLINE char* arm_cstd_strstrip( char * string )
+{
+ return ++string;
+}
+
+/**
+ * @hideinitializer
+ * Function-like macro for stringizing a single level macro where the macro
+ * itself may not have a value. Parameter @c a should be set to any single
+ * character which is then stripped by the macro via an inline function. This
+ * should only be used via the @c CSTD_STR2N macro; for printing a single
+ * macro name the @c CSTD_STR1 macro is a better alternative.
+ *
+ * This macro requires run-time code to handle the case where the macro has
+ * no value (you can't concat empty strings in the preprocessor).
+ */
+#define CSTD_STR1N( a, x ) arm_cstd_strstrip( CSTD_STR1( a##x ) )
+
+/**
+ * @hideinitializer
+ * Function-like macro for stringizing a two level macro where the macro itself
+ * may not have a value.
+ * @code
+ * #define MY_MACRO 32
+ * CSTD_STR2N( MY_MACRO )
+ * > "32"
+ *
+ * #define MY_MACRO
+ * CSTD_STR2N( MY_MACRO )
+ * > ""
+ * @endcode
+ */
+#define CSTD_STR2N( x ) CSTD_STR1N( _, x )
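+
+/* Expansion trace (illustrative), assuming '#define MY_MACRO 32':
+ *
+ *     CSTD_STR2N( MY_MACRO )
+ *     -> CSTD_STR1N( _, 32 )                      (MY_MACRO expands first)
+ *     -> arm_cstd_strstrip( CSTD_STR1( _##32 ) )
+ *     -> arm_cstd_strstrip( "_32" )               (the '_' guarantees a token)
+ *     -> "32"
+ *
+ * If MY_MACRO were defined with no value the pasted token would be just
+ * '_' and the stripped result would be the empty string "".
+ */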
+
+/* ============================================================================
+ Validate portability constructs
+============================================================================ */
+static INLINE void arm_cstd_compile_time_assertions( void )
+{
+ CSTD_COMPILE_TIME_ASSERT( sizeof(uint8_t) == 1 );
+ CSTD_COMPILE_TIME_ASSERT( sizeof(int8_t) == 1 );
+ CSTD_COMPILE_TIME_ASSERT( sizeof(uint16_t) == 2 );
+ CSTD_COMPILE_TIME_ASSERT( sizeof(int16_t) == 2 );
+ CSTD_COMPILE_TIME_ASSERT( sizeof(uint32_t) == 4 );
+ CSTD_COMPILE_TIME_ASSERT( sizeof(int32_t) == 4 );
+ CSTD_COMPILE_TIME_ASSERT( sizeof(uint64_t) == 8 );
+ CSTD_COMPILE_TIME_ASSERT( sizeof(int64_t) == 8 );
+ CSTD_COMPILE_TIME_ASSERT( sizeof(intptr_t) == sizeof(uintptr_t) );
+
+ CSTD_COMPILE_TIME_ASSERT( 1 == TRUE );
+ CSTD_COMPILE_TIME_ASSERT( 0 == FALSE );
+
+#if 1 == CSTD_CPU_32BIT
+ CSTD_COMPILE_TIME_ASSERT( sizeof(uintptr_t) == 4 );
+#elif 1 == CSTD_CPU_64BIT
+ CSTD_COMPILE_TIME_ASSERT( sizeof(uintptr_t) == 8 );
+#endif
+
+}
+
+/* ============================================================================
+ Useful function-like macro
+============================================================================ */
+/**
+ * @brief Return the lesser of two values.
+ * As a macro it may evaluate its arguments more than once.
+ * @see CSTD_MAX
+ */
+#define CSTD_MIN( x, y ) ((x) < (y) ? (x) : (y))
+
+/**
+ * @brief Return the greater of two values.
+ * As a macro it may evaluate its arguments more than once.
+ * If called on the same two arguments as CSTD_MIN it is guaranteed to return
+ * the one that CSTD_MIN didn't return. This is significant for types where not
+ * all values are comparable, e.g. NaNs in floating-point types. But if you want
+ * to retrieve the min and max of two values, consider using a conditional swap
+ * instead.
+ */
+#define CSTD_MAX( x, y ) ((x) < (y) ? (y) : (x))
+
+/**
+ * @brief Clamp value @c x to within @c min and @c max inclusive.
+ */
+#define CSTD_CLAMP( x, min, max ) ((x)<(min) ? (min):((x)>(max) ? (max):(x)))
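+
+/* Caution (illustrative): the three macros above may evaluate their
+ * arguments more than once, so arguments must be free of side effects.
+ *
+ *     int a = 1, b = 2;
+ *     int c = CSTD_MIN( a++, b );   (wrong: a++ expands twice and may
+ *                                    execute twice)
+ *     int d = CSTD_MIN( a, b );     (fine: no side effects)
+ */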
+
+/**
+ * Flag a cast as a reinterpretation, usually of a pointer type.
+ */
+#define CSTD_REINTERPRET_CAST(type) (type)
+
+/**
+ * Flag a cast as casting away const, usually of a pointer type.
+ */
+#define CSTD_CONST_CAST(type) (type)
+
+/**
+ * Flag a cast as a (potentially complex) value conversion, usually of a
+ * numerical type.
+ */
+#define CSTD_STATIC_CAST(type) (type)
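+
+/* Usage sketch (illustrative; get_buffer() is hypothetical): each macro
+ * expands to a plain C cast but records the intent at the call site.
+ *
+ *     const uint8_t *ro = get_buffer();
+ *     uint8_t *rw       = CSTD_CONST_CAST(uint8_t *)( ro );
+ *     uint32_t first    = CSTD_STATIC_CAST(uint32_t)( *ro );
+ */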
+
+/* ============================================================================
+ Useful bit constants
+============================================================================ */
+/**
+ * @cond arm_cstd_utilities
+ */
+
+/* Common bit constant values, useful in embedded programming. */
+#define F_BIT_0 ((uint32_t)0x00000001)
+#define F_BIT_1 ((uint32_t)0x00000002)
+#define F_BIT_2 ((uint32_t)0x00000004)
+#define F_BIT_3 ((uint32_t)0x00000008)
+#define F_BIT_4 ((uint32_t)0x00000010)
+#define F_BIT_5 ((uint32_t)0x00000020)
+#define F_BIT_6 ((uint32_t)0x00000040)
+#define F_BIT_7 ((uint32_t)0x00000080)
+#define F_BIT_8 ((uint32_t)0x00000100)
+#define F_BIT_9 ((uint32_t)0x00000200)
+#define F_BIT_10 ((uint32_t)0x00000400)
+#define F_BIT_11 ((uint32_t)0x00000800)
+#define F_BIT_12 ((uint32_t)0x00001000)
+#define F_BIT_13 ((uint32_t)0x00002000)
+#define F_BIT_14 ((uint32_t)0x00004000)
+#define F_BIT_15 ((uint32_t)0x00008000)
+#define F_BIT_16 ((uint32_t)0x00010000)
+#define F_BIT_17 ((uint32_t)0x00020000)
+#define F_BIT_18 ((uint32_t)0x00040000)
+#define F_BIT_19 ((uint32_t)0x00080000)
+#define F_BIT_20 ((uint32_t)0x00100000)
+#define F_BIT_21 ((uint32_t)0x00200000)
+#define F_BIT_22 ((uint32_t)0x00400000)
+#define F_BIT_23 ((uint32_t)0x00800000)
+#define F_BIT_24 ((uint32_t)0x01000000)
+#define F_BIT_25 ((uint32_t)0x02000000)
+#define F_BIT_26 ((uint32_t)0x04000000)
+#define F_BIT_27 ((uint32_t)0x08000000)
+#define F_BIT_28 ((uint32_t)0x10000000)
+#define F_BIT_29 ((uint32_t)0x20000000)
+#define F_BIT_30 ((uint32_t)0x40000000)
+#define F_BIT_31 ((uint32_t)0x80000000)
+
+/* Common 2^n size values, useful in embedded programming. */
+#define C_SIZE_1B ((uint32_t)0x00000001)
+#define C_SIZE_2B ((uint32_t)0x00000002)
+#define C_SIZE_4B ((uint32_t)0x00000004)
+#define C_SIZE_8B ((uint32_t)0x00000008)
+#define C_SIZE_16B ((uint32_t)0x00000010)
+#define C_SIZE_32B ((uint32_t)0x00000020)
+#define C_SIZE_64B ((uint32_t)0x00000040)
+#define C_SIZE_128B ((uint32_t)0x00000080)
+#define C_SIZE_256B ((uint32_t)0x00000100)
+#define C_SIZE_512B ((uint32_t)0x00000200)
+#define C_SIZE_1KB ((uint32_t)0x00000400)
+#define C_SIZE_2KB ((uint32_t)0x00000800)
+#define C_SIZE_4KB ((uint32_t)0x00001000)
+#define C_SIZE_8KB ((uint32_t)0x00002000)
+#define C_SIZE_16KB ((uint32_t)0x00004000)
+#define C_SIZE_32KB ((uint32_t)0x00008000)
+#define C_SIZE_64KB ((uint32_t)0x00010000)
+#define C_SIZE_128KB ((uint32_t)0x00020000)
+#define C_SIZE_256KB ((uint32_t)0x00040000)
+#define C_SIZE_512KB ((uint32_t)0x00080000)
+#define C_SIZE_1MB ((uint32_t)0x00100000)
+#define C_SIZE_2MB ((uint32_t)0x00200000)
+#define C_SIZE_4MB ((uint32_t)0x00400000)
+#define C_SIZE_8MB ((uint32_t)0x00800000)
+#define C_SIZE_16MB ((uint32_t)0x01000000)
+#define C_SIZE_32MB ((uint32_t)0x02000000)
+#define C_SIZE_64MB ((uint32_t)0x04000000)
+#define C_SIZE_128MB ((uint32_t)0x08000000)
+#define C_SIZE_256MB ((uint32_t)0x10000000)
+#define C_SIZE_512MB ((uint32_t)0x20000000)
+#define C_SIZE_1GB ((uint32_t)0x40000000)
+#define C_SIZE_2GB ((uint32_t)0x80000000)
+
+/**
+ * @endcond
+ */
+
+/**
+ * @}
+ */
+
+/**
+ * @}
+ */
+
+#endif /* End (_ARM_CSTD_) */
--- /dev/null
+/*
+ * Copyright:
+ * ----------------------------------------------------------------------------
+ * This confidential and proprietary software may be used only as authorized
+ * by a licensing agreement from ARM Limited.
+ * (C) COPYRIGHT 2005-2012 ARM Limited , ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorized copies and
+ * copies may only be made to the extent permitted by a licensing agreement
+ * from ARM Limited.
+ * ----------------------------------------------------------------------------
+ */
+
+#ifndef _ARM_CSTD_COMPILERS_H_
+#define _ARM_CSTD_COMPILERS_H_
+
+/* ============================================================================
+ Document default definitions - assuming nothing set at this point.
+============================================================================ */
+/**
+ * @addtogroup arm_cstd_coding_standard
+ * @{
+ */
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if toolchain is Microsoft Visual Studio, 0
+ * otherwise.
+ */
+#define CSTD_TOOLCHAIN_MSVC 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if toolchain is the GNU Compiler Collection, 0
+ * otherwise.
+ */
+#define CSTD_TOOLCHAIN_GCC 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if toolchain is ARM RealView Compiler Tools, 0
+ * otherwise. Note - if running RVCT in GCC mode this define will be set to 0;
+ * @c CSTD_TOOLCHAIN_GCC and @c CSTD_TOOLCHAIN_RVCT_GCC_MODE will both be
+ * defined as 1.
+ */
+#define CSTD_TOOLCHAIN_RVCT 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if toolchain is ARM RealView Compiler Tools running
+ * in GCC mode, 0 otherwise.
+ */
+#define CSTD_TOOLCHAIN_RVCT_GCC_MODE 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if processor is an x86 32-bit machine, 0 otherwise.
+ */
+#define CSTD_CPU_X86_32 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if processor is an x86-64 (AMD64) machine, 0
+ * otherwise.
+ */
+#define CSTD_CPU_X86_64 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if processor is an ARM machine, 0 otherwise.
+ */
+#define CSTD_CPU_ARM 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if processor is a MIPS machine, 0 otherwise.
+ */
+#define CSTD_CPU_MIPS 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if CPU is 32-bit, 0 otherwise.
+ */
+#define CSTD_CPU_32BIT 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if CPU is 64-bit, 0 otherwise.
+ */
+#define CSTD_CPU_64BIT 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if processor configured as big-endian, 0 if it
+ * is little-endian.
+ */
+#define CSTD_CPU_BIG_ENDIAN 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if operating system is a version of Windows, 0 if
+ * it is not.
+ */
+#define CSTD_OS_WINDOWS 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if operating system is a 32-bit version of Windows,
+ * 0 if it is not.
+ */
+#define CSTD_OS_WIN32 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if operating system is a 64-bit version of Windows,
+ * 0 if it is not.
+ */
+#define CSTD_OS_WIN64 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if operating system is Linux, 0 if it is not.
+ */
+#define CSTD_OS_LINUX 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if we are compiling Linux kernel code, 0 otherwise.
+ */
+#define CSTD_OS_LINUX_KERNEL 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if operating system is a 32-bit version of Linux,
+ * 0 if it is not.
+ */
+#define CSTD_OS_LINUX32 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if operating system is a 64-bit version of Linux,
+ * 0 if it is not.
+ */
+#define CSTD_OS_LINUX64 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if operating system is Android, 0 if it is not.
+ */
+#define CSTD_OS_ANDROID 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if we are compiling Android kernel code, 0 otherwise.
+ */
+#define CSTD_OS_ANDROID_KERNEL 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if operating system is a 32-bit version of Android,
+ * 0 if it is not.
+ */
+#define CSTD_OS_ANDROID32 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if operating system is a 64-bit version of Android,
+ * 0 if it is not.
+ */
+#define CSTD_OS_ANDROID64 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if operating system is a version of Apple OS,
+ * 0 if it is not.
+ */
+#define CSTD_OS_APPLEOS 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if operating system is a 32-bit version of Apple OS,
+ * 0 if it is not.
+ */
+#define CSTD_OS_APPLEOS32 0
+
+/**
+ * @hideinitializer
+ * Defined with value of 1 if operating system is a 64-bit version of Apple OS,
+ * 0 if it is not.
+ */
+#define CSTD_OS_APPLEOS64 0
+
+/**
+ * @def CSTD_OS_SYMBIAN
+ * @hideinitializer
+ * Defined with value of 1 if operating system is Symbian, 0 if it is not.
+ */
+#define CSTD_OS_SYMBIAN 0
+
+/**
+ * @def CSTD_OS_NONE
+ * @hideinitializer
+ * Defined with value of 1 if there is no operating system (bare metal), 0
+ * otherwise.
+ */
+#define CSTD_OS_NONE 0
+
+/* ============================================================================
+ Determine the compiler in use
+============================================================================ */
+#if defined(_MSC_VER)
+ #undef CSTD_TOOLCHAIN_MSVC
+ #define CSTD_TOOLCHAIN_MSVC 1
+
+#elif defined(__GNUC__)
+ #undef CSTD_TOOLCHAIN_GCC
+ #define CSTD_TOOLCHAIN_GCC 1
+
+ /* Detect RVCT pretending to be GCC. */
+ #if defined(__ARMCC_VERSION)
+ #undef CSTD_TOOLCHAIN_RVCT_GCC_MODE
+ #define CSTD_TOOLCHAIN_RVCT_GCC_MODE 1
+ #endif
+
+#elif defined(__ARMCC_VERSION)
+ #undef CSTD_TOOLCHAIN_RVCT
+ #define CSTD_TOOLCHAIN_RVCT 1
+
+#else
+ #warning "Unsupported or unknown toolchain"
+
+#endif
+
+/* ============================================================================
+ Determine the processor
+============================================================================ */
+#if 1 == CSTD_TOOLCHAIN_MSVC
+ #if defined(_M_IX86)
+ #undef CSTD_CPU_X86_32
+ #define CSTD_CPU_X86_32 1
+
+ #elif defined(_M_X64) || defined(_M_AMD64)
+ #undef CSTD_CPU_X86_64
+ #define CSTD_CPU_X86_64 1
+
+ #elif defined(_M_ARM)
+ #undef CSTD_CPU_ARM
+ #define CSTD_CPU_ARM 1
+
+ #elif defined(_M_MIPS)
+ #undef CSTD_CPU_MIPS
+ #define CSTD_CPU_MIPS 1
+
+ #else
+ #warning "Unsupported or unknown host CPU for MSVC tools"
+
+ #endif
+
+#elif 1 == CSTD_TOOLCHAIN_GCC
+ #if defined(__amd64__)
+ #undef CSTD_CPU_X86_64
+ #define CSTD_CPU_X86_64 1
+
+ #elif defined(__i386__)
+ #undef CSTD_CPU_X86_32
+ #define CSTD_CPU_X86_32 1
+
+ #elif defined(__arm__)
+ #undef CSTD_CPU_ARM
+ #define CSTD_CPU_ARM 1
+
+ #elif defined(__mips__)
+ #undef CSTD_CPU_MIPS
+ #define CSTD_CPU_MIPS 1
+
+ #else
+ #warning "Unsupported or unknown host CPU for GCC tools"
+
+ #endif
+
+#elif 1 == CSTD_TOOLCHAIN_RVCT
+ #undef CSTD_CPU_ARM
+ #define CSTD_CPU_ARM 1
+
+#else
+ #warning "Unsupported or unknown toolchain"
+
+#endif
+
+/* ============================================================================
+ Determine the Processor Endianness
+============================================================================ */
+
+#if ((1 == CSTD_CPU_X86_32) || (1 == CSTD_CPU_X86_64))
+ /* Note: x86 and x86-64 are always little endian, so leave at default. */
+
+#elif 1 == CSTD_TOOLCHAIN_RVCT
+    #if defined(__BIG_ENDIAN)
+        #undef CSTD_CPU_BIG_ENDIAN
+        #define CSTD_CPU_BIG_ENDIAN 1
+    #endif
+
+#elif ((1 == CSTD_TOOLCHAIN_GCC) && (1 == CSTD_CPU_ARM))
+    #if defined(__ARMEB__)
+        #undef CSTD_CPU_BIG_ENDIAN
+        #define CSTD_CPU_BIG_ENDIAN 1
+    #endif
+
+#elif ((1 == CSTD_TOOLCHAIN_GCC) && (1 == CSTD_CPU_MIPS))
+    #if defined(__MIPSEB__)
+        #undef CSTD_CPU_BIG_ENDIAN
+        #define CSTD_CPU_BIG_ENDIAN 1
+    #endif
+
+#elif 1 == CSTD_TOOLCHAIN_MSVC
+ /* Note: Microsoft only support little endian, so leave at default. */
+
+#else
+ #warning "Unsupported or unknown CPU"
+
+#endif
+
+/* ============================================================================
+ Determine the operating system and addressing width
+============================================================================ */
+#if 1 == CSTD_TOOLCHAIN_MSVC
+ #if defined(_WIN32) && !defined(_WIN64)
+ #undef CSTD_OS_WINDOWS
+ #define CSTD_OS_WINDOWS 1
+ #undef CSTD_OS_WIN32
+ #define CSTD_OS_WIN32 1
+ #undef CSTD_CPU_32BIT
+ #define CSTD_CPU_32BIT 1
+
+ #elif defined(_WIN32) && defined(_WIN64)
+ #undef CSTD_OS_WINDOWS
+ #define CSTD_OS_WINDOWS 1
+ #undef CSTD_OS_WIN64
+ #define CSTD_OS_WIN64 1
+ #undef CSTD_CPU_64BIT
+ #define CSTD_CPU_64BIT 1
+
+ #else
+ #warning "Unsupported or unknown host OS for MSVC tools"
+
+ #endif
+
+#elif 1 == CSTD_TOOLCHAIN_GCC
+ #if defined(_WIN32) && defined(_WIN64)
+ #undef CSTD_OS_WINDOWS
+ #define CSTD_OS_WINDOWS 1
+ #undef CSTD_OS_WIN64
+ #define CSTD_OS_WIN64 1
+ #undef CSTD_CPU_64BIT
+ #define CSTD_CPU_64BIT 1
+
+ #elif defined(_WIN32) && !defined(_WIN64)
+ #undef CSTD_OS_WINDOWS
+ #define CSTD_OS_WINDOWS 1
+ #undef CSTD_OS_WIN32
+ #define CSTD_OS_WIN32 1
+ #undef CSTD_CPU_32BIT
+ #define CSTD_CPU_32BIT 1
+
+ #elif defined(ANDROID)
+ #undef CSTD_OS_ANDROID
+ #define CSTD_OS_ANDROID 1
+
+ #if defined(__KERNEL__)
+ #undef CSTD_OS_ANDROID_KERNEL
+ #define CSTD_OS_ANDROID_KERNEL 1
+ #endif
+
+ #if defined(__LP64__) || defined(_LP64)
+ #undef CSTD_OS_ANDROID64
+ #define CSTD_OS_ANDROID64 1
+ #undef CSTD_CPU_64BIT
+ #define CSTD_CPU_64BIT 1
+ #else
+ #undef CSTD_OS_ANDROID32
+ #define CSTD_OS_ANDROID32 1
+ #undef CSTD_CPU_32BIT
+ #define CSTD_CPU_32BIT 1
+ #endif
+
+ #elif defined(__KERNEL__) || defined(__linux)
+ #undef CSTD_OS_LINUX
+ #define CSTD_OS_LINUX 1
+
+ #if defined(__KERNEL__)
+ #undef CSTD_OS_LINUX_KERNEL
+ #define CSTD_OS_LINUX_KERNEL 1
+ #endif
+
+ #if defined(__LP64__) || defined(_LP64)
+ #undef CSTD_OS_LINUX64
+ #define CSTD_OS_LINUX64 1
+ #undef CSTD_CPU_64BIT
+ #define CSTD_CPU_64BIT 1
+ #else
+ #undef CSTD_OS_LINUX32
+ #define CSTD_OS_LINUX32 1
+ #undef CSTD_CPU_32BIT
+ #define CSTD_CPU_32BIT 1
+ #endif
+
+ #elif defined(__APPLE__)
+ #undef CSTD_OS_APPLEOS
+ #define CSTD_OS_APPLEOS 1
+
+ #if defined(__LP64__) || defined(_LP64)
+ #undef CSTD_OS_APPLEOS64
+ #define CSTD_OS_APPLEOS64 1
+ #undef CSTD_CPU_64BIT
+ #define CSTD_CPU_64BIT 1
+ #else
+ #undef CSTD_OS_APPLEOS32
+ #define CSTD_OS_APPLEOS32 1
+ #undef CSTD_CPU_32BIT
+ #define CSTD_CPU_32BIT 1
+ #endif
+
+ #elif defined(__SYMBIAN32__)
+ #undef CSTD_OS_SYMBIAN
+ #define CSTD_OS_SYMBIAN 1
+ #undef CSTD_CPU_32BIT
+ #define CSTD_CPU_32BIT 1
+
+ #else
+ #undef CSTD_OS_NONE
+ #define CSTD_OS_NONE 1
+ #undef CSTD_CPU_32BIT
+ #define CSTD_CPU_32BIT 1
+
+    #endif
+
+#elif 1 == CSTD_TOOLCHAIN_RVCT
+
+ #if defined(ANDROID)
+ #undef CSTD_OS_ANDROID
+ #undef CSTD_OS_ANDROID32
+ #define CSTD_OS_ANDROID 1
+ #define CSTD_OS_ANDROID32 1
+
+ #elif defined(__linux)
+ #undef CSTD_OS_LINUX
+ #undef CSTD_OS_LINUX32
+ #define CSTD_OS_LINUX 1
+ #define CSTD_OS_LINUX32 1
+
+ #elif defined(__SYMBIAN32__)
+ #undef CSTD_OS_SYMBIAN
+ #define CSTD_OS_SYMBIAN 1
+
+ #else
+ #undef CSTD_OS_NONE
+ #define CSTD_OS_NONE 1
+
+    #endif
+
+#else
+ #warning "Unsupported or unknown host OS"
+
+#endif
+
+/* ============================================================================
+ Determine the correct linker symbol Import and Export Macros
+============================================================================ */
+/**
+ * @defgroup arm_cstd_linkage_specifiers Linkage Specifiers
+ * @{
+ *
+ * This set of macros contain system-dependent linkage specifiers which
+ * determine the visibility of symbols across DLL boundaries. A header for a
+ * particular DLL should define a set of local macros derived from these,
+ * and should not use these macros to decorate functions directly as there may
+ * be multiple DLLs being used.
+ *
+ * These DLL library local macros should be (with appropriate library prefix)
+ * <tt>[MY_LIBRARY]_API</tt>, <tt>[MY_LIBRARY]_IMPL</tt>, and
+ * <tt>[MY_LIBRARY]_LOCAL</tt>.
+ *
+ * - <tt>[MY_LIBRARY]_API</tt> should be used to decorate the function
+ * declarations in the header. It should be defined as either
+ * @c CSTD_LINK_IMPORT or @c CSTD_LINK_EXPORT, depending whether the
+ * current situation is a compile of the DLL itself (use export) or a
+ * compile of an external user of the DLL (use import).
+ * - <tt>[MY_LIBRARY]_IMPL</tt> should be defined as @c CSTD_LINK_IMPL
+ * and should be used to decorate the definition of functions in the C
+ * file.
+ * - <tt>[MY_LIBRARY]_LOCAL</tt> should be used to decorate function
+ * declarations which are exported across translation units within the
+ * DLL, but which are not exported outside of the DLL boundary.
+ *
+ * Functions which are @c static in either a C file or in a header file do not
+ * need any form of linkage decoration, and should therefore have no linkage
+ * macro applied to them.
+ */
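+
+/* Sketch of the per-library macros described above, for a hypothetical
+ * library "MYLIB" (MYLIB_BUILDING_DLL is an assumed build-system flag):
+ *
+ *     #if defined(MYLIB_BUILDING_DLL)
+ *         #define MYLIB_API CSTD_LINK_EXPORT
+ *     #else
+ *         #define MYLIB_API CSTD_LINK_IMPORT
+ *     #endif
+ *     #define MYLIB_IMPL  CSTD_LINK_IMPL
+ *     #define MYLIB_LOCAL CSTD_LINK_LOCAL
+ *
+ *     MYLIB_API  int mylib_query( void );    (public header declaration)
+ *     MYLIB_IMPL int mylib_query( void )     (definition in the C file)
+ *     { ... }
+ */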
+
+/**
+ * @def CSTD_LINK_IMPORT
+ * Specifies a function as being imported to a translation unit across a DLL
+ * boundary.
+ */
+
+/**
+ * @def CSTD_LINK_EXPORT
+ * Specifies a function as being exported across a DLL boundary by a
+ * translation unit.
+ */
+
+/**
+ * @def CSTD_LINK_IMPL
+ * Specifies a function which will be exported across a DLL boundary as
+ * being implemented by a translation unit.
+ */
+
+/**
+ * @def CSTD_LINK_LOCAL
+ * Specifies a function which is internal to a DLL, and which should not be
+ * exported outside of it.
+ */
+
+/**
+ * @}
+ */
+
+#if 1 == CSTD_OS_LINUX
+ #define CSTD_LINK_IMPORT __attribute__((visibility("default")))
+ #define CSTD_LINK_EXPORT __attribute__((visibility("default")))
+ #define CSTD_LINK_IMPL __attribute__((visibility("default")))
+ #define CSTD_LINK_LOCAL __attribute__((visibility("hidden")))
+
+#elif 1 == CSTD_OS_WINDOWS
+ #define CSTD_LINK_IMPORT __declspec(dllimport)
+ #define CSTD_LINK_EXPORT __declspec(dllexport)
+ #define CSTD_LINK_IMPL __declspec(dllexport)
+ #define CSTD_LINK_LOCAL
+
+#elif 1 == CSTD_OS_SYMBIAN
+ #define CSTD_LINK_IMPORT IMPORT_C
+ #define CSTD_LINK_EXPORT IMPORT_C
+ #define CSTD_LINK_IMPL EXPORT_C
+ #define CSTD_LINK_LOCAL
+
+#elif 1 == CSTD_OS_APPLEOS
+ #define CSTD_LINK_IMPORT __attribute__((visibility("default")))
+ #define CSTD_LINK_EXPORT __attribute__((visibility("default")))
+ #define CSTD_LINK_IMPL __attribute__((visibility("default")))
+ #define CSTD_LINK_LOCAL __attribute__((visibility("hidden")))
+
+#else /* CSTD_OS_NONE */
+ #define CSTD_LINK_IMPORT
+ #define CSTD_LINK_EXPORT
+ #define CSTD_LINK_IMPL
+ #define CSTD_LINK_LOCAL
+
+#endif
+
+/**
+ * @}
+ */
+
+#endif /* End (_ARM_CSTD_COMPILERS_H_) */
--- /dev/null
+/*
+ * Copyright:
+ * ----------------------------------------------------------------------------
+ * This confidential and proprietary software may be used only as authorized
+ * by a licensing agreement from ARM Limited.
+ * (C) COPYRIGHT 2009-2010 ARM Limited , ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorized copies and
+ * copies may only be made to the extent permitted by a licensing agreement
+ * from ARM Limited.
+ * ----------------------------------------------------------------------------
+ */
+
+#ifndef _ARM_CSTD_PACK_POP_H_
+#define _ARM_CSTD_PACK_POP_H_
+
+#if 1 == CSTD_TOOLCHAIN_MSVC
+ #include <poppack.h>
+#endif
+
+#endif /* End (_ARM_CSTD_PACK_POP_H_) */
--- /dev/null
+/*
+ * Copyright:
+ * ----------------------------------------------------------------------------
+ * This confidential and proprietary software may be used only as authorized
+ * by a licensing agreement from ARM Limited.
+ * (C) COPYRIGHT 2009-2010 ARM Limited , ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorized copies and
+ * copies may only be made to the extent permitted by a licensing agreement
+ * from ARM Limited.
+ * ----------------------------------------------------------------------------
+ */
+
+#ifndef _ARM_CSTD_PACK_PUSH_H_
+#define _ARM_CSTD_PACK_PUSH_H_
+
+#if 1 == CSTD_TOOLCHAIN_MSVC
+ #include <pshpack1.h>
+#endif
+
+#endif /* End (_ARM_CSTD_PACK_PUSH_H_) */
--- /dev/null
+/*
+ * Copyright:
+ * ----------------------------------------------------------------------------
+ * This confidential and proprietary software may be used only as authorized
+ * by a licensing agreement from ARM Limited.
+ * (C) COPYRIGHT 2009-2010 ARM Limited , ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorized copies and
+ * copies may only be made to the extent permitted by a licensing agreement
+ * from ARM Limited.
+ * ----------------------------------------------------------------------------
+ */
+
+#ifndef _ARM_CSTD_TYPES_H_
+#define _ARM_CSTD_TYPES_H_
+
+#if 1 == CSTD_TOOLCHAIN_MSVC
+ #include "arm_cstd_types_msvc.h"
+#elif 1 == CSTD_TOOLCHAIN_GCC
+ #include "arm_cstd_types_gcc.h"
+#elif 1 == CSTD_TOOLCHAIN_RVCT
+ #include "arm_cstd_types_rvct.h"
+#else
+ #error "Toolchain not recognized"
+#endif
+
+#endif /* End (_ARM_CSTD_TYPES_H_) */
--- /dev/null
+/*
+ * Copyright:
+ * ----------------------------------------------------------------------------
+ * This confidential and proprietary software may be used only as authorized
+ * by a licensing agreement from ARM Limited.
+ * (C) COPYRIGHT 2009-2011 ARM Limited , ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorized copies and
+ * copies may only be made to the extent permitted by a licensing agreement
+ * from ARM Limited.
+ * ----------------------------------------------------------------------------
+ */
+
+#ifndef _ARM_CSTD_TYPES_GCC_H_
+#define _ARM_CSTD_TYPES_GCC_H_
+
+/* ============================================================================
+ Type definitions
+============================================================================ */
+/* All modern versions of GCC support stdint outside of C99 Mode. */
+/* However, the Linux kernel limits which headers are available. */
+#if 1 == CSTD_OS_LINUX_KERNEL
+ #include <linux/kernel.h>
+ #include <linux/types.h>
+ #include <linux/stddef.h>
+ #include <linux/version.h>
+
+    /* Fix up any types which CSTD provides but which Linux is missing. */
+ /* Note Linux assumes pointers are "long", so this is safe. */
+ #if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,24)
+ typedef unsigned long uintptr_t;
+ #endif
+ typedef long intptr_t;
+
+#else
+ #include <stdint.h>
+ #include <stddef.h>
+ #include <limits.h>
+#endif
+
+typedef uint32_t bool_t;
+
+#if !defined(TRUE)
+ #define TRUE ((bool_t)1)
+#endif
+
+#if !defined(FALSE)
+ #define FALSE ((bool_t)0)
+#endif
+
+/* ============================================================================
+ Keywords
+============================================================================ */
+/* Doxygen documentation for these is in the RVCT header. */
+#define ASM __asm__
+
+#define INLINE __inline__
+
+#define FORCE_INLINE __attribute__((__always_inline__)) __inline__
+
+#define NEVER_INLINE __attribute__((__noinline__))
+
+#define PURE __attribute__((__pure__))
+
+#define PACKED __attribute__((__packed__))
+
+/* GCC does not support pointers to UNALIGNED data, so we leave the macro
+ * undefined to force a compile error if it is used. */
+
+#define RESTRICT __restrict__
+
+/* RVCT in GCC mode does not support the CHECK_RESULT attribute. */
+#if 0 == CSTD_TOOLCHAIN_RVCT_GCC_MODE
+ #define CHECK_RESULT __attribute__((__warn_unused_result__))
+#else
+ #define CHECK_RESULT
+#endif
+
+/* RVCT in GCC mode does not support the __func__ name outside of C99. */
+#if (0 == CSTD_TOOLCHAIN_RVCT_GCC_MODE)
+ #define CSTD_FUNC __func__
+#else
+ #define CSTD_FUNC __FUNCTION__
+#endif
+
+#endif /* End (_ARM_CSTD_TYPES_GCC_H_) */
--- /dev/null
+/*
+ * Copyright:
+ * ----------------------------------------------------------------------------
+ * This confidential and proprietary software may be used only as authorized
+ * by a licensing agreement from ARM Limited.
+ * (C) COPYRIGHT 2009-2011 ARM Limited , ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorized copies and
+ * copies may only be made to the extent permitted by a licensing agreement
+ * from ARM Limited.
+ * ----------------------------------------------------------------------------
+ */
+
+#ifndef _ARM_CSTD_TYPES_RVCT_H_
+#define _ARM_CSTD_TYPES_RVCT_H_
+
+/* ============================================================================
+ Type definitions
+============================================================================ */
+#include <stddef.h>
+#include <limits.h>
+
+#if 199901L <= __STDC_VERSION__
+ #include <inttypes.h>
+#else
+ typedef unsigned char uint8_t;
+ typedef signed char int8_t;
+ typedef unsigned short uint16_t;
+ typedef signed short int16_t;
+ typedef unsigned int uint32_t;
+ typedef signed int int32_t;
+ typedef unsigned __int64 uint64_t;
+ typedef signed __int64 int64_t;
+ typedef ptrdiff_t intptr_t;
+ typedef size_t uintptr_t;
+#endif
+
+typedef uint32_t bool_t;
+
+#if !defined(TRUE)
+ #define TRUE ((bool_t)1)
+#endif
+
+#if !defined(FALSE)
+ #define FALSE ((bool_t)0)
+#endif
+
+/* ============================================================================
+ Keywords
+============================================================================ */
+/**
+ * @addtogroup arm_cstd_coding_standard
+ * @{
+ */
+
+/**
+ * @def ASM
+ * @hideinitializer
+ * Mark an assembler block. Such blocks are often compiler specific, so often
+ * need to be surrounded in appropriate @c ifdef and @c endif blocks
+ * using the relevant @c CSTD_TOOLCHAIN macro.
+ */
+#define ASM __asm
+
+/**
+ * @def INLINE
+ * @hideinitializer
+ * Mark a definition as something which should be inlined. This is not always
+ * possible on a given compiler, and may be disabled at lower optimization
+ * levels.
+ */
+#define INLINE __inline
+
+/**
+ * @def FORCE_INLINE
+ * @hideinitializer
+ * Mark a definition as something which should be inlined. This provides a much
+ * stronger hint to the compiler than @c INLINE, and if supported should always
+ * result in an inlined function being emitted. If not supported this falls
+ * back to using the @c INLINE definition.
+ */
+#define FORCE_INLINE __forceinline
+
+/**
+ * @def NEVER_INLINE
+ * @hideinitializer
+ * Mark a definition as something which should not be inlined. This provides a
+ * stronger hint to the compiler that the function should not be inlined,
+ * bypassing any heuristic rules the compiler normally applies. If not
+ * supported by a toolchain this falls back to being an empty macro.
+ */
+#define NEVER_INLINE __declspec(noinline)
+
+/**
+ * @def PURE
+ * @hideinitializer
+ * Denotes that a function's return is only dependent on its inputs, enabling
+ * more efficient optimizations. Falls back to an empty macro if not supported.
+ */
+#define PURE __pure
+
+/**
+ * @def PACKED
+ * @hideinitializer
+ * Denotes that a structure should be stored in a packed form. This macro must
+ * be used in conjunction with the @c arm_cstd_pack_* headers for portability:
+ *
+ * @code
+ * #include <cstd/arm_cstd_pack_push.h>
+ *
+ * struct PACKED myStruct {
+ * ...
+ * };
+ *
+ * #include <cstd/arm_cstd_pack_pop.h>
+ * @endcode
+ */
+#define PACKED __packed
+
+/**
+ * @def UNALIGNED
+ * @hideinitializer
+ * Denotes that a pointer points to a buffer with lower alignment than the
+ * natural alignment required by the C standard. This should only be used
+ * in extreme cases, as the emitted code is normally more efficient if memory
+ * is aligned.
+ *
+ * @warning This is \b NON-PORTABLE. The GNU tools do not support unaligned
+ * pointers and provide no equivalent construct.
+ */
+#define UNALIGNED __packed
+
+/**
+ * @def RESTRICT
+ * @hideinitializer
+ * Denotes that a pointer does not overlap with any other pointers currently in
+ * scope, increasing the range of optimizations which can be performed by the
+ * compiler.
+ *
+ * @warning Specification of @c RESTRICT is a contract between the programmer
+ * and the compiler. If you place @c RESTRICT on buffers which do actually
+ * overlap, the behavior is undefined and likely to vary at different
+ * optimization levels.
+ */
+#define RESTRICT __restrict
+
+/**
+ * @def CHECK_RESULT
+ * @hideinitializer
+ * Function attribute which causes a warning to be emitted if the function's
+ * return value is not used by the caller. Compiles to an empty macro if
+ * there is no supported mechanism for this check in the underlying compiler.
+ *
+ * @note At the time of writing this is only supported by GCC. RVCT does not
+ * support this attribute, even in GCC mode, so engineers are encouraged to
+ * compile their code using GCC even if primarily working with another
+ * compiler.
+ *
+ * @code
+ * CHECK_RESULT int my_func( void );
+ * @endcode
+ */
+#define CHECK_RESULT
+
+/**
+ * @def CSTD_FUNC
+ * Specify the @c CSTD_FUNC macro, a portable construct containing the name of
+ * the current function. On most compilers it is illegal to use this macro
+ * outside of a function scope. If not supported by the compiler we define
+ * @c CSTD_FUNC as an empty string.
+ *
+ * @warning Due to the implementation of this on most modern compilers this
+ * expands to a magically defined "static const" variable, not a constant
+ * string. This makes injecting @c CSTD_FUNC directly in to compile-time
+ * strings impossible, so if you want to make the function name part of a
+ * larger string you must use a printf-like function with a @c @%s template
+ * which is populated with @c CSTD_FUNC
+ */
+#define CSTD_FUNC __FUNCTION__
+
+/**
+ * @}
+ */
+
+#endif /* End (_ARM_CSTD_TYPES_RVCT_H_) */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _MALISW_H_
+#define _MALISW_H_
+
+#define MALI_MODULE_MALISW_MAJOR 2
+#define MALI_MODULE_MALISW_MINOR 4
+
+/**
+ * @file mali_malisw.h
+ * Driver-wide include for common macros and types.
+ */
+
+/**
+ * @defgroup malisw Mali software definitions and types
+ * @{
+ */
+
+#include <stddef.h>
+
+#include "mali_stdtypes.h"
+#include "mali_version_macros.h"
+
+/** @brief Gets the container object when given a pointer to a member of an object. */
+#define CONTAINER_OF(ptr, type, member) ((type *)((char *)(ptr) - offsetof(type,member)))
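+
+/* Usage sketch (illustrative; the job/node types are hypothetical): recover
+ * the enclosing object from a pointer to one of its members.
+ *
+ *     struct node { struct node *next; };
+ *     struct job  { int id; struct node link; };
+ *
+ *     struct node *n = ...;   (points at some job's 'link' member)
+ *     struct job  *j = CONTAINER_OF( n, struct job, link );
+ */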
+
+/** @brief Gets the number of elements of type s in a fixed length array of s */
+#define NELEMS(s) (sizeof(s)/sizeof((s)[0]))
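+
+/* Usage sketch (illustrative): NELEMS must be applied to a true array; on a
+ * pointer, sizeof would yield the pointer size instead of the array size.
+ *
+ *     static const u32 freqs[] = { 100, 200, 400 };
+ *     size_t i;
+ *     for (i = 0; i < NELEMS( freqs ); ++i) { ... }
+ */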
+
+/**
+ * @brief The lesser of two values.
+ * May evaluate its arguments more than once.
+ * @see CSTD_MIN
+ */
+#define MIN(x,y) CSTD_MIN(x,y)
+
+/**
+ * @brief The greater of two values.
+ * May evaluate its arguments more than once.
+ * @see CSTD_MAX
+ */
+#define MAX(x,y) CSTD_MAX(x,y)
+
+/**
+ * @brief Clamp value x to within min and max inclusive
+ * May evaluate its arguments more than once.
+ * @see CSTD_CLAMP
+ */
+#define CLAMP( x, min, max ) CSTD_CLAMP( x, min, max )
+
+/**
+ * @brief Convert a pointer into a u64 for storing in a data structure.
+ * This is commonly used when pairing a 32-bit CPU with a 64-bit peripheral,
+ * such as a Midgard GPU. C's type promotion is complex and a straight cast
+ * does not work reliably as pointers are often considered as signed.
+ */
+#define PTR_TO_U64( x ) CSTD_PTR_TO_U64( x )
+
+/**
+ * @name Mali library linkage specifiers
+ * These directly map to the cstd versions described in detail here: @ref arm_cstd_linkage_specifiers
+ * @{
+ */
+#define MALI_IMPORT CSTD_LINK_IMPORT
+#define MALI_EXPORT CSTD_LINK_EXPORT
+#define MALI_IMPL CSTD_LINK_IMPL
+#define MALI_LOCAL CSTD_LINK_LOCAL
+
+/** @brief Decorate exported function prototypes.
+ *
+ * The file containing the implementation of the function should define this to be MALI_EXPORT before including
+ * malisw/mali_malisw.h.
+ */
+#ifndef MALI_API
+#define MALI_API MALI_IMPORT
+#endif
+/** @} */
+
+/** @name Testable static functions
+ * @{
+ *
+ * These macros can be used to allow functions to be static in release builds but exported from a shared library in unit
+ * test builds, allowing them to be tested or used to assist testing.
+ *
+ * Example mali_foo_bar.c containing the function to test:
+ *
+ * @code
+ * #define MALI_API MALI_EXPORT
+ *
+ * #include <malisw/mali_malisw.h>
+ * #include "mali_foo_testable_statics.h"
+ *
+ * MALI_TESTABLE_STATIC_IMPL void my_func()
+ * {
+ * //Implementation
+ * }
+ * @endcode
+ *
+ * Example mali_foo_testable_statics.h:
+ *
+ * @code
+ * #if 1 == MALI_UNIT_TEST
+ * #include <malisw/mali_malisw.h>
+ *
+ * MALI_TESTABLE_STATIC_API void my_func();
+ *
+ * #endif
+ * @endcode
+ *
+ * Example mali_foo_tests.c:
+ *
+ * @code
+ * #include <foo/src/mali_foo_testable_statics.h>
+ *
+ * void my_test_func()
+ * {
+ * my_func();
+ * }
+ * @endcode
+ */
+
+/** @brief Decorate testable static function implementations.
+ *
+ * A header file containing a MALI_TESTABLE_STATIC_API-decorated prototype for each static function will be required
+ * when MALI_UNIT_TEST == 1 in order to link the function from the test.
+ */
+#if 1 == MALI_UNIT_TEST
+#define MALI_TESTABLE_STATIC_IMPL MALI_IMPL
+#else
+#define MALI_TESTABLE_STATIC_IMPL static
+#endif
+
+/** @brief Decorate testable static function prototypes.
+ *
+ * @note Prototypes should @em only be declared when MALI_UNIT_TEST == 1
+ */
+#define MALI_TESTABLE_STATIC_API MALI_API
+/** @} */
+
+/** @name Testable local functions
+ * @{
+ *
+ * These macros can be used to allow functions to be local to a shared library in release builds but be exported in unit
+ * test builds, allowing them to be tested or used to assist testing.
+ *
+ * Example mali_foo_bar.c containing the function to test:
+ *
+ * @code
+ * #define MALI_API MALI_EXPORT
+ *
+ * #include <malisw/mali_malisw.h>
+ * #include "mali_foo_bar.h"
+ *
+ * MALI_TESTABLE_LOCAL_IMPL void my_func()
+ * {
+ * //Implementation
+ * }
+ * @endcode
+ *
+ * Example mali_foo_bar.h:
+ *
+ * @code
+ * #include <malisw/mali_malisw.h>
+ *
+ * MALI_TESTABLE_LOCAL_API void my_func();
+ *
+ * @endcode
+ *
+ * Example mali_foo_tests.c:
+ *
+ * @code
+ * #include <foo/src/mali_foo_bar.h>
+ *
+ * void my_test_func()
+ * {
+ * my_func();
+ * }
+ * @endcode
+ */
+
+/** @brief Decorate testable local function implementations.
+ *
+ * This can be used to have a function normally local to the shared library except in unit test builds where it will be
+ * exported.
+ */
+#if 1 == MALI_UNIT_TEST
+#define MALI_TESTABLE_LOCAL_IMPL MALI_IMPL
+#else
+#define MALI_TESTABLE_LOCAL_IMPL MALI_LOCAL
+#endif
+
+/** @brief Decorate testable local function prototypes.
+ *
+ * This can be used to have a function normally local to the shared library except in unit test builds where it will be
+ * exported.
+ */
+#if 1 == MALI_UNIT_TEST
+#define MALI_TESTABLE_LOCAL_API MALI_API
+#else
+#define MALI_TESTABLE_LOCAL_API MALI_LOCAL
+#endif
+/** @} */
+
+/**
+ * Flag a cast as a reinterpretation, usually of a pointer type.
+ * @see CSTD_REINTERPRET_CAST
+ */
+#define REINTERPRET_CAST(type) CSTD_REINTERPRET_CAST(type)
+
+/**
+ * Flag a cast as casting away const, usually of a pointer type.
+ * @see CSTD_CONST_CAST
+ */
+#define CONST_CAST(type) CSTD_CONST_CAST(type)
+
+/**
+ * Flag a cast as a (potentially complex) value conversion, usually of a numerical type.
+ * @see CSTD_STATIC_CAST
+ */
+#define STATIC_CAST(type) CSTD_STATIC_CAST(type)
+
+
+/** @} */
+
+#endif /* _MALISW_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _MALISW_STDTYPES_H_
+#define _MALISW_STDTYPES_H_
+
+/**
+ * @file mali_stdtypes.h
+ * This file defines the standard types used by the Mali codebase.
+ */
+
+/**
+ * @addtogroup malisw
+ * @{
+ */
+
+/**
+ * @defgroup malisw_stdtypes Mali software standard types
+ *
+ * Basic driver-wide types.
+ */
+
+/**
+ * @addtogroup malisw_stdtypes
+ * @{
+ */
+
+#include "arm_cstd/arm_cstd.h"
+
+/**
+ * @name Scalar types.
+ * These are the scalar types used within the mali driver.
+ * @{
+ */
+/* Note: if compiling the Linux kernel then avoid redefining these. */
+#if 0 == CSTD_OS_LINUX_KERNEL
+ typedef uint64_t u64;
+ typedef uint32_t u32;
+ typedef uint16_t u16;
+ typedef uint8_t u8;
+
+ typedef int64_t s64;
+ typedef int32_t s32;
+ typedef int16_t s16;
+ typedef int8_t s8;
+#endif
+
+typedef double f64;
+typedef float f32;
+typedef u16 f16;
+
+typedef u32 mali_fixed16_16;
+/* @} */
+
+/**
+ * @name Boolean types.
+ * The intended use is for bool8 to be used when storing boolean values in
+ * structures, casting to mali_bool to be used in code sections.
+ * @{
+ */
+typedef bool_t mali_bool;
+typedef u8 mali_bool8;
+
+#define MALI_FALSE FALSE
+#define MALI_TRUE TRUE
+/* @} */
+
+/**
+ * @name Integer bounding values
+ * Maximum and minimum values for integer types
+ * @{
+ */
+#define U64_MAX UINT64_MAX
+#define U32_MAX UINT32_MAX
+#define U16_MAX UINT16_MAX
+#define U8_MAX UINT8_MAX
+
+#define S64_MAX INT64_MAX
+#define S64_MIN INT64_MIN
+#define S32_MAX INT32_MAX
+#define S32_MIN INT32_MIN
+#define S16_MAX INT16_MAX
+#define S16_MIN INT16_MIN
+#define S8_MAX INT8_MAX
+#define S8_MIN INT8_MIN
+/* @} */
+
+/**
+ * @name GPU address types
+ * Types for integers which hold a GPU pointer or GPU pointer offsets.
+ * @{
+ */
+typedef u64 mali_addr64;
+typedef u32 mali_addr32;
+typedef u64 mali_size64;
+typedef s64 mali_offset64;
+/* 32 bit offsets and sizes are always for native types and so use ptrdiff_t and size_t respectively */
+/* @} */
+
+/**
+ * @name Mali error types
+ * @brief The common error type for the mali drivers
+ * All driver error handling should use the mali_error type unless it must
+ * deal with a specific API's error type.
+ * @{
+ */
+typedef enum
+{
+ /**
+ * @brief Common Mali errors for the entire driver
+ * MALI_ERROR_NONE is guaranteed to be 0.
+ * @{
+ */
+ MALI_ERROR_NONE = 0,
+ MALI_ERROR_OUT_OF_GPU_MEMORY,
+ MALI_ERROR_OUT_OF_MEMORY,
+ MALI_ERROR_FUNCTION_FAILED,
+ /* @} */
+ /**
+ * @brief Mali errors for Client APIs to pass to EGL when creating EGLImages
+ * These errors must only be returned to EGL from one of the Client APIs as part of the
+ * (clientapi)_egl_image_interface.h
+ * @{
+ */
+ MALI_ERROR_EGLP_BAD_ACCESS,
+ MALI_ERROR_EGLP_BAD_PARAMETER,
+ /* @} */
+ /**
+ * @brief Mali errors for the MCL module.
+ * These errors must only be used within the private components of the OpenCL implementation that report
+ * directly to API functions for cases where errors cannot be detected in the entrypoints file. They must
+ * not be passed between driver components.
+ * These are errors in the mali error space specifically for the MCL module, hence the MCLP prefix.
+ * @{
+ */
+ MALI_ERROR_MCLP_DEVICE_NOT_FOUND,
+ MALI_ERROR_MCLP_DEVICE_NOT_AVAILABLE,
+ MALI_ERROR_MCLP_COMPILER_NOT_AVAILABLE,
+ MALI_ERROR_MCLP_MEM_OBJECT_ALLOCATION_FAILURE,
+ MALI_ERROR_MCLP_PROFILING_INFO_NOT_AVAILABLE,
+ MALI_ERROR_MCLP_MEM_COPY_OVERLAP,
+ MALI_ERROR_MCLP_IMAGE_FORMAT_MISMATCH,
+ MALI_ERROR_MCLP_IMAGE_FORMAT_NOT_SUPPORTED,
+ MALI_ERROR_MCLP_BUILD_PROGRAM_FAILURE,
+ MALI_ERROR_MCLP_MAP_FAILURE,
+ MALI_ERROR_MCLP_MISALIGNED_SUB_BUFFER_OFFSET,
+ MALI_ERROR_MCLP_EXEC_STATUS_ERROR_FOR_EVENTS_IN_WAIT_LIST,
+ MALI_ERROR_MCLP_INVALID_VALUE,
+ MALI_ERROR_MCLP_INVALID_DEVICE_TYPE,
+ MALI_ERROR_MCLP_INVALID_PLATFORM,
+ MALI_ERROR_MCLP_INVALID_DEVICE,
+ MALI_ERROR_MCLP_INVALID_CONTEXT,
+ MALI_ERROR_MCLP_INVALID_QUEUE_PROPERTIES,
+ MALI_ERROR_MCLP_INVALID_COMMAND_QUEUE,
+ MALI_ERROR_MCLP_INVALID_HOST_PTR,
+ MALI_ERROR_MCLP_INVALID_MEM_OBJECT,
+ MALI_ERROR_MCLP_INVALID_IMAGE_FORMAT_DESCRIPTOR,
+ MALI_ERROR_MCLP_INVALID_IMAGE_SIZE,
+ MALI_ERROR_MCLP_INVALID_SAMPLER,
+ MALI_ERROR_MCLP_INVALID_BINARY,
+ MALI_ERROR_MCLP_INVALID_BUILD_OPTIONS,
+ MALI_ERROR_MCLP_INVALID_PROGRAM,
+ MALI_ERROR_MCLP_INVALID_PROGRAM_EXECUTABLE,
+ MALI_ERROR_MCLP_INVALID_KERNEL_NAME,
+ MALI_ERROR_MCLP_INVALID_KERNEL_DEFINITION,
+ MALI_ERROR_MCLP_INVALID_KERNEL,
+ MALI_ERROR_MCLP_INVALID_ARG_INDEX,
+ MALI_ERROR_MCLP_INVALID_ARG_VALUE,
+ MALI_ERROR_MCLP_INVALID_ARG_SIZE,
+ MALI_ERROR_MCLP_INVALID_KERNEL_ARGS,
+ MALI_ERROR_MCLP_INVALID_WORK_DIMENSION,
+ MALI_ERROR_MCLP_INVALID_WORK_GROUP_SIZE,
+ MALI_ERROR_MCLP_INVALID_WORK_ITEM_SIZE,
+ MALI_ERROR_MCLP_INVALID_GLOBAL_OFFSET,
+ MALI_ERROR_MCLP_INVALID_EVENT_WAIT_LIST,
+ MALI_ERROR_MCLP_INVALID_EVENT,
+ MALI_ERROR_MCLP_INVALID_OPERATION,
+ MALI_ERROR_MCLP_INVALID_GL_OBJECT,
+ MALI_ERROR_MCLP_INVALID_BUFFER_SIZE,
+ MALI_ERROR_MCLP_INVALID_MIP_LEVEL,
+ MALI_ERROR_MCLP_INVALID_GLOBAL_WORK_SIZE,
+ /* @} */
+ /**
+ * @brief Mali errors for the BASE module
+ * These errors must only be used within the private components of the Base implementation. They will not be
+ * passed to other modules by the base driver.
+ * These are errors in the mali error space specifically for the BASE module, hence the BASEP prefix.
+ * @{
+ */
+ MALI_ERROR_BASEP_INVALID_FUNCTION
+ /* @} */
+} mali_error;
+/* @} */
+
+/* @} */
+
+/* @} */
+
+#endif /* _MALISW_STDTYPES_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _MALISW_VERSION_MACROS_H_
+#define _MALISW_VERSION_MACROS_H_
+
+/**
+ * @file mali_version_macros.h
+ * Mali version control macros.
+ */
+
+/**
+ * @addtogroup malisw
+ * @{
+ */
+
+/**
+ * @defgroup malisw_version Mali module version control
+ *
+ * This file provides a set of macros used to check a module's version. This
+ * version information can be used to perform compile time checks of a module's
+ * suitability for use with another.
+ *
+ * Module versions have both a Major and Minor value which specify the version
+ * of the interface only. These are defined in the following way:
+ *
+ * @li Major: This version is incremented whenever a compatibility-breaking
+ * change is made. For example, removing an interface function.
+ * @li Minor: This version is incremented whenever an interface change is made
+ * that does not break compatibility. For example, adding a new function to the
+ * interface. This value is reset to zero whenever the major number is
+ * incremented.
+ *
+ * When providing a driver module that will be used with this system, the public
+ * header must include a major and minor define of the following form:
+ *
+ * @code
+ * #define MALI_MODULE_<module>_MAJOR X
+ * #define MALI_MODULE_<module>_MINOR Y
+ * @endcode
+ * e.g. for a module CZAM with header czam/mali_czam.h:
+ * @code
+ * #define MALI_MODULE_CZAM_MAJOR 1
+ * #define MALI_MODULE_CZAM_MINOR 0
+ * @endcode
+ *
+ * The version assertion macros outlined below are wrapped with a static function.
+ * This provides more useful error messages when the assertions fail, and allows
+ * the assertions to be specified adjacent to the inclusion of the module header.
+ *
+ * These macros should be used in the global scope of the file. Normal use would be:
+ *
+ * @code
+ * #include <modulex/mali_modulex.h>
+ * #include <moduley/mali_moduley.h>
+ * #include <modulez/mali_modulez.h>
+ * #include <modulez/mali_modulew.h>
+ *
+ * // This module added an enum we needed on minor 4 of major release 2
+ * MALI_MODULE_ASSERT_MAJOR_EQUALS_MINOR_AT_LEAST( MODULEW, 2, 4 )
+ *
+ * // this module really needs to be a specific version
+ * MALI_MODULE_ASSERT_EQUALS( MODULEX, 2, 0 )
+ *
+ * // 1.4 has performance problems
+ * MALI_MODULE_ASSERT_MAXIMUM( MODULEY, 1, 3 )
+ *
+ * // Major defines a backward compatible series of versions
+ * MALI_MODULE_ASSERT_MAJOR_EQUALS( MODULEZ, 1 )
+ * @endcode
+ *
+ * @par Version Assertions
+ *
+ * This module provides the following compile time version assertion macros.
+ *
+ * @li #MALI_MODULE_ASSERT_MAJOR_EQUALS_MINOR_AT_LEAST
+ * @li #MALI_MODULE_ASSERT_MAJOR_EQUALS
+ * @li #MALI_MODULE_ASSERT_EQUALS
+ * @li #MALI_MODULE_ASSERT_MINIMUM
+ * @li #MALI_MODULE_ASSERT_MAXIMUM
+ *
+ * @par Limitations
+ *
+ * To allow the macros to be placed in the global scope and report more readable
+ * errors, they produce a static function. This makes them unsuitable for use
+ * within headers, as the generated names are only unique in the name of the
+ * module under test, the line number in the current file, and the assert type
+ * (min, max, equals, ...).
+ */
+
+/**
+ * @addtogroup malisw_version
+ * @{
+ */
+
+#include "arm_cstd/arm_cstd.h"
+
+/**
+ * Private helper macro, indirection so that __LINE__ resolves correctly.
+ */
+#define MALIP_MODULE_ASSERT_FUNCTION_SIGNATURE2( module, type, line ) \
+ static INLINE void _mali_module_##module##_version_check_##type##_##line(void)
+
+#define MALIP_MODULE_ASSERT_FUNCTION_SIGNATURE( module, type, line ) \
+ MALIP_MODULE_ASSERT_FUNCTION_SIGNATURE2( module, type, line )
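+
+/* Implementation note: the two-level expansion is required so that __LINE__
+ * is replaced by its numeric value before token pasting; pasting in a single
+ * macro would embed the literal token '__LINE__' in the function name
+ * instead of the line number. */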
+
+/**
+ * @hideinitializer
+ * This macro provides a compile time assert that a module interface that has been
+ * @#included in the source base is greater than or equal to the version specified.
+ *
+ * Expected use is for cases where a module version before the requested minimum
+ * does not support a specific function or is missing an enum affecting the code that is
+ * importing the module.
+ *
+ * It should be invoked at the global scope and ideally following straight after
+ * the module header has been included. For example:
+ *
+ * @code
+ * #include <modulex/mali_modulex.h>
+ *
+ * MALI_MODULE_ASSERT_MINIMUM( MODULEX, 1, 3 )
+ * @endcode
+ */
+#define MALI_MODULE_ASSERT_MINIMUM( module, major, minor ) \
+ MALIP_MODULE_ASSERT_FUNCTION_SIGNATURE( module, minimum, __LINE__ ) \
+ { \
+ CSTD_COMPILE_TIME_ASSERT( ( ( MALI_MODULE_##module##_MAJOR << 16 ) | MALI_MODULE_##module##_MINOR ) \
+ >= ( ( (major) << 16 ) | (minor) ) ); \
+ }
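+
+/* Implementation note: packing the version as (major << 16) | minor yields
+ * a single integer that orders correctly for (major, minor) pairs, assuming
+ * minor values stay below 65536. */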
+
+/**
+ * @hideinitializer
+ * This macro provides a compile time assert that a module interface that has been
+ * @#included in the source base is less than or equal to the version specified.
+ *
+ * Expected use is for cases where a later published minor version is found to be
+ * incompatible in some way after the new minor has been issued.
+ *
+ * It should be invoked at the global scope and ideally following straight after
+ * the module header has been included. For example:
+ *
+ * @code
+ * #include <modulex/mali_modulex.h>
+ *
+ * MALI_MODULE_ASSERT_MAXIMUM( MODULEX, 1, 3 )
+ * @endcode
+ */
+#define MALI_MODULE_ASSERT_MAXIMUM( module, major, minor ) \
+ MALIP_MODULE_ASSERT_FUNCTION_SIGNATURE( module, maximum, __LINE__ ) \
+ { \
+ CSTD_COMPILE_TIME_ASSERT( ( ( MALI_MODULE_##module##_MAJOR << 16 ) | MALI_MODULE_##module##_MINOR ) \
+ <= ( ( (major) << 16 ) | (minor) ) ); \
+ }
+
+/**
+ * @hideinitializer
+ * This macro provides a compile time assert that a module interface that has been
+ * @#included in the source base is equal to the version specified.
+ *
+ * Expected use is for cases where a specific version is known to work and other
+ * versions are considered to be risky.
+ *
+ * It should be invoked at the global scope and ideally following straight after
+ * the module header has been included. For example:
+ *
+ * @code
+ * #include <modulex/mali_modulex.h>
+ *
+ * MALI_MODULE_ASSERT_EQUALS( MODULEX, 1, 3 )
+ * @endcode
+ */
+#define MALI_MODULE_ASSERT_EQUALS( module, major, minor ) \
+ MALIP_MODULE_ASSERT_FUNCTION_SIGNATURE( module, equals, __LINE__ ) \
+ { \
+ CSTD_COMPILE_TIME_ASSERT( MALI_MODULE_##module##_MAJOR == major ); \
+ CSTD_COMPILE_TIME_ASSERT( MALI_MODULE_##module##_MINOR == minor ); \
+ }
+
+/**
+ * @hideinitializer
+ * This macro provides a compile time assert that a module interface that has been
+ * @#included in the source base has a major version equal to the major version specified.
+ *
+ * Expected use is for cases where a module is considered low risk and any minor changes
+ * are not considered to be important.
+ *
+ * It should be invoked at the global scope and ideally following straight after
+ * the module header has been included. For example:
+ *
+ * @code
+ * #include <modulex/mali_modulex.h>
+ *
+ * MALI_MODULE_ASSERT_MAJOR_EQUALS( MODULEX, 1 )
+ * @endcode
+ */
+#define MALI_MODULE_ASSERT_MAJOR_EQUALS( module, major ) \
+ MALIP_MODULE_ASSERT_FUNCTION_SIGNATURE( module, major_equals, __LINE__ ) \
+ { \
+ CSTD_COMPILE_TIME_ASSERT( MALI_MODULE_##module##_MAJOR == major ); \
+ }
+
+/**
+ * @hideinitializer
+ * This macro provides a compile time assert that a module interface that has been
+ * @#included in the source base has a major version equal to the major version specified
+ * and that the minor version is at least that which is specified.
+ *
+ * Expected use is for cases where a major revision is suitable from a specific minor
+ * revision but future major versions are a risk.
+ *
+ * It should be invoked at the global scope and ideally following straight after
+ * the module header has been included. For example:
+ *
+ * @code
+ * #include <modulex/mali_modulex.h>
+ *
+ * MALI_MODULE_ASSERT_MAJOR_EQUALS_MINOR_AT_LEAST( MODULEX, 1, 3 )
+ * @endcode
+ */
+#define MALI_MODULE_ASSERT_MAJOR_EQUALS_MINOR_AT_LEAST( module, major, minor ) \
+ MALIP_MODULE_ASSERT_FUNCTION_SIGNATURE( module, major_equals_minor_at_least, __LINE__ ) \
+ { \
+ CSTD_COMPILE_TIME_ASSERT( MALI_MODULE_##module##_MAJOR == major ); \
+ CSTD_COMPILE_TIME_ASSERT( MALI_MODULE_##module##_MINOR >= minor ); \
+ }
+
+/* @} */
+
+/* @} */
+
+#endif /* _MALISW_VERSION_MACROS_H_ */
--- /dev/null
+obj-y += common/
+obj-y += linux/
--- /dev/null
+# Copyright:
+# ----------------------------------------------------------------------------
+# This confidential and proprietary software may be used only as authorized
+# by a licensing agreement from ARM Limited.
+# (C) COPYRIGHT 2010 ARM Limited, ALL RIGHTS RESERVED
+# The entire notice above must be reproduced on all authorized copies and
+# copies may only be made to the extent permitted by a licensing agreement
+# from ARM Limited.
+# ----------------------------------------------------------------------------
+#
+EXTRA_CFLAGS += -I$(ROOT) -I$(ROOT)/osk/src/linux/include -I$(ROOT)/uk/platform_$(PLATFORM)
--- /dev/null
+ccflags-$(CONFIG_VITHAR) += -DMALI_DEBUG=0 -DMALI_HW_TYPE=2 \
+ -DMALI_USE_UMP=0 -DMALI_HW_VERSION=r0p0 \
+ -DMALI_BASE_TRACK_MEMLEAK=0 -DMALI_ANDROID=1 -DMALI_ERROR_INJECT_ON=0 \
+ -DMALI_NO_MALI=0 -DMALI_BACKEND_KERNEL=1 \
+ -DMALI_FAKE_PLATFORM_DEVICE=1 -DMALI_MOCK_TEST=0 -DMALI_KERNEL_TEST_API=0 \
+ -DMALI_INFINITE_CACHE=0 -DMALI_LICENSE_IS_GPL=1 \
+ -DMALI_PLATFORM_CONFIG=exynos5 -DUMP_SVN_REV_STRING="\"dummy\"" \
+ -DMALI_RELEASE_NAME="\"dummy\"" -DMALI_UNIT_TEST=0 -DMALI_INSTRUMENTATION_LEVEL=0 -DMALI_CUSTOMER_RELEASE=1
+
+ROOTDIR = $(src)/../../..
+
+ccflags-$(CONFIG_VITHAR) += -I$(ROOTDIR)/kbase
+ccflags-$(CONFIG_VITHAR) += -I$(src)/..
+
+ccflags-y += -I$(ROOTDIR) -I$(ROOTDIR)/include -I$(ROOTDIR)/osk/src/linux/include -I$(ROOTDIR)/uk/platform_dummy
+ccflags-y += -I$(ROOTDIR)/kbase/midg_gpus/r0p0
+
+obj-y += mali_kbase_device.o
+obj-y += mali_kbase_mem.o
+obj-y += mali_kbase_mmu.o
+obj-y += mali_kbase_jd.o
+obj-y += mali_kbase_jm.o
+obj-y += mali_kbase_gpuprops.o
+obj-y += mali_kbase_js.o
+obj-y += mali_kbase_js_affinity.o
+obj-y += mali_kbase_pm.o
+obj-y += mali_kbase_event.o
+obj-y += mali_kbase_context.o
+obj-y += mali_kbase_pm_driver.o
+obj-y += mali_kbase_pm_metrics.o
+obj-y += mali_kbase_pm_always_on.o
+obj-y += mali_kbase_pm_demand.o
+obj-y += mali_kbase_config.o
+obj-y += mali_kbase_security.o
+obj-y += mali_kbase_instr.o
+#obj-y += mali_kbase_instr_7115.o
+obj-y += mali_kbase_cpuprops.o
+obj-y += mali_kbase_js_ctx_attr.o
+obj-y += mali_kbase_8401_workaround.o
+obj-y += mali_kbase_cache_policy.o
+obj-y += mali_kbase_softjobs.o
+
+obj-y += mali_kbase_pm_metrics_dummy.o
+
+obj-y += mali_kbase_js_policy_cfs.o
+obj-y += mali_kbase_hw.o
+#obj-y += mali_kbase_js_policy_fcfs.o
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _KBASE_H_
+#define _KBASE_H_
+
+#include <malisw/mali_malisw.h>
+#include <osk/mali_osk.h>
+#include <uk/mali_ukk.h>
+
+#include <kbase/mali_base_kernel.h>
+#include <kbase/src/common/mali_kbase_uku.h>
+
+#include "mali_kbase_pm.h"
+#include "mali_kbase_cpuprops.h"
+#include "mali_kbase_gpuprops.h"
+
+#if CSTD_OS_LINUX_KERNEL
+#include <kbase/src/linux/mali_kbase_linux.h>
+#elif defined(MALI_KBASE_USERSPACE)
+#include <kbase/src/userspace/mali_kbase_userspace.h>
+#else
+#error "Unsupported OS"
+#endif
+
+#ifndef KBASE_OS_SUPPORT
+#error Please fix for your platform!
+#endif
+
+#include "mali_kbase_defs.h"
+
+#include "mali_kbase_js.h"
+
+#include "mali_kbase_mem.h"
+
+#include "mali_kbase_security.h"
+
+/**
+ * @page page_base_kernel_main Kernel-side Base (KBase) APIs
+ *
+ * The Kernel-side Base (KBase) APIs are divided up as follows:
+ * - @subpage page_kbase_js_policy
+ */
+
+/**
+ * @defgroup base_kbase_api Kernel-side Base (KBase) APIs
+ */
+
+extern const kbase_device_info kbase_dev_info[];
+
+kbase_device *kbase_device_alloc(void);
+/* note: the configuration attributes member of kbdev needs to have been set up before calling kbase_device_init */
+mali_error kbase_device_init(kbase_device *kbdev, const kbase_device_info *dev_info);
+void kbase_device_term(kbase_device *kbdev);
+void kbase_device_free(kbase_device *kbdev);
+int kbase_device_has_feature(kbase_device *kbdev, u32 feature);
+kbase_midgard_type kbase_device_get_type(kbase_device *kbdev);
+
+/**
+ * Ensure that all IRQ handlers have completed execution
+ *
+ * @param kbdev The kbase device
+ */
+void kbase_synchronize_irqs(kbase_device *kbdev);
+
+struct kbase_context *kbase_create_context(kbase_device *kbdev);
+void kbase_destroy_context(kbase_context *kctx);
+mali_error kbase_context_set_create_flags(kbase_context *kctx, u32 flags);
+
+mali_error kbase_instr_hwcnt_setup(kbase_context * kctx, kbase_uk_hwcnt_setup * setup);
+mali_error kbase_instr_hwcnt_enable(kbase_context * kctx, kbase_uk_hwcnt_setup * setup);
+mali_error kbase_instr_hwcnt_disable(kbase_context * kctx);
+mali_error kbase_instr_hwcnt_clear(kbase_context * kctx);
+mali_error kbase_instr_hwcnt_dump(kbase_context * kctx);
+mali_error kbase_instr_hwcnt_dump_irq(kbase_context * kctx);
+mali_bool kbase_instr_hwcnt_dump_complete(kbase_context * kctx, mali_bool *success);
+
+void kbase_clean_caches_done(kbase_device *kbdev);
+
+/**
+ * The GPU has completed performance count sampling successfully.
+ */
+void kbase_instr_hwcnt_sample_done(kbase_device *kbdev);
+
+mali_error kbase_create_os_context(kbase_os_context *osctx);
+void kbase_destroy_os_context(kbase_os_context *osctx);
+
+mali_error kbase_jd_init(struct kbase_context *kctx);
+void kbase_jd_exit(struct kbase_context *kctx);
+mali_error kbase_jd_submit(struct kbase_context *kctx, const kbase_uk_job_submit *user_bag);
+void kbase_jd_post_external_resources(kbase_jd_atom * katom);
+void kbase_jd_done(kbase_jd_atom *katom, int slot_nr, kbasep_js_tick *end_timestamp, mali_bool start_new_jobs);
+void kbase_jd_cancel(kbase_jd_atom *katom);
+void kbase_jd_flush_workqueues(kbase_context *kctx);
+void kbase_jd_zap_context(kbase_context *kctx);
+
+mali_error kbase_job_slot_init(kbase_device *kbdev);
+void kbase_job_slot_halt(kbase_device *kbdev);
+void kbase_job_slot_term(kbase_device *kbdev);
+void kbase_job_done(kbase_device *kbdev, u32 done);
+void kbase_job_zap_context(kbase_context *kctx);
+
+void kbase_job_slot_softstop(kbase_device *kbdev, int js, kbase_jd_atom *target_katom);
+void kbase_job_slot_hardstop(kbase_context *kctx, int js, kbase_jd_atom *target_katom);
+
+void kbase_event_post(kbase_context *ctx, kbase_event *event);
+int kbase_event_dequeue(kbase_context *ctx, base_jd_event *uevent);
+int kbase_event_pending(kbase_context *ctx);
+mali_error kbase_event_init(kbase_context *kctx);
+void kbase_event_close(kbase_context *kctx);
+void kbase_event_cleanup(kbase_context *kctx);
+void kbase_event_wakeup(kbase_context *kctx);
+
+void kbase_process_soft_job( kbase_context *kctx, kbase_jd_atom *katom );
+
+/* api used internally for register access. Contains validation and tracing */
+void kbase_reg_write(kbase_device *kbdev, u16 offset, u32 value, kbase_context * kctx);
+u32 kbase_reg_read(kbase_device *kbdev, u16 offset, kbase_context * kctx);
+void kbase_device_trace_register_access(kbase_context * kctx, kbase_reg_access_type type, u16 reg_offset, u32 reg_value);
+void kbase_device_trace_buffer_install(kbase_context * kctx, u32 * tb, size_t size);
+void kbase_device_trace_buffer_uninstall(kbase_context * kctx);
+
+/* api to be ported per OS, only need to do the raw register access */
+void kbase_os_reg_write(kbase_device *kbdev, u16 offset, u32 value);
+u32 kbase_os_reg_read(kbase_device *kbdev, u16 offset);
+
+/** Report a GPU fault.
+ *
+ * This function is called from the interrupt handler when a GPU fault occurs.
+ * It reports the details of the fault using OSK_PRINT_WARN.
+ *
+ * @param kbdev The kbase device that the GPU fault occurred from.
+ * @param multiple Zero if only GPU_FAULT was raised, non-zero if MULTIPLE_GPU_FAULTS was also set
+ */
+void kbase_report_gpu_fault(kbase_device *kbdev, int multiple);
+
+/** Kill all jobs that are currently running from a context
+ *
+ * This is used in response to a page fault to remove all jobs from the faulting context from the hardware.
+ *
+ * @param kctx The context to kill jobs from
+ */
+void kbase_job_kill_jobs_from_context(kbase_context *kctx);
+
+/**
+ * GPU interrupt handler
+ *
+ * This function is called from the interrupt handler when a GPU irq is to be handled.
+ *
+ * @param kbdev The kbase device to handle an IRQ for
+ * @param val The value of the GPU IRQ status register which triggered the call
+ */
+void kbase_gpu_interrupt(kbase_device * kbdev, u32 val);
+
+/**
+ * Prepare for resetting the GPU.
+ * This function just soft-stops all the slots to ensure that as many jobs as possible are saved.
+ *
+ * The function returns a boolean which should be interpreted as follows:
+ * - MALI_TRUE - Prepared for reset, kbase_reset_gpu should be called.
+ * - MALI_FALSE - Another thread is performing a reset, kbase_reset_gpu should not be called.
+ *
+ * @return See description
+ */
+mali_bool kbase_prepare_to_reset_gpu(kbase_device *kbdev);
+
+/** Reset the GPU
+ *
+ * This function should be called after kbase_prepare_to_reset_gpu iff it returns MALI_TRUE.
+ * It should never be called without a corresponding call to kbase_prepare_to_reset_gpu.
+ *
+ * After this function is called (or not called if kbase_prepare_to_reset_gpu returned MALI_FALSE),
+ * the caller should wait for kbdev->reset_waitq to be signalled to know when the reset has completed.
+ */
+void kbase_reset_gpu(kbase_device *kbdev);
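+
+/* Illustrative sketch only (not a new API): the intended pairing of the two
+ * functions above. The final wait is OS-specific and shown as a comment; it
+ * assumes the caller can block on kbdev->reset_waitq.
+ *
+ * @code
+ * if (kbase_prepare_to_reset_gpu(kbdev))
+ * {
+ *	kbase_reset_gpu(kbdev);
+ * }
+ * // Whether or not this thread issued the reset, wait for
+ * // kbdev->reset_waitq to be signalled before assuming it completed.
+ * @endcode
+ */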
+
+
+/** Returns the name associated with a Mali exception code
+ *
+ * @param[in] exception_code exception code
+ * @return name associated with the exception code
+ */
+const char *kbase_exception_name(u32 exception_code);
+
+
+#if KBASE_TRACE_ENABLE != 0
+/** Add trace values about a job-slot
+ *
+ * @note Any functions called through this macro will still be evaluated in
+ * Release builds (MALI_DEBUG=0). Therefore, when KBASE_TRACE_ENABLE == 0 any
+ * functions called to get the parameters supplied to this macro must:
+ * - be static or static inline
+ * - just return 0 and have no other statements present in the body.
+ */
+#define KBASE_TRACE_ADD_SLOT( kbdev, code, ctx, uatom, gpu_addr, jobslot ) \
+ kbasep_trace_add( kbdev, KBASE_TRACE_CODE(code), ctx, uatom, gpu_addr, \
+ KBASE_TRACE_FLAG_JOBSLOT, 0, jobslot, 0 )
+
+/** Add trace values about a job-slot, with info
+ *
+ * @note Any functions called through this macro will still be evaluated in
+ * Release builds (MALI_DEBUG=0). Therefore, when KBASE_TRACE_ENABLE == 0 any
+ * functions called to get the parameters supplied to this macro must:
+ * - be static or static inline
+ * - just return 0 and have no other statements present in the body.
+ */
+#define KBASE_TRACE_ADD_SLOT_INFO( kbdev, code, ctx, uatom, gpu_addr, jobslot, info_val ) \
+ kbasep_trace_add( kbdev, KBASE_TRACE_CODE(code), ctx, uatom, gpu_addr, \
+ KBASE_TRACE_FLAG_JOBSLOT, 0, jobslot, info_val )
+
+
+/** Add trace values about a ctx refcount
+ *
+ * @note Any functions called through this macro will still be evaluated in
+ * Release builds (MALI_DEBUG=0). Therefore, when KBASE_TRACE_ENABLE == 0 any
+ * functions called to get the parameters supplied to this macro must:
+ * - be static or static inline
+ * - just return 0 and have no other statements present in the body.
+ */
+#define KBASE_TRACE_ADD_REFCOUNT( kbdev, code, ctx, uatom, gpu_addr, refcount ) \
+ kbasep_trace_add( kbdev, KBASE_TRACE_CODE(code), ctx, uatom, gpu_addr, \
+ KBASE_TRACE_FLAG_REFCOUNT, refcount, 0, 0 )
+/** Add trace values about a ctx refcount, and info
+ *
+ * @note Any functions called through this macro will still be evaluated in
+ * Release builds (MALI_DEBUG=0). Therefore, when KBASE_TRACE_ENABLE == 0 any
+ * functions called to get the parameters supplied to this macro must:
+ * - be static or static inline
+ * - just return 0 and have no other statements present in the body.
+ */
+#define KBASE_TRACE_ADD_REFCOUNT_INFO( kbdev, code, ctx, uatom, gpu_addr, refcount, info_val ) \
+ kbasep_trace_add( kbdev, KBASE_TRACE_CODE(code), ctx, uatom, gpu_addr, \
+ KBASE_TRACE_FLAG_REFCOUNT, refcount, 0, info_val )
+
+/** Add trace values (no slot or refcount)
+ *
+ * @note Any functions called through this macro will still be evaluated in
+ * Release builds (MALI_DEBUG=0). Therefore, when KBASE_TRACE_ENABLE == 0 any
+ * functions called to get the parameters supplied to this macro must:
+ * - be static or static inline
+ * - just return 0 and have no other statements present in the body.
+ */
+#define KBASE_TRACE_ADD( kbdev, code, ctx, uatom, gpu_addr, info_val ) \
+ kbasep_trace_add( kbdev, KBASE_TRACE_CODE(code), ctx, uatom, gpu_addr, \
+ 0, 0, 0, info_val )
+
+/** Clear the trace */
+#define KBASE_TRACE_CLEAR( kbdev ) \
+ kbasep_trace_clear( kbdev )
+
+/** Dump the slot trace */
+#define KBASE_TRACE_DUMP( kbdev ) \
+ kbasep_trace_dump( kbdev )
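+
+/* Illustrative sketch only: typical use of the trace macros above, mirroring
+ * the JM_SUBMIT trace point in kbasep_8401_submit_dummy_job(). The kctx and
+ * katom arguments here are placeholders.
+ *
+ * @code
+ * KBASE_TRACE_ADD_SLOT( kbdev, JM_SUBMIT, kctx, katom, jc, js );
+ * // ... later, e.g. from a debug path:
+ * KBASE_TRACE_DUMP( kbdev );
+ * @endcode
+ */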
+
+/** PRIVATE - do not use directly. Use KBASE_TRACE_ADD() instead */
+void kbasep_trace_add(kbase_device *kbdev, kbase_trace_code code, void *ctx, void *uatom, u64 gpu_addr,
+ u8 flags, int refcount, int jobslot, u32 info_val );
+/** PRIVATE - do not use directly. Use KBASE_TRACE_CLEAR() instead */
+void kbasep_trace_clear(kbase_device *kbdev);
+#else /* KBASE_TRACE_ENABLE != 0 */
+#define KBASE_TRACE_ADD_SLOT( kbdev, code, ctx, uatom, gpu_addr, jobslot )\
+ do{\
+ CSTD_UNUSED(kbdev);\
+ CSTD_NOP(code);\
+ CSTD_UNUSED(ctx);\
+ CSTD_UNUSED(uatom);\
+ CSTD_UNUSED(gpu_addr);\
+ CSTD_UNUSED(jobslot);\
+ }while(0)
+
+#define KBASE_TRACE_ADD_SLOT_INFO( kbdev, code, ctx, uatom, gpu_addr, jobslot, info_val )\
+ do{\
+ CSTD_UNUSED(kbdev);\
+ CSTD_NOP(code);\
+ CSTD_UNUSED(ctx);\
+ CSTD_UNUSED(uatom);\
+ CSTD_UNUSED(gpu_addr);\
+ CSTD_UNUSED(jobslot);\
+ CSTD_UNUSED(info_val);\
+ CSTD_NOP(0);\
+ }while(0)
+
+#define KBASE_TRACE_ADD_REFCOUNT( kbdev, code, ctx, uatom, gpu_addr, refcount )\
+ do{\
+ CSTD_UNUSED(kbdev);\
+ CSTD_NOP(code);\
+ CSTD_UNUSED(ctx);\
+ CSTD_UNUSED(uatom);\
+ CSTD_UNUSED(gpu_addr);\
+ CSTD_UNUSED(refcount);\
+ CSTD_NOP(0);\
+ }while(0)
+
+#define KBASE_TRACE_ADD_REFCOUNT_INFO( kbdev, code, ctx, uatom, gpu_addr, refcount, info_val )\
+	do{\
+		CSTD_UNUSED(kbdev);\
+		CSTD_NOP(code);\
+		CSTD_UNUSED(ctx);\
+		CSTD_UNUSED(uatom);\
+		CSTD_UNUSED(gpu_addr);\
+		CSTD_UNUSED(refcount);\
+		CSTD_UNUSED(info_val);\
+		CSTD_NOP(0);\
+	}while(0)
+
+#define KBASE_TRACE_ADD( kbdev, code, ctx, uatom, gpu_addr, info_val )\
+	do{\
+		CSTD_UNUSED(kbdev);\
+		CSTD_NOP(code);\
+		CSTD_UNUSED(ctx);\
+		CSTD_UNUSED(uatom);\
+		CSTD_UNUSED(gpu_addr);\
+		CSTD_UNUSED(info_val);\
+		CSTD_NOP(0);\
+	}while(0)
+
+#define KBASE_TRACE_CLEAR( kbdev )\
+ do{\
+ CSTD_UNUSED(kbdev);\
+ CSTD_NOP(0);\
+ }while(0)
+#define KBASE_TRACE_DUMP( kbdev )\
+ do{\
+ CSTD_UNUSED(kbdev);\
+ CSTD_NOP(0);\
+ }while(0)
+
+#endif /* KBASE_TRACE_ENABLE != 0 */
+/** PRIVATE - do not use directly. Use KBASE_TRACE_DUMP() instead */
+void kbasep_trace_dump(kbase_device *kbdev);
+#endif
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_8401_workaround.c
+ * Functions related to working around BASE_HW_ISSUE_8401
+ */
+
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_kbase_defs.h>
+#include <kbase/src/common/mali_kbase_jm.h>
+#include <kbase/src/common/mali_kbase_8401_workaround.h>
+
+#define WORKAROUND_PAGE_OFFSET (2)
+#define URT_POINTER_INDEX (20)
+#define RMU_POINTER_INDEX (23)
+#define RSD_POINTER_INDEX (24)
+#define TSD_POINTER_INDEX (31)
+
+static const u32 compute_job_32bit_header[] =
+{
+ /* Job Descriptor Header */
+
+ /* Job Status */
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ /* Flags and Indices */
+ /* job_type = compute shader job */
+ 0x00000008, 0x00000000,
+ /* Pointer to next job */
+ 0x00000000,
+ /* Reserved */
+ 0x00000000,
+ /* Job Dimension Data */
+ 0x0000000f, 0x21040842,
+ /* Task Split */
+ 0x08000000,
+ /* Reserved */
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+
+	/* Draw Call Descriptor - 32 bit (Must be aligned to a 64-byte boundary) */
+
+ /* Flags */
+ 0x00000004,
+ /* Primary Attribute Offset */
+ 0x00000000,
+ /* Primitive Index Base Value */
+ 0x00000000,
+
+ /* Pointer To Vertex Position Array (64-byte alignment) */
+ 0x00000000,
+ /* Pointer To Uniform Remapping Table (8-byte alignment) */
+ 0,
+ /* Pointer To Image Descriptor Pointer Table */
+ 0x00000000,
+ /* Pointer To Sampler Array */
+ 0x00000000,
+ /* Pointer To Register-Mapped Uniform Data Area (16-byte alignment) */
+ 0,
+ /* Pointer To Renderer State Descriptor (64-byte alignment) */
+ 0,
+ /* Pointer To Primary Attribute Buffer Array */
+ 0x00000000,
+ /* Pointer To Primary Attribute Array */
+ 0x00000000,
+ /* Pointer To Secondary Attribute Buffer Array */
+ 0x00000000,
+ /* Pointer To Secondary Attribute Array */
+ 0x00000000,
+ /* Pointer To Viewport Descriptor */
+ 0x00000000,
+ /* Pointer To Occlusion Query Result */
+ 0x00000000,
+ /* Pointer To Thread Storage (64 byte alignment) */
+ 0,
+};
+
+
+static const u32 compute_job_32bit_urt[] =
+{
+ /* Uniform Remapping Table Entry */
+ 0, 0,
+};
+
+
+static const u32 compute_job_32bit_rmu[] =
+{
+ /* Register Mapped Uniform Data Area (16 byte aligned), an array of 128-bit
+ * register values.
+ *
+ * NOTE: this is also used as the URT pointer, so the first 16-byte entry
+ * must be all zeros.
+ *
+ * For BASE_HW_ISSUE_8987, we place 16 RMUs here, because this should only
+ * be run concurrently with other GLES jobs (i.e. FS jobs from slot 0).
+ */
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000,
+ 0x00000000, 0x00000000, 0x00000000, 0x00000000
+};
+
+static const u32 compute_job_32bit_rsd[] =
+{
+ /* Renderer State Descriptor */
+
+	/* Shader program initial PC (low) */
+ 0x00000001,
+ /* Shader program initial PC (high) */
+ 0x00000000,
+ /* Image descriptor array sizes */
+ 0x00000000,
+ /* Attribute array sizes */
+ 0x00000000,
+ /* Uniform array size and Shader Flags */
+ /* Flags set: R, D, SE, Reg Uniforms==16, FPM==OpenCL */
+ 0x42003800,
+ /* Depth bias */
+ 0x00000000,
+ /* Depth slope bias */
+ 0x00000000,
+ /* Depth bias clamp */
+ 0x00000000,
+ /* Multisample Write Mask and Flags */
+ 0x00000000,
+ /* Stencil Write Masks and Alpha parameters */
+ 0x00000000,
+ /* Stencil tests - forward facing */
+ 0x00000000,
+	/* Stencil tests - back facing */
+ 0x00000000,
+ /* Alpha Test Reference Value */
+ 0x00000000,
+ /* Thread Balancing Information */
+ 0x00000000,
+ /* Blend Parameters or Pointer (low) */
+ 0x00000000,
+ /* Blend Parameters or Pointer (high) */
+ 0x00000000,
+};
+
+static const u32 compute_job_32bit_tsd[] =
+{
+ /* Thread Storage Descriptor */
+
+ /* Thread Local Storage Sizes */
+ 0x00000000,
+ /* Workgroup Local Memory Area Flags */
+ 0x0000001f,
+ /* Pointer to Local Storage Area */
+ 0x00021000, 0x00000001,
+ /* Pointer to Workgroup Local Storage Area */
+ 0x00000000, 0x00000000,
+ /* Pointer to Shader Exception Handler */
+ 0x00000000, 0x00000000
+};
+
+static kbase_jd_atom dummy_job_atom[KBASE_8401_WORKAROUND_COMPUTEJOB_COUNT];
+
+/**
+ * Initialize the compute job structure.
+ */
+
+static void kbasep_8401_workaround_update_job_pointers(u32 *dummy_compute_job, int page_nr)
+{
+ u32 base_address = (page_nr+WORKAROUND_PAGE_OFFSET)*OSK_PAGE_SIZE;
+ u8 *dummy_job = (u8*) dummy_compute_job;
+ u8 *dummy_job_urt;
+ u8 *dummy_job_rmu;
+ u8 *dummy_job_rsd;
+ u8 *dummy_job_tsd;
+
+ OSK_ASSERT(dummy_compute_job);
+
+	/* determine where each job section goes, taking alignment restrictions into account */
+ dummy_job_urt = (u8*) ((((uintptr_t)dummy_job + sizeof(compute_job_32bit_header))+7) & ~7);
+ dummy_job_rmu = (u8*) ((((uintptr_t)dummy_job_urt + sizeof(compute_job_32bit_urt))+15) & ~15);
+ dummy_job_rsd = (u8*) ((((uintptr_t)dummy_job_rmu + sizeof(compute_job_32bit_rmu))+63) & ~63);
+ dummy_job_tsd = (u8*) ((((uintptr_t)dummy_job_rsd + sizeof(compute_job_32bit_rsd))+63) & ~63);
+
+ /* Make sure the job fits within a single page */
+ OSK_ASSERT(OSK_PAGE_SIZE > ((dummy_job_tsd+sizeof(compute_job_32bit_tsd)) - dummy_job));
+
+ /* Copy the job sections to the allocated memory */
+ memcpy(dummy_job, compute_job_32bit_header, sizeof(compute_job_32bit_header));
+ memcpy(dummy_job_urt, compute_job_32bit_urt, sizeof(compute_job_32bit_urt));
+ memcpy(dummy_job_rmu, compute_job_32bit_rmu, sizeof(compute_job_32bit_rmu));
+ memcpy(dummy_job_rsd, compute_job_32bit_rsd, sizeof(compute_job_32bit_rsd));
+ memcpy(dummy_job_tsd, compute_job_32bit_tsd, sizeof(compute_job_32bit_tsd));
+
+ /* Update header pointers */
+ *(dummy_compute_job + URT_POINTER_INDEX) = (dummy_job_urt - dummy_job) + base_address;
+ *(dummy_compute_job + RMU_POINTER_INDEX) = (dummy_job_rmu - dummy_job) + base_address;
+ *(dummy_compute_job + RSD_POINTER_INDEX) = (dummy_job_rsd - dummy_job) + base_address;
+ *(dummy_compute_job + TSD_POINTER_INDEX) = (dummy_job_tsd - dummy_job) + base_address;
+ /* Update URT pointer */
+ *((u32*)dummy_job_urt+0) = (((dummy_job_rmu - dummy_job) + base_address) << 8) & 0xffffff00;
+ *((u32*)dummy_job_urt+1) = (((dummy_job_rmu - dummy_job) + base_address) >> 24) & 0xff;
+}
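+
+/* Worked example of the round-up idiom used above: aligning an address p to a
+ * power-of-two boundary a is done with (p + (a - 1)) & ~(a - 1). For a == 16
+ * and p == 0x1005, this gives (0x1005 + 15) & ~15 == 0x1010, the next 16-byte
+ * boundary. */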
+
+/**
+ * Initialize the memory for 8401 workaround.
+ */
+
+mali_error kbasep_8401_workaround_init(kbase_device *kbdev)
+{
+ kbasep_js_device_data *js_devdata;
+ kbase_context *workaround_kctx;
+ u32 count;
+ int i;
+ u16 as_present_mask;
+
+ OSK_ASSERT(kbdev);
+ OSK_ASSERT(kbdev->workaround_kctx == NULL);
+
+ js_devdata = &kbdev->js_data;
+
+ /* For this workaround we reserve one address space to allow us to
+ * submit a special job independent of other contexts */
+ --(kbdev->nr_hw_address_spaces);
+
+ if ( kbdev->nr_user_address_spaces == (kbdev->nr_hw_address_spaces + 1) )
+ {
+ /* Only update nr_user_address_spaces if it was unchanged - to ensure
+ * HW workarounds that have modified this will still work */
+ --(kbdev->nr_user_address_spaces);
+ }
+ OSK_ASSERT( kbdev->nr_user_address_spaces <= kbdev->nr_hw_address_spaces );
+
+ /* Recalculate the free address spaces bit-pattern */
+ as_present_mask = (1U << kbdev->nr_hw_address_spaces) - 1;
+ js_devdata->as_free &= as_present_mask;
+
+ workaround_kctx = kbase_create_context(kbdev);
+ if(!workaround_kctx)
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ /* Allocate the pages required to contain the job */
+ count = kbase_phy_pages_alloc(workaround_kctx->kbdev,
+ &workaround_kctx->pgd_allocator,
+ KBASE_8401_WORKAROUND_COMPUTEJOB_COUNT,
+ kbdev->workaround_compute_job_pa);
+ if(count < KBASE_8401_WORKAROUND_COMPUTEJOB_COUNT)
+ {
+ goto page_release;
+ }
+
+ /* Get virtual address of mapped memory and write a compute job for each page */
+ for(i = 0; i < KBASE_8401_WORKAROUND_COMPUTEJOB_COUNT; i++)
+ {
+ kbdev->workaround_compute_job_va[i] = osk_kmap(kbdev->workaround_compute_job_pa[i]);
+ if(NULL == kbdev->workaround_compute_job_va[i])
+ {
+ goto page_free;
+ }
+
+ /* Generate the compute job data */
+ kbasep_8401_workaround_update_job_pointers((u32*)kbdev->workaround_compute_job_va[i], i);
+ }
+
+ /* Insert pages to the gpu mmu. */
+ kbase_mmu_insert_pages(workaround_kctx,
+ /* vpfn = page number */
+ (u64)WORKAROUND_PAGE_OFFSET,
+ /* physical address */
+ kbdev->workaround_compute_job_pa,
+ /* number of pages */
+ KBASE_8401_WORKAROUND_COMPUTEJOB_COUNT,
+ /* flags */
+ KBASE_REG_GPU_RD|KBASE_REG_CPU_RD|KBASE_REG_CPU_WR|KBASE_REG_GPU_WR);
+
+ kbdev->workaround_kctx = workaround_kctx;
+ return MALI_ERROR_NONE;
+page_free:
+ while(i--)
+ {
+ osk_kunmap(kbdev->workaround_compute_job_pa[i], kbdev->workaround_compute_job_va[i]);
+ }
+page_release:
+ kbase_phy_pages_free(kbdev, &workaround_kctx->pgd_allocator, count, kbdev->workaround_compute_job_pa);
+ kbase_destroy_context(workaround_kctx);
+
+ return MALI_ERROR_FUNCTION_FAILED;
+}
+
+/**
+ * Free up the memory used by 8401 workaround.
+ **/
+
+void kbasep_8401_workaround_term(kbase_device *kbdev)
+{
+ kbasep_js_device_data *js_devdata;
+ int i;
+ u16 restored_as;
+
+ OSK_ASSERT(kbdev);
+ OSK_ASSERT(kbdev->workaround_kctx);
+
+ js_devdata = &kbdev->js_data;
+
+ for(i = 0; i < KBASE_8401_WORKAROUND_COMPUTEJOB_COUNT; i++)
+ {
+ osk_kunmap(kbdev->workaround_compute_job_pa[i], kbdev->workaround_compute_job_va[i]);
+ }
+
+ kbase_phy_pages_free(kbdev, &kbdev->workaround_kctx->pgd_allocator, KBASE_8401_WORKAROUND_COMPUTEJOB_COUNT, kbdev->workaround_compute_job_pa);
+
+ kbase_destroy_context(kbdev->workaround_kctx);
+ kbdev->workaround_kctx = NULL;
+
+ /* Free up the workaround address space */
+ kbdev->nr_hw_address_spaces++;
+
+ if ( kbdev->nr_user_address_spaces == (kbdev->nr_hw_address_spaces - 1) )
+ {
+ /* Only update nr_user_address_spaces if it was unchanged - to ensure
+ * HW workarounds that have modified this will still work */
+ ++(kbdev->nr_user_address_spaces);
+ }
+ OSK_ASSERT( kbdev->nr_user_address_spaces <= kbdev->nr_hw_address_spaces );
+
+	/* Recalculate the free address spaces bit-pattern. Now that the count has
+	 * been restored, the reserved address space is the highest one, i.e. bit
+	 * (nr_hw_address_spaces - 1). */
+	restored_as = (1U << (kbdev->nr_hw_address_spaces - 1));
+	js_devdata->as_free |= restored_as;
+}
+
+/**
+ * Submit the 8401 workaround job.
+ *
+ * Important for BASE_HW_ISSUE_8987: This job always uses 16 RMUs
+ * - Therefore, on slot[1] it will always use the same number of RMUs as another
+ * GLES job.
+ * - On slot[2], no other job (GLES or otherwise) will be running on the
+ * cores, by virtue of it being slot[2]. Therefore, any value of RMUs is
+ * acceptable.
+ */
+void kbasep_8401_submit_dummy_job(kbase_device *kbdev, int js)
+{
+ u32 cfg;
+ mali_addr64 jc;
+ u32 pgd_high;
+
+ /* While this workaround is active we reserve the last address space just for submitting the dummy jobs */
+ int as = kbdev->nr_hw_address_spaces;
+
+ /* Don't issue compute jobs on job slot 0 */
+ OSK_ASSERT(js != 0);
+ OSK_ASSERT(js < KBASE_8401_WORKAROUND_COMPUTEJOB_COUNT);
+
+ /* Job chain GPU address */
+	jc = (js+WORKAROUND_PAGE_OFFSET)*OSK_PAGE_SIZE; /* GPU phys address (see kbase_mmu_insert_pages call in kbasep_8401_workaround_init) */
+
+ /* Clear the job status words which may contain values from a previous job completion */
+ memset(kbdev->workaround_compute_job_va[js], 0, 4*sizeof(u32));
+
+ /* Get the affinity of the previous job */
+ dummy_job_atom[js].affinity = ((u64)kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_AFFINITY_LO), NULL)) |
+ (((u64)kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_AFFINITY_HI), NULL)) << 32);
+
+ /* Don't submit a compute job if the affinity was previously zero (i.e. no jobs have run yet on this slot) */
+ if(!dummy_job_atom[js].affinity)
+ {
+ return;
+ }
+
+ /* Ensure that our page tables are programmed into the MMU */
+ kbase_reg_write(kbdev, MMU_AS_REG(as, ASn_TRANSTAB_LO),
+ (kbdev->workaround_kctx->pgd & ASn_TRANSTAB_ADDR_SPACE_MASK) | ASn_TRANSTAB_READ_INNER
+ | ASn_TRANSTAB_ADRMODE_TABLE, NULL);
+
+ /* Need to use a conditional expression to avoid "right shift count >= width of type"
+	 * error when using an if statement - although the sizeof condition is evaluated at compile
+ * time the unused branch is not removed until after it is type-checked and the error
+ * produced.
+ */
+ pgd_high = sizeof(kbdev->workaround_kctx->pgd) > 4 ? (kbdev->workaround_kctx->pgd >> 32) : 0;
+ kbase_reg_write(kbdev, MMU_AS_REG(as, ASn_TRANSTAB_HI), pgd_high, NULL);
+
+ kbase_reg_write(kbdev, MMU_AS_REG(as, ASn_MEMATTR_LO), ASn_MEMATTR_IMPL_DEF_CACHE_POLICY, NULL);
+ kbase_reg_write(kbdev, MMU_AS_REG(as, ASn_MEMATTR_HI), ASn_MEMATTR_IMPL_DEF_CACHE_POLICY, NULL);
+ kbase_reg_write(kbdev, MMU_AS_REG(as, ASn_COMMAND), ASn_COMMAND_UPDATE, NULL);
+
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_LO), jc & 0xFFFFFFFF, NULL);
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_HI), jc >> 32, NULL);
+
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_AFFINITY_NEXT_LO), dummy_job_atom[js].affinity & 0xFFFFFFFF, NULL);
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_AFFINITY_NEXT_HI), dummy_job_atom[js].affinity >> 32, NULL);
+
+ /* start MMU, medium priority, cache clean/flush on end, clean/flush on start */
+ cfg = as | JSn_CONFIG_END_FLUSH_CLEAN_INVALIDATE | JSn_CONFIG_START_MMU
+ | JSn_CONFIG_START_FLUSH_CLEAN_INVALIDATE | JSn_CONFIG_THREAD_PRI(8);
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_CONFIG_NEXT), cfg, NULL);
+
+ KBASE_TRACE_ADD_SLOT( kbdev, JM_SUBMIT, NULL, 0, jc, js );
+
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_COMMAND_NEXT), JSn_COMMAND_START, NULL);
+ /* Report that the job has been submitted */
+ kbasep_jm_enqueue_submit_slot(&kbdev->jm_slots[js], &dummy_job_atom[js]);
+}
+
+/**
+ * Check if the katom given is a dummy compute job.
+ */
+mali_bool kbasep_8401_is_workaround_job(kbase_jd_atom *katom)
+{
+ int i;
+
+ /* Note: we don't check the first dummy_job_atom as slot 0 is never used for the workaround */
+ for(i = 1; i < KBASE_8401_WORKAROUND_COMPUTEJOB_COUNT; i++)
+ {
+ if(katom == &dummy_job_atom[i])
+ {
+ /* This is a dummy job */
+ return MALI_TRUE;
+ }
+ }
+
+ /* This is a real job */
+ return MALI_FALSE;
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_8401_workaround.h
+ * Functions related to working around BASE_HW_ISSUE_8401
+ */
+
+#ifndef _KBASE_8401_WORKAROUND_H_
+#define _KBASE_8401_WORKAROUND_H_
+
+mali_error kbasep_8401_workaround_init(kbase_device *kbdev);
+void kbasep_8401_workaround_term(kbase_device *kbdev);
+void kbasep_8401_submit_dummy_job(kbase_device *kbdev, int js);
+mali_bool kbasep_8401_is_workaround_job(kbase_jd_atom *katom);
+
+#endif /* _KBASE_8401_WORKAROUND_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_cache_policy.c
+ * Cache Policy API.
+ */
+
+#include "mali_kbase_cache_policy.h"
+
+/*
+ * The output flags should be a combination of the following values:
+ * KBASE_REG_CPU_CACHED: CPU cache should be enabled
+ * KBASE_REG_GPU_CACHED: GPU cache should be enabled
+ *
+ * The input flags may contain a combination of hints:
+ * BASE_MEM_HINT_CPU_RD: region heavily read CPU side
+ * BASE_MEM_HINT_CPU_WR: region heavily written CPU side
+ * BASE_MEM_HINT_GPU_RD: region heavily read GPU side
+ * BASE_MEM_HINT_GPU_WR: region heavily written GPU side
+ */
+u32 kbase_cache_enabled(u32 flags, u32 nr_pages)
+{
+ u32 cache_flags = 0;
+
+ CSTD_UNUSED(nr_pages);
+
+ /* The CPU cache should be enabled for regions heavily read and written
+ * from the CPU side
+ */
+#if !MALI_UNCACHED
+ if ((flags & BASE_MEM_HINT_CPU_RD) && (flags & BASE_MEM_HINT_CPU_WR))
+ {
+ cache_flags |= KBASE_REG_CPU_CACHED;
+ }
+#endif
+
+ /* The GPU cache should be enabled for regions heavily read and written
+ * from the GPU side
+ */
+ if ((flags & BASE_MEM_HINT_GPU_RD) && (flags & BASE_MEM_HINT_GPU_WR))
+ {
+ cache_flags |= KBASE_REG_GPU_CACHED;
+ }
+
+ return cache_flags;
+}
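+
+/* Illustrative sketch only: with all four hints set (and MALI_UNCACHED == 0),
+ * both caches are enabled for the region.
+ *
+ * @code
+ * u32 flags = BASE_MEM_HINT_CPU_RD | BASE_MEM_HINT_CPU_WR |
+ *             BASE_MEM_HINT_GPU_RD | BASE_MEM_HINT_GPU_WR;
+ * u32 cache_flags = kbase_cache_enabled(flags, nr_pages);
+ * // cache_flags == (KBASE_REG_CPU_CACHED | KBASE_REG_GPU_CACHED)
+ * @endcode
+ */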
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_cache_policy.h
+ * Cache Policy API.
+ */
+
+#ifndef _KBASE_CACHE_POLICY_H_
+#define _KBASE_CACHE_POLICY_H_
+
+#include <malisw/mali_malisw.h>
+#include "mali_kbase.h"
+#include <kbase/mali_base_kernel.h>
+
+/**
+ * @brief Choose the cache policy for a specific region
+ *
+ * Tells whether the CPU and GPU caches should be enabled or not for a specific region.
+ * This function can be modified to customize the cache policy depending on the flags
+ * and size of the region.
+ *
+ * @param[in] flags flags describing attributes of the region
+ * @param[in] nr_pages total number of pages (backed or not) for the region
+ *
+ * @return a combination of KBASE_REG_CPU_CACHED and KBASE_REG_GPU_CACHED depending
+ * on the cache policy
+ */
+u32 kbase_cache_enabled(u32 flags, u32 nr_pages);
+
+#endif /* _KBASE_CACHE_POLICY_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_kbase_defs.h>
+#include <kbase/src/common/mali_kbase_cpuprops.h>
+#include <osk/mali_osk.h>
+#if MALI_USE_UMP == 1
+#include <ump/ump_common.h>
+#endif /* MALI_USE_UMP == 1 */
+
+/* Specifies how many attributes are permitted in the config (excluding the terminating attribute).
+ * This is used in the validation function so we can detect whether the configuration is properly terminated. This value can be
+ * changed if more attributes need to be introduced or if many memory regions need to be defined. */
+#define ATTRIBUTE_COUNT_MAX 32
+
+/* right now we allow only 2 memory attributes (excluding termination attribute) */
+#define MEMORY_ATTRIBUTE_COUNT_MAX 2
+
+/* Limits for GPU frequency configuration parameters. These are used for config validation. */
+#define MAX_GPU_ALLOWED_FREQ_KHZ 1000000
+#define MIN_GPU_ALLOWED_FREQ_KHZ 1
+
+/* Default IRQ throttle time. This is the default desired minimum time between two consecutive
+ * interrupts from the GPU. The IRQ throttle GPU register is set based on this value. */
+#define DEFAULT_IRQ_THROTTLE_TIME_US 20
+
+/*** Begin Scheduling defaults ***/
+
+/**
+ * Default scheduling tick granularity, in nanoseconds
+ */
+#define DEFAULT_JS_SCHEDULING_TICK_NS 100000000u /* 100ms */
+
+/**
+ * Default minimum number of scheduling ticks before jobs are soft-stopped.
+ *
+ * This defines the time-slice for a job (which may be different from that of a context)
+ */
+#define DEFAULT_JS_SOFT_STOP_TICKS 1 /* Between 0.1 and 0.2s before soft-stop */
+
+/**
+ * Default minimum number of scheduling ticks before Soft-Stoppable
+ * (BASE_JD_REQ_NSS bit clear) jobs are hard-stopped
+ */
+#define DEFAULT_JS_HARD_STOP_TICKS_SS_HW_ISSUE_8408 12 /* 1.2s before hard-stop, for a certain GLES2 test at 128x128 (bound by combined vertex+tiler job) */
+#define DEFAULT_JS_HARD_STOP_TICKS_SS 2 /* Between 0.2 and 0.3s before hard-stop */
+
+/**
+ * Default minimum number of scheduling ticks before Non-Soft-Stoppable
+ * (BASE_JD_REQ_NSS bit set) jobs are hard-stopped
+ */
+#define DEFAULT_JS_HARD_STOP_TICKS_NSS 600 /* 60s @ 100ms tick */
+
+/**
+ * Default minimum number of scheduling ticks before the GPU is reset
+ * to clear a "stuck" Soft-Stoppable job
+ */
+#define DEFAULT_JS_RESET_TICKS_SS_HW_ISSUE_8408 18 /* 1.8s before resetting GPU, for a certain GLES2 test at 128x128 (bound by combined vertex+tiler job) */
+#define DEFAULT_JS_RESET_TICKS_SS 3 /* 0.3-0.4s before GPU is reset */
+
+/**
+ * Default minimum number of scheduling ticks before the GPU is reset
+ * to clear a "stuck" Non-Soft-Stoppable job
+ */
+#define DEFAULT_JS_RESET_TICKS_NSS 601 /* 60.1s @ 100ms tick */
+
+/**
+ * Number of milliseconds given for other jobs on the GPU to be
+ * soft-stopped when the GPU needs to be reset.
+ */
+#define DEFAULT_JS_RESET_TIMEOUT_MS 3000
+
+/**
+ * Default timeslice that a context is scheduled in for, in nanoseconds.
+ *
+ * When a context has used up this amount of time across its jobs, it is
+ * scheduled out to let another run.
+ *
+ * @note the resolution is nanoseconds (ns) here, because that's the format
+ * often used by the OS.
+ */
+#define DEFAULT_JS_CTX_TIMESLICE_NS 50000000 /* 0.05s - at 20fps a ctx does at least 1 frame before being scheduled out. At 40fps, 2 frames, etc */
+
+/**
+ * Default initial runtime of a context for CFS, in ticks.
+ *
+ * This value is relative to that of the least-run context, and defines where
+ * in the CFS queue a new context is added.
+ */
+#define DEFAULT_JS_CFS_CTX_RUNTIME_INIT_SLICES 1
+
+/**
+ * Default minimum runtime value of a context for CFS, in ticks.
+ *
+ * This value is relative to that of the least-run context. This prevents
+ * "stored-up timeslices" DoS attacks.
+ */
+#define DEFAULT_JS_CFS_CTX_RUNTIME_MIN_SLICES 2
+
+/**
+ * Default setting for whether to prefer security or performance.
+ *
+ * Currently affects only r0p0-15dev0 HW and earlier.
+ */
+#define DEFAULT_SECURE_BUT_LOSS_OF_PERFORMANCE MALI_FALSE
+
+/*** End Scheduling defaults ***/
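+
+/* Worked example of the tick arithmetic above: with the default 100ms tick
+ * (DEFAULT_JS_SCHEDULING_TICK_NS == 100000000ns), a soft-stop after
+ * DEFAULT_JS_SOFT_STOP_TICKS == 1 tick lands between 0.1s and 0.2s of job
+ * runtime, and a "stuck" NSS job triggers a reset after
+ * DEFAULT_JS_RESET_TICKS_NSS == 601 ticks, i.e. roughly 60.1s. */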
+
+/**
+ * Default value for KBASE_CONFIG_ATTR_CPU_SPEED_FUNC.
+ * Points to @ref kbase_cpuprops_get_default_clock_speed.
+ */
+#define DEFAULT_CPU_SPEED_FUNC ((uintptr_t)kbase_cpuprops_get_default_clock_speed)
+
+#if (!defined(MALI_KBASE_USERSPACE) || !MALI_KBASE_USERSPACE) && (!MALI_LICENSE_IS_GPL || MALI_FAKE_PLATFORM_DEVICE)
+
+extern kbase_platform_config platform_config;
+kbase_platform_config *kbasep_get_platform_config(void)
+{
+ return &platform_config;
+}
+#endif /* (!defined(MALI_KBASE_USERSPACE) || !MALI_KBASE_USERSPACE) && (!MALI_LICENSE_IS_GPL || MALI_FAKE_PLATFORM_DEVICE) */
+
+int kbasep_get_config_attribute_count(const kbase_attribute *attributes)
+{
+ int count = 1;
+
+ OSK_ASSERT(attributes != NULL);
+
+ while (attributes->id != KBASE_CONFIG_ATTR_END)
+ {
+ attributes++;
+ count++;
+ }
+
+ return count;
+}
+
+int kbasep_get_config_attribute_count_by_id(const kbase_attribute *attributes, int attribute_id)
+{
+ int count = 0;
+ OSK_ASSERT(attributes != NULL);
+
+ while (attributes->id != KBASE_CONFIG_ATTR_END)
+ {
+ if (attributes->id == attribute_id)
+ {
+ count++;
+ }
+ attributes++;
+ }
+
+ return count;
+}
+
+static const char* midgard_type_strings[] =
+{
+	"mali-t6xm",
+	"mali-t6f1",
+	"mali-t601",
+	"mali-t604",
+	"mali-t608"
+};
+
+const char *kbasep_midgard_type_to_string(kbase_midgard_type midgard_type)
+{
+ OSK_ASSERT(midgard_type < KBASE_MALI_COUNT);
+
+ return midgard_type_strings[midgard_type];
+}
+
+const kbase_attribute *kbasep_get_next_attribute(const kbase_attribute *attributes, int attribute_id)
+{
+ OSK_ASSERT(attributes != NULL);
+
+ while (attributes->id != KBASE_CONFIG_ATTR_END)
+ {
+ if (attributes->id == attribute_id)
+ {
+ return attributes;
+ }
+ attributes++;
+ }
+ return NULL;
+}
+KBASE_EXPORT_TEST_API(kbasep_get_next_attribute)
+
+uintptr_t kbasep_get_config_value(struct kbase_device *kbdev, const kbase_attribute *attributes, int attribute_id)
+{
+ const kbase_attribute *attr;
+
+ OSK_ASSERT(attributes != NULL);
+
+ attr = kbasep_get_next_attribute(attributes, attribute_id);
+ if (attr != NULL)
+ {
+ return attr->data;
+ }
+
+ /* default values */
+ switch (attribute_id)
+ {
+ case KBASE_CONFIG_ATTR_MEMORY_PER_PROCESS_LIMIT:
+ return (uintptr_t)-1;
+#if MALI_USE_UMP == 1
+ case KBASE_CONFIG_ATTR_UMP_DEVICE:
+ return UMP_DEVICE_W_SHIFT;
+#endif /* MALI_USE_UMP == 1 */
+ case KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_MAX:
+ return (uintptr_t)-1;
+ case KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_PERF_GPU:
+ return KBASE_MEM_PERF_NORMAL;
+ case KBASE_CONFIG_ATTR_GPU_IRQ_THROTTLE_TIME_US:
+ return DEFAULT_IRQ_THROTTLE_TIME_US;
+ /* Begin scheduling defaults */
+ case KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS:
+ return DEFAULT_JS_SCHEDULING_TICK_NS;
+ case KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS:
+ return DEFAULT_JS_SOFT_STOP_TICKS;
+ case KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS:
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8408))
+ {
+ return DEFAULT_JS_HARD_STOP_TICKS_SS_HW_ISSUE_8408;
+ }
+ else
+ {
+ return DEFAULT_JS_HARD_STOP_TICKS_SS;
+ }
+ case KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS:
+ return DEFAULT_JS_HARD_STOP_TICKS_NSS;
+ case KBASE_CONFIG_ATTR_JS_CTX_TIMESLICE_NS:
+ return DEFAULT_JS_CTX_TIMESLICE_NS;
+ case KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_INIT_SLICES:
+ return DEFAULT_JS_CFS_CTX_RUNTIME_INIT_SLICES;
+ case KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_MIN_SLICES:
+ return DEFAULT_JS_CFS_CTX_RUNTIME_MIN_SLICES;
+ case KBASE_CONFIG_ATTR_JS_RESET_TICKS_SS:
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8408))
+ {
+ return DEFAULT_JS_RESET_TICKS_SS_HW_ISSUE_8408;
+ }
+ else
+ {
+ return DEFAULT_JS_RESET_TICKS_SS;
+ }
+ case KBASE_CONFIG_ATTR_JS_RESET_TICKS_NSS:
+ return DEFAULT_JS_RESET_TICKS_NSS;
+ case KBASE_CONFIG_ATTR_JS_RESET_TIMEOUT_MS:
+ return DEFAULT_JS_RESET_TIMEOUT_MS;
+ /* End scheduling defaults */
+ case KBASE_CONFIG_ATTR_POWER_MANAGEMENT_CALLBACKS:
+ return 0;
+ case KBASE_CONFIG_ATTR_PLATFORM_FUNCS:
+ return 0;
+ case KBASE_CONFIG_ATTR_SECURE_BUT_LOSS_OF_PERFORMANCE:
+ return DEFAULT_SECURE_BUT_LOSS_OF_PERFORMANCE;
+ case KBASE_CONFIG_ATTR_CPU_SPEED_FUNC:
+ return DEFAULT_CPU_SPEED_FUNC;
+ default:
+ OSK_PRINT_ERROR(OSK_BASE_CORE,
+ "kbasep_get_config_value. Cannot get value of attribute with id=%d and no default value defined",
+ attribute_id);
+ return 0;
+ }
+}
+KBASE_EXPORT_TEST_API(kbasep_get_config_value)
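+
+/* Illustrative sketch only (the frequency values are hypothetical, and the
+ * { id, data } initializer order is assumed from the accesses above): a
+ * platform configuration is a KBASE_CONFIG_ATTR_END-terminated array, and
+ * kbasep_get_config_value() falls back to the defaults above for any
+ * attribute that is not listed.
+ *
+ * @code
+ * static kbase_attribute config_attributes[] =
+ * {
+ *	{ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MIN, 100000 },
+ *	{ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MAX, 500000 },
+ *	{ KBASE_CONFIG_ATTR_END, 0 }
+ * };
+ *
+ * // Not present in the list, so the default is returned:
+ * uintptr_t ticks = kbasep_get_config_value(kbdev, config_attributes,
+ *	KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS);
+ * @endcode
+ */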
+
+mali_bool kbasep_platform_device_init(kbase_device *kbdev)
+{
+ kbase_platform_funcs_conf *platform_funcs;
+
+ platform_funcs = (kbase_platform_funcs_conf *) kbasep_get_config_value(kbdev, kbdev->config_attributes, KBASE_CONFIG_ATTR_PLATFORM_FUNCS);
+ if(platform_funcs)
+ {
+ if(platform_funcs->platform_init_func)
+ {
+ return platform_funcs->platform_init_func(kbdev);
+ }
+ }
+ return MALI_TRUE;
+}
+
+void kbasep_platform_device_term(kbase_device *kbdev)
+{
+ kbase_platform_funcs_conf *platform_funcs;
+
+ platform_funcs = (kbase_platform_funcs_conf *) kbasep_get_config_value(kbdev, kbdev->config_attributes, KBASE_CONFIG_ATTR_PLATFORM_FUNCS);
+ if(platform_funcs)
+ {
+ if(platform_funcs->platform_term_func)
+ {
+ platform_funcs->platform_term_func(kbdev);
+ }
+ }
+}
+
+void kbasep_get_memory_performance(const kbase_memory_resource *resource, kbase_memory_performance *cpu_performance,
+ kbase_memory_performance *gpu_performance)
+{
+ kbase_attribute *attributes;
+
+ OSK_ASSERT(resource != NULL);
+ OSK_ASSERT(cpu_performance != NULL );
+ OSK_ASSERT(gpu_performance != NULL);
+
+ attributes = resource->attributes;
+ *cpu_performance = *gpu_performance = KBASE_MEM_PERF_NORMAL; /* default performance */
+
+ if (attributes == NULL)
+ {
+ return;
+ }
+
+ while (attributes->id != KBASE_CONFIG_ATTR_END)
+ {
+ if (attributes->id == KBASE_MEM_ATTR_PERF_GPU)
+ {
+ *gpu_performance = (kbase_memory_performance) attributes->data;
+ }
+ else if (attributes->id == KBASE_MEM_ATTR_PERF_CPU)
+ {
+ *cpu_performance = (kbase_memory_performance) attributes->data;
+ }
+ attributes++;
+ }
+}
+
+#if MALI_USE_UMP == 1
+static mali_bool kbasep_validate_ump_device(int ump_device)
+{
+ mali_bool valid;
+
+ switch (ump_device)
+ {
+ case UMP_DEVICE_W_SHIFT:
+ case UMP_DEVICE_X_SHIFT:
+ case UMP_DEVICE_Y_SHIFT:
+ case UMP_DEVICE_Z_SHIFT:
+ valid = MALI_TRUE;
+ break;
+ default:
+ valid = MALI_FALSE;
+ break;
+ }
+ return valid;
+}
+#endif /* MALI_USE_UMP == 1 */
+
+static mali_bool kbasep_validate_memory_performance(kbase_memory_performance performance)
+{
+ return performance <= KBASE_MEM_PERF_MAX_VALUE;
+}
+
+static mali_bool kbasep_validate_memory_resource(const kbase_memory_resource *memory_resource)
+{
+ OSK_ASSERT(memory_resource != NULL);
+
+ if (memory_resource->name == NULL)
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Unnamed memory region found");
+ return MALI_FALSE;
+ }
+
+ if (memory_resource->base & ((1 << OSK_PAGE_SHIFT) - 1))
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Base address of \"%s\" memory region is not page aligned", memory_resource->name);
+ return MALI_FALSE;
+ }
+
+ if (memory_resource->size & ((1 << OSK_PAGE_SHIFT) - 1))
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Size of \"%s\" memory region is not a multiple of page size", memory_resource->name);
+ return MALI_FALSE;
+ }
+
+ if (memory_resource->attributes != NULL) /* we allow NULL attribute list */
+ {
+ int i;
+
+ for (i = 0; memory_resource->attributes[i].id != KBASE_MEM_ATTR_END; i++)
+ {
+ if (i >= MEMORY_ATTRIBUTE_COUNT_MAX)
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "More than MEMORY_ATTRIBUTE_COUNT_MAX=%d configuration attributes defined. Is memory attribute list properly terminated?",
+ MEMORY_ATTRIBUTE_COUNT_MAX);
+ return MALI_FALSE;
+ }
+ switch(memory_resource->attributes[i].id)
+ {
+ case KBASE_MEM_ATTR_PERF_CPU:
+ if (MALI_TRUE != kbasep_validate_memory_performance(
+ (kbase_memory_performance)memory_resource->attributes[i].data))
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "CPU performance of \"%s\" region is invalid: %d",
+ memory_resource->name, (kbase_memory_performance)memory_resource->attributes[i].data);
+ return MALI_FALSE;
+ }
+ break;
+
+ case KBASE_MEM_ATTR_PERF_GPU:
+ if (MALI_TRUE != kbasep_validate_memory_performance(
+ (kbase_memory_performance)memory_resource->attributes[i].data))
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "GPU performance of \"%s\" region is invalid: %d",
+ memory_resource->name, (kbase_memory_performance)memory_resource->attributes[i].data);
+ return MALI_FALSE;
+ }
+ break;
+ default:
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Invalid memory attribute found in \"%s\" memory region: %d",
+ memory_resource->name, memory_resource->attributes[i].id);
+ return MALI_FALSE;
+ }
+ }
+ }
+
+ return MALI_TRUE;
+}
+
+
+static mali_bool kbasep_validate_gpu_clock_freq(kbase_device *kbdev, const kbase_attribute *attributes)
+{
+ uintptr_t freq_min = kbasep_get_config_value(kbdev, attributes, KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MIN);
+ uintptr_t freq_max = kbasep_get_config_value(kbdev, attributes, KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MAX);
+
+ if ((freq_min > MAX_GPU_ALLOWED_FREQ_KHZ) ||
+ (freq_min < MIN_GPU_ALLOWED_FREQ_KHZ) ||
+ (freq_max > MAX_GPU_ALLOWED_FREQ_KHZ) ||
+ (freq_max < MIN_GPU_ALLOWED_FREQ_KHZ) ||
+ (freq_min > freq_max))
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Invalid GPU frequencies found in configuration: min=%ldkHz, max=%ldkHz.", freq_min, freq_max);
+ return MALI_FALSE;
+ }
+
+ return MALI_TRUE;
+}
+
+static mali_bool kbasep_validate_pm_callback(const kbase_pm_callback_conf *callbacks)
+{
+ if (callbacks == NULL)
+ {
+ /* Having no callbacks is valid */
+ return MALI_TRUE;
+ }
+ if ((callbacks->power_off_callback != NULL && callbacks->power_on_callback == NULL) ||
+ (callbacks->power_off_callback == NULL && callbacks->power_on_callback != NULL))
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Invalid power management callbacks: Only one of power_off_callback and power_on_callback was specified");
+ return MALI_FALSE;
+ }
+ return MALI_TRUE;
+}
+
+static mali_bool kbasep_validate_cpu_speed_func(kbase_cpuprops_clock_speed_function fcn)
+{
+ return fcn != NULL;
+}
+
+mali_bool kbasep_validate_configuration_attributes(kbase_device *kbdev, const kbase_attribute *attributes)
+{
+ int i;
+ mali_bool had_gpu_freq_min = MALI_FALSE, had_gpu_freq_max = MALI_FALSE;
+
+ OSK_ASSERT(attributes);
+
+ for (i = 0; attributes[i].id != KBASE_CONFIG_ATTR_END; i++)
+ {
+ if (i >= ATTRIBUTE_COUNT_MAX)
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "More than ATTRIBUTE_COUNT_MAX=%d configuration attributes defined. Is attribute list properly terminated?",
+ ATTRIBUTE_COUNT_MAX);
+ return MALI_FALSE;
+ }
+
+ switch (attributes[i].id)
+ {
+ case KBASE_CONFIG_ATTR_MEMORY_RESOURCE:
+ if (MALI_FALSE == kbasep_validate_memory_resource((kbase_memory_resource *)attributes[i].data))
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Invalid memory region found in configuration");
+ return MALI_FALSE;
+ }
+ break;
+
+ case KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_MAX:
+ /* Some shared memory is required for GPU page tables, see MIDBASE-1534 */
+ if ( 0 == attributes[i].data )
+ {
+				OSK_PRINT_WARN(OSK_BASE_CORE, "OS Shared Memory Maximum is set to 0, which is not supported");
+ return MALI_FALSE;
+ }
+ break;
+
+ case KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_PERF_GPU:
+ if (MALI_FALSE == kbasep_validate_memory_performance((kbase_memory_performance)attributes[i].data))
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Shared OS memory GPU performance attribute has invalid value: %d",
+ (kbase_memory_performance)attributes[i].data);
+ return MALI_FALSE;
+ }
+ break;
+
+ case KBASE_CONFIG_ATTR_MEMORY_PER_PROCESS_LIMIT:
+ /* any value is allowed */
+ break;
+#if MALI_USE_UMP == 1
+ case KBASE_CONFIG_ATTR_UMP_DEVICE:
+ if (MALI_FALSE == kbasep_validate_ump_device(attributes[i].data))
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Unknown UMP device found in configuration: %d",
+ (int)attributes[i].data);
+ return MALI_FALSE;
+ }
+ break;
+#endif /* MALI_USE_UMP == 1 */
+
+ case KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MIN:
+ had_gpu_freq_min = MALI_TRUE;
+ if (MALI_FALSE == kbasep_validate_gpu_clock_freq(kbdev, attributes))
+ {
+ /* Warning message handled by kbasep_validate_gpu_clock_freq() */
+ return MALI_FALSE;
+ }
+ break;
+
+ case KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MAX:
+ had_gpu_freq_max = MALI_TRUE;
+ if (MALI_FALSE == kbasep_validate_gpu_clock_freq(kbdev, attributes))
+ {
+ /* Warning message handled by kbasep_validate_gpu_clock_freq() */
+ return MALI_FALSE;
+ }
+ break;
+
+ /* Only non-zero unsigned 32-bit values accepted */
+ case KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS:
+ #if CSTD_CPU_64BIT
+ if ( attributes[i].data == 0u || (u64)attributes[i].data > (u64)U32_MAX )
+ #else
+ if ( attributes[i].data == 0u )
+ #endif
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Invalid Job Scheduling Configuration attribute for "
+					"KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS: %d",
+ (int)attributes[i].data);
+ return MALI_FALSE;
+ }
+ break;
+
+ /* All these Job Scheduling attributes are FALLTHROUGH: only unsigned 32-bit values accepted */
+ case KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS:
+ case KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS:
+ case KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS:
+ case KBASE_CONFIG_ATTR_JS_RESET_TICKS_SS:
+ case KBASE_CONFIG_ATTR_JS_RESET_TICKS_NSS:
+ case KBASE_CONFIG_ATTR_JS_RESET_TIMEOUT_MS:
+ case KBASE_CONFIG_ATTR_JS_CTX_TIMESLICE_NS:
+ case KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_INIT_SLICES:
+ case KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_MIN_SLICES:
+ #if CSTD_CPU_64BIT
+ if ( (u64)attributes[i].data > (u64)U32_MAX )
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Job Scheduling Configuration attribute exceeds 32-bits: "
+ "id==%d val==%d",
+ attributes[i].id, (int)attributes[i].data);
+ return MALI_FALSE;
+ }
+ #endif
+ break;
+
+ case KBASE_CONFIG_ATTR_GPU_IRQ_THROTTLE_TIME_US:
+ #if CSTD_CPU_64BIT
+ if ( (u64)attributes[i].data > (u64)U32_MAX )
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "IRQ throttle time attribute exceeds 32-bits: "
+ "id==%d val==%d",
+ attributes[i].id, (int)attributes[i].data);
+ return MALI_FALSE;
+ }
+ #endif
+ break;
+
+ case KBASE_CONFIG_ATTR_POWER_MANAGEMENT_CALLBACKS:
+ if (MALI_FALSE == kbasep_validate_pm_callback((kbase_pm_callback_conf*)attributes[i].data))
+ {
+ /* Warning message handled by kbasep_validate_pm_callback() */
+ return MALI_FALSE;
+ }
+ break;
+
+ case KBASE_CONFIG_ATTR_SECURE_BUT_LOSS_OF_PERFORMANCE:
+ if ( attributes[i].data != MALI_TRUE && attributes[i].data != MALI_FALSE )
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE,
+ "Value for KBASE_CONFIG_ATTR_SECURE_BUT_LOSS_OF_PERFORMANCE was not "
+ "MALI_TRUE or MALI_FALSE: %u",
+ (unsigned int)attributes[i].data);
+ return MALI_FALSE;
+ }
+ break;
+
+ case KBASE_CONFIG_ATTR_CPU_SPEED_FUNC:
+ if (MALI_FALSE == kbasep_validate_cpu_speed_func((kbase_cpuprops_clock_speed_function)attributes[i].data))
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Invalid function pointer in KBASE_CONFIG_ATTR_CPU_SPEED_FUNC");
+ return MALI_FALSE;
+ }
+ break;
+
+ case KBASE_CONFIG_ATTR_PLATFORM_FUNCS:
+ /* any value is allowed */
+ break;
+
+ default:
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Invalid attribute found in configuration: %d", attributes[i].id);
+ return MALI_FALSE;
+ }
+ }
+
+ if(!had_gpu_freq_min)
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Configuration does not include mandatory attribute KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MIN");
+ return MALI_FALSE;
+ }
+
+ if(!had_gpu_freq_max)
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Configuration does not include mandatory attribute KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MAX");
+ return MALI_FALSE;
+ }
+
+ return MALI_TRUE;
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_context.c
+ * Base kernel context APIs
+ */
+
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_midg_regmap.h>
+
+/**
+ * @brief Create a kernel base context.
+ *
+ * Allocate and init a kernel base context. Calls
+ * kbase_create_os_context() to setup OS specific structures.
+ */
+struct kbase_context *kbase_create_context(kbase_device *kbdev)
+{
+ struct kbase_context *kctx;
+ struct kbase_va_region *pmem_reg;
+ struct kbase_va_region *tmem_reg;
+ struct kbase_va_region *exec_reg;
+ osk_error osk_err;
+ mali_error mali_err;
+
+ OSK_ASSERT(kbdev != NULL);
+
+	/* zero-initialized, as a lot of code assumes it is zeroed out on create */
+ kctx = osk_calloc(sizeof(*kctx));
+ if (!kctx)
+ goto out;
+
+ kctx->kbdev = kbdev;
+ kctx->as_nr = KBASEP_AS_NR_INVALID;
+
+ if (kbase_mem_usage_init(&kctx->usage, kctx->kbdev->memdev.per_process_memory_limit >> OSK_PAGE_SHIFT))
+ {
+ goto free_kctx;
+ }
+
+ if (kbase_jd_init(kctx))
+ goto free_memctx;
+
+ mali_err = kbasep_js_kctx_init( kctx );
+ if ( MALI_ERROR_NONE != mali_err )
+ {
+ goto free_jd; /* safe to call kbasep_js_kctx_term in this case */
+ }
+
+ mali_err = kbase_event_init(kctx);
+ if (MALI_ERROR_NONE != mali_err)
+ goto free_jd;
+
+ osk_err = osk_mutex_init(&kctx->reg_lock, OSK_LOCK_ORDER_MEM_REG);
+ if (OSK_ERR_NONE != osk_err)
+ goto free_event;
+
+ OSK_DLIST_INIT(&kctx->reg_list);
+
+ /* Use a new *Shared Memory* allocator for GPU page tables.
+ * See MIDBASE-1534 for details. */
+ osk_err = osk_phy_allocator_init(&kctx->pgd_allocator, 0, 0, NULL);
+ if (OSK_ERR_NONE != osk_err)
+ goto free_region_lock;
+
+ mali_err = kbase_mmu_init(kctx);
+ if(MALI_ERROR_NONE != mali_err)
+ goto free_phy;
+
+ kctx->pgd = kbase_mmu_alloc_pgd(kctx);
+ if (!kctx->pgd)
+ goto free_mmu;
+
+ if (kbase_create_os_context(&kctx->osctx))
+ goto free_pgd;
+
+ kctx->nr_outstanding_atoms = 0;
+ if ( OSK_ERR_NONE != osk_waitq_init(&kctx->complete_outstanding_waitq))
+ {
+ goto free_osctx;
+ }
+ osk_waitq_set(&kctx->complete_outstanding_waitq);
+
+ /* Make sure page 0 is not used... */
+ pmem_reg = kbase_alloc_free_region(kctx, 1,
+ KBASE_REG_ZONE_EXEC_BASE - 1, KBASE_REG_ZONE_PMEM);
+ exec_reg = kbase_alloc_free_region(kctx, KBASE_REG_ZONE_EXEC_BASE,
+ KBASE_REG_ZONE_EXEC_SIZE, KBASE_REG_ZONE_EXEC);
+ tmem_reg = kbase_alloc_free_region(kctx, KBASE_REG_ZONE_TMEM_BASE,
+ KBASE_REG_ZONE_TMEM_SIZE, KBASE_REG_ZONE_TMEM);
+
+ if (!pmem_reg || !exec_reg || !tmem_reg)
+ {
+ if (pmem_reg)
+ kbase_free_alloced_region(pmem_reg);
+ if (exec_reg)
+ kbase_free_alloced_region(exec_reg);
+ if (tmem_reg)
+ kbase_free_alloced_region(tmem_reg);
+
+ kbase_destroy_context(kctx);
+ return NULL;
+ }
+
+ OSK_DLIST_PUSH_FRONT(&kctx->reg_list, pmem_reg, struct kbase_va_region, link);
+ OSK_DLIST_PUSH_BACK(&kctx->reg_list, exec_reg, struct kbase_va_region, link);
+ OSK_DLIST_PUSH_BACK(&kctx->reg_list, tmem_reg, struct kbase_va_region, link);
+
+ return kctx;
+free_osctx:
+ kbase_destroy_os_context(&kctx->osctx);
+free_pgd:
+ kbase_mmu_free_pgd(kctx);
+free_mmu:
+ kbase_mmu_term(kctx);
+free_phy:
+ osk_phy_allocator_term(&kctx->pgd_allocator);
+free_region_lock:
+ osk_mutex_term(&kctx->reg_lock);
+free_event:
+ kbase_event_cleanup(kctx);
+free_jd:
+	/* Safe to call this one even when not initialized (assuming kctx was sufficiently zeroed) */
+ kbasep_js_kctx_term(kctx);
+ kbase_jd_exit(kctx);
+free_memctx:
+ kbase_mem_usage_term(&kctx->usage);
+free_kctx:
+ osk_free(kctx);
+out:
+ return NULL;
+
+}
+KBASE_EXPORT_SYMBOL(kbase_create_context)
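+
+/* Illustrative sketch only: every context successfully created with
+ * kbase_create_context() must eventually be torn down with
+ * kbase_destroy_context().
+ *
+ * @code
+ * struct kbase_context *kctx = kbase_create_context(kbdev);
+ * if (kctx != NULL)
+ * {
+ *	// ... use the context ...
+ *	kbase_destroy_context(kctx);
+ * }
+ * @endcode
+ */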
+
+/**
+ * @brief Destroy a kernel base context.
+ *
+ * Destroy a kernel base context. Calls kbase_destroy_os_context() to
+ * free OS specific structures. Will release all outstanding regions.
+ */
+void kbase_destroy_context(struct kbase_context *kctx)
+{
+ kbase_device *kbdev;
+
+ OSK_ASSERT(NULL != kctx);
+
+ kbdev = kctx->kbdev;
+ OSK_ASSERT(NULL != kbdev);
+
+ KBASE_TRACE_ADD( kbdev, CORE_CTX_DESTROY, kctx, NULL, 0u, 0u );
+
+ /* Ensure the core is powered up for the destroy process */
+ kbase_pm_context_active(kbdev);
+
+ if(kbdev->hwcnt.kctx == kctx)
+ {
+ /* disable the use of the hw counters if the app didn't use the API correctly or crashed */
+ KBASE_TRACE_ADD( kbdev, CORE_CTX_HWINSTR_TERM, kctx, NULL, 0u, 0u );
+		OSK_PRINT_WARN(OSK_BASE_CTX,
+		               "The privileged process that requested instrumentation did not disable it "
+		               "before exiting. Ending instrumentation on its behalf." );
+ kbase_instr_hwcnt_disable(kctx);
+ }
+
+ kbase_jd_zap_context(kctx);
+ kbase_event_cleanup(kctx);
+
+ kbase_gpu_vm_lock(kctx);
+
+ /* MMU is disabled as part of scheduling out the context */
+ kbase_mmu_free_pgd(kctx);
+ osk_phy_allocator_term(&kctx->pgd_allocator);
+ OSK_DLIST_EMPTY_LIST(&kctx->reg_list, struct kbase_va_region,
+ link, kbase_free_alloced_region);
+ kbase_destroy_os_context(&kctx->osctx);
+ kbase_gpu_vm_unlock(kctx);
+
+	/* Safe to call even when not initialised (assuming kctx was sufficiently zeroed) */
+ kbasep_js_kctx_term(kctx);
+
+ kbase_jd_exit(kctx);
+ osk_mutex_term(&kctx->reg_lock);
+
+ kbase_pm_context_idle(kbdev);
+
+ kbase_mmu_term(kctx);
+
+ kbase_mem_usage_term(&kctx->usage);
+
+ osk_waitq_term(&kctx->complete_outstanding_waitq);
+ osk_free(kctx);
+}
+KBASE_EXPORT_SYMBOL(kbase_destroy_context)
+
+/**
+ * Set creation flags on a context
+ */
+mali_error kbase_context_set_create_flags(kbase_context *kctx, u32 flags)
+{
+ mali_error err = MALI_ERROR_NONE;
+ kbasep_js_kctx_info *js_kctx_info;
+ OSK_ASSERT(NULL != kctx);
+
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ /* Validate flags */
+ if ( flags != (flags & BASE_CONTEXT_CREATE_KERNEL_FLAGS) )
+ {
+ err = MALI_ERROR_FUNCTION_FAILED;
+ goto out;
+ }
+
+ osk_mutex_lock( &js_kctx_info->ctx.jsctx_mutex );
+
+ /* Ensure this is the first call */
+ if ( (js_kctx_info->ctx.flags & KBASE_CTX_FLAG_CREATE_FLAGS_SET) != 0 )
+ {
+ OSK_PRINT_ERROR(OSK_BASE_CTX, "User attempted to set context creation flags more than once - not allowed");
+ err = MALI_ERROR_FUNCTION_FAILED;
+ goto out_unlock;
+ }
+
+ js_kctx_info->ctx.flags |= KBASE_CTX_FLAG_CREATE_FLAGS_SET;
+
+ /* Translate the flags */
+ if ( (flags & BASE_CONTEXT_SYSTEM_MONITOR_SUBMIT_DISABLED) == 0 )
+ {
+ /* This flag remains set until it is explicitly cleared */
+ js_kctx_info->ctx.flags &= ~((u32)KBASE_CTX_FLAG_SUBMIT_DISABLED);
+ }
+
+ if ( (flags & BASE_CONTEXT_HINT_ONLY_COMPUTE) != 0 )
+ {
+ js_kctx_info->ctx.flags |= (u32)KBASE_CTX_FLAG_HINT_ONLY_COMPUTE;
+ }
+
+ /* Latch the initial attributes into the Job Scheduler */
+ kbasep_js_ctx_attr_set_initial_attrs( kctx->kbdev, kctx );
+
+out_unlock:
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+out:
+ return err;
+}
+KBASE_EXPORT_SYMBOL(kbase_context_set_create_flags)
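+
+/* Usage note (inferred from the checks above): creation flags may be set
+ * at most once per context, and only bits within
+ * BASE_CONTEXT_CREATE_KERNEL_FLAGS are accepted. For example, a
+ * hypothetical caller doing
+ * kbase_context_set_create_flags(kctx, BASE_CONTEXT_HINT_ONLY_COMPUTE)
+ * marks the context compute-only; a second call returns
+ * MALI_ERROR_FUNCTION_FAILED. */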
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_cpuprops.c
+ * Base kernel property query APIs
+ */
+
+#include "mali_kbase_cpuprops.h"
+#include "mali_kbase.h"
+#include "mali_kbase_uku.h"
+#include <kbase/mali_kbase_config.h>
+#include <osk/mali_osk.h>
+
+int kbase_cpuprops_get_default_clock_speed(u32 *clock_speed)
+{
+ OSK_ASSERT( NULL != clock_speed );
+
+ *clock_speed = 100;
+ return 0;
+}
+
+mali_error kbase_cpuprops_uk_get_props(struct kbase_context *kctx, kbase_uk_cpuprops * kbase_props)
+{
+ int result;
+ kbase_cpuprops_clock_speed_function kbase_cpuprops_uk_get_clock_speed;
+
+ kbase_props->props.cpu_l1_dcache_line_size_log2 = OSK_L1_DCACHE_LINE_SIZE_LOG2;
+ kbase_props->props.cpu_l1_dcache_size = OSK_L1_DCACHE_SIZE;
+ kbase_props->props.cpu_flags = BASE_CPU_PROPERTY_FLAG_LITTLE_ENDIAN;
+
+ kbase_props->props.nr_cores = OSK_NUM_CPUS;
+ kbase_props->props.cpu_page_size_log2 = OSK_PAGE_SHIFT;
+ kbase_props->props.available_memory_size = OSK_MEM_PAGES << OSK_PAGE_SHIFT;
+
+ kbase_cpuprops_uk_get_clock_speed = (kbase_cpuprops_clock_speed_function)kbasep_get_config_value( kctx->kbdev, kctx->kbdev->config_attributes, KBASE_CONFIG_ATTR_CPU_SPEED_FUNC );
+ result = kbase_cpuprops_uk_get_clock_speed(&kbase_props->props.max_cpu_clock_speed_mhz);
+ if (result != 0)
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ return MALI_ERROR_NONE;
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_cpuprops.h
+ * Base kernel property query APIs
+ */
+
+#ifndef _KBASE_CPUPROPS_H_
+#define _KBASE_CPUPROPS_H_
+
+#include <malisw/mali_malisw.h>
+
+/* Forward declarations */
+struct kbase_context;
+struct kbase_uk_cpuprops;
+
+/**
+ * @brief Default implementation of @ref KBASE_CONFIG_ATTR_CPU_SPEED_FUNC.
+ *
+ * This function sets clock_speed to 100, so it will be an underestimate for
+ * any real system.
+ *
+ * See @ref kbase_cpuprops_clock_speed_function for details on the parameters
+ * and return value.
+ */
+int kbase_cpuprops_get_default_clock_speed(u32 *clock_speed);
+
+/**
+ * @brief Provides CPU properties data.
+ *
+ * Fill the kbase_uk_cpuprops with values from CPU configuration.
+ *
+ * @param kctx The kbase context
+ * @param kbase_props A copy of the kbase_uk_cpuprops structure from userspace
+ *
+ * @return MALI_ERROR_NONE on success. Any other value indicates failure.
+ */
+mali_error kbase_cpuprops_uk_get_props(struct kbase_context *kctx, struct kbase_uk_cpuprops* kbase_props);
+
+#endif /*_KBASE_CPUPROPS_H_*/
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_defs.h
+ *
+ * Definitions (types, defines, etc.) common to Kbase. They are placed here to
+ * allow the hierarchy of header files to work.
+ */
+
+#ifndef _KBASE_DEFS_H_
+#define _KBASE_DEFS_H_
+
+#define KBASE_DRV_NAME "mali"
+
+#include <kbase/mali_kbase_config.h>
+#include <kbase/mali_base_hwconfig.h>
+#include <osk/mali_osk.h>
+
+#ifdef CONFIG_KDS
+#include <kds/include/linux/kds.h>
+#endif
+
+/** Enable SW tracing when set */
+#ifndef KBASE_TRACE_ENABLE
+#if MALI_DEBUG
+#define KBASE_TRACE_ENABLE 1
+#else
+#define KBASE_TRACE_ENABLE 0
+#endif /*MALI_DEBUG*/
+#endif /*KBASE_TRACE_ENABLE*/
+
+/* Maximum number of outstanding atoms per kbase context.
+ * This limit exists for security reasons, to prevent a malicious app
+ * from hanging the driver */
+#define MAX_KCTX_OUTSTANDING_ATOMS (1ul << 6)
+
+/** Dump Job slot trace on error (only active if KBASE_TRACE_ENABLE != 0) */
+#define KBASE_TRACE_DUMP_ON_JOB_SLOT_ERROR 1
+
+/**
+ * Number of milliseconds before resetting the GPU when a job cannot be "zapped" from the hardware.
+ * Note that the time is actually ZAP_TIMEOUT+SOFT_STOP_RESET_TIMEOUT between the context zap starting and the GPU
+ * actually being reset to give other contexts time for their jobs to be soft-stopped and removed from the hardware
+ * before resetting.
+ */
+#define ZAP_TIMEOUT 1000
+
+/**
+ * Prevent soft-stops from occurring in scheduling situations
+ *
+ * This is not due to HW issues, but for when scheduling is desired to be more predictable.
+ *
+ * Independently of this setting, soft-stop may still be disabled due to HW issues.
+ *
+ * @note Soft-stop will still be used for non-scheduling purposes e.g. when terminating a context.
+ *
+ * @note If not in use, define this value to 0 instead of #undef'ing it
+ */
+#define KBASE_DISABLE_SCHEDULING_SOFT_STOPS 0
+
+/**
+ * Prevent hard-stops from occurring in scheduling situations
+ *
+ * This is not due to HW issues, but for when scheduling is desired to be more predictable.
+ *
+ * @note Hard-stop will still be used for non-scheduling purposes e.g. when terminating a context.
+ *
+ * @note If not in use, define this value to 0 instead of #undef'ing it
+ */
+#define KBASE_DISABLE_SCHEDULING_HARD_STOPS 0
+
+/* Forward declarations and definitions */
+typedef struct kbase_context kbase_context;
+typedef struct kbase_jd_atom kbasep_jd_atom;
+typedef struct kbase_device kbase_device;
+
+/**
+ * The maximum number of Job Slots to support in the Hardware.
+ *
+ * You can optimize this down if your target devices will only ever support a
+ * small number of job slots.
+ */
+#define BASE_JM_MAX_NR_SLOTS 16
+
+/**
+ * The maximum number of Address Spaces to support in the Hardware.
+ *
+ * You can optimize this down if your target devices will only ever support a
+ * small number of Address Spaces
+ */
+#define BASE_MAX_NR_AS 16
+
+#ifndef UINTPTR_MAX
+
+/**
+ * @brief Maximum value representable by type uintptr_t
+ */
+#if CSTD_CPU_32BIT
+#define UINTPTR_MAX U32_MAX
+#elif CSTD_CPU_64BIT
+#define UINTPTR_MAX U64_MAX
+#endif /* CSTD_CPU_64BIT */
+
+#endif /* !defined(UINTPTR_MAX) */
+
+/* mmu */
+#define ENTRY_IS_ATE 1ULL
+#define ENTRY_IS_INVAL 2ULL
+#define ENTRY_IS_PTE 3ULL
+
+#define MIDGARD_MMU_VA_BITS 48
+
+#define ENTRY_ATTR_BITS (7ULL << 2) /* bits 4:2 */
+#define ENTRY_RD_BIT (1ULL << 6)
+#define ENTRY_WR_BIT (1ULL << 7)
+#define ENTRY_SHARE_BITS (3ULL << 8) /* bits 9:8 */
+#define ENTRY_ACCESS_BIT (1ULL << 10)
+#define ENTRY_NX_BIT (1ULL << 54)
+
+#define ENTRY_FLAGS_MASK (ENTRY_ATTR_BITS | ENTRY_RD_BIT | ENTRY_WR_BIT | ENTRY_SHARE_BITS | ENTRY_ACCESS_BIT | ENTRY_NX_BIT)
+
+#if MIDGARD_MMU_VA_BITS > 39
+#define MIDGARD_MMU_TOPLEVEL 0
+#else
+#define MIDGARD_MMU_TOPLEVEL 1
+#endif
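+
+/* Note (inferred from the #if above): with MIDGARD_MMU_VA_BITS == 48 the
+ * translation table walk starts at the top level (level 0); a VA space of
+ * 39 bits or fewer would start one level down (level 1). */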
+
+#define GROWABLE_FLAGS_REQUIRED (KBASE_REG_PF_GROW | KBASE_REG_ZONE_TMEM)
+#define GROWABLE_FLAGS_MASK (GROWABLE_FLAGS_REQUIRED | KBASE_REG_FREE)
+
+/** setting in kbase_context::as_nr that indicates it's invalid */
+#define KBASEP_AS_NR_INVALID (-1)
+
+#define KBASE_LOCK_REGION_MAX_SIZE (63)
+#define KBASE_LOCK_REGION_MIN_SIZE (11)
+
+#define KBASE_TRACE_SIZE_LOG2 8 /* 256 entries */
+#define KBASE_TRACE_SIZE (1 << KBASE_TRACE_SIZE_LOG2)
+#define KBASE_TRACE_MASK ((1 << KBASE_TRACE_SIZE_LOG2)-1)
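+
+/* Example: with KBASE_TRACE_SIZE_LOG2 == 8 the trace ring buffer holds 256
+ * entries, and indices advance as (i + 1) & KBASE_TRACE_MASK, wrapping
+ * back to 0 after entry 255 (see kbasep_trace_add()). */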
+
+#include "mali_kbase_js_defs.h"
+
+typedef struct kbase_event {
+ osk_dlist_item entry;
+ const void *data;
+ base_jd_event_code event_code;
+} kbase_event;
+
+
+/* Hijack the event entry field to link the struct with the different
+ * queues... */
+typedef struct kbase_jd_bag {
+ kbase_event event;
+ u64 core_restriction;
+ size_t offset;
+ u32 nr_atoms;
+	/** Set when the bag has a power management reference. This is used to ensure that the GPU is
+	 * not turned off between a soft-job reading the GPU counters and the bag completing */
+ mali_bool8 has_pm_ctx_reference;
+} kbase_jd_bag;
+
+/**
+ * @brief States to model state machine processed by kbasep_js_job_check_ref_cores(), which
+ * handles retaining cores for power management and affinity management.
+ *
+ * The state @ref KBASE_ATOM_COREREF_STATE_RECHECK_AFFINITY prevents an attack
+ * where lots of atoms could be submitted before powerup, and each has an
+ * affinity chosen that causes other atoms to have an affinity
+ * violation. Whilst the affinity was not causing violations at the time it
+ * was chosen, it could cause violations thereafter. For example, 1000 jobs
+ * could have had their affinity chosen during the powerup time, so any of
+ * those 1000 jobs could cause an affinity violation later on.
+ *
+ * The attack would otherwise occur because other atoms/contexts have to wait for:
+ * -# the currently running atoms (which are causing the violation) to
+ * finish
+ * -# and, the atoms that had their affinity chosen during powerup to
+ *    finish. These are run preferentially because they don't cause a
+ * violation, but instead continue to cause the violation in others.
+ * -# or, the attacker is scheduled out (which might not happen for just 2
+ * contexts)
+ *
+ * By re-choosing the affinity (which is designed to avoid violations at the
+ * time it's chosen), we break condition (2) of the wait, which minimizes the
+ * problem to just waiting for current jobs to finish (which can be bounded if
+ * the Job Scheduling Policy has a timer).
+ */
+typedef enum
+{
+ /** Starting state: No affinity chosen, and cores must be requested. kbase_jd_atom::affinity==0 */
+ KBASE_ATOM_COREREF_STATE_NO_CORES_REQUESTED,
+ /** Cores requested, but waiting for them to be powered. Requested cores given by kbase_jd_atom::affinity */
+ KBASE_ATOM_COREREF_STATE_WAITING_FOR_REQUESTED_CORES,
+ /** Cores given by kbase_jd_atom::affinity are powered, but affinity might be out-of-date, so must recheck */
+ KBASE_ATOM_COREREF_STATE_RECHECK_AFFINITY,
+ /** Cores given by kbase_jd_atom::affinity are powered, and affinity is up-to-date, but must check for violations */
+ KBASE_ATOM_COREREF_STATE_CHECK_AFFINITY_VIOLATIONS,
+ /** Cores are powered, kbase_jd_atom::affinity up-to-date, no affinity violations: atom can be submitted to HW */
+ KBASE_ATOM_COREREF_STATE_READY
+
+} kbase_atom_coreref_state;
+
+typedef struct kbase_jd_atom {
+ kbase_event event;
+ osk_workq_work work;
+ kbasep_js_tick start_timestamp;
+ base_jd_atom *user_atom;
+ kbase_jd_bag *bag;
+ kbase_context *kctx;
+ base_jd_dep pre_dep;
+ base_jd_dep post_dep;
+ u32 nr_syncsets;
+ u32 nr_extres;
+ u32 device_nr;
+ u64 affinity;
+ u64 jc;
+ kbase_atom_coreref_state coreref_state;
+#ifdef CONFIG_KDS
+ struct kds_resource_set * kds_rset;
+ mali_bool kds_dep_satisfied;
+#endif
+
+ base_jd_core_req core_req; /**< core requirements */
+
+ kbasep_js_policy_job_info sched_info;
+ /** Job Slot to retry submitting to if submission from IRQ handler failed
+ *
+	 * NOTE: see if this can be unified into another member, e.g. the event */
+ int retry_submit_on_slot;
+ /* atom priority scaled to nice range with +20 offset 0..39 */
+ int nice_prio;
+
+ int poking; /* BASE_HW_ISSUE_8316 */
+} kbase_jd_atom;
+
+/*
+ * Theory of operations:
+ *
+ * - sem is an array of 256 bits, each bit being a semaphore
+ * for a 1-1 job dependency:
+ * Initially set to 0 (passing)
+ * Incremented when a post_dep is queued
+ * Decremented when a post_dep is completed
+ * pre_dep is satisfied when value is 0
+ * sem #0 is hardwired to 0 (always passing).
+ *
+ * - queue is an array of atoms, one per semaphore.
+ * When a pre_dep is not satisfied, the atom is added to both
+ * queues it depends on (except for queue 0 which is never used).
+ *   Each time a post_dep is signalled, the corresponding bit is cleared,
+ *   the atoms are removed from the queue, and the corresponding pre_dep
+ * is cleared. The atom can be run when pre_dep[0] == pre_dep[1] == 0.
+ */
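+
+/* Illustrative sequence (a sketch based on the description above, not
+ * driver code): suppose atom A posts on semaphore #5 and atom B has a
+ * pre_dep on #5.
+ * - A is queued:   sem bit #5 goes 0 -> 1 (post_dep queued).
+ * - B arrives:     pre_dep #5 is not passing, so B is added to queue[5].
+ * - A completes:   sem bit #5 goes 1 -> 0, B is removed from queue[5] and
+ *                  that pre_dep is cleared.
+ * - B may run once pre_dep[0] == pre_dep[1] == 0.
+ */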
+
+#define KBASE_JD_DEP_QUEUE_SIZE 256
+
+typedef struct kbase_jd_dep_queue {
+ kbase_jd_atom *queue[KBASE_JD_DEP_QUEUE_SIZE];
+ u32 sem[BASEP_JD_SEM_ARRAY_SIZE];
+} kbase_jd_dep_queue;
+
+typedef struct kbase_jd_context {
+ osk_mutex lock;
+ kbasep_js_kctx_info sched_info;
+ kbase_jd_dep_queue dep_queue;
+ base_jd_atom *pool;
+ size_t pool_size;
+
+ /** Tracks all job-dispatch jobs. This includes those not tracked by
+ * the scheduler: 'not ready to run' and 'dependency-only' jobs. */
+ u32 job_nr;
+
+ /** Waitq that reflects whether there are no jobs (including SW-only
+ * dependency jobs). This is set when no jobs are present on the ctx,
+ * and clear when there are jobs.
+ *
+ * @note: Job Dispatcher knows about more jobs than the Job Scheduler:
+ * the Job Scheduler is unaware of jobs that are blocked on dependencies,
+ * and SW-only dependency jobs.
+ *
+ * This waitq can be waited upon to find out when the context jobs are all
+ * done/cancelled (including those that might've been blocked on
+ * dependencies) - and so, whether it can be terminated. However, it should
+ * only be terminated once it is neither present in the policy-queue (see
+ * kbasep_js_policy_try_evict_ctx() ) nor the run-pool (see
+ * kbasep_js_kctx_info::ctx::is_scheduled_waitq).
+ *
+ * Since the waitq is only set under kbase_jd_context::lock,
+ * the waiter should also briefly obtain and drop kbase_jd_context::lock to
+	 * guarantee that the setter has completed its work on the kbase_context */
+ osk_waitq zero_jobs_waitq;
+ osk_workq job_done_wq;
+ osk_spinlock_irq tb_lock;
+ u32 *tb;
+ size_t tb_wrap_offset;
+
+#ifdef CONFIG_KDS
+ struct kds_callback kds_cb;
+#endif
+} kbase_jd_context;
+
+typedef struct kbase_jm_slot
+{
+ osk_spinlock_irq lock;
+
+ /* The number of slots must be a power of two */
+#define BASE_JM_SUBMIT_SLOTS 16
+#define BASE_JM_SUBMIT_SLOTS_MASK (BASE_JM_SUBMIT_SLOTS - 1)
+
+ kbase_jd_atom *submitted[BASE_JM_SUBMIT_SLOTS];
+
+ u8 submitted_head;
+ u8 submitted_nr;
+
+} kbase_jm_slot;
+
+typedef enum kbase_midgard_type
+{
+ KBASE_MALI_T6XM,
+ KBASE_MALI_T6F1,
+ KBASE_MALI_T601,
+ KBASE_MALI_T604,
+ KBASE_MALI_T608,
+
+ KBASE_MALI_COUNT
+} kbase_midgard_type;
+
+#define KBASE_FEATURE_HAS_MODEL_PMU (1U << 0)
+#define KBASE_FEATURE_NEEDS_REG_DELAY (1U << 1)
+#define KBASE_FEATURE_HAS_16BIT_PC (1U << 2)
+#define KBASE_FEATURE_LACKS_RESET_INT (1U << 3)
+#define KBASE_FEATURE_DELAYED_PERF_WRITE_STATUS (1U << 4)
+
+typedef struct kbase_device_info
+{
+ kbase_midgard_type dev_type;
+ u32 features;
+} kbase_device_info;
+
+/**
+ * Important: Our code makes assumptions that a kbase_as structure is always at
+ * kbase_device->as[number]. This is used to recover the containing
+ * kbase_device from a kbase_as structure.
+ *
+ * Therefore, kbase_as structures must not be allocated anywhere else.
+ */
+typedef struct kbase_as
+{
+ int number;
+
+ osk_workq pf_wq;
+ osk_workq_work work_pagefault;
+ osk_workq_work work_busfault;
+ mali_addr64 fault_addr;
+ osk_mutex transaction_mutex;
+
+ /* BASE_HW_ISSUE_8316 */
+ osk_workq poke_wq;
+ osk_workq_work poke_work;
+ osk_atomic poke_refcount;
+ osk_timer poke_timer;
+} kbase_as;
+
+/* tracking of memory usage */
+typedef struct kbasep_mem_usage
+{
+ u32 max_pages;
+ osk_atomic cur_pages;
+} kbasep_mem_usage;
+
+/**
+ * @brief Specifies order in which physical allocators are selected.
+ *
+ * Lists the different orders in which physical allocators can be selected when allocating memory.
+ *
+ */
+typedef enum kbase_phys_allocator_order
+{
+ ALLOCATOR_ORDER_CONFIG, /* Select allocators in order they appeared in the configuration file */
+ ALLOCATOR_ORDER_GPU_PERFORMANCE, /* Select allocators in order from fastest to slowest on the GPU */
+ ALLOCATOR_ORDER_CPU_PERFORMANCE, /* Select allocators in order from fastest to slowest on the CPU */
+ ALLOCATOR_ORDER_CPU_GPU_PERFORMANCE, /* Select allocators in order from fastest to slowest on the CPU and GPU */
+
+ ALLOCATOR_ORDER_COUNT
+} kbase_phys_allocator_order;
+
+
+/* A simple structure to keep a sorted list of
+ * osk_phy_allocator pointers.
+ * Used by the iterator object
+ */
+typedef struct kbase_phys_allocator_array
+{
+ /* the allocators */
+ osk_phy_allocator * allocs;
+ osk_phy_allocator ** sorted_allocs[ALLOCATOR_ORDER_COUNT];
+ /* number of allocators */
+ unsigned int count;
+
+#if MALI_DEBUG
+ mali_bool it_bound;
+#endif /* MALI_DEBUG */
+} kbase_phys_allocator_array;
+
+/**
+ * Instrumentation State Machine States:
+ * DISABLED - requires instrumentation to be enabled
+ * IDLE - state machine is active and ready for a command.
+ * DUMPING - hardware is currently dumping a frame.
+ * POSTCLEANING - hardware is currently cleaning and invalidating caches.
+ * PRECLEANING - same as POSTCLEANING, except on completion, state machine will transition to CLEANED instead of IDLE.
+ * CLEANED - cache clean completed, waiting for Instrumentation setup.
+ * ERROR - an error has occurred during DUMPING (page fault).
+ */
+
+typedef enum
+{
+ KBASE_INSTR_STATE_DISABLED = 0,
+ KBASE_INSTR_STATE_IDLE,
+ KBASE_INSTR_STATE_DUMPING,
+ KBASE_INSTR_STATE_CLEANED,
+ KBASE_INSTR_STATE_PRECLEANING,
+ KBASE_INSTR_STATE_POSTCLEANING,
+ KBASE_INSTR_STATE_RESETTING,
+ KBASE_INSTR_STATE_FAULT
+
+} kbase_instr_state;
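+
+/* Typical dump cycle, sketched from the state descriptions above: IDLE ->
+ * DUMPING -> POSTCLEANING -> IDLE on success; when a cache clean must
+ * complete before instrumentation setup, PRECLEANING is used instead and
+ * finishes in CLEANED rather than IDLE. */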
+
+
+typedef struct kbasep_mem_device
+{
+#if MALI_USE_UMP == 1
+ u32 ump_device_id; /* Which UMP device this GPU should be mapped to.
+ Read-only, copied from platform configuration on startup.*/
+#endif /* MALI_USE_UMP == 1 */
+
+ u32 per_process_memory_limit; /* How much memory (in bytes) a single process can access.
+ Read-only, copied from platform configuration on startup. */
+ kbasep_mem_usage usage; /* Tracks usage of OS shared memory. Initialized with platform
+ configuration data, updated when OS memory is allocated/freed.*/
+ kbase_phys_allocator_array allocators; /* List of available physical memory allocators */
+} kbasep_mem_device;
+
+
+#define KBASE_TRACE_CODE( X ) KBASE_TRACE_CODE_ ## X
+
+typedef enum
+{
+ /* IMPORTANT: USE OF SPECIAL #INCLUDE OF NON-STANDARD HEADER FILE
+ * THIS MUST BE USED AT THE START OF THE ENUM */
+#define KBASE_TRACE_CODE_MAKE_CODE( X ) KBASE_TRACE_CODE( X )
+#include "mali_kbase_trace_defs.h"
+#undef KBASE_TRACE_CODE_MAKE_CODE
+ /* Comma on its own, to extend the list */
+ ,
+ /* Must be the last in the enum */
+ KBASE_TRACE_CODE_COUNT
+} kbase_trace_code;
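+
+/* The special #include above is an X-macro: mali_kbase_trace_defs.h lists
+ * each code as KBASE_TRACE_CODE_MAKE_CODE( X ). For example, assuming
+ * CORE_CTX_DESTROY is listed there (it is used with KBASE_TRACE_ADD in
+ * mali_kbase_context.c), it expands here to the enumerator
+ * KBASE_TRACE_CODE_CORE_CTX_DESTROY, and in mali_kbase_device.c to the
+ * matching string "CORE_CTX_DESTROY" for trace dumping. */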
+
+#define KBASE_TRACE_FLAG_REFCOUNT (((u8)1) << 0)
+#define KBASE_TRACE_FLAG_JOBSLOT (((u8)1) << 1)
+
+typedef struct kbase_trace
+{
+ osk_timeval timestamp;
+ u32 thread_id;
+ u32 cpu;
+ void *ctx;
+ void *uatom;
+ u64 gpu_addr;
+ u32 info_val;
+ u8 code;
+ u8 jobslot;
+ u8 refcount;
+ u8 flags;
+} kbase_trace;
+
+struct kbase_device {
+ const kbase_device_info *dev_info;
+ kbase_jm_slot jm_slots[BASE_JM_MAX_NR_SLOTS];
+ s8 slot_submit_count_irq[BASE_JM_MAX_NR_SLOTS];
+ kbase_os_device osdev;
+ kbase_pm_device_data pm;
+ kbasep_js_device_data js_data;
+ kbasep_mem_device memdev;
+
+ kbase_as as[BASE_MAX_NR_AS];
+
+ osk_phy_allocator mmu_fault_allocator;
+ osk_phy_addr mmu_fault_pages[4];
+ osk_spinlock_irq mmu_mask_change;
+
+ kbase_gpu_props gpu_props;
+
+ /**< List of SW workarounds for HW issues */
+	/** List of SW workarounds for HW issues */
+
+ /* Cached present bitmaps - these are the same as the corresponding hardware registers */
+ u64 shader_present_bitmap;
+ u64 tiler_present_bitmap;
+ u64 l2_present_bitmap;
+ u64 l3_present_bitmap;
+
+ /* Bitmaps of cores that are currently in use (running jobs).
+ * These should be kept up to date by the job scheduler.
+ *
+ * pm.power_change_lock should be held when accessing these members.
+ *
+ * kbase_pm_check_transitions should be called when bits are cleared to
+ * update the power management system and allow transitions to occur. */
+ u64 shader_inuse_bitmap;
+ u64 tiler_inuse_bitmap;
+
+ /* Refcount for cores in use */
+ u32 shader_inuse_cnt[64];
+ u32 tiler_inuse_cnt[64];
+
+ /* Bitmaps of cores the JS needs for jobs ready to run */
+ u64 shader_needed_bitmap;
+ u64 tiler_needed_bitmap;
+
+ /* Refcount for cores needed */
+ u32 shader_needed_cnt[64];
+ u32 tiler_needed_cnt[64];
+
+	/* Bitmaps of cores that are currently available (powered up, and the power policy is happy for jobs to be
+	 * submitted to these cores). These are updated by the power management code. The job scheduler should avoid
+ * submitting new jobs to any cores that are not marked as available.
+ *
+ * pm.power_change_lock should be held when accessing these members.
+ */
+ u64 shader_available_bitmap;
+ u64 tiler_available_bitmap;
+
+ s8 nr_hw_address_spaces; /**< Number of address spaces in the GPU (constant after driver initialisation) */
+ s8 nr_user_address_spaces; /**< Number of address spaces available to user contexts */
+
+ /* Structure used for instrumentation and HW counters dumping */
+ struct {
+ /* The lock should be used when accessing any of the following members */
+ osk_spinlock_irq lock;
+
+ kbase_context *kctx;
+ u64 addr;
+ osk_waitq waitqueue;
+ kbase_instr_state state;
+ } hwcnt;
+
+ /* Set when we're about to reset the GPU */
+ osk_atomic reset_gpu;
+#define KBASE_RESET_GPU_NOT_PENDING 0 /* The GPU reset isn't pending */
+#define KBASE_RESET_GPU_PREPARED 1 /* kbase_prepare_to_reset_gpu has been called */
+#define KBASE_RESET_GPU_COMMITTED 2 /* kbase_reset_gpu has been called - the reset will now definitely happen
+ * within the timeout period */
+#define KBASE_RESET_GPU_HAPPENING 3 /* The GPU reset process is currently occuring (timeout has expired or
+ * kbasep_try_reset_gpu_early was called) */
+
+ /* Work queue and work item for performing the reset in */
+ osk_workq reset_workq;
+ osk_workq_work reset_work;
+ /* Signalled when reset_gpu==KBASE_RESET_GPU_NOT_PENDING */
+ osk_waitq reset_waitq;
+ osk_timer reset_timer;
+
+	/* Value to be written to the irq_throttle register each time an IRQ is served */
+ osk_atomic irq_throttle_cycles;
+
+ const kbase_attribute *config_attributes;
+
+ /* >> BASE_HW_ISSUE_8401 >> */
+#define KBASE_8401_WORKAROUND_COMPUTEJOB_COUNT 3
+ kbase_context *workaround_kctx;
+ osk_virt_addr workaround_compute_job_va[KBASE_8401_WORKAROUND_COMPUTEJOB_COUNT];
+ osk_phy_addr workaround_compute_job_pa[KBASE_8401_WORKAROUND_COMPUTEJOB_COUNT];
+ /* << BASE_HW_ISSUE_8401 << */
+
+#if KBASE_TRACE_ENABLE != 0
+ osk_spinlock_irq trace_lock;
+ u16 trace_first_out;
+ u16 trace_next_in;
+ kbase_trace *trace_rbuf;
+#endif
+
+#if MALI_CUSTOMER_RELEASE == 0
+ /* This is used to override the current job scheduler values for
+ * KBASE_CONFIG_ATTR_JS_STOP_STOP_TICKS_SS
+ * KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS
+ * KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS
+ * KBASE_CONFIG_ATTR_JS_RESET_TICKS_SS
+ * KBASE_CONFIG_ATTR_JS_RESET_TICKS_NSS.
+ *
+ * These values are set via the js_timeouts sysfs file.
+ */
+ u32 js_soft_stop_ticks;
+ u32 js_hard_stop_ticks_ss;
+ u32 js_hard_stop_ticks_nss;
+ u32 js_reset_ticks_ss;
+ u32 js_reset_ticks_nss;
+#endif
+ /* Platform specific private data to be accessed by mali_kbase_config_xxx.c only */
+ void *platform_context;
+};
+
+struct kbase_context
+{
+ kbase_device *kbdev;
+ osk_phy_allocator pgd_allocator;
+ osk_phy_addr pgd;
+ osk_dlist event_list;
+ osk_mutex event_mutex;
+ mali_bool event_closed;
+
+ u64 *mmu_teardown_pages;
+
+ osk_mutex reg_lock; /* To be converted to a rwlock? */
+ osk_dlist reg_list; /* Ordered list of GPU regions */
+
+ kbase_os_context osctx;
+ kbase_jd_context jctx;
+ kbasep_mem_usage usage;
+ ukk_session ukk_session;
+ u32 nr_outstanding_atoms;
+	osk_waitq complete_outstanding_waitq; /* If there are too many outstanding atoms
+	                                       * per context we wait on this waitqueue
+	                                       * to be signalled before submitting more jobs
+	                                       */
+
+ /** This is effectively part of the Run Pool, because it only has a valid
+ * setting (!=KBASEP_AS_NR_INVALID) whilst the context is scheduled in
+ *
+ * The kbasep_js_device_data::runpool_irq::lock must be held whilst accessing
+ * this.
+ *
+ * If the context relating to this as_nr is required, you must use
+ * kbasep_js_runpool_retain_ctx() to ensure that the context doesn't disappear
+ * whilst you're using it. Alternatively, just hold the kbasep_js_device_data::runpool_irq::lock
+ * to ensure the context doesn't disappear (but this has restrictions on what other locks
+ * you can take whilst doing this) */
+ int as_nr;
+
+ /* NOTE:
+ *
+ * Flags are in jctx.sched_info.ctx.flags
+ * Mutable flags *must* be accessed under jctx.sched_info.ctx.jsctx_mutex
+ *
+ * All other flags must be added there */
+};
+
+typedef enum kbase_reg_access_type
+{
+ REG_READ,
+ REG_WRITE
+} kbase_reg_access_type;
+
+
+typedef enum kbase_share_attr_bits
+{
+ /* (1ULL << 8) bit is reserved */
+ SHARE_BOTH_BITS = (2ULL << 8), /* inner and outer shareable coherency */
+ SHARE_INNER_BITS = (3ULL << 8) /* inner shareable coherency */
+} kbase_share_attr_bits;
+
+
+#endif /* _KBASE_DEFS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_device.c
+ * Base kernel device APIs
+ */
+
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_kbase_defs.h>
+#include <kbase/src/common/mali_kbase_hw.h>
+
+/* NOTE: Magic - 0x45435254 (TRCE in ASCII).
+ * Supports the tracing feature provided in the base module.
+ * Please keep it in sync with the value in the base module.
+ */
+#define TRACE_BUFFER_HEADER_SPECIAL 0x45435254
+
+#ifdef MALI_PLATFORM_CONFIG_VEXPRESS
+#if (MALI_BACKEND_KERNEL && (!MALI_LICENSE_IS_GPL || MALI_FAKE_PLATFORM_DEVICE))
+extern kbase_attribute config_attributes_hw_issue_8408[];
+#endif /* (MALI_BACKEND_KERNEL && (!MALI_LICENSE_IS_GPL || MALI_FAKE_PLATFORM_DEVICE)) */
+#endif /* MALI_PLATFORM_CONFIG_VEXPRESS */
+
+/* This array is referenced at compile time; it cannot be made static... */
+const kbase_device_info kbase_dev_info[] = {
+ {
+ KBASE_MALI_T6XM,
+ (KBASE_FEATURE_HAS_MODEL_PMU)
+ },
+ {
+ KBASE_MALI_T6F1,
+ (KBASE_FEATURE_NEEDS_REG_DELAY |
+ KBASE_FEATURE_DELAYED_PERF_WRITE_STATUS |
+ KBASE_FEATURE_HAS_16BIT_PC)
+ },
+ {
+ KBASE_MALI_T601, 0
+ },
+ {
+ KBASE_MALI_T604, 0
+ },
+ {
+ KBASE_MALI_T608, 0
+ },
+};
+
+#if KBASE_TRACE_ENABLE != 0
+STATIC CONST char *kbasep_trace_code_string[] =
+{
+ /* IMPORTANT: USE OF SPECIAL #INCLUDE OF NON-STANDARD HEADER FILE
+ * THIS MUST BE USED AT THE START OF THE ARRAY */
+#define KBASE_TRACE_CODE_MAKE_CODE( X ) # X
+#include "mali_kbase_trace_defs.h"
+#undef KBASE_TRACE_CODE_MAKE_CODE
+};
+#endif
+
+STATIC mali_error kbasep_trace_init( kbase_device *kbdev );
+STATIC void kbasep_trace_term( kbase_device *kbdev );
+STATIC void kbasep_trace_hook_wrapper( void *param );
+
+void kbasep_as_do_poke(osk_workq_work * work);
+void kbasep_reset_timer_callback(void *data);
+void kbasep_reset_timeout_worker(osk_workq_work *data);
+
+kbase_device *kbase_device_alloc(void)
+{
+ return osk_calloc(sizeof(kbase_device));
+}
+
+mali_error kbase_device_init(kbase_device *kbdev, const kbase_device_info *dev_info)
+{
+ osk_error osk_err;
+	int i; /* i is used after the for loop; don't reuse! */
+
+ kbdev->dev_info = dev_info;
+
+ osk_err = osk_spinlock_irq_init(&kbdev->mmu_mask_change, OSK_LOCK_ORDER_MMU_MASK);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto fail;
+ }
+
+ /* Initialize platform specific context */
+ if(MALI_FALSE == kbasep_platform_device_init(kbdev))
+ {
+ goto free_mmu_lock;
+ }
+
+ /* Ensure we can access the GPU registers */
+ kbase_pm_register_access_enable(kbdev);
+
+ /* Get the list of workarounds for issues on the current HW (identified by the GPU_ID register) */
+ if (MALI_ERROR_NONE != kbase_hw_set_issues_mask(kbdev))
+ {
+ kbase_pm_register_access_disable(kbdev);
+ goto free_platform;
+ }
+
+ /* Find out GPU properties based on the GPU feature registers */
+ kbase_gpuprops_set(kbdev);
+
+ kbdev->nr_hw_address_spaces = kbdev->gpu_props.num_address_spaces;
+
+ /* We're done accessing the GPU registers for now. */
+ kbase_pm_register_access_disable(kbdev);
+
+ for (i = 0; i < kbdev->nr_hw_address_spaces; i++)
+ {
+ const char format[] = "mali_mmu%d";
+ char name[sizeof(format)];
+ const char poke_format[] = "mali_mmu%d_poker"; /* BASE_HW_ISSUE_8316 */
+ char poke_name[sizeof(poke_format)]; /* BASE_HW_ISSUE_8316 */
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8316))
+ {
+ if (0 > osk_snprintf(poke_name, sizeof(poke_name), poke_format, i))
+ {
+ goto free_workqs;
+ }
+ }
+
+ if (0 > osk_snprintf(name, sizeof(name), format, i))
+ {
+ goto free_workqs;
+ }
+
+ kbdev->as[i].number = i;
+ kbdev->as[i].fault_addr = 0ULL;
+ osk_err = osk_workq_init(&kbdev->as[i].pf_wq, name, 0);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto free_workqs;
+ }
+ osk_err = osk_mutex_init(&kbdev->as[i].transaction_mutex, OSK_LOCK_ORDER_AS);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ osk_workq_term(&kbdev->as[i].pf_wq);
+ goto free_workqs;
+ }
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8316))
+ {
+ osk_err = osk_workq_init(&kbdev->as[i].poke_wq, poke_name, 0);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ osk_workq_term(&kbdev->as[i].pf_wq);
+ osk_mutex_term(&kbdev->as[i].transaction_mutex);
+ goto free_workqs;
+ }
+ osk_workq_work_init(&kbdev->as[i].poke_work, kbasep_as_do_poke);
+ osk_err = osk_timer_init(&kbdev->as[i].poke_timer);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ osk_workq_term(&kbdev->as[i].poke_wq);
+ osk_workq_term(&kbdev->as[i].pf_wq);
+ osk_mutex_term(&kbdev->as[i].transaction_mutex);
+ goto free_workqs;
+ }
+			osk_timer_callback_set(&kbdev->as[i].poke_timer, kbasep_as_poke_timer_callback, &kbdev->as[i]);
+ osk_atomic_set(&kbdev->as[i].poke_refcount, 0);
+ }
+ }
+ /* don't change i after this point */
+
+ osk_err = osk_spinlock_irq_init(&kbdev->hwcnt.lock, OSK_LOCK_ORDER_HWCNT);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto free_workqs;
+ }
+
+ kbdev->hwcnt.state = KBASE_INSTR_STATE_DISABLED;
+ osk_err = osk_waitq_init(&kbdev->hwcnt.waitqueue);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto free_hwcnt_lock;
+ }
+
+ if (OSK_ERR_NONE != osk_workq_init(&kbdev->reset_workq, "Mali reset workqueue", 0))
+ {
+ goto free_hwcnt_waitq;
+ }
+
+ osk_workq_work_init(&kbdev->reset_work, kbasep_reset_timeout_worker);
+
+ osk_err = osk_waitq_init(&kbdev->reset_waitq);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto free_reset_workq;
+ }
+
+ osk_err = osk_timer_init(&kbdev->reset_timer);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto free_reset_waitq;
+ }
+ osk_timer_callback_set(&kbdev->reset_timer, kbasep_reset_timer_callback, kbdev);
+
+ if ( kbasep_trace_init( kbdev ) != MALI_ERROR_NONE )
+ {
+ goto free_reset_timer;
+ }
+
+ osk_debug_assert_register_hook( &kbasep_trace_hook_wrapper, kbdev );
+
+#ifdef MALI_PLATFORM_CONFIG_VEXPRESS
+#if (MALI_BACKEND_KERNEL && (!MALI_LICENSE_IS_GPL || MALI_FAKE_PLATFORM_DEVICE))
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8408))
+ {
+ /* BASE_HW_ISSUE_8408 requires a configuration with different timeouts for
+ * the vexpress platform */
+ kbdev->config_attributes = config_attributes_hw_issue_8408;
+ }
+#endif /* (MALI_BACKEND_KERNEL && (!MALI_LICENSE_IS_GPL || MALI_FAKE_PLATFORM_DEVICE)) */
+#endif /* MALI_PLATFORM_CONFIG_VEXPRESS */
+
+ return MALI_ERROR_NONE;
+
+free_reset_timer:
+ osk_timer_term(&kbdev->reset_timer);
+free_reset_waitq:
+ osk_waitq_term(&kbdev->reset_waitq);
+free_reset_workq:
+ osk_workq_term(&kbdev->reset_workq);
+free_hwcnt_waitq:
+ osk_waitq_term(&kbdev->hwcnt.waitqueue);
+free_hwcnt_lock:
+ osk_spinlock_irq_term(&kbdev->hwcnt.lock);
+free_workqs:
+ while (i > 0)
+ {
+ i--;
+ osk_mutex_term(&kbdev->as[i].transaction_mutex);
+ osk_workq_term(&kbdev->as[i].pf_wq);
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8316))
+ {
+ osk_workq_term(&kbdev->as[i].poke_wq);
+ osk_timer_term(&kbdev->as[i].poke_timer);
+ }
+ }
+free_platform:
+ kbasep_platform_device_term(kbdev);
+free_mmu_lock:
+ osk_spinlock_irq_term(&kbdev->mmu_mask_change);
+fail:
+ return MALI_ERROR_FUNCTION_FAILED;
+}
+
+void kbase_device_term(kbase_device *kbdev)
+{
+ int i;
+
+ osk_debug_assert_register_hook( NULL, NULL );
+
+ kbasep_trace_term( kbdev );
+
+ osk_timer_term(&kbdev->reset_timer);
+ osk_waitq_term(&kbdev->reset_waitq);
+ osk_workq_term(&kbdev->reset_workq);
+
+ for (i = 0; i < kbdev->nr_hw_address_spaces; i++)
+ {
+ osk_mutex_term(&kbdev->as[i].transaction_mutex);
+ osk_workq_term(&kbdev->as[i].pf_wq);
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8316))
+ {
+ osk_timer_term(&kbdev->as[i].poke_timer);
+ osk_workq_term(&kbdev->as[i].poke_wq);
+ }
+ }
+
+ kbasep_platform_device_term(kbdev);
+
+ osk_spinlock_irq_term(&kbdev->hwcnt.lock);
+ osk_waitq_term(&kbdev->hwcnt.waitqueue);
+}
+
+void kbase_device_free(kbase_device *kbdev)
+{
+ osk_free(kbdev);
+}
+
+int kbase_device_has_feature(kbase_device *kbdev, u32 feature)
+{
+ return !!(kbdev->dev_info->features & feature);
+}
+KBASE_EXPORT_TEST_API(kbase_device_has_feature)
+
+kbase_midgard_type kbase_device_get_type(kbase_device *kbdev)
+{
+ return kbdev->dev_info->dev_type;
+}
+KBASE_EXPORT_TEST_API(kbase_device_get_type)
+
+void kbase_device_trace_buffer_install(kbase_context * kctx, u32 * tb, size_t size)
+{
+ OSK_ASSERT(kctx);
+ OSK_ASSERT(tb);
+
+ /* set up the header */
+ /* magic number in the first 4 bytes */
+ tb[0] = TRACE_BUFFER_HEADER_SPECIAL;
+ /* Store (write offset = 0, wrap counter = 0, transaction active = no)
+ * write offset 0 means never written.
+	 * Offsets 1 to (wrap_offset - 1) are used to store values once tracing has started
+ */
+ tb[1] = 0;
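+	/* Header word layout, as decoded by
+	 * kbase_device_trace_register_access():
+	 *   bit  0     - transaction in progress
+	 *   bits 15:1  - wrap counter
+	 *   bits 31:16 - write offset
+	 */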
+
+ /* install trace buffer */
+ osk_spinlock_irq_lock(&kctx->jctx.tb_lock);
+ kctx->jctx.tb_wrap_offset = size / 8;
+ kctx->jctx.tb = tb;
+ osk_spinlock_irq_unlock(&kctx->jctx.tb_lock);
+}
+
+void kbase_device_trace_buffer_uninstall(kbase_context * kctx)
+{
+ OSK_ASSERT(kctx);
+ osk_spinlock_irq_lock(&kctx->jctx.tb_lock);
+ kctx->jctx.tb = NULL;
+ kctx->jctx.tb_wrap_offset = 0;
+ osk_spinlock_irq_unlock(&kctx->jctx.tb_lock);
+}
+
+void kbase_device_trace_register_access(kbase_context * kctx, kbase_reg_access_type type, u16 reg_offset, u32 reg_value)
+{
+ osk_spinlock_irq_lock(&kctx->jctx.tb_lock);
+ if (kctx->jctx.tb)
+ {
+ u16 wrap_count;
+ u16 write_offset;
+		osk_atomic dummy; /* osk_atomic_set called to use memory barriers until OSK gets them */
+ u32 * tb = kctx->jctx.tb;
+ u32 header_word;
+
+ header_word = tb[1];
+ OSK_ASSERT(0 == (header_word & 0x1));
+
+ wrap_count = (header_word >> 1) & 0x7FFF;
+ write_offset = (header_word >> 16) & 0xFFFF;
+
+ /* mark as transaction in progress */
+ tb[1] |= 0x1;
+ osk_atomic_set(&dummy, 1);
+
+ /* calculate new offset */
+ write_offset++;
+ if (write_offset == kctx->jctx.tb_wrap_offset)
+ {
+ /* wrap */
+ write_offset = 1;
+ wrap_count++;
+ wrap_count &= 0x7FFF; /* 15bit wrap counter */
+ }
+
+ /* store the trace entry at the selected offset */
+ tb[write_offset * 2 + 0] = (reg_offset & ~0x3) | ((type == REG_WRITE) ? 0x1 : 0x0);
+ tb[write_offset * 2 + 1] = reg_value;
+
+ osk_atomic_set(&dummy, 1);
+
+ /* new header word */
+ header_word = (write_offset << 16) | (wrap_count << 1) | 0x0; /* transaction complete */
+ tb[1] = header_word;
+ }
+ osk_spinlock_irq_unlock(&kctx->jctx.tb_lock);
+}
+
+void kbase_reg_write(kbase_device *kbdev, u16 offset, u32 value, kbase_context * kctx)
+{
+ OSK_ASSERT(kbdev->pm.gpu_powered);
+ OSK_ASSERT(kctx==NULL || kctx->as_nr != KBASEP_AS_NR_INVALID);
+ OSK_PRINT_INFO(OSK_BASE_CORE, "w: reg %04x val %08x", offset, value);
+ kbase_os_reg_write(kbdev, offset, value);
+ if (kctx && kctx->jctx.tb) kbase_device_trace_register_access(kctx, REG_WRITE, offset, value);
+}
+KBASE_EXPORT_TEST_API(kbase_reg_write)
+
+u32 kbase_reg_read(kbase_device *kbdev, u16 offset, kbase_context * kctx)
+{
+ u32 val;
+ OSK_ASSERT(kbdev->pm.gpu_powered);
+ OSK_ASSERT(kctx==NULL || kctx->as_nr != KBASEP_AS_NR_INVALID);
+ val = kbase_os_reg_read(kbdev, offset);
+ OSK_PRINT_INFO(OSK_BASE_CORE, "r: reg %04x val %08x", offset, val);
+ if (kctx && kctx->jctx.tb) kbase_device_trace_register_access(kctx, REG_READ, offset, val);
+ return val;
+}
+KBASE_EXPORT_TEST_API(kbase_reg_read)
+
+void kbase_report_gpu_fault(kbase_device *kbdev, int multiple)
+{
+ u32 status;
+ u64 address;
+
+ status = kbase_reg_read(kbdev, GPU_CONTROL_REG(GPU_FAULTSTATUS), NULL);
+ address = (u64)kbase_reg_read(kbdev, GPU_CONTROL_REG(GPU_FAULTADDRESS_HI), NULL) << 32;
+ address |= kbase_reg_read(kbdev, GPU_CONTROL_REG(GPU_FAULTADDRESS_LO), NULL);
+
+	OSK_PRINT_WARN(OSK_BASE_CORE, "GPU Fault 0x%08x (%s) at 0x%016llx", status, kbase_exception_name(status), address);
+ if (multiple)
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "There were multiple GPU faults - some have not been reported\n");
+ }
+}
+
+void kbase_gpu_interrupt(kbase_device * kbdev, u32 val)
+{
+ if (val & GPU_FAULT)
+ {
+ kbase_report_gpu_fault(kbdev, val & MULTIPLE_GPU_FAULTS);
+ }
+
+ if (val & RESET_COMPLETED)
+ {
+ kbase_pm_reset_done(kbdev);
+ }
+
+ if (val & PRFCNT_SAMPLE_COMPLETED)
+ {
+ kbase_instr_hwcnt_sample_done(kbdev);
+ }
+
+ if (val & CLEAN_CACHES_COMPLETED)
+ {
+ kbase_clean_caches_done(kbdev);
+ }
+
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_IRQ_CLEAR), val, NULL);
+
+ /* kbase_pm_check_transitions must be called after the IRQ has been cleared. This is because it might trigger
+ * further power transitions and we don't want to miss the interrupt raised to notify us that these further
+ * transitions have finished.
+ */
+ if (val & POWER_CHANGED_ALL)
+ {
+ kbase_pm_check_transitions(kbdev);
+ }
+}
+
+
+/*
+ * Device trace functions
+ */
+#if KBASE_TRACE_ENABLE != 0
+
+STATIC mali_error kbasep_trace_init( kbase_device *kbdev )
+{
+ osk_error osk_err;
+
+ void *rbuf = osk_malloc(sizeof(kbase_trace)*KBASE_TRACE_SIZE);
+
+ kbdev->trace_rbuf = rbuf;
+ osk_err = osk_spinlock_irq_init(&kbdev->trace_lock, OSK_LOCK_ORDER_TRACE);
+
+ if (rbuf == NULL || OSK_ERR_NONE != osk_err)
+ {
+ if ( rbuf != NULL )
+ {
+ osk_free( rbuf );
+ }
+ if ( osk_err == OSK_ERR_NONE )
+ {
+ osk_spinlock_irq_term(&kbdev->trace_lock);
+ }
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ return MALI_ERROR_NONE;
+}
+
+STATIC void kbasep_trace_term( kbase_device *kbdev )
+{
+ osk_spinlock_irq_term(&kbdev->trace_lock);
+ osk_free( kbdev->trace_rbuf );
+}
+
+void kbasep_trace_dump_msg( kbase_trace *trace_msg )
+{
+ char buffer[OSK_DEBUG_MESSAGE_SIZE];
+ s32 written = 0;
+
+ /* Initial part of message */
+ written += MAX( osk_snprintf(buffer+written, MAX((int)OSK_DEBUG_MESSAGE_SIZE-written,0),
+ "%d.%.6d,%d,%d,%s,%p,%p,%.8llx,",
+ trace_msg->timestamp.tv_sec,
+ trace_msg->timestamp.tv_usec,
+ trace_msg->thread_id,
+ trace_msg->cpu,
+ kbasep_trace_code_string[trace_msg->code],
+ trace_msg->ctx,
+ trace_msg->uatom,
+ trace_msg->gpu_addr ), 0 );
+
+ /* NOTE: Could add function callbacks to handle different message types */
+ if ( (trace_msg->flags & KBASE_TRACE_FLAG_JOBSLOT) != MALI_FALSE )
+ {
+ /* Jobslot present */
+ written += MAX( osk_snprintf(buffer+written, MAX((int)OSK_DEBUG_MESSAGE_SIZE-written,0),
+ "%d", trace_msg->jobslot), 0 );
+ }
+ written += MAX( osk_snprintf(buffer+written, MAX((int)OSK_DEBUG_MESSAGE_SIZE-written,0),
+ ","), 0 );
+
+ if ( (trace_msg->flags & KBASE_TRACE_FLAG_REFCOUNT) != MALI_FALSE )
+ {
+ /* Refcount present */
+ written += MAX( osk_snprintf(buffer+written, MAX((int)OSK_DEBUG_MESSAGE_SIZE-written,0),
+ "%d", trace_msg->refcount), 0 );
+ }
+ written += MAX( osk_snprintf(buffer+written, MAX((int)OSK_DEBUG_MESSAGE_SIZE-written,0),
+	                             ","), 0 );
+
+ /* Rest of message */
+ written += MAX( osk_snprintf(buffer+written, MAX((int)OSK_DEBUG_MESSAGE_SIZE-written,0),
+ "0x%.8x", trace_msg->info_val), 0 );
+
+ OSK_PRINT( OSK_BASE_CORE, "%s", buffer );
+}
+
+void kbasep_trace_add(kbase_device *kbdev, kbase_trace_code code, void *ctx, void *uatom, u64 gpu_addr,
+ u8 flags, int refcount, int jobslot, u32 info_val )
+{
+ kbase_trace *trace_msg;
+
+ osk_spinlock_irq_lock( &kbdev->trace_lock );
+
+ trace_msg = &kbdev->trace_rbuf[kbdev->trace_next_in];
+
+ /* Fill the message */
+ osk_debug_get_thread_info( &trace_msg->thread_id, &trace_msg->cpu );
+
+ osk_gettimeofday(&trace_msg->timestamp);
+
+ trace_msg->code = code;
+ trace_msg->ctx = ctx;
+ trace_msg->uatom = uatom;
+ trace_msg->gpu_addr = gpu_addr;
+ trace_msg->jobslot = jobslot;
+	trace_msg->refcount = MIN((unsigned int)refcount, 0xFF);
+ trace_msg->info_val = info_val;
+ trace_msg->flags = flags;
+
+ /* Update the ringbuffer indices */
+ kbdev->trace_next_in = (kbdev->trace_next_in + 1) & KBASE_TRACE_MASK;
+ if ( kbdev->trace_next_in == kbdev->trace_first_out )
+ {
+ kbdev->trace_first_out = (kbdev->trace_first_out + 1) & KBASE_TRACE_MASK;
+ }
+
+ /* Done */
+
+ osk_spinlock_irq_unlock( &kbdev->trace_lock );
+}
+
+void kbasep_trace_clear(kbase_device *kbdev)
+{
+ osk_spinlock_irq_lock( &kbdev->trace_lock );
+ kbdev->trace_first_out = kbdev->trace_next_in;
+ osk_spinlock_irq_unlock( &kbdev->trace_lock );
+}
+
+void kbasep_trace_dump(kbase_device *kbdev)
+{
+ u32 start;
+ u32 end;
+
+ OSK_PRINT( OSK_BASE_CORE, "Dumping trace:\nsecs,nthread,cpu,code,ctx,uatom,gpu_addr,jobslot,refcount,info_val");
+ osk_spinlock_irq_lock( &kbdev->trace_lock );
+ start = kbdev->trace_first_out;
+ end = kbdev->trace_next_in;
+
+ while (start != end)
+ {
+ kbase_trace *trace_msg = &kbdev->trace_rbuf[start];
+ kbasep_trace_dump_msg( trace_msg );
+
+ start = (start + 1) & KBASE_TRACE_MASK;
+ }
+ OSK_PRINT( OSK_BASE_CORE, "TRACE_END");
+
+ osk_spinlock_irq_unlock( &kbdev->trace_lock );
+
+ KBASE_TRACE_CLEAR(kbdev);
+}
+
+STATIC void kbasep_trace_hook_wrapper( void *param )
+{
+ kbase_device *kbdev = (kbase_device*)param;
+ kbasep_trace_dump( kbdev );
+}
+
+#else /* KBASE_TRACE_ENABLE != 0 */
+STATIC mali_error kbasep_trace_init( kbase_device *kbdev )
+{
+ CSTD_UNUSED(kbdev);
+ return MALI_ERROR_NONE;
+}
+
+STATIC void kbasep_trace_term( kbase_device *kbdev )
+{
+ CSTD_UNUSED(kbdev);
+}
+
+STATIC void kbasep_trace_hook_wrapper( void *param )
+{
+ CSTD_UNUSED(param);
+}
+
+void kbasep_trace_add(kbase_device *kbdev, kbase_trace_code code, void *ctx, void *uatom, u64 gpu_addr,
+ u8 flags, int refcount, int jobslot, u32 info_val )
+{
+ CSTD_UNUSED(kbdev);
+ CSTD_UNUSED(code);
+ CSTD_UNUSED(ctx);
+ CSTD_UNUSED(uatom);
+ CSTD_UNUSED(gpu_addr);
+ CSTD_UNUSED(flags);
+ CSTD_UNUSED(refcount);
+ CSTD_UNUSED(jobslot);
+ CSTD_UNUSED(info_val);
+}
+
+void kbasep_trace_clear(kbase_device *kbdev)
+{
+ CSTD_UNUSED(kbdev);
+}
+
+void kbasep_trace_dump(kbase_device *kbdev)
+{
+ CSTD_UNUSED(kbdev);
+}
+#endif /* KBASE_TRACE_ENABLE != 0 */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#include <kbase/src/common/mali_kbase.h>
+
+#ifdef __KERNEL__
+#define beenthere(f, a...) pr_debug("%s:" f, __func__, ##a)
+#else
+#define beenthere(f, a...) OSK_PRINT_INFO(OSK_BASE_EVENT, "%s:" f, __func__, ##a)
+#endif
+
+STATIC void *kbase_event_process(kbase_context *ctx,
+ kbase_event *event)
+{
+ void *data;
+ void *ptr = event;
+ kbasep_js_policy *js_policy;
+
+ /*
+ * We're in the right user context, do some post processing
+ * before returning to user-mode.
+ */
+
+ OSK_ASSERT(ctx != NULL);
+ OSK_ASSERT(event->event_code);
+ js_policy = &(ctx->kbdev->js_data.policy);
+
+ if ((event->event_code & BASE_JD_SW_EVENT_TYPE_MASK) == BASE_JD_SW_EVENT_JOB)
+ {
+ kbase_jd_atom *katom = (void *)event->data;
+ /* return the offset in the ring buffer... */
+ data = (void *)((uintptr_t)katom->user_atom - (uintptr_t)ctx->jctx.pool);
+
+ if (katom->core_req & BASE_JD_REQ_EXTERNAL_RESOURCES)
+ {
+ kbase_jd_post_external_resources(katom);
+ }
+ else
+ {
+ /* perform the sync operations only on successful jobs */
+ kbase_post_job_sync(ctx,
+ base_jd_get_atom_syncset(katom->user_atom, 0),
+ katom->nr_syncsets);
+ }
+
+ if ((katom->core_req & BASE_JD_REQ_SOFT_JOB) == 0)
+ {
+ kbasep_js_policy_term_job( js_policy, ctx, katom );
+ }
+
+ ptr = katom;
+		/* As the event is an integral part of the katom, return
+		 * immediately... */
+ goto out;
+ }
+
+ if ((event->event_code & BASE_JD_SW_EVENT_TYPE_MASK) == BASE_JD_SW_EVENT_BAG)
+ {
+ ptr = CONTAINER_OF(event, kbase_jd_bag, event);
+ goto assign;
+ }
+
+assign:
+	data = (void *)event->data; /* recast to discard const */
+out:
+ osk_free(ptr);
+ return data;
+}
+
+int kbase_event_pending(kbase_context *ctx)
+{
+ int ret;
+
+ OSK_ASSERT(ctx);
+
+ osk_mutex_lock(&ctx->event_mutex);
+ ret = (MALI_FALSE == OSK_DLIST_IS_EMPTY(&ctx->event_list)) || (MALI_TRUE == ctx->event_closed);
+ osk_mutex_unlock(&ctx->event_mutex);
+
+ return ret;
+}
+KBASE_EXPORT_TEST_API(kbase_event_pending)
+
+int kbase_event_dequeue(kbase_context *ctx, base_jd_event *uevent)
+{
+ kbase_event *event;
+
+ OSK_ASSERT(ctx);
+
+ osk_mutex_lock(&ctx->event_mutex);
+
+ if (OSK_DLIST_IS_EMPTY(&ctx->event_list))
+ {
+ if (ctx->event_closed)
+ {
+ /* generate the BASE_JD_EVENT_DRV_TERMINATED message on the fly */
+ osk_mutex_unlock(&ctx->event_mutex);
+ uevent->event_code = BASE_JD_EVENT_DRV_TERMINATED;
+ uevent->data = NULL;
+ beenthere("event system closed, returning BASE_JD_EVENT_DRV_TERMINATED(0x%X)\n", BASE_JD_EVENT_DRV_TERMINATED);
+ return 0;
+ }
+ else
+ {
+ osk_mutex_unlock(&ctx->event_mutex);
+ return -1;
+ }
+ }
+
+ /* normal event processing */
+ event = OSK_DLIST_POP_FRONT(&ctx->event_list, kbase_event, entry);
+
+ osk_mutex_unlock(&ctx->event_mutex);
+
+ beenthere("event dequeuing %p\n", (void*)event);
+ uevent->event_code = event->event_code;
+ uevent->data = kbase_event_process(ctx, event);
+
+ return 0;
+}
+KBASE_EXPORT_TEST_API(kbase_event_dequeue)
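+
+/* Illustrative consumer loop (hypothetical caller, not driver code;
+ * handle_event() is a made-up name). A return of 0 yields an event, and
+ * BASE_JD_EVENT_DRV_TERMINATED signals that the context is closing:
+ *
+ *	base_jd_event uevent;
+ *	while (0 == kbase_event_dequeue(kctx, &uevent))
+ *	{
+ *		if (BASE_JD_EVENT_DRV_TERMINATED == uevent.event_code)
+ *			break;
+ *		handle_event(&uevent);
+ *	}
+ */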
+
+void kbase_event_post(kbase_context *ctx,
+ kbase_event *event)
+{
+ beenthere("event queuing %p\n", event);
+
+ OSK_ASSERT(ctx);
+
+ osk_mutex_lock(&ctx->event_mutex);
+ OSK_DLIST_PUSH_BACK(&ctx->event_list, event,
+ kbase_event, entry);
+ osk_mutex_unlock(&ctx->event_mutex);
+
+ kbase_event_wakeup(ctx);
+}
+KBASE_EXPORT_TEST_API(kbase_event_post)
+
+void kbase_event_close(kbase_context * kctx)
+{
+ osk_mutex_lock(&kctx->event_mutex);
+ kctx->event_closed = MALI_TRUE;
+ osk_mutex_unlock(&kctx->event_mutex);
+ kbase_event_wakeup(kctx);
+}
+
+mali_error kbase_event_init(kbase_context *kctx)
+{
+ osk_error osk_err;
+
+ OSK_ASSERT(kctx);
+
+ OSK_DLIST_INIT(&kctx->event_list);
+ osk_err = osk_mutex_init(&kctx->event_mutex, OSK_LOCK_ORDER_QUEUE);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ kctx->event_closed = MALI_FALSE;
+ return MALI_ERROR_NONE;
+}
+KBASE_EXPORT_TEST_API(kbase_event_init)
+
+void kbase_event_cleanup(kbase_context *kctx)
+{
+ OSK_ASSERT(kctx);
+
+ osk_mutex_lock(&kctx->event_mutex);
+ while (!OSK_DLIST_IS_EMPTY(&kctx->event_list))
+ {
+ kbase_event *event;
+ event = OSK_DLIST_POP_FRONT(&kctx->event_list,
+ kbase_event, entry);
+ beenthere("event dropping %p\n", event);
+ osk_free(event);
+ }
+ osk_mutex_unlock(&kctx->event_mutex);
+ osk_mutex_term(&kctx->event_mutex);
+}
+KBASE_EXPORT_TEST_API(kbase_event_cleanup)
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef MALI_GATOR_SUPPORT
+#define MALI_GATOR_SUPPORT 0
+#endif
+
+#if MALI_GATOR_SUPPORT
+#define GATOR_MAKE_EVENT(type,number) (((type) << 24) | ((number) << 16))
+#define GATOR_JOB_SLOT_START 1
+#define GATOR_JOB_SLOT_STOP 2
+#define GATOR_JOB_SLOT_SOFT_STOPPED 3
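+/* Worked example (for illustration): GATOR_MAKE_EVENT packs the event type
+ * into bits 31:24 and the job slot number into bits 23:16, so
+ * GATOR_MAKE_EVENT(GATOR_JOB_SLOT_START, 2) yields 0x01020000. */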
+void kbase_trace_mali_job_slots_event(u32 event);
+void kbase_trace_mali_pm_status(u32 event, u64 value);
+void kbase_trace_mali_pm_power_off(u32 event, u64 value);
+void kbase_trace_mali_pm_power_on(u32 event, u64 value);
+void kbase_trace_mali_page_fault_insert_pages(int event, u32 value);
+void kbase_trace_mali_mmu_as_in_use(int event);
+void kbase_trace_mali_mmu_as_released(int event);
+void kbase_trace_mali_total_alloc_pages_change(long long int event);
+#endif
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_gpuprops.c
+ * Base kernel property query APIs
+ */
+
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_midg_regmap.h>
+#include <kbase/src/common/mali_kbase_gpuprops.h>
+
+/**
+ * @brief Extracts bits from a 32-bit bitfield.
+ * @hideinitializer
+ *
+ * @param[in] value The value from which to extract bits.
+ * @param[in] offset The first bit to extract (0 being the LSB).
+ * @param[in] size The number of bits to extract.
+ * @return Bits [@a offset, @a offset + @a size) from @a value.
+ *
+ * @pre offset + size <= 32.
+ */
+/* from mali_cdsb.h */
+#define KBASE_UBFX32(value, offset, size) \
+ (((u32)(value) >> (u32)(offset)) & (u32)((1ULL << (u32)(size)) - 1))
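+
+/* For example, KBASE_UBFX32(0xCAFE0000, 16U, 16) extracts bits [16,32) and
+ * yields 0xCAFE; the macro is used below to split GPU_ID into its
+ * version/revision/product fields. */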
+
+mali_error kbase_gpuprops_uk_get_props(kbase_context *kctx, kbase_uk_gpuprops * kbase_props)
+{
+ OSK_ASSERT(NULL != kctx);
+ OSK_ASSERT(NULL != kbase_props);
+
+ OSK_MEMCPY(&kbase_props->props, &kctx->kbdev->gpu_props.props, sizeof(kbase_props->props));
+
+ return MALI_ERROR_NONE;
+}
+
+STATIC void kbase_gpuprops_dump_registers(kbase_device * kbdev, kbase_gpuprops_regdump * regdump)
+{
+ int i;
+
+ OSK_ASSERT(NULL != kbdev);
+ OSK_ASSERT(NULL != regdump);
+
+ /* Fill regdump with the content of the relevant registers */
+ regdump->gpu_id = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(GPU_ID));
+ regdump->l2_features = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(L2_FEATURES));
+ regdump->l3_features = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(L3_FEATURES));
+ regdump->tiler_features = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(TILER_FEATURES));
+ regdump->mem_features = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(MEM_FEATURES));
+ regdump->mmu_features = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(MMU_FEATURES));
+ regdump->as_present = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(AS_PRESENT));
+ regdump->js_present = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(JS_PRESENT));
+
+ for(i = 0; i < MIDG_MAX_JOB_SLOTS; i++)
+ {
+ regdump->js_features[i] = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(JS_FEATURES_REG(i)));
+ }
+
+ for(i = 0; i < BASE_GPU_NUM_TEXTURE_FEATURES_REGISTERS; i++)
+ {
+ regdump->texture_features[i] = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(TEXTURE_FEATURES_REG(i)));
+ }
+
+ regdump->shader_present_lo = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(SHADER_PRESENT_LO));
+ regdump->shader_present_hi = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(SHADER_PRESENT_HI));
+
+ regdump->tiler_present_lo = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(TILER_PRESENT_LO));
+ regdump->tiler_present_hi = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(TILER_PRESENT_HI));
+
+ regdump->l2_present_lo = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(L2_PRESENT_LO));
+ regdump->l2_present_hi = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(L2_PRESENT_HI));
+
+ regdump->l3_present_lo = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(L3_PRESENT_LO));
+ regdump->l3_present_hi = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(L3_PRESENT_HI));
+}
+
+STATIC void kbase_gpuprops_construct_coherent_groups(base_gpu_props * const props)
+{
+ struct mali_base_gpu_coherent_group *current_group;
+ u64 group_present;
+ u64 group_mask;
+ u64 first_set, first_set_prev;
+ u32 num_groups = 0;
+
+ OSK_ASSERT(NULL != props);
+
+ props->coherency_info.coherency = props->raw_props.mem_features;
+ props->coherency_info.num_core_groups = osk_count_set_bits64(props->raw_props.l2_present);
+
+ if (props->coherency_info.coherency & GROUPS_L3_COHERENT)
+ {
+ /* Group is l3 coherent */
+ group_present = props->raw_props.l3_present;
+ }
+ else if (props->coherency_info.coherency & GROUPS_L2_COHERENT)
+ {
+ /* Group is l2 coherent */
+ group_present = props->raw_props.l2_present;
+ }
+ else
+ {
+ /* Group is l1 coherent */
+ group_present = props->raw_props.shader_present;
+ }
+
+ /*
+ * The coherent group mask can be computed from the l2/l3 present
+ * register.
+ *
+ * For the coherent group n:
+ * group_mask[n] = (first_set[n] - 1) & ~(first_set[n-1] - 1)
+ * where first_set is group_present with only its nth set-bit kept
+ * (i.e. the position from where a new group starts).
+ *
+ * For instance if the groups are l2 coherent and l2_present=0x0..01111:
+ * The first mask is:
+ * group_mask[1] = (first_set[1] - 1) & ~(first_set[0] - 1)
+ * = (0x0..010 - 1) & ~(0x0..01 - 1)
+ * = 0x0..00f
+ * The second mask is:
+ * group_mask[2] = (first_set[2] - 1) & ~(first_set[1] - 1)
+ * = (0x0..100 - 1) & ~(0x0..010 - 1)
+ * = 0x0..0f0
+ * And so on until all the bits from group_present have been cleared
+ * (i.e. there is no group left).
+ */
+
+ current_group = props->coherency_info.group;
+ first_set = group_present & ~(group_present - 1);
+
+ while (group_present != 0 && num_groups < BASE_MAX_COHERENT_GROUPS)
+ {
+ group_present -= first_set; /* Clear the current group bit */
+ first_set_prev = first_set;
+
+ first_set = group_present & ~(group_present - 1);
+ group_mask = (first_set - 1) & ~(first_set_prev - 1);
+
+ /* Populate the coherent_group structure for each group */
+ current_group->core_mask = group_mask & props->raw_props.shader_present;
+ current_group->num_cores = osk_count_set_bits64(current_group->core_mask);
+
+ num_groups++;
+ current_group++;
+ }
+
+ if (group_present != 0)
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Too many coherent groups (keeping only %d groups).\n", BASE_MAX_COHERENT_GROUPS);
+ }
+
+ props->coherency_info.num_groups = num_groups;
+}
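+
+/*
+ * Illustrative sketch, not part of this patch: a standalone version of the
+ * bit trick used above. example_next_group_mask() is a hypothetical helper
+ * that consumes the lowest group-start bit of *group_present and returns the
+ * mask of cores in that group. For group_present = 0x11 (bits 0 and 4 set)
+ * the first call returns 0x0f and the second returns all remaining upper
+ * bits, exactly as in the loop above.
+ */
+static u64 example_next_group_mask(u64 *group_present)
+{
+	/* Keep only the lowest set bit: the start of the current group */
+	u64 first_set = *group_present & ~(*group_present - 1);
+	u64 next_first_set;
+
+	*group_present -= first_set; /* clear the current group bit */
+	next_first_set = *group_present & ~(*group_present - 1);
+
+	/* All bits from this group's start up to the next group's start */
+	return (next_first_set - 1) & ~(first_set - 1);
+}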
+
+/**
+ * @brief Get the GPU configuration
+ *
+ * Fill the base_gpu_props structure with values from the GPU configuration registers
+ *
+ * @param gpu_props The base_gpu_props structure
+ * @param kbdev The kbase_device structure for the device
+ */
+static void kbase_gpuprops_get_props(base_gpu_props * gpu_props, kbase_device * kbdev)
+{
+ kbase_gpuprops_regdump regdump;
+ int i;
+
+ OSK_ASSERT(NULL != kbdev);
+ OSK_ASSERT(NULL != gpu_props);
+
+ /* Dump relevant registers */
+ kbase_gpuprops_dump_registers(kbdev, &regdump);
+
+ /* Populate the base_gpu_props structure */
+ gpu_props->core_props.version_status = KBASE_UBFX32(regdump.gpu_id, 0U, 4);
+ gpu_props->core_props.minor_revision = KBASE_UBFX32(regdump.gpu_id, 4U, 8);
+ gpu_props->core_props.major_revision = KBASE_UBFX32(regdump.gpu_id, 12U, 4);
+ gpu_props->core_props.product_id = KBASE_UBFX32(regdump.gpu_id, 16U, 16);
+ gpu_props->core_props.log2_program_counter_size = KBASE_GPU_PC_SIZE_LOG2;
+ gpu_props->core_props.gpu_speed_mhz = KBASE_GPU_SPEED_MHZ;
+ gpu_props->core_props.gpu_available_memory_size = OSK_MEM_PAGES << OSK_PAGE_SHIFT;
+
+ for(i = 0; i < BASE_GPU_NUM_TEXTURE_FEATURES_REGISTERS; i++)
+ {
+ gpu_props->core_props.texture_features[i] = regdump.texture_features[i];
+ }
+
+ gpu_props->l2_props.log2_line_size = KBASE_UBFX32(regdump.l2_features, 0U, 8);
+ gpu_props->l2_props.log2_cache_size = KBASE_UBFX32(regdump.l2_features, 16U, 8);
+
+ gpu_props->l3_props.log2_line_size = KBASE_UBFX32(regdump.l3_features, 0U, 8);
+ gpu_props->l3_props.log2_cache_size = KBASE_UBFX32(regdump.l3_features, 16U, 8);
+
+ gpu_props->tiler_props.bin_size_bytes = 1 << KBASE_UBFX32(regdump.tiler_features, 0U, 6);
+ gpu_props->tiler_props.max_active_levels = KBASE_UBFX32(regdump.tiler_features, 8U, 4);
+
+ gpu_props->raw_props.gpu_id = regdump.gpu_id;
+ gpu_props->raw_props.tiler_features = regdump.tiler_features;
+ gpu_props->raw_props.mem_features = regdump.mem_features;
+ gpu_props->raw_props.mmu_features = regdump.mmu_features;
+ gpu_props->raw_props.l2_features = regdump.l2_features;
+ gpu_props->raw_props.l3_features = regdump.l3_features;
+
+ gpu_props->raw_props.as_present = regdump.as_present;
+ gpu_props->raw_props.js_present = regdump.js_present;
+ gpu_props->raw_props.shader_present = ((u64)regdump.shader_present_hi << 32) + regdump.shader_present_lo;
+ gpu_props->raw_props.tiler_present = ((u64)regdump.tiler_present_hi << 32) + regdump.tiler_present_lo;
+ gpu_props->raw_props.l2_present = ((u64)regdump.l2_present_hi << 32) + regdump.l2_present_lo;
+ gpu_props->raw_props.l3_present = ((u64)regdump.l3_present_hi << 32) + regdump.l3_present_lo;
+
+ for(i = 0; i < MIDG_MAX_JOB_SLOTS; i++)
+ {
+ gpu_props->raw_props.js_features[i] = regdump.js_features[i];
+ }
+
+ /* Initialize the coherent_group structure for each group */
+ kbase_gpuprops_construct_coherent_groups(gpu_props);
+}
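+
+/*
+ * Illustrative sketch, not part of this patch: KBASE_UBFX32 above is assumed
+ * to be an unsigned bit-field extract, returning 'size' bits of 'value'
+ * starting at bit 'offset'. A plain-C equivalent (hypothetical name) is:
+ */
+static u32 example_ubfx32(u32 value, u32 offset, u32 size)
+{
+	/* e.g. example_ubfx32(regdump.gpu_id, 16, 16) yields the product ID */
+	return (value >> offset) & (u32)((1ULL << size) - 1);
+}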
+
+void kbase_gpuprops_set(kbase_device *kbdev)
+{
+ kbase_gpu_props *gpu_props;
+ struct midg_raw_gpu_props *raw;
+
+ OSK_ASSERT(NULL != kbdev);
+ gpu_props = &kbdev->gpu_props;
+ raw = &gpu_props->props.raw_props;
+
+ /* Initialize the base_gpu_props structure */
+ kbase_gpuprops_get_props(&gpu_props->props, kbdev);
+
+ /* Populate kbase-only fields */
+ gpu_props->l2_props.associativity = KBASE_UBFX32(raw->l2_features, 8U, 8);
+ gpu_props->l2_props.external_bus_width = KBASE_UBFX32(raw->l2_features, 24U, 8);
+
+ gpu_props->l3_props.associativity = KBASE_UBFX32(raw->l3_features, 8U, 8);
+ gpu_props->l3_props.external_bus_width = KBASE_UBFX32(raw->l3_features, 24U, 8);
+
+ gpu_props->mem.core_group = KBASE_UBFX32(raw->mem_features, 0U, 1);
+ gpu_props->mem.supergroup = KBASE_UBFX32(raw->mem_features, 1U, 1);
+
+ gpu_props->mmu.va_bits = KBASE_UBFX32(raw->mmu_features, 0U, 8);
+ gpu_props->mmu.pa_bits = KBASE_UBFX32(raw->mmu_features, 8U, 8);
+
+ gpu_props->num_cores = osk_count_set_bits64(raw->shader_present);
+ gpu_props->num_core_groups = osk_count_set_bits64(raw->l2_present);
+ gpu_props->num_supergroups = osk_count_set_bits64(raw->l3_present);
+ gpu_props->num_address_spaces = osk_count_set_bits(raw->as_present);
+ gpu_props->num_job_slots = osk_count_set_bits(raw->js_present);
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_gpuprops.h
+ * Base kernel property query APIs
+ */
+
+#ifndef _KBASE_GPUPROPS_H_
+#define _KBASE_GPUPROPS_H_
+
+#include "mali_kbase_gpuprops_types.h"
+
+/* Forward definition - see mali_kbase.h */
+struct kbase_device;
+struct kbase_context;
+
+/**
+ * @brief Set up Kbase GPU properties.
+ *
+ * Set up Kbase GPU properties with information from the GPU registers
+ *
+ * @param kbdev The kbase_device structure for the device
+ */
+void kbase_gpuprops_set(struct kbase_device *kbdev);
+
+/**
+ * @brief Provide GPU properties to userside through UKU call.
+ *
+ * Fill the kbase_uk_gpuprops with values from GPU configuration registers.
+ *
+ * @param kctx The kbase_context structure
+ * @param kbase_props A copy of the kbase_uk_gpuprops structure from userspace
+ *
+ * @return MALI_ERROR_NONE on success. Any other value indicates failure.
+ */
+mali_error kbase_gpuprops_uk_get_props(struct kbase_context *kctx, kbase_uk_gpuprops * kbase_props);
+
+
+#endif /* _KBASE_GPUPROPS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_gpuprops_types.h
+ * Base kernel property query APIs
+ */
+
+#ifndef _KBASE_GPUPROPS_TYPES_H_
+#define _KBASE_GPUPROPS_TYPES_H_
+
+#include <kbase/mali_base_kernel.h>
+
+#define KBASE_GPU_SPEED_MHZ 123
+#define KBASE_GPU_PC_SIZE_LOG2 16U
+
+typedef struct kbase_gpuprops_regdump
+{
+ u32 gpu_id;
+ u32 l2_features;
+ u32 l3_features;
+ u32 tiler_features;
+ u32 mem_features;
+ u32 mmu_features;
+ u32 as_present;
+ u32 js_present;
+
+ u32 js_features[MIDG_MAX_JOB_SLOTS];
+
+ u32 texture_features[BASE_GPU_NUM_TEXTURE_FEATURES_REGISTERS];
+
+ u32 shader_present_lo;
+ u32 shader_present_hi;
+
+ u32 tiler_present_lo;
+ u32 tiler_present_hi;
+
+ u32 l2_present_lo;
+ u32 l2_present_hi;
+
+ u32 l3_present_lo;
+ u32 l3_present_hi;
+} kbase_gpuprops_regdump;
+
+typedef struct kbase_gpu_cache_props
+{
+ u8 associativity;
+ u8 external_bus_width;
+} kbase_gpu_cache_props;
+
+typedef struct kbase_gpu_mem_props
+{
+ u8 core_group;
+ u8 supergroup;
+} kbase_gpu_mem_props;
+
+typedef struct kbase_gpu_mmu_props
+{
+ u8 va_bits;
+ u8 pa_bits;
+} kbase_gpu_mmu_props;
+
+typedef struct mali_kbase_gpu_props
+{
+ /* kernel-only properties */
+ u8 num_cores;
+ u8 num_core_groups;
+ u8 num_supergroups;
+ u8 num_address_spaces;
+ u8 num_job_slots;
+
+ kbase_gpu_cache_props l2_props;
+ kbase_gpu_cache_props l3_props;
+
+ kbase_gpu_mem_props mem;
+ kbase_gpu_mmu_props mmu;
+
+ /**
+ * Implementation specific irq throttle value (us), should be adjusted during integration.
+ */
+ u32 irq_throttle_time_us;
+
+ /* Properties shared with userspace */
+ base_gpu_props props;
+} kbase_gpu_props;
+
+
+
+#endif /* _KBASE_GPUPROPS_TYPES_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Run-time work-arounds helpers
+ */
+
+#include <kbase/mali_base_hwconfig.h>
+#include <kbase/src/common/mali_midg_regmap.h>
+#include "mali_kbase.h"
+#include "mali_kbase_hw.h"
+
+mali_error kbase_hw_set_issues_mask(kbase_device *kbdev)
+{
+ const base_hw_issue *issues;
+
+#if MALI_BACKEND_KERNEL || MALI_NO_MALI
+ u32 gpu_id = kbase_os_reg_read(kbdev, GPU_CONTROL_REG(GPU_ID));
+
+ switch (gpu_id)
+ {
+ case GPU_ID_MAKE(GPU_ID_PI_T60X, 0, 0, GPU_ID_S_15DEV0):
+ case GPU_ID_MAKE(GPU_ID_PI_T65X, 0, 0, GPU_ID_S_15DEV0):
+ issues = base_hw_issues_t60x_t65x_r0p0_15dev0;
+ break;
+ case GPU_ID_MAKE(GPU_ID_PI_T60X, 0, 0, GPU_ID_S_EAC):
+ case GPU_ID_MAKE(GPU_ID_PI_T65X, 0, 0, GPU_ID_S_EAC):
+ issues = base_hw_issues_t60x_t65x_r0p0_eac;
+ break;
+ case GPU_ID_MAKE(GPU_ID_PI_T65X, 0, 1, 0):
+ issues = base_hw_issues_t65x_r0p1;
+ break;
+ case GPU_ID_MAKE(GPU_ID_PI_T60X, 1, 0, 0):
+ case GPU_ID_MAKE(GPU_ID_PI_T65X, 1, 0, 0):
+ issues = base_hw_issues_t60x_t65x_r1p0;
+ break;
+ case GPU_ID_MAKE(GPU_ID_PI_T62X, 0, 0, 0):
+ issues = base_hw_issues_t62x_r0p0;
+ break;
+ case GPU_ID_MAKE(GPU_ID_PI_T67X, 0, 0, 0):
+ issues = base_hw_issues_t67x_r0p0;
+ break;
+ default:
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Unknown GPU ID %x", gpu_id);
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ OSK_PRINT_INFO(OSK_BASE_CORE, "GPU identified as 0x%04x r%dp%d status %d",
+ (gpu_id & GPU_ID_VERSION_PRODUCT_ID) >> GPU_ID_VERSION_PRODUCT_ID_SHIFT,
+ (gpu_id & GPU_ID_VERSION_MAJOR) >> GPU_ID_VERSION_MAJOR_SHIFT,
+ (gpu_id & GPU_ID_VERSION_MINOR) >> GPU_ID_VERSION_MINOR_SHIFT,
+ (gpu_id & GPU_ID_VERSION_STATUS) >> GPU_ID_VERSION_STATUS_SHIFT);
+#else
+ /* Whether the model is in use is only known at compile-time */
+ issues = base_hw_issues_model;
+#endif
+
+ for (; *issues != BASE_HW_ISSUE_END; issues++)
+ {
+ osk_bitarray_set_bit(*issues, &kbdev->hw_issues_mask[0]);
+ }
+
+ return MALI_ERROR_NONE;
+}
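+
+/*
+ * Illustrative sketch, not part of this patch: assuming GPU_ID uses the same
+ * field layout as the gpuprops decode (status in bits 0..3, minor revision
+ * in bits 4..11, major revision in bits 12..15, product ID in bits 16..31),
+ * a raw ID of 0x69560010 decodes as product 0x6956, r0p1, status 0.
+ */
+static void example_decode_gpu_id(u32 gpu_id, u32 *product, u32 *major, u32 *minor, u32 *status)
+{
+	*status  = gpu_id & 0xF;         /* bits 0..3   */
+	*minor   = (gpu_id >> 4) & 0xFF; /* bits 4..11  */
+	*major   = (gpu_id >> 12) & 0xF; /* bits 12..15 */
+	*product = gpu_id >> 16;         /* bits 16..31 */
+}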
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Run-time work-arounds helpers
+ */
+
+#ifndef _KBASE_HW_H_
+#define _KBASE_HW_H_
+
+#include <osk/mali_osk.h>
+#include "mali_kbase_defs.h"
+
+/**
+ * @brief Tell whether a work-around should be enabled
+ */
+#define kbase_hw_has_issue(kbdev, issue)\
+ osk_bitarray_test_bit(issue, &(kbdev)->hw_issues_mask[0])
+
+/**
+ * @brief Set the HW issues mask depending on the GPU ID
+ */
+mali_error kbase_hw_set_issues_mask(kbase_device *kbdev);
+
+#endif /* _KBASE_HW_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_instr.c
+ * Base kernel instrumentation APIs.
+ */
+
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_midg_regmap.h>
+
+/**
+ * @brief Issue Cache Clean & Invalidate command to hardware
+ */
+static void kbasep_instr_hwcnt_cacheclean(kbase_device *kbdev)
+{
+ u32 irq_mask;
+
+ OSK_ASSERT(NULL != kbdev);
+
+ /* Enable interrupt */
+ irq_mask = kbase_reg_read(kbdev, GPU_CONTROL_REG(GPU_IRQ_MASK), NULL);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_IRQ_MASK), irq_mask | CLEAN_CACHES_COMPLETED, NULL);
+ /* Clean & invalidate the caches so we're sure the MMU tables for the dump buffer are valid */
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_COMMAND), GPU_COMMAND_CLEAN_INV_CACHES, NULL);
+}
+
+/**
+ * @brief Enable HW counters collection
+ *
+ * Note: will wait for a cache clean to complete
+ */
+mali_error kbase_instr_hwcnt_enable(kbase_context * kctx, kbase_uk_hwcnt_setup * setup)
+{
+ mali_error err = MALI_ERROR_FUNCTION_FAILED;
+ kbasep_js_device_data *js_devdata;
+ mali_bool access_allowed;
+ u32 irq_mask;
+ kbase_device *kbdev;
+
+ OSK_ASSERT(NULL != kctx);
+ kbdev = kctx->kbdev;
+ OSK_ASSERT(NULL != kbdev);
+ OSK_ASSERT(NULL != setup);
+
+ js_devdata = &kbdev->js_data;
+ OSK_ASSERT(NULL != js_devdata);
+
+ /* Determine if the calling task has access to this capability */
+ access_allowed = kbase_security_has_capability(kctx, KBASE_SEC_INSTR_HW_COUNTERS_COLLECT, KBASE_SEC_FLAG_NOAUDIT);
+ if (MALI_FALSE == access_allowed)
+ {
+ goto out;
+ }
+
+ if ((setup->dump_buffer == 0ULL) ||
+ (setup->dump_buffer & (2048-1)))
+ {
+ /* dump buffer is NULL or not 2048-byte aligned */
+ goto out;
+ }
+
+
+ /* Mark the context as active so the GPU is kept turned on */
+ kbase_pm_context_active(kbdev);
+
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+
+ if (kbdev->hwcnt.state == KBASE_INSTR_STATE_RESETTING)
+ {
+ /* GPU is being reset */
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+ osk_waitq_wait(&kbdev->hwcnt.waitqueue);
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+ }
+
+
+ if (kbdev->hwcnt.state != KBASE_INSTR_STATE_DISABLED)
+ {
+ /* Instrumentation is already enabled */
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+ kbase_pm_context_idle(kbdev);
+ goto out;
+ }
+
+ /* Enable interrupt */
+ irq_mask = kbase_reg_read(kbdev, GPU_CONTROL_REG(GPU_IRQ_MASK), NULL);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_IRQ_MASK), irq_mask | PRFCNT_SAMPLE_COMPLETED, NULL);
+
+ /* In use, this context is the owner */
+ kbdev->hwcnt.kctx = kctx;
+ /* Remember the dump address so we can reprogram it later */
+ kbdev->hwcnt.addr = setup->dump_buffer;
+
+ /* Precleaning so that state does not transition to IDLE */
+ kbdev->hwcnt.state = KBASE_INSTR_STATE_PRECLEANING;
+ osk_waitq_clear(&kbdev->hwcnt.waitqueue);
+
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+
+ /* Clean & invalidate the caches so we're sure the MMU tables for the dump buffer are valid */
+ kbasep_instr_hwcnt_cacheclean(kbdev);
+ /* Wait for cacheclean to complete */
+ osk_waitq_wait(&kbdev->hwcnt.waitqueue);
+ OSK_ASSERT(kbdev->hwcnt.state == KBASE_INSTR_STATE_CLEANED);
+
+ /* Schedule the context in */
+ kbasep_js_schedule_privileged_ctx(kbdev, kctx);
+
+ /* Configure */
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_BASE_LO), setup->dump_buffer & 0xFFFFFFFF, kctx);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_BASE_HI), setup->dump_buffer >> 32, kctx);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_JM_EN), setup->jm_bm, kctx);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_SHADER_EN), setup->shader_bm, kctx);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_L3_CACHE_EN), setup->l3_cache_bm, kctx);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_MMU_L2_EN), setup->mmu_l2_bm, kctx);
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8186))
+ {
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_TILER_EN), 0, kctx);
+ }
+
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_CONFIG), (kctx->as_nr << PRFCNT_CONFIG_AS_SHIFT) | PRFCNT_CONFIG_MODE_MANUAL, kctx);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_TILER_EN), setup->tiler_bm, kctx);
+
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+
+ if (kbdev->hwcnt.state == KBASE_INSTR_STATE_RESETTING)
+ {
+ /* GPU is being reset */
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+ osk_waitq_wait(&kbdev->hwcnt.waitqueue);
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+ }
+
+ kbdev->hwcnt.state = KBASE_INSTR_STATE_IDLE;
+ osk_waitq_set(&kbdev->hwcnt.waitqueue);
+
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+
+ err = MALI_ERROR_NONE;
+
+ OSK_PRINT_INFO( OSK_BASE_CORE, "HW counters dumping set-up for context %p", kctx);
+
+out:
+ return err;
+}
+KBASE_EXPORT_SYMBOL(kbase_instr_hwcnt_enable)
+
+/**
+ * @brief Disable HW counters collection
+ *
+ * Note: might sleep, waiting for an ongoing dump to complete
+ */
+mali_error kbase_instr_hwcnt_disable(kbase_context * kctx)
+{
+ mali_error err = MALI_ERROR_FUNCTION_FAILED;
+ u32 irq_mask;
+ kbase_device *kbdev;
+
+ OSK_ASSERT(NULL != kctx);
+ kbdev = kctx->kbdev;
+ OSK_ASSERT(NULL != kbdev);
+
+ while (1)
+ {
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+
+ if (kbdev->hwcnt.state == KBASE_INSTR_STATE_DISABLED)
+ {
+ /* Instrumentation is not enabled */
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+ goto out;
+ }
+
+ if (kbdev->hwcnt.kctx != kctx)
+ {
+ /* Instrumentation has been set up for another context */
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+ goto out;
+ }
+
+ if (kbdev->hwcnt.state == KBASE_INSTR_STATE_IDLE)
+ {
+ break;
+ }
+
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+
+ /* Ongoing dump/setup - wait for its completion */
+ osk_waitq_wait(&kbdev->hwcnt.waitqueue);
+ }
+
+ kbdev->hwcnt.state = KBASE_INSTR_STATE_DISABLED;
+ osk_waitq_clear(&kbdev->hwcnt.waitqueue);
+
+ /* Disable interrupt */
+ irq_mask = kbase_reg_read(kbdev, GPU_CONTROL_REG(GPU_IRQ_MASK), NULL);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_IRQ_MASK), irq_mask & ~PRFCNT_SAMPLE_COMPLETED, NULL);
+
+ /* Disable the counters */
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_CONFIG), 0, kctx);
+
+ kbdev->hwcnt.kctx = NULL;
+ kbdev->hwcnt.addr = 0ULL;
+
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+
+ /* Release the context, this implicitly (and indirectly) calls kbase_pm_context_idle */
+ kbasep_js_release_privileged_ctx(kbdev, kctx);
+
+ OSK_PRINT_INFO( OSK_BASE_CORE, "HW counters dumping disabled for context %p", kctx);
+
+ err = MALI_ERROR_NONE;
+
+out:
+ return err;
+}
+KBASE_EXPORT_SYMBOL(kbase_instr_hwcnt_disable)
+
+/**
+ * @brief Configure HW counters collection
+ */
+mali_error kbase_instr_hwcnt_setup(kbase_context * kctx, kbase_uk_hwcnt_setup * setup)
+{
+ mali_error err = MALI_ERROR_FUNCTION_FAILED;
+ kbase_device *kbdev;
+
+ OSK_ASSERT(NULL != kctx);
+
+ kbdev = kctx->kbdev;
+ OSK_ASSERT(NULL != kbdev);
+
+ if (NULL == setup)
+ {
+ /* Bad parameter - abort */
+ goto out;
+ }
+
+ if (setup->dump_buffer != 0ULL)
+ {
+ /* Enable HW counters */
+ err = kbase_instr_hwcnt_enable(kctx, setup);
+ }
+ else
+ {
+ /* Disable HW counters */
+ err = kbase_instr_hwcnt_disable(kctx);
+ }
+
+out:
+ return err;
+}
+
+/**
+ * @brief Issue Dump command to hardware
+ */
+mali_error kbase_instr_hwcnt_dump_irq(kbase_context * kctx)
+{
+ mali_error err = MALI_ERROR_FUNCTION_FAILED;
+ kbase_device *kbdev;
+
+ OSK_ASSERT(NULL != kctx);
+ kbdev = kctx->kbdev;
+ OSK_ASSERT(NULL != kbdev);
+
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+
+ OSK_ASSERT(kbdev->hwcnt.state != KBASE_INSTR_STATE_RESETTING);
+
+ if (kbdev->hwcnt.kctx != kctx)
+ {
+ /* The instrumentation has been set up for another context */
+ goto unlock;
+ }
+
+ if (kbdev->hwcnt.state != KBASE_INSTR_STATE_IDLE)
+ {
+ /* HW counters are disabled or another dump is ongoing */
+ goto unlock;
+ }
+
+ osk_waitq_clear(&kbdev->hwcnt.waitqueue);
+
+ /* Mark that we're dumping - the PF handler can signal that we faulted */
+ kbdev->hwcnt.state = KBASE_INSTR_STATE_DUMPING;
+
+ /* Reconfigure the dump address */
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_BASE_LO), kbdev->hwcnt.addr & 0xFFFFFFFF, NULL);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_BASE_HI), kbdev->hwcnt.addr >> 32, NULL);
+
+ /* Start dumping */
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_COMMAND), GPU_COMMAND_PRFCNT_SAMPLE, kctx);
+
+ OSK_PRINT_INFO( OSK_BASE_CORE, "HW counters dumping done for context %p", kctx);
+
+ err = MALI_ERROR_NONE;
+
+unlock:
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+ return err;
+}
+KBASE_EXPORT_SYMBOL(kbase_instr_hwcnt_dump_irq)
+
+/**
+ * @brief Tell whether the HW counters dump has completed
+ *
+ * Notes:
+ * - does not sleep
+ * - success will be set to MALI_TRUE if the dump succeeded or
+ * MALI_FALSE on failure
+ */
+mali_bool kbase_instr_hwcnt_dump_complete(kbase_context * kctx, mali_bool *success)
+{
+ mali_bool complete = MALI_FALSE;
+ kbase_device *kbdev;
+
+ OSK_ASSERT(NULL != kctx);
+ kbdev = kctx->kbdev;
+ OSK_ASSERT(NULL != kbdev);
+ OSK_ASSERT(NULL != success);
+
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+
+ if (kbdev->hwcnt.state == KBASE_INSTR_STATE_IDLE)
+ {
+ *success = MALI_TRUE;
+ complete = MALI_TRUE;
+ }
+ else if (kbdev->hwcnt.state == KBASE_INSTR_STATE_FAULT)
+ {
+ *success = MALI_FALSE;
+ complete = MALI_TRUE;
+ kbdev->hwcnt.state = KBASE_INSTR_STATE_IDLE;
+ }
+
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+
+ return complete;
+}
+KBASE_EXPORT_SYMBOL(kbase_instr_hwcnt_dump_complete)
+
+/**
+ * @brief Issue Dump command to hardware and wait for completion
+ */
+mali_error kbase_instr_hwcnt_dump(kbase_context * kctx)
+{
+ mali_error err = MALI_ERROR_FUNCTION_FAILED;
+ kbase_device *kbdev;
+
+ OSK_ASSERT(NULL != kctx);
+ kbdev = kctx->kbdev;
+ OSK_ASSERT(NULL != kbdev);
+
+ err = kbase_instr_hwcnt_dump_irq(kctx);
+ if (MALI_ERROR_NONE != err)
+ {
+ /* Can't dump HW counters */
+ goto out;
+ }
+
+ /* Wait for dump & cacheclean to complete */
+ osk_waitq_wait(&kbdev->hwcnt.waitqueue);
+
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+
+ if (kbdev->hwcnt.state == KBASE_INSTR_STATE_RESETTING)
+ {
+ /* GPU is being reset */
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+ osk_waitq_wait(&kbdev->hwcnt.waitqueue);
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+ }
+
+ if (kbdev->hwcnt.state == KBASE_INSTR_STATE_FAULT)
+ {
+ err = MALI_ERROR_FUNCTION_FAILED;
+ kbdev->hwcnt.state = KBASE_INSTR_STATE_IDLE;
+ }
+ else
+ {
+ /* Dump done */
+ OSK_ASSERT(kbdev->hwcnt.state == KBASE_INSTR_STATE_IDLE);
+ err = MALI_ERROR_NONE;
+ }
+
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+out:
+ return err;
+}
+KBASE_EXPORT_SYMBOL(kbase_instr_hwcnt_dump)
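+
+/*
+ * Illustrative usage sketch, not part of this patch: a typical blocking
+ * counter-collection sequence from code that owns a context would be
+ * (error handling omitted, field values hypothetical):
+ *
+ *	kbase_uk_hwcnt_setup setup;
+ *
+ *	setup.dump_buffer = gpu_va;		// non-zero, 2048-byte aligned
+ *	setup.jm_bm = setup.shader_bm = ~0u;	// counter block enable bitmaps
+ *	setup.tiler_bm = ~0u;
+ *	setup.mmu_l2_bm = setup.l3_cache_bm = ~0u;
+ *
+ *	kbase_instr_hwcnt_setup(kctx, &setup);	// non-zero buffer: enable
+ *	kbase_instr_hwcnt_dump(kctx);		// dump and wait for completion
+ *	setup.dump_buffer = 0;
+ *	kbase_instr_hwcnt_setup(kctx, &setup);	// zero buffer: disable
+ */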
+
+/**
+ * @brief Clear the HW counters
+ */
+mali_error kbase_instr_hwcnt_clear(kbase_context * kctx)
+{
+ mali_error err = MALI_ERROR_FUNCTION_FAILED;
+ kbase_device *kbdev;
+
+ OSK_ASSERT(NULL != kctx);
+ kbdev = kctx->kbdev;
+ OSK_ASSERT(NULL != kbdev);
+
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+
+ if (kbdev->hwcnt.state == KBASE_INSTR_STATE_RESETTING)
+ {
+ /* GPU is being reset */
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+ osk_waitq_wait(&kbdev->hwcnt.waitqueue);
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+ }
+
+ /* Check it's the context previously set up and we're not already dumping */
+ if (kbdev->hwcnt.kctx != kctx ||
+ kbdev->hwcnt.state != KBASE_INSTR_STATE_IDLE)
+ {
+ goto out;
+ }
+
+ /* Clear the counters */
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_COMMAND), GPU_COMMAND_PRFCNT_CLEAR, kctx);
+
+ err = MALI_ERROR_NONE;
+
+out:
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+ return err;
+}
+KBASE_EXPORT_SYMBOL(kbase_instr_hwcnt_clear)
+
+/**
+ * @brief Dump complete interrupt received
+ */
+void kbase_instr_hwcnt_sample_done(kbase_device *kbdev)
+{
+ if (kbdev->hwcnt.state == KBASE_INSTR_STATE_FAULT)
+ {
+ osk_waitq_set(&kbdev->hwcnt.waitqueue);
+ }
+ else
+ {
+ /* Always clean and invalidate the cache after a successful dump */
+ kbdev->hwcnt.state = KBASE_INSTR_STATE_POSTCLEANING;
+ kbasep_instr_hwcnt_cacheclean(kbdev);
+ }
+}
+
+/**
+ * @brief Cache clean interrupt received
+ */
+void kbase_clean_caches_done(kbase_device *kbdev)
+{
+ u32 irq_mask;
+
+ if (kbdev->hwcnt.state != KBASE_INSTR_STATE_DISABLED)
+ {
+ /* Disable interrupt */
+ irq_mask = kbase_reg_read(kbdev, GPU_CONTROL_REG(GPU_IRQ_MASK), NULL);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_IRQ_MASK), irq_mask & ~CLEAN_CACHES_COMPLETED, NULL);
+
+ if (kbdev->hwcnt.state == KBASE_INSTR_STATE_PRECLEANING)
+ {
+ /* Don't return IDLE as we need kbase_instr_hwcnt_setup to continue rather than
+ allow access to another waiting thread */
+ kbdev->hwcnt.state = KBASE_INSTR_STATE_CLEANED;
+ }
+ else
+ {
+ /* All finished and idle */
+ kbdev->hwcnt.state = KBASE_INSTR_STATE_IDLE;
+ }
+
+ osk_waitq_set(&kbdev->hwcnt.waitqueue);
+ }
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+#ifdef CONFIG_DMA_SHARED_BUFFER
+#include <linux/dma-buf.h>
+#endif /* CONFIG_DMA_SHARED_BUFFER */
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_kbase_uku.h>
+#include <kbase/src/common/mali_kbase_js_affinity.h>
+#if MALI_USE_UMP == 1
+#include <ump/ump_kernel_interface.h>
+#endif /* MALI_USE_UMP == 1 */
+
+#define beenthere(f, a...) OSK_PRINT_INFO(OSK_BASE_JD, "%s:" f, __func__, ##a)
+
+/*
+ * This is the kernel side of the API. The only entry points are:
+ * - kbase_jd_submit(): Called from userspace to submit a single bag
+ * - kbase_jd_done(): Called from interrupt context to track the
+ * completion of a job.
+ * Callouts:
+ * - to the job manager (enqueue a job)
+ * - to the event subsystem (signals the completion/failure of bag/job-chains).
+ */
+
+STATIC INLINE void dep_raise_sem(u32 *sem, u8 dep)
+{
+ if (!dep)
+ return;
+
+ sem[BASEP_JD_SEM_WORD_NR(dep)] |= BASEP_JD_SEM_MASK_IN_WORD(dep);
+}
+KBASE_EXPORT_TEST_API(dep_raise_sem)
+
+STATIC INLINE void dep_clear_sem(u32 *sem, u8 dep)
+{
+ if (!dep)
+ return;
+
+ sem[BASEP_JD_SEM_WORD_NR(dep)] &= ~BASEP_JD_SEM_MASK_IN_WORD(dep);
+}
+KBASE_EXPORT_TEST_API(dep_clear_sem)
+
+STATIC INLINE int dep_get_sem(u32 *sem, u8 dep)
+{
+ if (!dep)
+ return 0;
+
+ return !!(sem[BASEP_JD_SEM_WORD_NR(dep)] & BASEP_JD_SEM_MASK_IN_WORD(dep));
+}
+KBASE_EXPORT_TEST_API(dep_get_sem)
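+
+/*
+ * Illustrative note, not part of this patch: the BASEP_JD_SEM_* macros used
+ * above are assumed to index 'sem' as a flat bit array keyed by the 8-bit
+ * dependency slot, roughly:
+ *
+ *	#define EXAMPLE_SEM_WORD_NR(dep)      ((dep) >> 5)
+ *	#define EXAMPLE_SEM_MASK_IN_WORD(dep) (1U << ((dep) & 31))
+ *
+ * so raising/clearing a dependency toggles one bit per slot, and slot 0 is
+ * reserved to mean "no dependency" (hence the early-out on !dep above).
+ */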
+
+STATIC INLINE mali_bool jd_add_dep(kbase_jd_context *ctx,
+ kbase_jd_atom *katom, u8 d)
+{
+ kbase_jd_dep_queue *dq = &ctx->dep_queue;
+ u8 s = katom->pre_dep.dep[d];
+
+ if (!dep_get_sem(ctx->dep_queue.sem, s))
+ return MALI_FALSE;
+
+ /*
+ * The slot must be free already. If not, then something went
+ * wrong in the validate path.
+ */
+ OSK_ASSERT(!dq->queue[s]);
+
+ dq->queue[s] = katom;
+ beenthere("queued %p slot %d", (void *)katom, s);
+
+ return MALI_TRUE;
+}
+KBASE_EXPORT_TEST_API(jd_add_dep)
+
+/*
+ * This function only computes the address of the first possible
+ * atom. It doesn't mean it's actually valid (jd_validate_atom takes
+ * care of that).
+ */
+STATIC INLINE base_jd_atom *jd_get_first_atom(kbase_jd_context *ctx,
+ kbase_jd_bag *bag)
+{
+ /* Check that offset is within pool */
+ if ((bag->offset + sizeof(base_jd_atom)) > ctx->pool_size)
+ return NULL;
+
+ return (base_jd_atom *)((char *)ctx->pool + bag->offset);
+}
+KBASE_EXPORT_TEST_API(jd_get_first_atom)
+
+/*
+ * Same as with jd_get_first_atom, but for any subsequent atom.
+ */
+STATIC INLINE base_jd_atom *jd_get_next_atom(kbase_jd_atom *katom)
+{
+ return (katom->core_req & BASE_JD_REQ_EXTERNAL_RESOURCES) ? (base_jd_atom *)base_jd_get_external_resource(katom->user_atom, katom->nr_extres) :
+ (base_jd_atom *)base_jd_get_atom_syncset(katom->user_atom, katom->nr_syncsets);
+}
+KBASE_EXPORT_TEST_API(jd_get_next_atom)
+
+#ifdef CONFIG_KDS
+static void kds_dep_clear(void * callback_parameter, void * callback_extra_parameter)
+{
+ kbase_jd_atom * katom;
+ kbase_jd_context * ctx;
+
+ katom = (kbase_jd_atom*)callback_parameter;
+ OSK_ASSERT(katom);
+ ctx = &katom->kctx->jctx;
+
+ osk_mutex_lock(&ctx->lock);
+
+ OSK_ASSERT(katom->kds_dep_satisfied == MALI_FALSE);
+
+ /* This atom's KDS dependency has now been met */
+ katom->kds_dep_satisfied = MALI_TRUE;
+
+ /* Check whether the atom's other dependencies were already met */
+ if (ctx->dep_queue.queue[katom->pre_dep.dep[0]] != katom &&
+ ctx->dep_queue.queue[katom->pre_dep.dep[1]] != katom)
+ {
+ /* katom dep complete, add to JS */
+ mali_bool resched;
+
+ resched = kbasep_js_add_job( katom->kctx, katom );
+
+ if (resched)
+ {
+ kbasep_js_try_schedule_head_ctx(katom->kctx->kbdev);
+ }
+ }
+ osk_mutex_unlock(&ctx->lock);
+}
+#endif
+
+#ifdef CONFIG_DMA_SHARED_BUFFER
+static mali_error kbase_jd_umm_map(struct kbase_context * kctx, struct kbase_va_region * reg)
+{
+ struct sg_table * st;
+ struct scatterlist * s;
+ int i;
+ osk_phy_addr * pa;
+ mali_error err;
+
+ OSK_ASSERT(NULL == reg->imported_metadata.umm.st);
+ st = dma_buf_map_attachment(reg->imported_metadata.umm.dma_attachment, DMA_BIDIRECTIONAL);
+
+ /* dma_buf_map_attachment() reports failure via ERR_PTR, not NULL */
+ if (IS_ERR_OR_NULL(st))
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ /* save for later */
+ reg->imported_metadata.umm.st = st;
+
+ pa = kbase_get_phy_pages(reg);
+ OSK_ASSERT(pa);
+
+ for_each_sg(st->sgl, s, st->nents, i)
+ {
+ int j;
+ size_t pages = PFN_DOWN(sg_dma_len(s));
+
+ for (j = 0; j < pages; j++)
+ *pa++ = sg_dma_address(s) + (j << PAGE_SHIFT);
+ }
+
+ err = kbase_mmu_insert_pages(kctx, reg->start_pfn, kbase_get_phy_pages(reg), reg->nr_alloc_pages, reg->flags | KBASE_REG_GPU_WR | KBASE_REG_GPU_RD);
+
+ if (MALI_ERROR_NONE != err)
+ {
+ dma_buf_unmap_attachment(reg->imported_metadata.umm.dma_attachment, reg->imported_metadata.umm.st, DMA_BIDIRECTIONAL);
+ reg->imported_metadata.umm.st = NULL;
+ }
+
+ return err;
+}
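+
+/*
+ * Illustrative note, not part of this patch: the loop above flattens the
+ * scatterlist into one GPU page-table entry per CPU page. A single sg entry
+ * of three pages at DMA address A expands to A, A + PAGE_SIZE and
+ * A + 2 * PAGE_SIZE. PFN_DOWN() rounds the byte length down to whole pages,
+ * so sg_dma_len() is assumed to be page aligned here; a partial trailing
+ * page would be silently dropped.
+ */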
+
+static void kbase_jd_umm_unmap(struct kbase_context * kctx, struct kbase_va_region * reg)
+{
+ OSK_ASSERT(kctx);
+ OSK_ASSERT(reg);
+ OSK_ASSERT(reg->imported_metadata.umm.dma_attachment);
+ OSK_ASSERT(reg->imported_metadata.umm.st);
+ kbase_mmu_teardown_pages(kctx, reg->start_pfn, reg->nr_alloc_pages);
+ dma_buf_unmap_attachment(reg->imported_metadata.umm.dma_attachment, reg->imported_metadata.umm.st, DMA_BIDIRECTIONAL);
+ reg->imported_metadata.umm.st = NULL;
+}
+#endif /* CONFIG_DMA_SHARED_BUFFER */
+
+void kbase_jd_post_external_resources(kbase_jd_atom * katom)
+{
+#ifdef CONFIG_DMA_SHARED_BUFFER
+ u32 res_no;
+#endif /* CONFIG_DMA_SHARED_BUFFER */
+
+ OSK_ASSERT(katom);
+ OSK_ASSERT(katom->core_req & BASE_JD_REQ_EXTERNAL_RESOURCES);
+
+#ifdef CONFIG_KDS
+ if (katom->kds_rset)
+ {
+ kds_resource_set_release(&katom->kds_rset);
+ }
+#endif
+
+#if defined(CONFIG_DMA_SHARED_BUFFER) || (MALI_DEBUG != 0)
+ /* Lock also used in debug mode just for lock order checking */
+ kbase_gpu_vm_lock(katom->kctx);
+#endif /* defined(CONFIG_DMA_SHARED_BUFFER) || (MALI_DEBUG != 0) */
+#ifdef CONFIG_DMA_SHARED_BUFFER
+ res_no = katom->nr_extres;
+ while (res_no-- > 0)
+ {
+ base_external_resource * res;
+ kbase_va_region * reg;
+
+ res = base_jd_get_external_resource(katom->user_atom, res_no);
+ reg = kbase_region_lookup(katom->kctx, res->ext_resource & ~BASE_EXT_RES_ACCESS_EXCLUSIVE);
+ /* if reg wasn't found then it has been freed while the job ran */
+ if (reg)
+ {
+ if (1 == reg->imported_metadata.umm.current_mapping_usage_count--)
+ {
+ /* last job using this buffer, so unmap it */
+ kbase_jd_umm_unmap(katom->kctx, reg);
+ }
+ }
+ }
+#endif /* CONFIG_DMA_SHARED_BUFFER */
+#if defined(CONFIG_DMA_SHARED_BUFFER) || (MALI_DEBUG != 0)
+ /* Lock also used in debug mode just for lock order checking */
+ kbase_gpu_vm_unlock(katom->kctx);
+#endif /* defined(CONFIG_DMA_SHARED_BUFFER) || (MALI_DEBUG != 0) */
+}
+
+static mali_error kbase_jd_pre_external_resources(kbase_jd_atom * katom)
+{
+ mali_error err_ret_val = MALI_ERROR_FUNCTION_FAILED;
+ u32 res_no;
+#ifdef CONFIG_KDS
+ u32 kds_res_count = 0;
+ struct kds_resource ** kds_resources = NULL;
+ unsigned long * kds_access_bitmap = NULL;
+#endif /* CONFIG_KDS */
+
+ OSK_ASSERT(katom);
+ OSK_ASSERT(katom->core_req & BASE_JD_REQ_EXTERNAL_RESOURCES);
+
+ if (!katom->nr_extres)
+ {
+ /* no resources encoded, early out */
+ return MALI_ERROR_NONE;
+ }
+
+#ifdef CONFIG_KDS
+ /* assume we have to wait for all */
+ kds_resources = osk_malloc(sizeof(struct kds_resource *) * katom->nr_extres);
+ if (NULL == kds_resources)
+ {
+ err_ret_val = MALI_ERROR_OUT_OF_MEMORY;
+ goto early_err_out;
+ }
+
+ kds_access_bitmap = osk_calloc(sizeof(unsigned long) * ((katom->nr_extres + OSK_BITS_PER_LONG - 1) / OSK_BITS_PER_LONG));
+ if (NULL == kds_access_bitmap)
+ {
+ err_ret_val = MALI_ERROR_OUT_OF_MEMORY;
+ goto early_err_out;
+ }
+#endif /* CONFIG_KDS */
+
+#if defined(CONFIG_DMA_SHARED_BUFFER) || (MALI_DEBUG != 0)
+ /* need to keep the GPU VM locked while we set up UMM buffers */
+ /* Lock also used in debug mode just for lock order checking */
+ kbase_gpu_vm_lock(katom->kctx);
+#endif /* defined(CONFIG_DMA_SHARED_BUFFER) || (MALI_DEBUG != 0) */
+
+ for (res_no = 0; res_no < katom->nr_extres; res_no++)
+ {
+ base_external_resource * res;
+ kbase_va_region * reg;
+
+ res = base_jd_get_external_resource(katom->user_atom, res_no);
+ reg = kbase_region_lookup(katom->kctx, res->ext_resource & ~BASE_EXT_RES_ACCESS_EXCLUSIVE);
+
+ /* did we find a matching region object? */
+ if (NULL == reg)
+ {
+ /* roll back */
+ goto failed_loop;
+ }
+
+ /* decide what needs to happen for this resource */
+ switch (reg->imported_type)
+ {
+ case BASE_TMEM_IMPORT_TYPE_UMP:
+ {
+#if defined(CONFIG_KDS) && (MALI_USE_UMP == 1)
+ struct kds_resource * kds_res;
+ kds_res = ump_dd_kds_resource_get(reg->imported_metadata.ump_handle);
+ if (kds_res)
+ {
+ kds_resources[kds_res_count] = kds_res;
+ if (res->ext_resource & BASE_EXT_RES_ACCESS_EXCLUSIVE)
+ osk_bitarray_set_bit(kds_res_count, kds_access_bitmap);
+ kds_res_count++;
+ }
+#endif /*defined(CONFIG_KDS) && (MALI_USE_UMP == 1)*/
+ break;
+ }
+#ifdef CONFIG_DMA_SHARED_BUFFER
+ case BASE_TMEM_IMPORT_TYPE_UMM:
+ {
+ reg->imported_metadata.umm.current_mapping_usage_count++;
+ if (1 == reg->imported_metadata.umm.current_mapping_usage_count)
+ {
+ err_ret_val = kbase_jd_umm_map(katom->kctx, reg);
+ if (MALI_ERROR_NONE != err_ret_val)
+ {
+ /* failed to map this buffer, roll back */
+ goto failed_loop;
+ }
+ }
+ break;
+ }
+#endif
+ default:
+ goto failed_loop;
+ }
+ }
+ /* successfully parsed the extres array */
+#if defined(CONFIG_DMA_SHARED_BUFFER) || (MALI_DEBUG != 0)
+ /* drop the vm lock before we call into kds */
+ /* Lock also used in debug mode just for lock order checking */
+ kbase_gpu_vm_unlock(katom->kctx);
+#endif /* defined(CONFIG_DMA_SHARED_BUFFER) || (MALI_DEBUG != 0) */
+
+#ifdef CONFIG_KDS
+ if (kds_res_count)
+ {
+ /* We have resources to wait for with kds */
+ katom->kds_dep_satisfied = MALI_FALSE;
+ if (kds_async_waitall(&katom->kds_rset, KDS_FLAG_LOCKED_IGNORE, &katom->kctx->jctx.kds_cb, katom, NULL, kds_res_count, kds_access_bitmap, kds_resources))
+ {
+ goto failed_kds_setup;
+ }
+ }
+ else
+ {
+ /* Nothing to wait for, so kds dep met */
+ katom->kds_dep_satisfied = MALI_TRUE;
+ }
+ osk_free(kds_resources);
+ osk_free(kds_access_bitmap);
+#endif /* CONFIG_KDS */
+
+ /* all done OK */
+ return MALI_ERROR_NONE;
+
+
+/* error handling section */
+
+#ifdef CONFIG_KDS
+failed_kds_setup:
+#endif /* CONFIG_KDS */
+#if defined(CONFIG_DMA_SHARED_BUFFER) || (MALI_DEBUG != 0)
+ /* lock before we unmap */
+ /* Lock also used in debug mode just for lock order checking */
+ kbase_gpu_vm_lock(katom->kctx);
+#endif /* defined(CONFIG_DMA_SHARED_BUFFER) || (MALI_DEBUG != 0) */
+
+failed_loop:
+#ifdef CONFIG_DMA_SHARED_BUFFER
+ /* undo the loop work */
+ while (res_no-- > 0)
+ {
+ base_external_resource * res;
+ kbase_va_region * reg;
+ res = base_jd_get_external_resource(katom->user_atom, res_no);
+ reg = kbase_region_lookup(katom->kctx, res->ext_resource & ~BASE_EXT_RES_ACCESS_EXCLUSIVE);
+ /* if reg wasn't found then it has been freed when we set up kds */
+ if (reg)
+ {
+ reg->imported_metadata.umm.current_mapping_usage_count--;
+ if (0 == reg->imported_metadata.umm.current_mapping_usage_count)
+ {
+ kbase_jd_umm_unmap(katom->kctx, reg);
+ }
+ }
+ }
+#endif /* CONFIG_DMA_SHARED_BUFFER */
+#if defined(CONFIG_DMA_SHARED_BUFFER) || (MALI_DEBUG != 0)
+ /* Lock also used in debug mode just for lock order checking */
+ kbase_gpu_vm_unlock(katom->kctx);
+#endif /* defined(CONFIG_DMA_SHARED_BUFFER) || (MALI_DEBUG != 0) */
+
+#ifdef CONFIG_KDS
+early_err_out:
+ if (kds_resources)
+ {
+ osk_free(kds_resources);
+ }
+ if (kds_access_bitmap)
+ {
+ osk_free(kds_access_bitmap);
+ }
+#endif /* CONFIG_KDS */
+ return err_ret_val;
+}
+/*
+ * This will check atom for correctness and if so, initialize its js policy.
+ */
+STATIC INLINE kbase_jd_atom *jd_validate_atom(struct kbase_context *kctx,
+ kbase_jd_bag *bag,
+ base_jd_atom *atom,
+ u32 *sem)
+{
+ kbase_jd_context *jctx = &kctx->jctx;
+ kbase_jd_atom *katom;
+ u32 nr_syncsets;
+ u32 nr_extres;
+ base_jd_core_req core_req;
+ base_jd_dep pre_dep;
+ int nice_priority;
+
+ /* Check the atom struct fits in the pool before we attempt to access it
+ Note: a bad bag->nr_atom could trigger this condition */
+ if (((char *)atom + sizeof(base_jd_atom)) > ((char *)jctx->pool + jctx->pool_size))
+ return NULL;
+
+ core_req = atom->core_req;
+ nr_syncsets = atom->nr_syncsets;
+ nr_extres = atom->nr_extres;
+ pre_dep = atom->pre_dep;
+
+ if (kbase_hw_has_issue(kctx->kbdev, BASE_HW_ISSUE_8987))
+ {
+ /* For this HW workaround, we schedule differently based on the 'ONLY_COMPUTE'
+ * flag, at the expense of ignoring the NSS flag.
+ *
+ * NOTE: We could allow the NSS flag still (and just ensure that we still
+ * submit on slot 2 when the NSS flag is set), but we don't because:
+ * - If we only have NSS contexts, the NSS jobs get all the cores, delaying
+ * a non-NSS context from getting cores for a long time.
+ * - A single compute context won't be subject to any timers anyway -
+ * only when there are >1 contexts (GLES *or* CL) will it get subject to
+ * timers.
+ */
+ core_req &= ~((base_jd_core_req)BASE_JD_REQ_NSS);
+ }
+
+ /*
+ * Check that dependencies are sensible: the atom cannot have
+ * pre-dependencies that are already in use by another atom.
+ */
+ if (jctx->dep_queue.queue[pre_dep.dep[0]] ||
+ jctx->dep_queue.queue[pre_dep.dep[1]])
+ return NULL;
+
+ /* Check for conflicting dependencies inside the bag */
+ if (dep_get_sem(sem, pre_dep.dep[0]) ||
+ dep_get_sem(sem, pre_dep.dep[1]))
+ return NULL;
+
+ dep_raise_sem(sem, pre_dep.dep[0]);
+ dep_raise_sem(sem, pre_dep.dep[1]);
+
+ /* Check that the whole atom fits within the pool. */
+ if (core_req & BASE_JD_REQ_EXTERNAL_RESOURCES)
+ {
+ /* extres integrity will be verified when we parse them */
+ if ((char*)base_jd_get_external_resource(atom, nr_extres) > ((char*)jctx->pool + jctx->pool_size))
+ {
+ return NULL;
+ }
+ }
+ else
+ {
+ /* syncset integrity will be verified as we execute them */
+ if ((char *)base_jd_get_atom_syncset(atom, nr_syncsets) > ((char *)jctx->pool + jctx->pool_size))
+ return NULL;
+ }
+
+ /* We surely want to preallocate a pool of those, or have some
+ * kind of slab allocator around */
+ katom = osk_calloc(sizeof(*katom));
+ if (!katom)
+ return NULL; /* Ideally we should handle OOM more gracefully */
+
+ katom->user_atom = atom;
+ katom->pre_dep = pre_dep;
+ katom->post_dep = atom->post_dep;
+ katom->bag = bag;
+ katom->kctx = kctx;
+ katom->nr_syncsets = nr_syncsets;
+ katom->nr_extres = nr_extres;
+ katom->device_nr = atom->device_nr;
+ katom->affinity = 0;
+ katom->jc = atom->jc;
+ katom->coreref_state = KBASE_ATOM_COREREF_STATE_NO_CORES_REQUESTED;
+ katom->core_req = core_req;
+#ifdef CONFIG_KDS
+ /* Start by assuming that the KDS dependencies are satisfied,
+ * kbase_jd_pre_external_resources will correct this if there are dependencies */
+ katom->kds_dep_satisfied = MALI_TRUE;
+#endif
+
+ /*
+ * If the priority is raised we need to check that the caller has the
+ * security capability to do so; if the priority is lowered this is fine,
+ * as it cannot negatively impact other running processes.
+ */
+ katom->nice_prio = atom->prio;
+ if (0 > katom->nice_prio)
+ {
+ mali_bool access_allowed;
+ access_allowed = kbase_security_has_capability(kctx, KBASE_SEC_MODIFY_PRIORITY, KBASE_SEC_FLAG_NOAUDIT);
+ if (!access_allowed)
+ {
+ /* For unprivileged processes - a negative priority is interpreted as zero */
+ katom->nice_prio = 0;
+ }
+ }
+
+ /* Scale priority range to use NICE range */
+ if (katom->nice_prio)
+ {
+ /* Remove sign for calculation */
+ nice_priority = katom->nice_prio + 128;
+ /* Fixed-point maths to scale from 0..255 to 0..39 (NICE range with +20 offset) */
+ katom->nice_prio = (((20<<16)/128)*nice_priority)>>16;
+ }
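+ /*
+ * Worked example (illustrative): atom->prio = 127 gives
+ * nice_priority = 255 and (((20<<16)/128) * 255) >> 16 = 39, the
+ * lowest NICE priority; atom->prio = -128 (privileged callers
+ * only) maps to 0, the highest; atom->prio = 0 stays 0.
+ */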
+
+ /* pre-fill the event */
+ katom->event.event_code = BASE_JD_EVENT_DONE;
+ katom->event.data = katom;
+
+ if (katom->core_req & BASE_JD_REQ_EXTERNAL_RESOURCES)
+ {
+ /* handle what we need to do to access the external resources */
+ if (MALI_ERROR_NONE != kbase_jd_pre_external_resources(katom))
+ {
+ /* setup failed (no access, bad resource, unknown resource types, etc.) */
+ osk_free(katom);
+ return NULL;
+ }
+ }
+
+
+ /* Initialize the job scheduler policy for this atom. The call fails if
+ * the atom is malformed; in that case, immediately terminate the policy
+ * to free any allocated resources and return an error.
+ *
+ * Soft-jobs never enter the job scheduler, so we don't initialise the policy for these
+ */
+ if ((katom->core_req & BASE_JD_REQ_SOFT_JOB) == 0)
+ {
+ kbasep_js_policy *js_policy = &(kctx->kbdev->js_data.policy);
+ if (MALI_ERROR_NONE != kbasep_js_policy_init_job( js_policy, kctx, katom ))
+ {
+ if ( katom->core_req & BASE_JD_REQ_EXTERNAL_RESOURCES )
+ {
+ kbase_jd_post_external_resources(katom);
+ }
+ osk_free( katom );
+ return NULL;
+ }
+ }
+
+ return katom;
+}
+KBASE_EXPORT_TEST_API(jd_validate_atom)
+
+static void kbase_jd_cancel_bag(kbase_context *kctx, kbase_jd_bag *bag,
+ base_jd_event_code code)
+{
+ bag->event.event_code = code;
+ kbase_event_post(kctx, &bag->event);
+}
+
+STATIC void kbase_jd_katom_dtor(kbase_event *event)
+{
+ kbase_jd_atom *katom = CONTAINER_OF(event, kbase_jd_atom, event);
+ kbase_context *kctx = katom->kctx;
+ kbasep_js_policy *js_policy = &(kctx->kbdev->js_data.policy);
+
+ /* Soft-jobs never enter the job scheduler (see jd_validate_atom) therefore we
+ * do not need to terminate any of these jobs in the scheduler. We could get a
+ * request here due to kbase_jd_validate_bag failing an atom in the bag when
+ * a soft-job has already been validated and added to the event list */
+ if ((katom->core_req & BASE_JD_REQ_SOFT_JOB) == 0)
+ {
+ kbasep_js_policy_term_job( js_policy, kctx, katom );
+ }
+
+ if ( katom->core_req & BASE_JD_REQ_EXTERNAL_RESOURCES )
+ {
+ kbase_jd_post_external_resources(katom);
+ }
+ osk_free(katom);
+}
+KBASE_EXPORT_TEST_API(kbase_jd_katom_dtor)
+
+STATIC mali_error kbase_jd_validate_bag(kbase_context *kctx,
+ kbase_jd_bag *bag,
+ osk_dlist *klistp)
+{
+ kbase_jd_context *jctx;
+ kbasep_js_kctx_info *js_kctx_info;
+ kbase_jd_atom *katom;
+ base_jd_atom *atom;
+ mali_error err = MALI_ERROR_NONE;
+ u32 sem[BASEP_JD_SEM_ARRAY_SIZE] = { 0 };
+ u32 i;
+
+ OSK_ASSERT( kctx != NULL );
+
+ /* Bags without any atoms are not allowed */
+ if (bag->nr_atoms == 0)
+ {
+ kbase_jd_cancel_bag(kctx, bag, BASE_JD_EVENT_BAG_INVALID);
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ jctx = &kctx->jctx;
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ atom = jd_get_first_atom(jctx, bag);
+ if (!atom)
+ {
+ /* Bad start... */
+ kbase_jd_cancel_bag(kctx, bag, BASE_JD_EVENT_BAG_INVALID);
+ err = MALI_ERROR_FUNCTION_FAILED;
+ goto out;
+ }
+
+ for (i = 0; i < bag->nr_atoms; i++)
+ {
+ katom = jd_validate_atom(kctx, bag, atom, sem);
+ if (!katom)
+ {
+ OSK_DLIST_EMPTY_LIST_REVERSE(klistp, kbase_event,
+ entry, kbase_jd_katom_dtor);
+ kbase_jd_cancel_bag(kctx, bag, BASE_JD_EVENT_BAG_INVALID);
+ err = MALI_ERROR_FUNCTION_FAILED;
+ goto out;
+ }
+
+ OSK_DLIST_PUSH_BACK(klistp, &katom->event,
+ kbase_event, entry);
+ atom = jd_get_next_atom(katom);
+ }
+
+out:
+ return err;
+}
+KBASE_EXPORT_TEST_API(kbase_jd_validate_bag)
+
+STATIC INLINE kbase_jd_atom *jd_resolve_dep(kbase_jd_atom *katom, u8 d, int zapping)
+{
+ u8 other_dep;
+ u8 dep;
+ kbase_jd_atom *dep_katom;
+ kbase_jd_context *ctx = &katom->kctx->jctx;
+
+ dep = katom->post_dep.dep[d];
+
+ if (!dep)
+ return NULL;
+
+ dep_clear_sem(ctx->dep_queue.sem, dep);
+
+ /* Get the atom that's waiting for us (if any), and remove it
+ * from this particular dependency queue */
+ dep_katom = ctx->dep_queue.queue[dep];
+
+ /* Case of a dangling dependency */
+ if (!dep_katom)
+ return NULL;
+
+ ctx->dep_queue.queue[dep] = NULL;
+
+ beenthere("removed %p from slot %d",
+ (void *)dep_katom, dep);
+
+#ifdef CONFIG_KDS
+ if (dep_katom->kds_dep_satisfied == MALI_FALSE)
+ {
+ /* The KDS dependency has not been satisfied yet */
+ return NULL;
+ }
+#endif
+
+ /* Find out if this atom is waiting for another job to be done.
+ * If it's not waiting anymore, put it on the run queue. */
+ if (dep_katom->pre_dep.dep[0] == dep)
+ other_dep = dep_katom->pre_dep.dep[1];
+ else
+ other_dep = dep_katom->pre_dep.dep[0];
+
+ /*
+ * The following test seems to confuse people, so here's the
+ * rationale behind it:
+ *
+ * The queue holds pointers to atoms waiting for a single
+ * pre-dependency to be satisfied. Above, we've already
+ * satisfied a pre-dep for an atom (dep_katom). The next step
+ * is to check whether this atom is now free to run, or has to
+ * wait for another pre-dep to be satisfied.
+ *
+ * For a single entry, 3 possibilities:
+ *
+ * - It's a pointer to dep_katom -> the pre-dep has not been
+ * satisfied yet, and it cannot run immediately.
+ *
+ * - It's NULL -> the atom can be scheduled immediately, as
+ * the dependency has already been satisfied.
+ *
+ * - Neither of the above: this is the case of a dependency
+ * that has already been satisfied, and the slot reused by
+ * an incoming atom -> dep_katom can be run immediately.
+ */
+ if (ctx->dep_queue.queue[other_dep] != dep_katom)
+ return dep_katom;
+
+ /*
+ * We're on a killing spree. Cancel the additional
+ * dependency, and return the atom anyway. An unfortunate
+ * consequence is that userspace may receive notifications out
+ * of order WRT the dependency tree.
+ */
+ if (zapping)
+ {
+ ctx->dep_queue.queue[other_dep] = NULL;
+ return dep_katom;
+ }
+
+ beenthere("katom %p waiting for slot %d",
+ (void *)dep_katom, other_dep);
+ return NULL;
+}
+KBASE_EXPORT_TEST_API(jd_resolve_dep)
+
+/*
+ * Perform the necessary handling of an atom that has finished running
+ * on the GPU. The @a zapping parameter instructs the function to
+ * propagate the state of the completed atom to all the atoms that
+ * depend on it, directly or indirectly.
+ *
+ * This flag is used for error propagation in the "failed job" case, or
+ * when destroying a context.
+ *
+ * When not zapping, the caller must hold the kbasep_js_kctx_info::ctx::jsctx_mutex.
+ */
+STATIC mali_bool jd_done_nolock(kbase_jd_atom *katom, int zapping)
+{
+ kbase_jd_atom *dep_katom;
+ struct kbase_context *kctx = katom->kctx;
+ osk_dlist ts; /* traversal stack */
+ osk_dlist *tsp = &ts;
+ osk_dlist vl; /* visited list */
+ osk_dlist *vlp = &vl;
+ kbase_jd_atom *node;
+ base_jd_event_code event_code = katom->event.event_code;
+ mali_bool need_to_try_schedule_context = MALI_FALSE;
+
+ /*
+ * We're trying to achieve two goals here:
+ * - Eliminate dependency atoms very early so we can push real
+ * jobs to the HW
+ * - Avoid recursion which could result in a nice DoS from
+ * user-space.
+ *
+ * We use two lists here:
+ * - One as a stack (ts) to get rid of the recursion
+ * - The other to queue jobs that are either done or ready to
+ * run.
+ */
+ OSK_DLIST_INIT(tsp);
+ OSK_DLIST_INIT(vlp);
+
+ /* push */
+ OSK_DLIST_PUSH_BACK(tsp, &katom->event, kbase_event, entry);
+
+ while (!OSK_DLIST_IS_EMPTY(tsp))
+ {
+ /* pop */
+ node = OSK_DLIST_POP_BACK(tsp, kbase_jd_atom, event.entry);
+
+ if (node == katom ||
+ node->core_req == BASE_JD_REQ_DEP ||
+ zapping)
+ {
+ int i;
+ for (i = 0; i < 2; i++)
+ {
+ dep_katom = jd_resolve_dep(node, i, zapping);
+ if (dep_katom) /* push */
+ OSK_DLIST_PUSH_BACK(tsp,
+ &dep_katom->event,
+ kbase_event,
+ entry);
+ }
+ }
+
+ OSK_DLIST_PUSH_BACK(vlp, &node->event,
+ kbase_event, entry);
+ }
+
+ while (!OSK_DLIST_IS_EMPTY(vlp))
+ {
+ node = OSK_DLIST_POP_FRONT(vlp, kbase_jd_atom, event.entry);
+
+ if (node == katom ||
+ node->core_req == BASE_JD_REQ_DEP ||
+ (node->core_req & BASE_JD_REQ_SOFT_JOB) ||
+ zapping)
+ {
+ kbase_jd_bag *bag = node->bag;
+
+ /* If we're zapping stuff, propagate the event code */
+ if (zapping)
+ {
+ node->event.event_code = event_code;
+ }
+ else if (node->core_req & BASE_JD_REQ_SOFT_JOB)
+ {
+ kbase_process_soft_job( kctx, node );
+ }
+
+ /* This will signal our per-context worker
+ * thread that we're done with this katom. Any
+ * use of this katom after that point IS AN
+ * ERROR!!! */
+ kbase_event_post(kctx, &node->event);
+ beenthere("done atom %p\n", (void*)node);
+
+ OSK_ASSERT( kctx->nr_outstanding_atoms > 0 );
+ if (--kctx->nr_outstanding_atoms < MAX_KCTX_OUTSTANDING_ATOMS)
+ {
+ osk_waitq_set(&kctx->complete_outstanding_waitq);
+ }
+ if (--bag->nr_atoms == 0)
+ {
+ if (bag->has_pm_ctx_reference)
+ {
+ /* This bag had a pm reference on the GPU, release it */
+ kbase_pm_context_idle(kctx->kbdev);
+ }
+ beenthere("done bag %p\n", (void*)bag);
+ /* This atom was the last, signal userspace */
+ kbase_event_post(kctx, &bag->event);
+ /* The bag may be freed by this point - it is a bug to try to access it after this point */
+ }
+
+ /* Decrement and check the TOTAL number of jobs. This includes
+ * those not tracked by the scheduler: 'not ready to run' and
+ * 'dependency-only' jobs. */
+ if (--kctx->jctx.job_nr == 0)
+ {
+ /* All events are safely queued now, and we can signal any waiter
+ * that we've got no more jobs (so we can be safely terminated) */
+ osk_waitq_set(&kctx->jctx.zero_jobs_waitq);
+ }
+ }
+ else
+ {
+ /* Queue an action about whether we should try scheduling a context */
+ need_to_try_schedule_context |= kbasep_js_add_job( kctx, node );
+ }
+ }
+
+ return need_to_try_schedule_context;
+}
+KBASE_EXPORT_TEST_API(jd_done_nolock)
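+
+/*
+ * Illustrative note, not part of this patch: jd_done_nolock() above is an
+ * iterative graph walk rather than a recursive one. In pseudo-C the shape is:
+ *
+ *	push(ts, completed_atom);
+ *	while (!empty(ts)) {
+ *		node = pop(ts);
+ *		for each dependent atom freed by node:
+ *			push(ts, dependent);
+ *		push(vl, node);
+ *	}
+ *	for each node in vl:
+ *		post event / queue job;
+ *
+ * which keeps kernel stack usage bounded no matter how deeply userspace
+ * nests dependencies.
+ */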
+
+mali_error kbase_jd_submit(kbase_context *kctx, const kbase_uk_job_submit *user_bag)
+{
+ osk_dlist klist;
+ osk_dlist *klistp = &klist;
+ kbase_jd_context *jctx = &kctx->jctx;
+ kbase_jd_atom *katom;
+ kbase_jd_bag *bag;
+ mali_error err = MALI_ERROR_NONE;
+ int i = -1;
+ mali_bool need_to_try_schedule_context = MALI_FALSE;
+ kbase_device *kbdev;
+
+ /*
+ * kbase_jd_submit isn't expected to fail, so all errors with the jobs
+ * are reported by immediately failing them (through the event system)
+ */
+ kbdev = kctx->kbdev;
+
+ beenthere("%s", "Enter");
+ bag = osk_malloc(sizeof(*bag));
+ if (NULL == bag)
+ {
+ err = MALI_ERROR_OUT_OF_MEMORY;
+ goto out_bag;
+ }
+
+ bag->core_restriction = user_bag->core_restriction;
+ bag->offset = user_bag->offset;
+ bag->nr_atoms = user_bag->nr_atoms;
+ bag->event.event_code = BASE_JD_EVENT_BAG_DONE;
+ bag->event.data = (void *)(uintptr_t)user_bag->bag_uaddr;
+ bag->has_pm_ctx_reference = MALI_FALSE;
+
+ osk_mutex_lock(&jctx->lock);
+ while (kctx->nr_outstanding_atoms >= MAX_KCTX_OUTSTANDING_ATOMS)
+ {
+ osk_mutex_unlock(&jctx->lock);
+ osk_waitq_wait(&kctx->complete_outstanding_waitq);
+ osk_mutex_lock(&jctx->lock);
+ }
+ /*
+ * Use a transient list to store all the validated atoms.
+ * Once we're sure nothing is wrong, there's no going back.
+ */
+ OSK_DLIST_INIT(klistp);
+
+ /* The above mutex lock provides necessary barrier to read this flag */
+ if ((kctx->jctx.sched_info.ctx.flags & KBASE_CTX_FLAG_SUBMIT_DISABLED) != 0)
+ {
+ OSK_PRINT_ERROR(OSK_BASE_JD, "Cancelled bag because context had SUBMIT_DISABLED set on it");
+ kbase_jd_cancel_bag(kctx, bag, BASE_JD_EVENT_BAG_INVALID);
+ err = MALI_ERROR_FUNCTION_FAILED;
+ goto out;
+ }
+
+ if (kbase_jd_validate_bag(kctx, bag, klistp))
+ {
+ err = MALI_ERROR_FUNCTION_FAILED;
+ goto out;
+ }
+
+ kctx->nr_outstanding_atoms += user_bag->nr_atoms;
+ if (kctx->nr_outstanding_atoms >= MAX_KCTX_OUTSTANDING_ATOMS )
+ {
+ osk_waitq_clear(&kctx->complete_outstanding_waitq);
+ }
+
+ while (!OSK_DLIST_IS_EMPTY(klistp))
+ {
+
+ katom = OSK_DLIST_POP_FRONT(klistp,
+ kbase_jd_atom, event.entry);
+ i++;
+
+ /* This is crucial. As jobs are processed in-order, we must
+ * indicate that any job with a pre-dep on this particular job
+ * must wait for its completion (indicated as a post-dep).
+ */
+ dep_raise_sem(jctx->dep_queue.sem, katom->post_dep.dep[0]);
+ dep_raise_sem(jctx->dep_queue.sem, katom->post_dep.dep[1]);
+
+ if (!(katom->core_req & BASE_JD_REQ_EXTERNAL_RESOURCES))
+ {
+ /* Process pre-exec syncsets before queueing */
+ kbase_pre_job_sync(kctx,
+ base_jd_get_atom_syncset(katom->user_atom, 0),
+ katom->nr_syncsets);
+ }
+
+ /* Update the TOTAL number of jobs. This includes those not tracked by
+ * the scheduler: 'not ready to run' and 'dependency-only' jobs. */
+ jctx->job_nr++;
+ /* Cause any future waiter-on-termination to wait until the jobs are
+ * finished */
+ osk_waitq_clear(&jctx->zero_jobs_waitq);
+ /* If no pre-dep has been set, then we're free to run
+ * the job immediately */
+ if ((jd_add_dep(jctx, katom, 0) | jd_add_dep(jctx, katom, 1)))
+ {
+ beenthere("queuing atom #%d(%p %p)", i,
+ (void *)katom, (void *)katom->user_atom);
+ continue;
+ }
+
+#ifdef CONFIG_KDS
+ if (!katom->kds_dep_satisfied)
+ {
+ /* Queue atom due to KDS dependency */
+ beenthere("queuing atom #%d(%p %p)", i,
+ (void *)katom, (void *)katom->user_atom);
+ continue;
+ }
+#endif
+
+ beenthere("running atom #%d(%p %p)", i,
+ (void *)katom, (void *)katom->user_atom);
+
+ if (katom->core_req & BASE_JD_REQ_SOFT_JOB)
+ {
+ kbase_process_soft_job( kctx, katom );
+ /* Pure software job, so resolve it immediately */
+ need_to_try_schedule_context |= jd_done_nolock(katom, 0);
+ }
+ else if (katom->core_req != BASE_JD_REQ_DEP)
+ {
+ need_to_try_schedule_context |= kbasep_js_add_job( kctx, katom );
+ }
+ else
+ {
+ /* This is a pure dependency. Resolve it immediately */
+ need_to_try_schedule_context |= jd_done_nolock(katom, 0);
+ }
+ }
+
+ /* This is an optimization: we only need to do this after processing all jobs
+ * resolved from this context. */
+ if ( need_to_try_schedule_context != MALI_FALSE )
+ {
+ kbasep_js_try_schedule_head_ctx( kbdev );
+ }
+out:
+ osk_mutex_unlock(&jctx->lock);
+out_bag:
+ beenthere("%s", "Exit");
+ return err;
+}
+KBASE_EXPORT_TEST_API(kbase_jd_submit)
+
+/**
+ * This function:
+ * - requeues the job from the runpool (if it was soft-stopped/removed from NEXT registers)
+ * - removes it from the system if it finished/failed/was cancelled.
+ * - resolves dependencies to add dependent jobs to the context, potentially starting them if necessary (which may add more references to the context)
+ * - releases the reference to the context from the no-longer-running job.
+ * - Handles retrying submission outside of IRQ context if it failed from within IRQ context.
+ */
+static void jd_done_worker(osk_workq_work *data)
+{
+ kbase_jd_atom *katom = CONTAINER_OF(data, kbase_jd_atom, work);
+ kbase_jd_context *jctx;
+ kbase_context *kctx;
+ kbasep_js_kctx_info *js_kctx_info;
+ kbasep_js_policy *js_policy;
+ kbase_device *kbdev;
+ kbasep_js_device_data *js_devdata;
+ int zapping;
+ u64 cache_jc = katom->jc;
+ base_jd_atom *cache_user_atom = katom->user_atom;
+
+ mali_bool retry_submit;
+ int retry_jobslot;
+
+ kctx = katom->kctx;
+ jctx = &kctx->jctx;
+ kbdev = kctx->kbdev;
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ js_devdata = &kbdev->js_data;
+ js_policy = &kbdev->js_data.policy;
+
+ KBASE_TRACE_ADD( kbdev, JD_DONE_WORKER, kctx, katom->user_atom, katom->jc, 0 );
+ /*
+ * Begin transaction on JD context and JS context
+ */
+ osk_mutex_lock( &jctx->lock );
+ osk_mutex_lock( &js_kctx_info->ctx.jsctx_mutex );
+
+ /* This worker only gets called on contexts that are scheduled *in*. This is
+ * because it only happens in response to an IRQ from a job that was
+ * running.
+ */
+ OSK_ASSERT( js_kctx_info->ctx.is_scheduled != MALI_FALSE );
+
+ /* Release cores this job was using (this might power down unused cores, and
+ * cause extra latency if a job submitted here - such as dependent jobs -
+ * would use those cores) */
+ kbasep_js_job_check_deref_cores(kbdev, katom);
+
+ /* Grab the retry_submit state before the katom disappears */
+ retry_submit = kbasep_js_get_job_retry_submit_slot( katom, &retry_jobslot );
+
+ if (katom->event.event_code == BASE_JD_EVENT_STOPPED
+ || katom->event.event_code == BASE_JD_EVENT_REMOVED_FROM_NEXT )
+ {
+ /* Requeue the atom on soft-stop / removed from NEXT registers */
+ OSK_PRINT_INFO(OSK_BASE_JM, "JS: Soft Stopped/Removed from next %p on Ctx %p; Requeuing", kctx );
+
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+ kbasep_js_clear_job_retry_submit( katom );
+
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+ kbasep_js_policy_enqueue_job( js_policy, katom );
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+
+ /* A STOPPED/REMOVED job must cause a re-submit to happen, in case it
+ * was the last job left. Crucially, work items on work queues can run
+ * out of order e.g. on different CPUs, so being able to submit from
+ * the IRQ handler is not a good indication that we don't need to run
+ * jobs; the submitted job could be processed on the work-queue
+ * *before* the stopped job, even though it was submitted after. */
+ OSK_ASSERT( retry_submit != MALI_FALSE );
+
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+ }
+ else
+ {
+ /* Remove the job from the system for all other reasons */
+ mali_bool need_to_try_schedule_context;
+
+ kbasep_js_remove_job( kctx, katom );
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+ /* jd_done_nolock() requires the jsctx_mutex lock to be dropped */
+
+ zapping = (katom->event.event_code != BASE_JD_EVENT_DONE);
+ need_to_try_schedule_context = jd_done_nolock(katom, zapping);
+
+ /* This ctx is already scheduled in, so return value guaranteed FALSE */
+ OSK_ASSERT( need_to_try_schedule_context == MALI_FALSE );
+ }
+ /* katom may have been freed now, do not use! */
+
+ /*
+ * Transaction complete
+ */
+ osk_mutex_unlock( &jctx->lock );
+
+ /* Job is now no longer running, so can now safely release the context reference
+ * This potentially schedules out the context, schedules in a new one, and
+ * runs a new job on the new one */
+ kbasep_js_runpool_release_ctx( kbdev, kctx );
+
+ /* Submit on any slots that might've had atoms blocked by the affinity of
+ the completed atom. */
+ kbase_js_affinity_submit_to_blocked_slots( kbdev );
+
+ /* If the IRQ handler failed to get a job from the policy, try again from
+ * outside the IRQ handler */
+ if ( retry_submit != MALI_FALSE )
+ {
+ KBASE_TRACE_ADD_SLOT( kbdev, JD_DONE_TRY_RUN_NEXT_JOB, kctx, cache_user_atom, cache_jc, retry_jobslot );
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+ kbasep_js_try_run_next_job_on_slot( kbdev, retry_jobslot );
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+ }
+ KBASE_TRACE_ADD( kbdev, JD_DONE_WORKER_END, kctx, cache_user_atom, cache_jc, 0 );
+}
+
+/**
+ * Work queue job cancel function
+ * Only called as part of 'Zapping' a context (which occurs on termination)
+ * Operates serially with the jd_done_worker() on the work queue.
+ *
+ * This can only be called on contexts that aren't scheduled.
+ *
+ * @note We don't need to release most of the resources that would occur on
+ * kbase_jd_done() or jd_done_worker(), because the atoms here must not be
+ * running (by virtue of only being called on contexts that aren't
+ * scheduled). The only resources that are an exception to this are:
+ * - those held by kbasep_js_job_check_ref_cores(), because these resources are
+ * held for non-running atoms as well as running atoms.
+ */
+static void jd_cancel_worker(osk_workq_work *data)
+{
+ kbase_jd_atom *katom = CONTAINER_OF(data, kbase_jd_atom, work);
+ kbase_jd_context *jctx;
+ kbase_context *kctx;
+ kbasep_js_kctx_info *js_kctx_info;
+ mali_bool need_to_try_schedule_context;
+
+ kctx = katom->kctx;
+ jctx = &kctx->jctx;
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ {
+ kbase_device *kbdev = kctx->kbdev;
+ KBASE_TRACE_ADD( kbdev, JD_CANCEL_WORKER, kctx, katom->user_atom, katom->jc, 0 );
+ }
+
+ /* This only gets called on contexts that are scheduled out. Hence, we must
+ * make sure we don't decrement the count of running jobs (there aren't
+ * any), nor must we try to schedule out the context (it's already
+ * scheduled out).
+ */
+ OSK_ASSERT( js_kctx_info->ctx.is_scheduled == MALI_FALSE );
+
+ /* Release cores this job was using (this might power down unused cores) */
+ kbasep_js_job_check_deref_cores(kctx->kbdev, katom);
+
+ /* Scheduler: Remove the job from the system */
+ osk_mutex_lock( &js_kctx_info->ctx.jsctx_mutex );
+ kbasep_js_remove_job( kctx, katom );
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+
+ osk_mutex_lock(&jctx->lock);
+
+ /* Always enable zapping */
+ need_to_try_schedule_context = jd_done_nolock(katom, 1);
+ /* Because we're zapping, we're not adding any more jobs to this ctx, so there is
+ * no need to schedule the context, nor does the jsctx_mutex need to be held
+ * around this call. */
+ OSK_ASSERT( need_to_try_schedule_context == MALI_FALSE );
+
+ /* katom may have been freed now, do not use! */
+ osk_mutex_unlock(&jctx->lock);
+
+}
+
+/**
+ * @brief Complete a job that has been removed from the Hardware
+ *
+ * This must be used whenever a job has been removed from the Hardware, e.g.:
+ * - An IRQ indicates that the job finished (for both error and 'done' codes)
+ * - The job was evicted from the JSn_HEAD_NEXT registers during a Soft/Hard stop.
+ *
+ * Some work is carried out immediately, and the rest is deferred onto a workqueue
+ *
+ * This can be called safely from atomic context.
+ *
+ */
+void kbase_jd_done(kbase_jd_atom *katom, int slot_nr, kbasep_js_tick *end_timestamp, mali_bool start_new_jobs)
+{
+ kbase_context *kctx;
+ kbase_device *kbdev;
+ OSK_ASSERT(katom);
+ kctx = katom->kctx;
+ OSK_ASSERT(kctx);
+ kbdev = kctx->kbdev;
+ OSK_ASSERT(kbdev);
+
+ KBASE_TRACE_ADD( kbdev, JD_DONE, kctx, katom->user_atom, katom->jc, 0 );
+
+ kbasep_js_job_done_slot_irq( katom, slot_nr, end_timestamp, start_new_jobs );
+
+ osk_workq_work_init(&katom->work, jd_done_worker);
+ osk_workq_submit(&kctx->jctx.job_done_wq, &katom->work);
+}
+KBASE_EXPORT_TEST_API(kbase_jd_done)
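+
+/* Illustrative sketch (not part of this patch): the expected call pattern
+ * from a job-IRQ completion path. The example_* name is an assumption made
+ * purely for illustration; only kbase_jd_done() itself is real.
+ *
+ * static void example_complete_from_irq(kbase_jd_atom *katom, int slot_nr)
+ * {
+ * kbasep_js_tick now = kbasep_js_get_js_ticks();
+ * // Safe in atomic context: the heavy work is deferred to
+ * // jd_done_worker() on kctx->jctx.job_done_wq.
+ * kbase_jd_done(katom, slot_nr, &now, MALI_TRUE);
+ * }
+ */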
+
+
+void kbase_jd_cancel(kbase_jd_atom *katom)
+{
+ kbase_context *kctx;
+ kbasep_js_kctx_info *js_kctx_info;
+ kbase_device *kbdev;
+
+ kctx = katom->kctx;
+ js_kctx_info = &kctx->jctx.sched_info;
+ kbdev = kctx->kbdev;
+
+ KBASE_TRACE_ADD( kbdev, JD_CANCEL, kctx, katom->user_atom, katom->jc, 0 );
+
+ /* This should only be done from a context that is not scheduled */
+ OSK_ASSERT( js_kctx_info->ctx.is_scheduled == MALI_FALSE );
+
+ katom->event.event_code = BASE_JD_EVENT_JOB_CANCELLED;
+
+ osk_workq_work_init(&katom->work, jd_cancel_worker);
+ osk_workq_submit(&kctx->jctx.job_done_wq, &katom->work);
+}
+
+void kbase_jd_flush_workqueues(kbase_context *kctx)
+{
+ kbase_device *kbdev;
+ int i;
+
+ OSK_ASSERT( kctx );
+
+ kbdev = kctx->kbdev;
+ OSK_ASSERT( kbdev );
+
+ osk_workq_flush( &kctx->jctx.job_done_wq );
+
+ /* Flush all workqueues, for simplicity */
+ for (i = 0; i < kbdev->nr_hw_address_spaces; i++)
+ {
+ osk_workq_flush( &kbdev->as[i].pf_wq );
+ }
+}
+
+typedef struct zap_reset_data
+{
+ /* The stages are:
+ * 1. The timer has never been called
+ * 2. The zap has timed out, all slots are soft-stopped - the GPU reset will happen.
+ * The GPU has been reset when kbdev->reset_waitq is signalled
+ *
+ * (-1 - The timer has been cancelled)
+ */
+ int stage;
+ kbase_device *kbdev;
+ osk_timer *timer;
+ osk_spinlock lock;
+} zap_reset_data;
+
+static void zap_timeout_callback(void *data)
+{
+ zap_reset_data *reset_data = (zap_reset_data*)data;
+ kbase_device *kbdev = reset_data->kbdev;
+
+ osk_spinlock_lock(&reset_data->lock);
+
+ if (reset_data->stage == -1)
+ {
+ goto out;
+ }
+
+ if (kbase_prepare_to_reset_gpu(kbdev))
+ {
+ kbase_reset_gpu(kbdev);
+ }
+
+ reset_data->stage = 2;
+
+out:
+ osk_spinlock_unlock(&reset_data->lock);
+}
+
+void kbase_jd_zap_context(kbase_context *kctx)
+{
+ kbase_device *kbdev;
+ osk_timer zap_timeout;
+ osk_error ret;
+ zap_reset_data reset_data;
+
+ OSK_ASSERT(kctx);
+
+ kbdev = kctx->kbdev;
+
+ KBASE_TRACE_ADD( kbdev, JD_ZAP_CONTEXT, kctx, NULL, 0u, 0u );
+ kbase_job_zap_context(kctx);
+
+ ret = osk_timer_on_stack_init(&zap_timeout);
+ if (ret != OSK_ERR_NONE)
+ {
+ goto skip_timeout;
+ }
+
+ ret = osk_spinlock_init(&reset_data.lock, OSK_LOCK_ORDER_JD_ZAP_CONTEXT);
+ if (ret != OSK_ERR_NONE)
+ {
+ osk_timer_on_stack_term(&zap_timeout);
+ goto skip_timeout;
+ }
+
+ reset_data.kbdev = kbdev;
+ reset_data.timer = &zap_timeout;
+ reset_data.stage = 1;
+ osk_timer_callback_set(&zap_timeout, zap_timeout_callback, &reset_data);
+ ret = osk_timer_start(&zap_timeout, ZAP_TIMEOUT);
+
+ if (ret != OSK_ERR_NONE)
+ {
+ osk_spinlock_term(&reset_data.lock);
+ osk_timer_on_stack_term(&zap_timeout);
+ goto skip_timeout;
+ }
+
+ /* If we jump to here then the zap timeout will not be active,
+ * so if the GPU hangs the driver will also hang. This will only
+ * happen if the driver is very resource starved.
+ */
+skip_timeout:
+
+ /* Wait for all jobs to finish, and for the context to be not-scheduled
+ * (due to kbase_job_zap_context(), we also guarantee it's not in the JS
+ * policy queue either) */
+ osk_waitq_wait(&kctx->jctx.zero_jobs_waitq);
+ osk_waitq_wait(&kctx->jctx.sched_info.ctx.not_scheduled_waitq);
+
+ if (ret == OSK_ERR_NONE)
+ {
+ osk_spinlock_lock(&reset_data.lock);
+ if (reset_data.stage == 1)
+ {
+ /* The timer hasn't run yet - so cancel it */
+ reset_data.stage = -1;
+ }
+ osk_spinlock_unlock(&reset_data.lock);
+
+ osk_timer_stop(&zap_timeout);
+
+ if (reset_data.stage == 2)
+ {
+ /* The reset has already started.
+ * Wait for the reset to complete
+ */
+ osk_waitq_wait(&kbdev->reset_waitq);
+ }
+ osk_timer_on_stack_term(&zap_timeout);
+ osk_spinlock_term(&reset_data.lock);
+ }
+
+ OSK_PRINT_INFO(OSK_BASE_JM, "Zap: Finished Context %p", kctx );
+
+ /* Ensure that the signallers of the waitqs have finished */
+ osk_mutex_lock(&kctx->jctx.lock);
+ osk_mutex_lock(&kctx->jctx.sched_info.ctx.jsctx_mutex);
+ osk_mutex_unlock(&kctx->jctx.sched_info.ctx.jsctx_mutex);
+ osk_mutex_unlock(&kctx->jctx.lock);
+}
+KBASE_EXPORT_TEST_API(kbase_jd_zap_context)
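+
+/* Illustrative sketch (not part of this patch): how context teardown is
+ * expected to sequence the zap. The example_* name is hypothetical, and the
+ * ordering (zap first, then free JD resources) follows from the waits
+ * performed inside kbase_jd_zap_context() above.
+ *
+ * static void example_teardown(kbase_context *kctx)
+ * {
+ * kbase_jd_zap_context(kctx); // blocks until zero jobs and descheduled
+ * kbase_jd_exit(kctx); // now safe to tear down the JD context
+ * }
+ */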
+
+mali_error kbase_jd_init(struct kbase_context *kctx)
+{
+ void *kaddr;
+ int i;
+ mali_error mali_err;
+ osk_error osk_err;
+#ifdef CONFIG_KDS
+ int err;
+#endif
+
+ OSK_ASSERT(kctx);
+ OSK_ASSERT(NULL == kctx->jctx.pool);
+
+ kaddr = osk_vmalloc(BASEP_JCTX_RB_NRPAGES * OSK_PAGE_SIZE);
+ if (!kaddr)
+ {
+ mali_err = MALI_ERROR_OUT_OF_MEMORY;
+ goto out;
+ }
+ osk_err = osk_workq_init(&kctx->jctx.job_done_wq, "mali_jd", 0);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ mali_err = MALI_ERROR_OUT_OF_MEMORY;
+ goto out1;
+ }
+
+ for (i = 0; i < KBASE_JD_DEP_QUEUE_SIZE; i++)
+ kctx->jctx.dep_queue.queue[i] = NULL;
+
+ for (i = 0; i < BASEP_JD_SEM_ARRAY_SIZE; i++)
+ kctx->jctx.dep_queue.sem[i] = 0;
+
+ osk_err = osk_mutex_init(&kctx->jctx.lock, OSK_LOCK_ORDER_JCTX);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ mali_err = MALI_ERROR_FUNCTION_FAILED;
+ goto out2;
+ }
+
+ osk_err = osk_waitq_init(&kctx->jctx.zero_jobs_waitq);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ mali_err = MALI_ERROR_FUNCTION_FAILED;
+ goto out3;
+ }
+
+ osk_err = osk_spinlock_irq_init(&kctx->jctx.tb_lock, OSK_LOCK_ORDER_TB);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ mali_err = MALI_ERROR_FUNCTION_FAILED;
+ goto out4;
+ }
+
+#ifdef CONFIG_KDS
+ err = kds_callback_init(&kctx->jctx.kds_cb, 0, kds_dep_clear);
+ if (0 != err)
+ {
+ mali_err = MALI_ERROR_FUNCTION_FAILED;
+ goto out5;
+ }
+#endif
+
+ osk_waitq_set(&kctx->jctx.zero_jobs_waitq);
+
+ kctx->jctx.pool = kaddr;
+ kctx->jctx.pool_size = BASEP_JCTX_RB_NRPAGES * OSK_PAGE_SIZE;
+ kctx->jctx.job_nr = 0;
+
+ return MALI_ERROR_NONE;
+
+#ifdef CONFIG_KDS
+out5:
+ osk_spinlock_irq_term(&kctx->jctx.tb_lock);
+#endif
+out4:
+ osk_waitq_term(&kctx->jctx.zero_jobs_waitq);
+out3:
+ osk_mutex_term(&kctx->jctx.lock);
+out2:
+ osk_workq_term(&kctx->jctx.job_done_wq);
+out1:
+ osk_vfree(kaddr);
+out:
+ return mali_err;
+}
+KBASE_EXPORT_TEST_API(kbase_jd_init)
+
+void kbase_jd_exit(struct kbase_context *kctx)
+{
+ OSK_ASSERT(kctx);
+ /* Assert if kbase_jd_init has not been called before this function
+ (kbase_jd_init initializes the pool) */
+ OSK_ASSERT(kctx->jctx.pool);
+
+#ifdef CONFIG_KDS
+ kds_callback_term(&kctx->jctx.kds_cb);
+#endif
+ osk_spinlock_irq_term(&kctx->jctx.tb_lock);
+ /* Work queue is emptied by this */
+ osk_workq_term(&kctx->jctx.job_done_wq);
+ osk_waitq_term(&kctx->jctx.zero_jobs_waitq);
+ osk_vfree(kctx->jctx.pool);
+ osk_mutex_term(&kctx->jctx.lock);
+}
+KBASE_EXPORT_TEST_API(kbase_jd_exit)
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_jm.c
+ * Base kernel job manager APIs
+ */
+
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_midg_regmap.h>
+#include <kbase/src/common/mali_kbase_gator.h>
+#include <kbase/src/common/mali_kbase_js_affinity.h>
+#include <kbase/src/common/mali_kbase_8401_workaround.h>
+#include <kbase/src/common/mali_kbase_hw.h>
+
+#include "mali_kbase_jm.h"
+
+#define beenthere(f, a...) OSK_PRINT_INFO(OSK_BASE_JM, "%s:" f, __func__, ##a)
+
+static void kbasep_try_reset_gpu_early(kbase_device *kbdev);
+
+static void kbase_job_hw_submit(kbase_device *kbdev, kbase_jd_atom *katom, int js)
+{
+ kbase_context *kctx;
+ u32 cfg;
+ u64 jc_head = katom->jc;
+
+ OSK_ASSERT(kbdev);
+ OSK_ASSERT(katom);
+
+ kctx = katom->kctx;
+
+ /* Command register must be available */
+ OSK_ASSERT(kbasep_jm_is_js_free(kbdev, js, kctx));
+ /* Affinity is not violating */
+ kbase_js_debug_log_current_affinities( kbdev );
+ OSK_ASSERT(!kbase_js_affinity_would_violate(kbdev, js, katom->affinity));
+
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_LO), jc_head & 0xFFFFFFFF, kctx);
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_HI), jc_head >> 32, kctx);
+
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_AFFINITY_NEXT_LO), katom->affinity & 0xFFFFFFFF, kctx);
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_AFFINITY_NEXT_HI), katom->affinity >> 32, kctx);
+
+ /* start MMU, medium priority, cache clean/flush on end, clean/flush on start */
+ cfg = kctx->as_nr | JSn_CONFIG_END_FLUSH_CLEAN_INVALIDATE | JSn_CONFIG_START_MMU
+ | JSn_CONFIG_START_FLUSH_CLEAN_INVALIDATE | JSn_CONFIG_THREAD_PRI(8);
+
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_CONFIG_NEXT), cfg, kctx);
+
+ /* Write an approximate start timestamp.
+ * It's approximate because there might be a job in the HEAD register. In
+ * such cases, we'll try to make a better approximation in the IRQ handler
+ * (up to the KBASE_JS_IRQ_THROTTLE_TIME_US). */
+ katom->start_timestamp = kbasep_js_get_js_ticks();
+
+ /* GO ! */
+ OSK_PRINT_INFO(OSK_BASE_JM, "JS: Submitting atom %p from ctx %p to js[%d] with head=0x%llx, affinity=0x%llx",
+ katom, kctx, js, jc_head, katom->affinity );
+
+ KBASE_TRACE_ADD_SLOT_INFO( kbdev, JM_SUBMIT, kctx, katom->user_atom, jc_head, js, (u32)katom->affinity );
+
+#if MALI_GATOR_SUPPORT
+ kbase_trace_mali_job_slots_event(GATOR_MAKE_EVENT(GATOR_JOB_SLOT_START, js));
+#endif
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_COMMAND_NEXT), JSn_COMMAND_START, katom->kctx);
+}
+
+void kbase_job_submit_nolock(kbase_device *kbdev, kbase_jd_atom *katom, int js)
+{
+ kbase_jm_slot *jm_slots;
+
+ OSK_ASSERT(kbdev);
+
+ jm_slots = kbdev->jm_slots;
+
+ /*
+ * We can have:
+ * - one job already done (pending interrupt),
+ * - one running,
+ * - one ready to be run.
+ * Hence a maximum of 3 inflight jobs, so the 4-entry
+ * queue is sufficient.
+ */
+ kbasep_jm_enqueue_submit_slot( &jm_slots[js], katom );
+ kbase_job_hw_submit(kbdev, katom, js);
+}
+
+void kbase_job_done_slot(kbase_device *kbdev, int s, u32 completion_code, u64 job_tail, kbasep_js_tick *end_timestamp)
+{
+ kbase_jm_slot *slot;
+ kbase_jd_atom *katom;
+ mali_addr64 jc_head;
+ kbase_context *kctx;
+
+ OSK_ASSERT(kbdev);
+
+ /* IMPORTANT: this function must only contain work necessary to complete a
+ * job from a Real IRQ (and not 'fake' completion, e.g. from
+ * Soft-stop). For general work that must happen no matter how the job was
+ * removed from the hardware, place it in kbase_jd_done() */
+
+ slot = &kbdev->jm_slots[s];
+ katom = kbasep_jm_dequeue_submit_slot( slot );
+
+ /* If the completed katom was a dummy job inserted for HW workarounds, take no further action */
+ if(kbasep_jm_is_dummy_workaround_job(kbdev, katom))
+ {
+ KBASE_TRACE_ADD_SLOT_INFO( kbdev, JM_JOB_DONE, NULL, NULL, 0, s, completion_code );
+ return;
+ }
+
+ jc_head = katom->jc;
+ kctx = katom->kctx;
+
+ KBASE_TRACE_ADD_SLOT_INFO( kbdev, JM_JOB_DONE, kctx, katom->user_atom, jc_head, s, completion_code );
+
+ if ( completion_code != BASE_JD_EVENT_DONE && completion_code != BASE_JD_EVENT_STOPPED )
+ {
+
+#if KBASE_TRACE_DUMP_ON_JOB_SLOT_ERROR != 0
+ KBASE_TRACE_DUMP( kbdev );
+#endif
+ }
+ if (job_tail != 0)
+ {
+ mali_bool was_updated = (job_tail != jc_head);
+ /* Some of the job has been executed, so we update the job chain address to where we should resume from */
+ katom->jc = job_tail;
+ if ( was_updated )
+ {
+ KBASE_TRACE_ADD_SLOT( kbdev, JM_UPDATE_HEAD, kctx, katom->user_atom, job_tail, s );
+ }
+ }
+
+ /* Only update the event code for jobs that weren't cancelled */
+ if ( katom->event.event_code != BASE_JD_EVENT_JOB_CANCELLED )
+ {
+ katom->event.event_code = completion_code;
+ }
+ kbase_device_trace_register_access(kctx, REG_WRITE , JOB_CONTROL_REG(JOB_IRQ_CLEAR), 1 << s);
+
+ /* Complete the job, with start_new_jobs = MALI_TRUE
+ *
+ * Also defer remaining work onto the workqueue:
+ * - Re-queue Soft-stopped jobs
+ * - For any other jobs, queue the job back into the dependency system
+ * - Schedule out the parent context if necessary, and schedule a new one in.
+ */
+ kbase_jd_done( katom, s, end_timestamp, MALI_TRUE );
+}
+
+/**
+ * Update the start_timestamp of the job currently in the HEAD, based on the
+ * fact that we got an IRQ for the previous set of completed jobs.
+ *
+ * The estimate also takes into account the KBASE_JS_IRQ_THROTTLE_TIME_US and
+ * the time the job was submitted, to work out the best estimate (which might
+ * still result in an over-estimate of the time actually spent)
+ */
+STATIC void kbasep_job_slot_update_head_start_timestamp( kbase_device *kbdev, kbase_jm_slot *slot, kbasep_js_tick end_timestamp )
+{
+ OSK_ASSERT(slot);
+
+ if ( kbasep_jm_nr_jobs_submitted( slot ) > 0 )
+ {
+ kbase_jd_atom *katom;
+ kbasep_js_tick new_timestamp;
+ katom = kbasep_jm_peek_idx_submit_slot( slot, 0 ); /* The atom in the HEAD */
+
+ OSK_ASSERT( katom != NULL );
+
+ if ( kbasep_jm_is_dummy_workaround_job( kbdev, katom ) != MALI_FALSE )
+ {
+ /* Don't access the members of HW workaround 'dummy' jobs */
+ return;
+ }
+
+ /* Account for any IRQ Throttle time - makes an overestimate of the time spent by the job */
+ new_timestamp = end_timestamp - kbasep_js_convert_js_us_to_ticks(KBASE_JS_IRQ_THROTTLE_TIME_US);
+ if ( kbasep_js_ticks_after( new_timestamp, katom->start_timestamp ) )
+ {
+ /* Only update the timestamp if it's a better estimate than what's currently stored.
+ * This is because our estimate that accounts for the throttle time may be too much
+ * of an overestimate */
+ katom->start_timestamp = new_timestamp;
+ }
+ }
+}
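+
+/* Worked example (illustrative): suppose the completion IRQ arrives at tick
+ * T and KBASE_JS_IRQ_THROTTLE_TIME_US converts to K ticks. The job now in
+ * HEAD is assumed to have started around T - K, since the IRQ for the
+ * previous job may have been delayed by up to the throttle time. We keep
+ * the later of (submit-time estimate, T - K) so the elapsed time attributed
+ * to the job is never inflated by using the too-early submit timestamp. */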
+
+void kbase_job_done(kbase_device *kbdev, u32 done)
+{
+ int i;
+ u32 count = 0;
+ kbasep_js_tick end_timestamp = kbasep_js_get_js_ticks();
+
+ OSK_ASSERT(kbdev);
+
+ KBASE_TRACE_ADD( kbdev, JM_IRQ, NULL, NULL, 0, done );
+
+ OSK_MEMSET( &kbdev->slot_submit_count_irq[0], 0, sizeof(kbdev->slot_submit_count_irq) );
+
+ /* write irq throttle register, this will prevent irqs from occurring until
+ * the given number of gpu clock cycles have passed */
+ {
+ u32 irq_throttle_cycles = osk_atomic_get( &kbdev->irq_throttle_cycles );
+ kbase_reg_write( kbdev, JOB_CONTROL_REG( JOB_IRQ_THROTTLE ), irq_throttle_cycles, NULL );
+ }
+
+ while (done) {
+ kbase_jm_slot *slot;
+ u32 failed = done >> 16;
+
+ /* treat failed slots as finished slots */
+ u32 finished = (done & 0xFFFF) | failed;
+
+ /* Note: This is inherently unfair, as we always check
+ * for lower numbered interrupts before the higher
+ * numbered ones.*/
+ i = osk_find_first_set_bit(finished);
+ OSK_ASSERT(i >= 0);
+
+ slot = kbase_job_slot_lock(kbdev, i);
+
+ do {
+ int nr_done;
+ u32 active;
+ u32 completion_code = BASE_JD_EVENT_DONE; /* assume OK */
+ u64 job_tail = 0;
+
+ if (failed & (1u << i))
+ {
+ /* read out the job slot status code if the job slot reported failure */
+ completion_code = kbase_reg_read(kbdev, JOB_SLOT_REG(i, JSn_STATUS), NULL);
+
+ switch(completion_code)
+ {
+ case BASE_JD_EVENT_STOPPED:
+#if MALI_GATOR_SUPPORT
+ kbase_trace_mali_job_slots_event(GATOR_MAKE_EVENT(GATOR_JOB_SLOT_SOFT_STOPPED, i));
+#endif
+ /* Soft-stopped job - read the value of JS<n>_TAIL so that the job chain can be resumed */
+ job_tail = (u64)kbase_reg_read(kbdev, JOB_SLOT_REG(i, JSn_TAIL_LO), NULL) |
+ ((u64)kbase_reg_read(kbdev, JOB_SLOT_REG(i, JSn_TAIL_HI), NULL) << 32);
+ break;
+ default:
+ OSK_PRINT_WARN(OSK_BASE_JD, "error detected from slot %d, job status 0x%08x (%s)",
+ i, completion_code, kbase_exception_name(completion_code));
+ }
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_6787))
+ {
+ /* cache flush when jobs complete with non-done codes */
+ /* use GPU_COMMAND completion solution */
+ /* clean & invalidate the caches */
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_COMMAND), 8, NULL);
+
+ /* wait for cache flush to complete before continuing */
+ while((kbase_reg_read(kbdev, GPU_CONTROL_REG(GPU_IRQ_RAWSTAT), NULL) & CLEAN_CACHES_COMPLETED) == 0);
+ /* clear the CLEAN_CACHES_COMPLETED irq*/
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_IRQ_CLEAR), CLEAN_CACHES_COMPLETED, NULL);
+ }
+ }
+
+ kbase_reg_write(kbdev, JOB_CONTROL_REG(JOB_IRQ_CLEAR), done & ((1 << i) | (1 << (i + 16))), NULL);
+ active = kbase_reg_read(kbdev, JOB_CONTROL_REG(JOB_IRQ_JS_STATE), NULL);
+
+ if (((active >> i) & 1) == 0 && (((done >> (i+16)) & 1) == 0))
+ {
+ /* There is a potential race we must work around:
+ *
+ * 1. A job slot has a job in both current and next registers
+ * 2. The job in current completes successfully, the IRQ handler reads RAWSTAT
+ * and calls this function with the relevant bit set in "done"
+ * 3. The job in the next registers becomes the current job on the GPU
+ * 4. Sometime before the JOB_IRQ_CLEAR line above the job on the GPU _fails_
+ * 5. The IRQ_CLEAR clears the done bit but not the failed bit. This atomically sets
+ * JOB_IRQ_JS_STATE. However since both jobs have now completed the relevant bits
+ * for the slot are set to 0.
+ *
+ * If we now did nothing then we'd incorrectly assume that _both_ jobs had completed
+ * successfully (since we haven't yet observed the fail bit being set in RAWSTAT).
+ *
+ * So at this point if there are no active jobs left we check to see if RAWSTAT has a failure
+ * bit set for the job slot. If it does we know that there has been a new failure that we
+ * didn't previously know about, so we make sure that we record this in active (but we wait
+ * for the next loop to deal with it).
+ *
+ * If we were handling a job failure (i.e. done has the relevant high bit set) then we know that
+ * the value read back from JOB_IRQ_JS_STATE is the correct number of remaining jobs because
+ * the failed job will have prevented any further jobs from starting execution.
+ */
+ u32 rawstat = kbase_reg_read(kbdev, JOB_CONTROL_REG(JOB_IRQ_RAWSTAT), NULL);
+
+ if ((rawstat >> (i+16)) & 1)
+ {
+ /* There is a failed job that we've missed - add it back to active */
+ active |= (1u << i);
+ }
+ }
+
+ OSK_PRINT_INFO(OSK_BASE_JM, "Job ended with status 0x%08X\n", completion_code);
+
+ nr_done = kbasep_jm_nr_jobs_submitted( slot );
+ nr_done -= (active >> i) & 1;
+ nr_done -= (active >> (i + 16)) & 1;
+
+ if (nr_done <= 0)
+ {
+ OSK_PRINT_WARN(OSK_BASE_JM,
+ "Spurious interrupt on slot %d",
+ i);
+ goto spurious;
+ }
+
+ count += nr_done;
+
+ while (nr_done) {
+ if (nr_done == 1)
+ {
+ kbase_job_done_slot(kbdev, i, completion_code, job_tail, &end_timestamp);
+ }
+ else
+ {
+ /* More than one job has completed. Since this is not the last job being reported this time it
+ * must have passed. This is because the hardware will not allow further jobs in a job slot to
+ * complete until the failed job is cleared from the IRQ status.
+ */
+ kbase_job_done_slot(kbdev, i, BASE_JD_EVENT_DONE, 0, &end_timestamp);
+ }
+ nr_done--;
+ }
+
+spurious:
+ done = kbase_reg_read(kbdev, JOB_CONTROL_REG(JOB_IRQ_RAWSTAT), NULL);
+
+ failed = done >> 16;
+ finished = (done & 0xFFFF) | failed;
+ } while (finished & (1 << i));
+
+ kbasep_job_slot_update_head_start_timestamp( kbdev, slot, end_timestamp );
+
+ kbase_job_slot_unlock(kbdev, i);
+ }
+
+ if (osk_atomic_get(&kbdev->reset_gpu) == KBASE_RESET_GPU_COMMITTED)
+ {
+ /* If we're trying to reset the GPU then we might be able to do it early
+ * (without waiting for a timeout) because some jobs have completed
+ */
+ kbasep_try_reset_gpu_early(kbdev);
+ }
+
+ KBASE_TRACE_ADD( kbdev, JM_IRQ_END, NULL, NULL, 0, count );
+}
+KBASE_EXPORT_TEST_API(kbase_job_done)
+
+
+static mali_bool kbasep_soft_stop_allowed(kbase_device *kbdev, u16 core_reqs)
+{
+ mali_bool soft_stops_allowed = MALI_TRUE;
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8408))
+ {
+ if ((core_reqs & BASE_JD_REQ_T) != 0)
+ {
+ soft_stops_allowed = MALI_FALSE;
+ }
+ }
+ return soft_stops_allowed;
+}
+
+static mali_bool kbasep_hard_stop_allowed(kbase_device *kbdev, u16 core_reqs)
+{
+ mali_bool hard_stops_allowed = MALI_TRUE;
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8394))
+ {
+ if ((core_reqs & BASE_JD_REQ_T) != 0)
+ {
+ hard_stops_allowed = MALI_FALSE;
+ }
+ }
+ return hard_stops_allowed;
+}
+
+static void kbasep_job_slot_soft_or_hard_stop_do_action(kbase_device *kbdev, int js, u32 action,
+ u16 core_reqs, kbase_context *kctx )
+{
+#if KBASE_TRACE_ENABLE
+ u32 status_reg_before;
+ u64 job_in_head_before;
+ u32 status_reg_after;
+
+ /* Check the head pointer */
+ job_in_head_before = ((u64)kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_HEAD_LO), NULL))
+ | (((u64)kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_HEAD_HI), NULL)) << 32);
+ status_reg_before = kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_STATUS), NULL );
+#endif
+
+ if (action == JSn_COMMAND_SOFT_STOP)
+ {
+ mali_bool soft_stop_allowed = kbasep_soft_stop_allowed( kbdev, core_reqs );
+ if (!soft_stop_allowed)
+ {
+#if MALI_DEBUG != 0
+ OSK_PRINT(OSK_BASE_JM, "Attempt made to soft-stop a job that cannot be soft-stopped. core_reqs = 0x%X", (unsigned int) core_reqs);
+#endif
+ return;
+ }
+ }
+
+ if (action == JSn_COMMAND_HARD_STOP)
+ {
+ mali_bool hard_stop_allowed = kbasep_hard_stop_allowed( kbdev, core_reqs );
+ if (!hard_stop_allowed)
+ {
+ /* Jobs can be hard-stopped for the following reasons:
+ * * CFS decides the job has been running too long (and soft-stop has not occurred).
+ * In this case the GPU will be reset by CFS if the job remains on the GPU.
+ *
+ * * The context is destroyed, kbase_jd_zap_context will attempt to hard-stop the job. However
+ * it also has a watchdog which will cause the GPU to be reset if the job remains on the GPU.
+ *
+ * * An (unhandled) MMU fault occurred. As long as BASE_HW_ISSUE_8245 is defined then
+ * the GPU will be reset.
+ *
+ * All three cases result in the GPU being reset if the hard-stop fails,
+ * so it is safe to just return and ignore the hard-stop request.
+ */
+ OSK_PRINT_WARN(OSK_BASE_JM, "Attempt made to hard-stop a job that cannot be hard-stopped. core_reqs = 0x%X", (unsigned int) core_reqs);
+ return;
+ }
+ }
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8316) && action == JSn_COMMAND_SOFT_STOP)
+ {
+ int i;
+ kbase_jm_slot *slot;
+ slot = &kbdev->jm_slots[js];
+
+ for (i = 0; i < kbasep_jm_nr_jobs_submitted(slot); i++)
+ {
+ kbase_jd_atom *katom;
+ kbase_as * as;
+
+ katom = kbasep_jm_peek_idx_submit_slot(slot, i);
+
+ OSK_ASSERT(katom);
+
+ if ( kbasep_jm_is_dummy_workaround_job( kbdev, katom ) != MALI_FALSE )
+ {
+ /* Don't access the members of HW workaround 'dummy' jobs
+ *
+ * This assumes that such jobs can't cause HW_ISSUE_8316, and could only be blocked
+ * by other jobs causing HW_ISSUE_8316 (which will get poked/or eventually get killed) */
+ continue;
+ }
+
+ if ( !katom->poking )
+ {
+ OSK_ASSERT(katom->kctx);
+ OSK_ASSERT(katom->kctx->as_nr != KBASEP_AS_NR_INVALID);
+
+ katom->poking = 1;
+ as = &kbdev->as[katom->kctx->as_nr];
+ kbase_as_poking_timer_retain(as);
+ }
+ }
+ }
+
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_COMMAND), action, kctx);
+
+#if KBASE_TRACE_ENABLE
+ status_reg_after = kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_STATUS), NULL );
+ if (status_reg_after == BASE_JD_EVENT_ACTIVE)
+ {
+ kbase_jm_slot *slot;
+ kbase_jd_atom *head;
+ kbase_context *head_kctx;
+
+ slot = &kbdev->jm_slots[js];
+ head = kbasep_jm_peek_idx_submit_slot( slot, slot->submitted_nr-1 );
+ head_kctx = head->kctx;
+
+ /* We don't need to check kbasep_jm_is_dummy_workaround_job( head ) here:
+ * - Members are not indirected through
+ * - The members will all be zero anyway
+ */
+ if ( status_reg_before == BASE_JD_EVENT_ACTIVE )
+ {
+ KBASE_TRACE_ADD_SLOT( kbdev, JM_CHECK_HEAD, head_kctx, head->user_atom, job_in_head_before, js );
+ }
+ else
+ {
+ KBASE_TRACE_ADD_SLOT( kbdev, JM_CHECK_HEAD, NULL, NULL, 0, js );
+ }
+ if (action == JSn_COMMAND_SOFT_STOP)
+ {
+ KBASE_TRACE_ADD_SLOT( kbdev, JM_SOFTSTOP, head_kctx, head->user_atom, head->jc, js );
+ }
+ else
+ {
+ KBASE_TRACE_ADD_SLOT( kbdev, JM_HARDSTOP, head_kctx, head->user_atom, head->jc, js );
+ }
+ }
+ else
+ {
+ if ( status_reg_before == BASE_JD_EVENT_ACTIVE )
+ {
+ KBASE_TRACE_ADD_SLOT( kbdev, JM_CHECK_HEAD, NULL, NULL, job_in_head_before, js );
+ }
+ else
+ {
+ KBASE_TRACE_ADD_SLOT( kbdev, JM_CHECK_HEAD, NULL, NULL, 0, js );
+ }
+
+ if (action == JSn_COMMAND_SOFT_STOP)
+ {
+ KBASE_TRACE_ADD_SLOT( kbdev, JM_SOFTSTOP, NULL, NULL, 0, js );
+ }
+ else
+ {
+ KBASE_TRACE_ADD_SLOT( kbdev, JM_HARDSTOP, NULL, NULL, 0, js );
+ }
+ }
+#endif
+}
+
+/* Helper macros used by kbasep_job_slot_soft_or_hard_stop */
+#define JM_SLOT_MAX_JOB_SUBMIT_REGS 2
+#define JM_JOB_IS_CURRENT_JOB_INDEX(n) (1 == n) /* Index of the last (most recently submitted) job to process */
+#define JM_JOB_IS_NEXT_JOB_INDEX(n) (2 == n) /* Index of the second-to-last job to process */
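+
+/* Example of the index arithmetic (illustrative): with jobs_submitted == 3
+ * the loop below starts at i == 1, so (jobs_submitted - i) is 2 for the
+ * second-to-last job (whose successor may sit in the NEXT registers) and 1
+ * for the most recently submitted job, matching the two macros above. */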
+
+/** Soft or hard-stop a slot
+ *
+ * This function safely ensures that the correct job is either hard or soft-stopped.
+ * It deals with evicting jobs from the next registers where appropriate.
+ *
+ * This does not attempt to stop or evict jobs that are 'dummy' jobs for HW workarounds.
+ *
+ * @param kbdev The kbase device
+ * @param kctx The context to soft/hard-stop job(s) from (or NULL if all jobs should be targeted)
+ * @param js The slot that the job(s) are on
+ * @param target_katom The atom that should be targeted (or NULL if all jobs from the context should be targeted)
+ * @param action The action to perform, either JSn_COMMAND_HARD_STOP or JSn_COMMAND_SOFT_STOP
+ */
+static void kbasep_job_slot_soft_or_hard_stop(kbase_device *kbdev, kbase_context *kctx, int js,
+ kbase_jd_atom *target_katom, u32 action)
+{
+ kbase_jd_atom *katom;
+ u8 i;
+ u8 jobs_submitted;
+ kbase_jm_slot *slot;
+ u16 core_reqs;
+
+
+ OSK_ASSERT(action == JSn_COMMAND_HARD_STOP || action == JSn_COMMAND_SOFT_STOP);
+ OSK_ASSERT(kbdev);
+
+ slot = &kbdev->jm_slots[js];
+ OSK_ASSERT(slot);
+
+ jobs_submitted = kbasep_jm_nr_jobs_submitted( slot );
+ KBASE_TRACE_ADD_SLOT_INFO( kbdev, JM_SLOT_SOFT_OR_HARD_STOP, kctx, NULL, 0u, js, jobs_submitted );
+
+ if (jobs_submitted > JM_SLOT_MAX_JOB_SUBMIT_REGS)
+ {
+ i = jobs_submitted - JM_SLOT_MAX_JOB_SUBMIT_REGS;
+ }
+ else
+ {
+ i = 0;
+ }
+
+ /* Loop through all jobs that have been submitted to the slot and haven't completed */
+ for (; i < jobs_submitted; i++)
+ {
+ katom = kbasep_jm_peek_idx_submit_slot( slot, i );
+
+ if (kctx && katom->kctx != kctx)
+ {
+ continue;
+ }
+ if (target_katom && katom != target_katom)
+ {
+ continue;
+ }
+ if ( kbasep_jm_is_dummy_workaround_job( kbdev, katom ) )
+ {
+ continue;
+ }
+
+ core_reqs = katom->core_req;
+
+ if (JM_JOB_IS_CURRENT_JOB_INDEX(jobs_submitted - i))
+ {
+ /* The last job in the slot, check if there is a job in the next register */
+ if (kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_COMMAND_NEXT), NULL) == 0)
+ {
+ kbasep_job_slot_soft_or_hard_stop_do_action(kbdev, js, action, core_reqs, katom->kctx);
+ }
+ else
+ {
+ /* The job is in the next registers */
+ beenthere("clearing job from next registers on slot %d", js);
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_COMMAND_NEXT), JSn_COMMAND_NOP, NULL);
+
+ /* Check to see if we did remove a job from the next registers */
+ if (kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_LO), NULL) != 0 ||
+ kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_HI), NULL) != 0)
+ {
+ /* The job was successfully cleared from the next registers, requeue it */
+ kbase_jd_atom *dequeued_katom = kbasep_jm_dequeue_tail_submit_slot( slot );
+ OSK_ASSERT(dequeued_katom == katom);
+ jobs_submitted --;
+
+ /* Set the next registers to NULL */
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_LO), 0, NULL);
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_HI), 0, NULL);
+
+ KBASE_TRACE_ADD_SLOT( kbdev, JM_SLOT_EVICT, dequeued_katom->kctx, dequeued_katom->user_atom, dequeued_katom->jc, js );
+
+ dequeued_katom->event.event_code = BASE_JD_EVENT_REMOVED_FROM_NEXT;
+ /* Complete the job, indicate it took no time, but require start_new_jobs == MALI_FALSE
+ * to prevent this slot being resubmitted to until we've dropped the lock */
+ kbase_jd_done(dequeued_katom, js, NULL, MALI_FALSE);
+ }
+ else
+ {
+ /* The job transitioned into the current registers before we managed to evict it,
+ * in this case we fall back to soft/hard-stopping the job */
+ beenthere("missed job in next register, soft/hard-stopping slot %d", js);
+ kbasep_job_slot_soft_or_hard_stop_do_action(kbdev, js, action, core_reqs, katom->kctx);
+ }
+ }
+ }
+ else if (JM_JOB_IS_NEXT_JOB_INDEX(jobs_submitted-i))
+ {
+ /* There's a job after this one, check to see if that job is in the next registers */
+ if (kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_COMMAND_NEXT), NULL) != 0)
+ {
+ kbase_jd_atom *check_next_atom;
+ /* It is - we should remove that job and soft/hard-stop the slot */
+
+ /* Only proceed when the next job isn't a HW workaround 'dummy' job
+ *
+ * This can't be an ASSERT due to MMU fault code:
+ * - This first hard-stops the job that caused the fault
+ * - Under HW Issue 8401, this inserts a dummy workaround job into NEXT
+ * - Under HW Issue 8245, it will then reset the GPU
+ * - This causes a Soft-stop to occur on all slots
+ * - By the time of the soft-stop, we may (depending on timing) still have:
+ * - The original job in HEAD, if it's not finished the hard-stop
+ * - The dummy workaround job in NEXT
+ *
+ * Other cases could be coded in future that cause back-to-back Soft/Hard
+ * stops with dummy workaround jobs in place, e.g. MMU handler code and Job
+ * Scheduler watchdog timer running in parallel.
+ *
+ * Note, the index i+1 is valid to peek from: i == jobs_submitted-2, therefore
+ * i+1 == jobs_submitted-1 */
+ check_next_atom = kbasep_jm_peek_idx_submit_slot( slot, i+1 );
+ if ( kbasep_jm_is_dummy_workaround_job( kbdev, check_next_atom ) != MALI_FALSE )
+ {
+ continue;
+ }
+
+ beenthere("clearing job from next registers on slot %d", js);
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_COMMAND_NEXT), JSn_COMMAND_NOP, NULL);
+
+ /* Check to see if we did remove a job from the next registers */
+ if (kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_LO), NULL) != 0 ||
+ kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_HI), NULL) != 0)
+ {
+ /* We did remove a job from the next registers, requeue it */
+ kbase_jd_atom *dequeued_katom = kbasep_jm_dequeue_tail_submit_slot( slot );
+ OSK_ASSERT(dequeued_katom != NULL);
+ jobs_submitted --;
+
+ /* Set the next registers to NULL */
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_LO), 0, NULL);
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_HI), 0, NULL);
+
+ KBASE_TRACE_ADD_SLOT( kbdev, JM_SLOT_EVICT, dequeued_katom->kctx, dequeued_katom->user_atom, dequeued_katom->jc, js );
+
+ dequeued_katom->event.event_code = BASE_JD_EVENT_REMOVED_FROM_NEXT;
+ /* Complete the job, indicate it took no time, but require start_new_jobs == MALI_FALSE
+ * to prevent this slot being resubmitted to until we've dropped the lock */
+ kbase_jd_done(dequeued_katom, js, NULL, MALI_FALSE);
+ }
+ else
+ {
+ /* We missed the job: it left the hardware before we managed to do
+ * anything, so we can proceed to the next job */
+ continue;
+ }
+
+ /* Next is now free, so we can soft/hard-stop the slot */
+ beenthere("soft/hard-stopped slot %d (there was a job in next which was successfully cleared)\n", js);
+ kbasep_job_slot_soft_or_hard_stop_do_action(kbdev, js, action, core_reqs, katom->kctx);
+ }
+ /* If there was no job in the next registers, then the job we were
+ * interested in has finished, so we need not take any action
+ */
+ }
+ }
+}
+
+void kbase_job_kill_jobs_from_context(kbase_context *kctx)
+{
+ kbase_device *kbdev;
+ int i;
+
+ OSK_ASSERT( kctx != NULL );
+ kbdev = kctx->kbdev;
+ OSK_ASSERT( kbdev != NULL );
+
+ /* Cancel any remaining running jobs for this kctx */
+ for (i = 0; i < kbdev->gpu_props.num_job_slots; i++)
+ {
+ kbase_job_slot_lock(kbdev, i);
+ kbase_job_slot_hardstop(kctx, i, NULL);
+ kbase_job_slot_unlock(kbdev, i);
+ }
+}
+
+void kbase_job_zap_context(kbase_context *kctx)
+{
+ kbase_device *kbdev;
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_kctx_info *js_kctx_info;
+ int i;
+ mali_bool evict_success;
+
+ OSK_ASSERT( kctx != NULL );
+ kbdev = kctx->kbdev;
+ OSK_ASSERT( kbdev != NULL );
+ js_devdata = &kbdev->js_data;
+ js_kctx_info = &kctx->jctx.sched_info;
+
+
+ /*
+ * Critical assumption: No more submission is possible outside of the
+ * workqueue. This is because the OS *must* prevent U/K calls (IOCTLs)
+ * whilst the kbase_context is terminating.
+ */
+
+
+ /* First, atomically do the following:
+ * - mark the context as dying
+ * - try to evict it from the policy queue */
+
+ osk_mutex_lock( &js_kctx_info->ctx.jsctx_mutex );
+ js_kctx_info->ctx.is_dying = MALI_TRUE;
+
+ OSK_PRINT_INFO(OSK_BASE_JM, "Zap: Try Evict Ctx %p", kctx );
+ osk_mutex_lock( &js_devdata->queue_mutex );
+ evict_success = kbasep_js_policy_try_evict_ctx( &js_devdata->policy, kctx );
+ osk_mutex_unlock( &js_devdata->queue_mutex );
+
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+
+ /* locks must be dropped by this point, to prevent deadlock on flush */
+ OSK_PRINT_INFO(OSK_BASE_JM, "Zap: Flush Workqueue Ctx %p", kctx );
+ KBASE_TRACE_ADD( kbdev, JM_FLUSH_WORKQS, kctx, NULL, 0u, 0u );
+ kbase_jd_flush_workqueues( kctx );
+ KBASE_TRACE_ADD( kbdev, JM_FLUSH_WORKQS_DONE, kctx, NULL, 0u, 0u );
+
+ /*
+ * At this point we know that:
+ * - If eviction succeeded, it was in the policy queue, but now no longer is
+ * - If eviction failed, then it wasn't in the policy queue. It is one of the following:
+ * - a. it didn't have any jobs, and so is not in the Policy Queue or the
+ * Run Pool (no work required)
+ * - b. it was in the process of a scheduling transaction - but this can only
+ * happen as a result of the work-queue. Two options:
+ * - i. it is now scheduled by the time of the flush - case d.
+ * - ii. it is evicted from the Run Pool due to having to roll-back a transaction
+ * - c. it is about to be scheduled out.
+ * - In this case, we've marked it as dying, so the schedule-out code
+ * marks all jobs for killing, evicts it from the Run Pool, and does *not*
+ * place it back on the Policy Queue. The workqueue flush ensures this has
+ * completed
+ * - d. it is scheduled, and may or may not be running jobs
+ * - e. it was scheduled, but didn't get scheduled out during flushing of
+ * the workqueues. By the time we obtain the jsctx_mutex again, it may've
+ * been scheduled out
+ *
+ *
+ * Also Note: No-one can now clear the not_scheduled_waitq, because the
+ * context is guaranteed not to be in the policy queue, and can never
+ * return to it either (because is_dying is set). The waitq may already be
+ * clear (due to it being scheduled), but the code below ensures that it
+ * will eventually get set (be descheduled).
+ */
+
+ osk_mutex_lock( &js_kctx_info->ctx.jsctx_mutex );
+ if ( evict_success != MALI_FALSE || js_kctx_info->ctx.is_scheduled == MALI_FALSE )
+ {
+ /* The following events require us to kill off remaining jobs and
+ * update PM book-keeping:
+ * - we evicted it correctly (it must have jobs to be in the Policy Queue)
+ *
+ * These events need no action:
+ * - Case a: it didn't have any jobs, and was never in the Queue
+ * - Case b-ii: scheduling transaction was partially rolled-back (this
+ * already cancels the jobs and pm-idles the ctx)
+ * - Case c: scheduled out and killing of all jobs completed on the work-queue (it's not in the Run Pool)
+ * - Case e: it was scheduled out after the workqueue was flushed, but
+ * before we re-obtained the jsctx_mutex. The jobs have already been
+ * cancelled (but the cancel may not have completed yet) and the PM has
+ * already been idled
+ */
+
+ KBASE_TRACE_ADD( kbdev, JM_ZAP_NON_SCHEDULED, kctx, NULL, 0u, js_kctx_info->ctx.is_scheduled );
+
+ OSK_PRINT_INFO(OSK_BASE_JM, "Zap: Ctx %p evict_success=%d, scheduled=%d", kctx, evict_success, js_kctx_info->ctx.is_scheduled );
+
+ if ( evict_success != MALI_FALSE )
+ {
+ /* Only cancel jobs and pm-idle when we evicted from the policy queue.
+ *
+ * Having is_dying set ensures that this kills, and doesn't requeue
+ *
+ * In addition, is_dying set ensure that this calls kbase_pm_context_idle().
+ * This is safe because the context is guaranteed to not be in the
+ * runpool, by virtue of it being evicted from the policy queue */
+ kbasep_js_runpool_requeue_or_kill_ctx( kbdev, kctx );
+ }
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+ }
+ else
+ {
+ mali_bool was_retained;
+ /* Didn't evict, but it is scheduled - it's in the Run Pool:
+ * Cases d and b(i) */
+ KBASE_TRACE_ADD( kbdev, JM_ZAP_SCHEDULED, kctx, NULL, 0u, js_kctx_info->ctx.is_scheduled );
+ OSK_PRINT_INFO(OSK_BASE_JM, "Zap: Ctx %p is in RunPool", kctx );
+
+ /* Disable the ctx from submitting any more jobs */
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+ kbasep_js_clear_submit_allowed( js_devdata, kctx );
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+
+ /* Retain and release the context whilst it is now disallowed from submitting
+ * jobs - ensures that someone somewhere will be removing the context later on */
+ was_retained = kbasep_js_runpool_retain_ctx( kbdev, kctx );
+
+ /* Since it's scheduled and we have the jsctx_mutex, it must be retained successfully */
+ OSK_ASSERT( was_retained != MALI_FALSE );
+
+ OSK_PRINT_INFO(OSK_BASE_JM, "Zap: Ctx %p Kill Any Running jobs", kctx );
+ /* Cancel any remaining running jobs for this kctx - if any. Submit is disallowed
+ * which takes effect from the dropping of the runpool_irq lock above, so no more new
+ * jobs will appear after we do this. */
+ for (i = 0; i < kbdev->gpu_props.num_job_slots; i++)
+ {
+ kbase_job_slot_lock(kbdev, i);
+ kbase_job_slot_hardstop(kctx, i, NULL);
+ kbase_job_slot_unlock(kbdev, i);
+ }
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+
+ OSK_PRINT_INFO(OSK_BASE_JM, "Zap: Ctx %p Release (may or may not schedule out immediately)", kctx );
+ kbasep_js_runpool_release_ctx( kbdev, kctx );
+ }
+ KBASE_TRACE_ADD( kbdev, JM_ZAP_DONE, kctx, NULL, 0u, 0u );
+
+ /* After this, you must wait on both the kbase_jd_context::zero_jobs_waitq
+ * and the kbasep_js_kctx_info::ctx::is_scheduled_waitq - to wait for the
+ * jobs to be destroyed, and the context to be de-scheduled (if it was on
+ * the runpool).
+ *
+ * kbase_jd_zap_context() will do this. */
+}
+KBASE_EXPORT_TEST_API(kbase_job_zap_context)
+
+mali_error kbase_job_slot_init(kbase_device *kbdev)
+{
+ int i;
+ osk_error osk_err;
+
+ OSK_ASSERT(kbdev);
+
+ for (i = 0; i < kbdev->gpu_props.num_job_slots; i++)
+ {
+ osk_err = osk_spinlock_irq_init(&kbdev->jm_slots[i].lock, OSK_LOCK_ORDER_JSLOT);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ int j;
+ for (j = 0; j < i; j++)
+ {
+ osk_spinlock_irq_term(&kbdev->jm_slots[j].lock);
+ }
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+ kbasep_jm_init_submit_slot( &kbdev->jm_slots[i] );
+ }
+ return MALI_ERROR_NONE;
+}
+KBASE_EXPORT_TEST_API(kbase_job_slot_init)
+
+void kbase_job_slot_halt(kbase_device *kbdev)
+{
+ CSTD_UNUSED(kbdev);
+}
+
+void kbase_job_slot_term(kbase_device *kbdev)
+{
+ int i;
+
+ OSK_ASSERT(kbdev);
+
+ for (i = 0; i < kbdev->gpu_props.num_job_slots; i++)
+ {
+ osk_spinlock_irq_term(&kbdev->jm_slots[i].lock);
+ }
+}
+KBASE_EXPORT_TEST_API(kbase_job_slot_term)
+
+
+/**
+ * Soft-stop the specified job slot
+ *
+ * The job slot lock must be held when calling this function.
+ * The job slot must not already be in the process of being soft-stopped.
+ *
+ * Where possible any job in the next register is evicted before the soft-stop.
+ *
+ * @param kbdev The kbase device
+ * @param js The job slot to soft-stop
+ * @param target_katom The job that should be soft-stopped (or NULL for any job)
+ */
+void kbase_job_slot_softstop(kbase_device *kbdev, int js, kbase_jd_atom *target_katom)
+{
+ kbasep_job_slot_soft_or_hard_stop(kbdev, NULL, js, target_katom, JSn_COMMAND_SOFT_STOP);
+}
+
+/**
+ * Hard-stop the specified job slot
+ *
+ * The job slot lock must be held when calling this function.
+ *
+ * @param kctx The kbase context that contains the job(s) that should be hard-stopped
+ * @param js The job slot to hard-stop
+ * @param target_katom The job that should be hard-stopped (or NULL for all jobs from the context)
+ */
+void kbase_job_slot_hardstop(kbase_context *kctx, int js, kbase_jd_atom *target_katom)
+{
+ kbase_device *kbdev = kctx->kbdev;
+ kbasep_job_slot_soft_or_hard_stop(kbdev, kctx, js, target_katom, JSn_COMMAND_HARD_STOP);
+
+ if (kbase_hw_has_issue(kctx->kbdev, BASE_HW_ISSUE_8401) ||
+ kbase_hw_has_issue(kctx->kbdev, BASE_HW_ISSUE_9510))
+ {
+ /* The workaround for HW issue 8401 is itself problematic, so instead of hard-stopping
+ * just reset the GPU. This will ensure that the jobs leave the GPU.
+ *
+ * All callers of this function immediately drop the slot lock after calling this function.
+ * So this is safe because the parent functions don't require atomicity regarding the job slot.
+ */
+ kbase_job_slot_unlock(kbdev, js);
+ if (kbase_prepare_to_reset_gpu(kbdev))
+ {
+ kbase_reset_gpu(kbdev);
+ }
+ kbase_job_slot_lock(kbdev, js);
+ }
+}
+
+void kbasep_reset_timeout_worker(osk_workq_work *data)
+{
+ kbase_device *kbdev;
+ int i;
+ kbasep_js_tick end_timestamp = kbasep_js_get_js_ticks();
+ kbasep_js_device_data *js_devdata;
+ kbase_uk_hwcnt_setup hwcnt_setup = {{0}};
+ kbase_instr_state bckp_state;
+
+ OSK_ASSERT(data);
+
+ kbdev = CONTAINER_OF(data, kbase_device, reset_work);
+
+ OSK_ASSERT(kbdev);
+
+ kbase_pm_context_active(kbdev);
+
+ js_devdata = &kbdev->js_data;
+
+ /* All slots have been soft-stopped and we've waited SOFT_STOP_RESET_TIMEOUT for the slots to clear; at this point
+ * we assume that anything that is still left on the GPU is stuck there and we'll kill it when we reset the GPU */
+
+ OSK_PRINT_ERROR(OSK_BASE_JD, "Resetting GPU");
+
+ /* Make sure the timer has completed - this cannot be done from interrupt context,
+ * so this cannot be done within kbasep_try_reset_gpu_early. */
+ osk_timer_stop(&kbdev->reset_timer);
+
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+
+ if (kbdev->hwcnt.state == KBASE_INSTR_STATE_RESETTING)
+ {
+ /* The GPU is already being reset (the same handler preempted itself);
+ * wait for the in-progress reset to complete */
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+ osk_waitq_wait(&kbdev->hwcnt.waitqueue);
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+ }
+ /* Save the HW counters setup */
+ if (kbdev->hwcnt.kctx != NULL)
+ {
+ kbase_context *kctx = kbdev->hwcnt.kctx;
+ hwcnt_setup.dump_buffer = kbase_reg_read(kbdev, GPU_CONTROL_REG(PRFCNT_BASE_LO), kctx) & 0xffffffff;
+ hwcnt_setup.dump_buffer |= (mali_addr64)kbase_reg_read(kbdev, GPU_CONTROL_REG(PRFCNT_BASE_HI), kctx) << 32;
+ hwcnt_setup.jm_bm = kbase_reg_read(kbdev, GPU_CONTROL_REG(PRFCNT_JM_EN), kctx);
+ hwcnt_setup.shader_bm = kbase_reg_read(kbdev, GPU_CONTROL_REG(PRFCNT_SHADER_EN), kctx);
+ hwcnt_setup.tiler_bm = kbase_reg_read(kbdev, GPU_CONTROL_REG(PRFCNT_TILER_EN), kctx);
+ hwcnt_setup.l3_cache_bm = kbase_reg_read(kbdev, GPU_CONTROL_REG(PRFCNT_L3_CACHE_EN), kctx);
+ hwcnt_setup.mmu_l2_bm = kbase_reg_read(kbdev, GPU_CONTROL_REG(PRFCNT_MMU_L2_EN), kctx);
+ }
+ bckp_state = kbdev->hwcnt.state;
+ kbdev->hwcnt.state = KBASE_INSTR_STATE_RESETTING;
+ osk_waitq_clear(&kbdev->hwcnt.waitqueue);
+ /* Disable IRQs to prevent IRQ handlers from kicking in after releasing the spinlock;
+ * this also clears any outstanding interrupts */
+ kbase_pm_disable_interrupts(kbdev);
+ /* Ensure that any IRQ handlers have finished */
+ kbase_synchronize_irqs(kbdev);
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+
+ /* Reset the GPU */
+ kbase_pm_power_transitioning(kbdev);
+ kbase_pm_init_hw(kbdev);
+
+ kbase_pm_power_transitioning(kbdev);
+
+ osk_spinlock_irq_lock(&kbdev->hwcnt.lock);
+ /* Restore the HW counters setup */
+ if (kbdev->hwcnt.kctx != NULL)
+ {
+ kbase_context *kctx = kbdev->hwcnt.kctx;
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_BASE_LO), hwcnt_setup.dump_buffer & 0xFFFFFFFF, kctx);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_BASE_HI), hwcnt_setup.dump_buffer >> 32, kctx);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_JM_EN), hwcnt_setup.jm_bm, kctx);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_SHADER_EN), hwcnt_setup.shader_bm, kctx);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_L3_CACHE_EN), hwcnt_setup.l3_cache_bm, kctx);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_MMU_L2_EN), hwcnt_setup.mmu_l2_bm, kctx);
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8186))
+ {
+ /* Issue 8186 requires TILER_EN to be disabled before updating PRFCNT_CONFIG. We then restore the register contents */
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_TILER_EN), 0, kctx);
+ }
+
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_CONFIG), (kctx->as_nr << PRFCNT_CONFIG_AS_SHIFT) | PRFCNT_CONFIG_MODE_MANUAL, kctx);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(PRFCNT_TILER_EN), hwcnt_setup.tiler_bm, kctx);
+ }
+ osk_waitq_set(&kbdev->hwcnt.waitqueue);
+ kbdev->hwcnt.state = bckp_state;
+ osk_spinlock_irq_unlock(&kbdev->hwcnt.lock);
+
+ /* Re-init the power policy */
+ kbase_pm_send_event(kbdev, KBASE_PM_EVENT_POLICY_INIT);
+
+ /* Wait for the policy to power up the GPU */
+ kbase_pm_wait_for_power_up(kbdev);
+
+ /* Complete any jobs that were still on the GPU */
+ for (i = 0; i < kbdev->gpu_props.num_job_slots; i++)
+ {
+ int nr_done;
+ kbase_jm_slot *slot = kbase_job_slot_lock(kbdev, i);
+
+ nr_done = kbasep_jm_nr_jobs_submitted( slot );
+ while (nr_done) {
+ OSK_PRINT_ERROR(OSK_BASE_JD, "Job stuck in slot %d on the GPU was cancelled", i);
+ kbase_job_done_slot(kbdev, i, BASE_JD_EVENT_JOB_CANCELLED, 0, &end_timestamp);
+ nr_done--;
+ }
+
+ kbase_job_slot_unlock(kbdev, i);
+ }
+
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+
+ /* Reprogram the GPU's MMU */
+ for(i = 0; i < BASE_MAX_NR_AS; i++)
+ {
+ if (js_devdata->runpool_irq.per_as_data[i].kctx) {
+ kbase_as *as = &kbdev->as[i];
+ osk_mutex_lock(&as->transaction_mutex);
+ kbase_mmu_update(js_devdata->runpool_irq.per_as_data[i].kctx);
+ osk_mutex_unlock(&as->transaction_mutex);
+ }
+ }
+
+ osk_atomic_set(&kbdev->reset_gpu, KBASE_RESET_GPU_NOT_PENDING);
+ osk_waitq_set(&kbdev->reset_waitq);
+ OSK_PRINT_ERROR(OSK_BASE_JD, "Reset complete");
+
+ /* Try submitting some jobs to restart processing */
+ KBASE_TRACE_ADD( kbdev, JM_SUBMIT_AFTER_RESET, NULL, NULL, 0u, 0 );
+ kbasep_js_try_run_next_job(kbdev);
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+
+ kbase_pm_context_idle(kbdev);
+}
+
+void kbasep_reset_timer_callback(void *data)
+{
+ kbase_device *kbdev = (kbase_device*)data;
+
+ OSK_ASSERT(kbdev);
+
+ if (osk_atomic_compare_and_swap(&kbdev->reset_gpu, KBASE_RESET_GPU_COMMITTED, KBASE_RESET_GPU_HAPPENING) !=
+ KBASE_RESET_GPU_COMMITTED)
+ {
+ /* Reset has been cancelled or has already occurred */
+ return;
+ }
+ osk_workq_submit(&kbdev->reset_workq, &kbdev->reset_work);
+}
+
+/*
+ * If all jobs are evicted from the GPU then we can reset the GPU
+ * immediately instead of waiting for the timeout to elapse
+ */
+static void kbasep_try_reset_gpu_early(kbase_device *kbdev)
+{
+ int i;
+ int pending_jobs = 0;
+
+ OSK_ASSERT(kbdev);
+
+ /* Count the number of jobs */
+ for (i = 0; i < kbdev->gpu_props.num_job_slots; i++)
+ {
+ kbase_jm_slot *slot = kbase_job_slot_lock(kbdev, i);
+ pending_jobs += kbasep_jm_nr_jobs_submitted(slot);
+ kbase_job_slot_unlock(kbdev, i);
+ }
+
+ if (pending_jobs > 0)
+ {
+ /* There are still jobs on the GPU - wait */
+ return;
+ }
+
+ /* Check that the reset has been committed to (i.e. kbase_reset_gpu has been called), and that no other
+ * thread beat this thread to starting the reset */
+ if (osk_atomic_compare_and_swap(&kbdev->reset_gpu, KBASE_RESET_GPU_COMMITTED, KBASE_RESET_GPU_HAPPENING) !=
+ KBASE_RESET_GPU_COMMITTED)
+ {
+ /* Reset has already occurred */
+ return;
+ }
+ osk_workq_submit(&kbdev->reset_workq, &kbdev->reset_work);
+}
+
+/*
+ * Prepare for resetting the GPU.
+ * This function just soft-stops all the slots to ensure that as many jobs as possible are saved.
+ *
+ * The function returns a boolean which should be interpreted as follows:
+ * - MALI_TRUE - Prepared for reset, kbase_reset_gpu should be called.
+ * - MALI_FALSE - Another thread is performing a reset, kbase_reset_gpu should not be called.
+ *
+ * @return See description
+ */
+mali_bool kbase_prepare_to_reset_gpu(kbase_device *kbdev)
+{
+ int i;
+
+ OSK_ASSERT(kbdev);
+
+ if (osk_atomic_compare_and_swap(&kbdev->reset_gpu, KBASE_RESET_GPU_NOT_PENDING, KBASE_RESET_GPU_PREPARED) !=
+ KBASE_RESET_GPU_NOT_PENDING)
+ {
+ /* Some other thread is already resetting the GPU */
+ return MALI_FALSE;
+ }
+
+ osk_waitq_clear(&kbdev->reset_waitq);
+
+ OSK_PRINT_ERROR(OSK_BASE_JD, "Preparing to soft-reset GPU: Soft-stopping all jobs");
+
+ for (i = 0; i < kbdev->gpu_props.num_job_slots; i++)
+ {
+ kbase_job_slot_lock(kbdev, i);
+ kbase_job_slot_softstop(kbdev, i, NULL);
+ kbase_job_slot_unlock(kbdev, i);
+ }
+
+ return MALI_TRUE;
+}
+
+/*
+ * This function should be called after kbase_prepare_to_reset_gpu iff it returns MALI_TRUE.
+ * It should never be called without a corresponding call to kbase_prepare_to_reset_gpu.
+ *
+ * After this function is called (or not called if kbase_prepare_to_reset_gpu returned MALI_FALSE),
+ * the caller should wait for kbdev->reset_waitq to be signalled to know when the reset has completed.
+ */
+void kbase_reset_gpu(kbase_device *kbdev)
+{
+ osk_error ret;
+ u32 timeout_ms;
+
+ OSK_ASSERT(kbdev);
+
+ /* Note this is an assert/atomic_set because it is a software issue for a race to be occurring here */
+ OSK_ASSERT(osk_atomic_get(&kbdev->reset_gpu) == KBASE_RESET_GPU_PREPARED);
+ osk_atomic_set(&kbdev->reset_gpu, KBASE_RESET_GPU_COMMITTED);
+
+ timeout_ms = kbasep_get_config_value(kbdev, kbdev->config_attributes, KBASE_CONFIG_ATTR_JS_RESET_TIMEOUT_MS);
+ ret = osk_timer_start(&kbdev->reset_timer, timeout_ms);
+ if (ret != OSK_ERR_NONE)
+ {
+ OSK_PRINT_ERROR(OSK_BASE_JD, "Failed to start timer for soft-resetting GPU");
+ /* We can't rescue jobs from the GPU so immediately reset */
+ osk_workq_submit(&kbdev->reset_workq, &kbdev->reset_work);
+ }
+
+ /* Try resetting early */
+ kbasep_try_reset_gpu_early(kbdev);
+}
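+
+/* Illustrative sketch (not part of this patch): the prepare/reset/wait
+ * pattern expected of callers, as already used by zap_timeout_callback()
+ * and kbase_job_slot_hardstop() above.
+ *
+ * if (kbase_prepare_to_reset_gpu(kbdev))
+ * {
+ * kbase_reset_gpu(kbdev);
+ * }
+ * // Whether or not this thread won the race to start the reset, it can
+ * // wait for completion on the same waitq:
+ * osk_waitq_wait(&kbdev->reset_waitq);
+ */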
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_jm.h
+ * Job Manager Low-level APIs.
+ */
+
+#ifndef _KBASE_JM_H_
+#define _KBASE_JM_H_
+
+#include <kbase/src/common/mali_kbase_8401_workaround.h>
+#include <kbase/src/common/mali_kbase_hw.h>
+
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_kbase_api
+ * @{
+ */
+
+
+/**
+ * @addtogroup kbase_jm Job Manager Low-level APIs
+ * @{
+ *
+ */
+
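+/**
+ * Check whether a job slot's NEXT registers are free, i.e. whether a new
+ * job may be written to the JSn_COMMAND_NEXT register of slot \a js.
+ */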
+static INLINE int kbasep_jm_is_js_free(kbase_device *kbdev, int js, kbase_context *kctx)
+{
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( 0 <= js && js < kbdev->gpu_props.num_job_slots );
+
+ return !kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_COMMAND_NEXT), kctx);
+}
+
+/**
+ * This checks that:
+ * - there is enough space in the GPU's buffers (JSn_NEXT and JSn_HEAD registers) to accommodate the job.
+ * - there is enough space to track the job in our Submit Slots. Note that we have to maintain space to
+ * requeue one job in case the next registers on the hardware need to be cleared.
+ */
+static INLINE mali_bool kbasep_jm_is_submit_slots_free(kbase_device *kbdev, int js, kbase_context *kctx)
+{
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( 0 <= js && js < kbdev->gpu_props.num_job_slots );
+
+ if (osk_atomic_get(&kbdev->reset_gpu) != KBASE_RESET_GPU_NOT_PENDING)
+ {
+ /* The GPU is being reset - so prevent submission */
+ return MALI_FALSE;
+ }
+
+ return (mali_bool)( kbasep_jm_is_js_free(kbdev, js, kctx)
+ && kbdev->jm_slots[js].submitted_nr < (BASE_JM_SUBMIT_SLOTS-2) );
+}
+
+/**
+ * Initialize a submit slot
+ */
+static INLINE void kbasep_jm_init_submit_slot( kbase_jm_slot *slot )
+{
+ slot->submitted_nr = 0;
+ slot->submitted_head = 0;
+}
+
+/**
+ * Find the atom at the idx'th element in the queue without removing it, starting at the head with idx==0.
+ */
+static INLINE kbase_jd_atom* kbasep_jm_peek_idx_submit_slot( kbase_jm_slot *slot, u8 idx )
+{
+ u8 pos;
+ kbase_jd_atom *katom;
+
+ OSK_ASSERT( idx < BASE_JM_SUBMIT_SLOTS );
+
+ pos = (slot->submitted_head + idx) & BASE_JM_SUBMIT_SLOTS_MASK;
+ katom = slot->submitted[pos];
+
+ return katom;
+}
+
+/**
+ * Pop the front of the submitted queue
+ */
+static INLINE kbase_jd_atom* kbasep_jm_dequeue_submit_slot( kbase_jm_slot *slot )
+{
+ u8 pos;
+ kbase_jd_atom *katom;
+
+ pos = slot->submitted_head & BASE_JM_SUBMIT_SLOTS_MASK;
+ katom = slot->submitted[pos];
+ slot->submitted[pos] = NULL; /* Just to catch bugs... */
+ OSK_ASSERT(katom);
+
+ /* rotate the buffers */
+ slot->submitted_head = (slot->submitted_head + 1) & BASE_JM_SUBMIT_SLOTS_MASK;
+ slot->submitted_nr--;
+
+ OSK_PRINT_INFO( OSK_BASE_JM, "katom %p new head %u",
+ (void *)katom, (unsigned int)slot->submitted_head);
+
+ return katom;
+}
+
+/**
+ * Pop the back of the submitted queue (unsubmit a job)
+ */
+static INLINE kbase_jd_atom *kbasep_jm_dequeue_tail_submit_slot( kbase_jm_slot *slot )
+{
+ u8 pos;
+
+ slot->submitted_nr--;
+
+ pos = (slot->submitted_head + slot->submitted_nr) & BASE_JM_SUBMIT_SLOTS_MASK;
+
+ return slot->submitted[pos];
+}
+
+static INLINE u8 kbasep_jm_nr_jobs_submitted( kbase_jm_slot *slot )
+{
+ return slot->submitted_nr;
+}
+
+
+/**
+ * Push to the back of the submitted queue
+ */
+static INLINE void kbasep_jm_enqueue_submit_slot( kbase_jm_slot *slot, kbase_jd_atom *katom )
+{
+ u8 nr;
+ u8 pos;
+ nr = slot->submitted_nr++;
+ OSK_ASSERT(nr < BASE_JM_SUBMIT_SLOTS);
+
+ pos = (slot->submitted_head + nr) & BASE_JM_SUBMIT_SLOTS_MASK;
+ slot->submitted[pos] = katom;
+}
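+
+/*
+ * Illustrative sketch of the FIFO behaviour of the submit-slot ring buffer
+ * built from the helpers above (the caller is assumed to hold the slot lock,
+ * as required for all kbase_jm_slot accesses):
+ *
+ *   kbasep_jm_enqueue_submit_slot( slot, katom_a );
+ *   kbasep_jm_enqueue_submit_slot( slot, katom_b );
+ *   OSK_ASSERT( kbasep_jm_nr_jobs_submitted( slot ) == 2 );
+ *   OSK_ASSERT( kbasep_jm_peek_idx_submit_slot( slot, 0 ) == katom_a );
+ *   OSK_ASSERT( kbasep_jm_dequeue_submit_slot( slot ) == katom_a );
+ *   OSK_ASSERT( kbasep_jm_dequeue_tail_submit_slot( slot ) == katom_b );
+ */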
+
+/**
+ * @brief Query whether a job peeked/dequeued from the submit slots is a
+ * 'dummy' job that is used for hardware workaround purposes.
+ *
+ * Any time a job is peeked/dequeued from the submit slots, this should be
+ * queried on that job.
+ *
+ * If \a atom is indicated as being a dummy job, then you <b>must not attempt
+ * to use \a atom</b>. This is because its members will not necessarily be
+ * initialized, and so could lead to a fault if they were used.
+ *
+ * @param[in] kbdev kbase device pointer
+ * @param[in] atom The atom to query
+ *
+ * @return MALI_TRUE if \a atom is for a dummy job, in which case you must not
+ * attempt to use it.
+ * @return MALI_FALSE otherwise, and \a atom is safe to use.
+ */
+static INLINE mali_bool kbasep_jm_is_dummy_workaround_job( kbase_device *kbdev, kbase_jd_atom *atom )
+{
+ /* Query the set of workaround jobs here */
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8401))
+ {
+ if ( kbasep_8401_is_workaround_job( atom ) != MALI_FALSE )
+ {
+ return MALI_TRUE;
+ }
+ }
+
+ /* This job is not a workaround job, so it will be processed as normal */
+ return MALI_FALSE;
+}
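+
+/*
+ * Illustrative sketch of the contract above - every peek/dequeue is followed
+ * by the dummy-job query before the atom's members are touched:
+ *
+ *   kbase_jd_atom *katom = kbasep_jm_dequeue_submit_slot( slot );
+ *   if ( kbasep_jm_is_dummy_workaround_job( kbdev, katom ) == MALI_FALSE )
+ *   {
+ *       ...only now is it safe to read katom's members...
+ *   }
+ */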
+
+/**
+ * @brief Submit a job to a certain job-slot
+ *
+ * The caller must check kbasep_jm_is_submit_slots_free() != MALI_FALSE before calling this.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must hold the kbasep_js_device_data::runpool_irq::lock
+ * - This is to access the kbase_context::as_nr
+ * - In any case, the kbase_js code that calls this function will always have
+ * this lock held.
+ * - it must hold kbdev->jm_slots[ \a s ].lock
+ */
+void kbase_job_submit_nolock(kbase_device *kbdev, kbase_jd_atom *katom, int js);
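+
+/*
+ * Illustrative call sequence honouring the precondition above, with the
+ * required locks already held:
+ *
+ *   if ( kbasep_jm_is_submit_slots_free( kbdev, js, kctx ) != MALI_FALSE )
+ *   {
+ *       kbase_job_submit_nolock( kbdev, katom, js );
+ *   }
+ */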
+
+/**
+ * @brief Complete the head job on a particular job-slot
+ */
+void kbase_job_done_slot(kbase_device *kbdev, int s, u32 completion_code, u64 job_tail, kbasep_js_tick *end_timestamp);
+
+/**
+ * @brief Obtain the lock for a job slot.
+ *
+ * This function also returns the structure for the specified job slot to simplify the code
+ *
+ * @param[in] kbdev Kbase device pointer
+ * @param[in] js The job slot number to lock
+ *
+ * @return The job slot structure
+ */
+static INLINE kbase_jm_slot *kbase_job_slot_lock(kbase_device *kbdev, int js)
+{
+ osk_spinlock_irq_lock(&kbdev->jm_slots[js].lock);
+ return &kbdev->jm_slots[js];
+}
+
+/**
+ * @brief Release the lock for a job slot
+ *
+ * @param[in] kbdev Kbase device pointer
+ * @param[in] js The job slot number to unlock
+ */
+static INLINE void kbase_job_slot_unlock(kbase_device *kbdev, int js)
+{
+ osk_spinlock_irq_unlock(&kbdev->jm_slots[js].lock);
+}
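+
+/*
+ * Illustrative lock/unlock pattern using the slot pointer returned by
+ * kbase_job_slot_lock(); reads of slot state are only valid whilst the lock
+ * is held:
+ *
+ *   kbase_jm_slot *slot = kbase_job_slot_lock( kbdev, js );
+ *   u8 nr = kbasep_jm_nr_jobs_submitted( slot );
+ *   kbase_job_slot_unlock( kbdev, js );
+ */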
+
+/** @} */ /* end group kbase_jm */
+/** @} */ /* end group base_kbase_api */
+/** @} */ /* end group base_api */
+
+#endif /* _KBASE_JM_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/*
+ * Job Scheduler Implementation
+ */
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_kbase_js.h>
+#include <kbase/src/common/mali_kbase_js_affinity.h>
+#include <kbase/src/common/mali_kbase_gator.h>
+#include <kbase/src/common/mali_kbase_hw.h>
+
+#include "mali_kbase_jm.h"
+#include <kbase/src/common/mali_kbase_defs.h>
+
+/*
+ * Private types
+ */
+
+/** Bitpattern indicating the result of releasing a context */
+enum
+{
+ /** The context was descheduled - caller should try scheduling in a new one
+ * to keep the runpool full */
+ KBASEP_JS_RELEASE_RESULT_WAS_DESCHEDULED = (1u << 0),
+
+ /** The Runpool's context attributes changed. The scheduler might be able to
+ * submit more jobs than previously, and so the caller should call
+ * kbasep_js_try_run_next_job(). */
+ KBASEP_JS_RELEASE_RESULT_CTX_ATTR_CHANGE = (1u << 1)
+
+};
+
+typedef u32 kbasep_js_release_result;
+
+/*
+ * Private function prototypes
+ */
+STATIC INLINE void kbasep_js_deref_permon_check_and_disable_cycle_counter( kbase_device *kbdev,
+ kbase_jd_atom * katom );
+
+STATIC INLINE void kbasep_js_ref_permon_check_and_enable_cycle_counter( kbase_device *kbdev,
+ kbase_jd_atom * katom );
+
+STATIC kbasep_js_release_result kbasep_js_runpool_release_ctx_internal( kbase_device *kbdev, kbase_context *kctx );
+
+/** Helper for trace subcodes */
+#if KBASE_TRACE_ENABLE != 0
+STATIC int kbasep_js_trace_get_refcnt( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ int as_nr;
+ int refcnt = 0;
+
+ js_devdata = &kbdev->js_data;
+
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+ as_nr = kctx->as_nr;
+ if ( as_nr != KBASEP_AS_NR_INVALID )
+ {
+ kbasep_js_per_as_data *js_per_as_data;
+ js_per_as_data = &js_devdata->runpool_irq.per_as_data[as_nr];
+
+ refcnt = js_per_as_data->as_busy_refcount;
+ }
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+
+ return refcnt;
+}
+#else /* KBASE_TRACE_ENABLE != 0 */
+STATIC int kbasep_js_trace_get_refcnt( kbase_device *kbdev, kbase_context *kctx )
+{
+ CSTD_UNUSED( kbdev );
+ CSTD_UNUSED( kctx );
+ return 0;
+}
+#endif /* KBASE_TRACE_ENABLE != 0 */
+
+
+
+/*
+ * Private types
+ */
+enum
+{
+ JS_DEVDATA_INIT_NONE =0,
+ JS_DEVDATA_INIT_CONSTANTS =(1 << 0),
+ JS_DEVDATA_INIT_RUNPOOL_MUTEX =(1 << 1),
+ JS_DEVDATA_INIT_QUEUE_MUTEX =(1 << 2),
+ JS_DEVDATA_INIT_RUNPOOL_IRQ_LOCK=(1 << 3),
+ JS_DEVDATA_INIT_POLICY =(1 << 4),
+ JS_DEVDATA_INIT_ALL =((1 << 5)-1)
+};
+
+enum
+{
+ JS_KCTX_INIT_NONE =0,
+ JS_KCTX_INIT_CONSTANTS =(1 << 0),
+ JS_KCTX_INIT_JSCTX_MUTEX =(1 << 1),
+ JS_KCTX_INIT_POLICY =(1 << 2),
+ JS_KCTX_INIT_JSCTX_WAITQ_SCHED =(1 << 3),
+ JS_KCTX_INIT_JSCTX_WAITQ_NSCHED =(1 << 4),
+ JS_KCTX_INIT_ALL =((1 << 5)-1)
+};
+
+/*
+ * Private functions
+ */
+
+/**
+ * Check if the job had performance monitoring enabled and decrement the count. If no jobs require
+ * performance monitoring, then the cycle counters will be disabled in the GPU.
+ *
+ * No locks need to be held - locking is handled further down
+ *
+ * This function does not sleep.
+ */
+
+STATIC INLINE void kbasep_js_deref_permon_check_and_disable_cycle_counter( kbase_device *kbdev, kbase_jd_atom * katom )
+{
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( katom != NULL );
+
+ if ( katom->core_req & BASE_JD_REQ_PERMON )
+ {
+ kbase_pm_release_gpu_cycle_counter(kbdev);
+ }
+}
+
+/**
+ * Check if the job has performance monitoring enabled and keep a count of it. If at least one
+ * job requires performance monitoring, then the cycle counters will be enabled in the GPU.
+ *
+ * No locks need to be held - locking is handled further down
+ *
+ * This function does not sleep.
+ */
+
+STATIC INLINE void kbasep_js_ref_permon_check_and_enable_cycle_counter( kbase_device *kbdev, kbase_jd_atom * katom )
+{
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( katom != NULL );
+
+ if ( katom->core_req & BASE_JD_REQ_PERMON )
+ {
+ kbase_pm_request_gpu_cycle_counter(kbdev);
+ }
+}
+
+/*
+ * The following locking conditions are made on the caller:
+ * - The caller must hold the kbasep_js_kctx_info::ctx::jsctx_mutex.
+ * - The caller must hold the kbasep_js_device_data::runpool_mutex
+ */
+STATIC INLINE void runpool_inc_context_count( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_kctx_info *js_kctx_info;
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ js_devdata = &kbdev->js_data;
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ /* Track total contexts */
+ ++(js_devdata->nr_all_contexts_running);
+
+ if ( (js_kctx_info->ctx.flags & KBASE_CTX_FLAG_SUBMIT_DISABLED) == 0 )
+ {
+ /* Track contexts that can submit jobs */
+ ++(js_devdata->nr_user_contexts_running);
+ }
+}
+
+/*
+ * The following locking conditions are made on the caller:
+ * - The caller must hold the kbasep_js_kctx_info::ctx::jsctx_mutex.
+ * - The caller must hold the kbasep_js_device_data::runpool_mutex
+ */
+STATIC INLINE void runpool_dec_context_count( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_kctx_info *js_kctx_info;
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ js_devdata = &kbdev->js_data;
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ /* Track total contexts */
+ --(js_devdata->nr_all_contexts_running);
+
+ if ( (js_kctx_info->ctx.flags & KBASE_CTX_FLAG_SUBMIT_DISABLED) == 0 )
+ {
+ /* Track contexts that can submit jobs */
+ --(js_devdata->nr_user_contexts_running);
+ }
+}
+
+/**
+ * @brief check whether the runpool is full for a specified context
+ *
+ * If kctx == NULL, then this makes the least restrictive check on the
+ * runpool. A specific context that is supplied immediately after could fail
+ * the check, even under the same conditions.
+ *
+ * Therefore, once a context is obtained you \b must re-check it with this
+ * function, since the return value could change to MALI_FALSE.
+ *
+ * The following locking conditions are made on the caller:
+ * - In all cases, the caller must hold kbasep_js_device_data::runpool_mutex
+ * - When kctx != NULL the caller must hold the kbasep_js_kctx_info::ctx::jsctx_mutex.
+ * - When kctx == NULL, then the caller need not hold any jsctx_mutex locks (but it doesn't do any harm to do so).
+ */
+STATIC mali_bool check_is_runpool_full( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ mali_bool is_runpool_full;
+ OSK_ASSERT( kbdev != NULL );
+
+ js_devdata = &kbdev->js_data;
+
+ is_runpool_full = (mali_bool)(js_devdata->nr_all_contexts_running >= kbdev->nr_hw_address_spaces);
+
+ if ( kctx != NULL && (kctx->jctx.sched_info.ctx.flags & KBASE_CTX_FLAG_SUBMIT_DISABLED) == 0 )
+ {
+		/* Contexts that can submit jobs may be restricted to fewer address spaces, due to HW workarounds */
+ is_runpool_full = (mali_bool)(js_devdata->nr_user_contexts_running >= kbdev->nr_user_address_spaces);
+ }
+
+ return is_runpool_full;
+}
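+
+/*
+ * Illustrative recheck pattern for the contract above: a least-restrictive
+ * (kctx == NULL) check must be followed by a check with the specific context
+ * once one has been obtained (head_kctx here stands for a context obtained
+ * from the policy queue):
+ *
+ *   if ( check_is_runpool_full( kbdev, NULL ) == MALI_FALSE )
+ *   {
+ *       kbase_context *head_kctx = ...obtained from the policy queue...;
+ *       if ( check_is_runpool_full( kbdev, head_kctx ) == MALI_FALSE )
+ *       {
+ *           ...safe to schedule head_kctx into the runpool...
+ *       }
+ *   }
+ */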
+
+
+STATIC base_jd_core_req core_reqs_from_jsn_features( u16 features /* JS<n>_FEATURE register value */ )
+{
+ base_jd_core_req core_req = 0u;
+
+ if ( (features & JSn_FEATURE_SET_VALUE_JOB) != 0 )
+ {
+ core_req |= BASE_JD_REQ_V;
+ }
+ if ( (features & JSn_FEATURE_CACHE_FLUSH_JOB) != 0 )
+ {
+ core_req |= BASE_JD_REQ_CF;
+ }
+ if ( (features & JSn_FEATURE_COMPUTE_JOB) != 0 )
+ {
+ core_req |= BASE_JD_REQ_CS;
+ }
+ if ( (features & JSn_FEATURE_TILER_JOB) != 0 )
+ {
+ core_req |= BASE_JD_REQ_T;
+ }
+ if ( (features & JSn_FEATURE_FRAGMENT_JOB) != 0 )
+ {
+ core_req |= BASE_JD_REQ_FS;
+ }
+ return core_req;
+}
+
+/**
+ * Picks a free address space and adds the context to the Policy. Then performs
+ * a transaction on this AS and RunPool IRQ to:
+ * - setup the runpool_irq structure and the context on that AS
+ * - Activate the MMU on that AS
+ * - Allow jobs to be submitted on that AS
+ *
+ * Locking conditions:
+ * - Caller must hold the kbasep_js_kctx_info::jsctx_mutex
+ * - Caller must hold the kbase_js_device_data::runpool_mutex
+ * - AS transaction mutex will be obtained
+ * - Runpool IRQ lock will be obtained
+ */
+STATIC void assign_and_activate_kctx_addr_space( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ kbase_as *current_as;
+ kbasep_js_per_as_data *js_per_as_data;
+ long ffs_result;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ js_devdata = &kbdev->js_data;
+
+ /* Find the free address space */
+ ffs_result = osk_find_first_set_bit( js_devdata->as_free );
+ /* ASSERT that we should've found a free one */
+ OSK_ASSERT( 0 <= ffs_result && ffs_result < kbdev->nr_hw_address_spaces );
+ js_devdata->as_free &= ~((u16)(1u << ffs_result));
+
+ /*
+ * Transaction on the AS and runpool_irq
+ */
+ current_as = &kbdev->as[ffs_result];
+ js_per_as_data = &js_devdata->runpool_irq.per_as_data[ffs_result];
+ osk_mutex_lock( ¤t_as->transaction_mutex );
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+
+ /* Attribute handling */
+ kbasep_js_ctx_attr_runpool_retain_ctx( kbdev, kctx );
+
+ /* Assign addr space */
+ kctx->as_nr = (int)ffs_result;
+#if MALI_GATOR_SUPPORT
+ kbase_trace_mali_mmu_as_in_use(kctx->as_nr);
+#endif
+ /* Activate this address space on the MMU */
+ kbase_mmu_update( kctx );
+
+ /* Allow it to run jobs */
+ kbasep_js_set_submit_allowed( js_devdata, kctx );
+
+ /* Book-keeping */
+ js_per_as_data->kctx = kctx;
+ js_per_as_data->as_busy_refcount = 0;
+
+ /* Lastly, add the context to the policy's runpool - this really allows it to run jobs */
+ kbasep_js_policy_runpool_add_ctx( &js_devdata->policy, kctx );
+ /*
+ * Transaction complete
+ */
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+ osk_mutex_unlock( ¤t_as->transaction_mutex );
+
+}
+
+void kbasep_js_try_run_next_job( kbase_device *kbdev )
+{
+ int js;
+
+ OSK_ASSERT( kbdev != NULL );
+
+ for ( js = 0; js < kbdev->gpu_props.num_job_slots ; ++js )
+ {
+ kbasep_js_try_run_next_job_on_slot( kbdev, js );
+ }
+}
+
+/** Hold the kbasep_js_device_data::runpool_irq::lock for this */
+mali_bool kbasep_js_runpool_retain_ctx_nolock( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_per_as_data *js_per_as_data;
+ mali_bool result = MALI_FALSE;
+ int as_nr;
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+ js_devdata = &kbdev->js_data;
+
+ as_nr = kctx->as_nr;
+ if ( as_nr != KBASEP_AS_NR_INVALID )
+ {
+ int new_refcnt;
+
+ OSK_ASSERT( as_nr >= 0 );
+ js_per_as_data = &js_devdata->runpool_irq.per_as_data[as_nr];
+
+ OSK_ASSERT( js_per_as_data->kctx != NULL );
+
+ new_refcnt = ++(js_per_as_data->as_busy_refcount);
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, JS_RETAIN_CTX_NOLOCK, kctx, NULL, 0u,
+ new_refcnt );
+ result = MALI_TRUE;
+ }
+
+ return result;
+}
+
+/*
+ * Functions private to KBase ('Protected' functions)
+ */
+void kbase_js_try_run_jobs( kbase_device *kbdev )
+{
+ kbasep_js_device_data *js_devdata;
+
+ OSK_ASSERT( kbdev != NULL );
+ js_devdata = &kbdev->js_data;
+
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+
+ kbasep_js_try_run_next_job( kbdev );
+
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+}
+
+mali_error kbasep_js_devdata_init( kbase_device *kbdev )
+{
+ kbasep_js_device_data *js_devdata;
+ mali_error err;
+ int i;
+ u16 as_present;
+ osk_error osk_err;
+
+ OSK_ASSERT( kbdev != NULL );
+
+ js_devdata = &kbdev->js_data;
+
+ OSK_ASSERT( js_devdata->init_status == JS_DEVDATA_INIT_NONE );
+
+ /* These two must be recalculated if nr_hw_address_spaces changes (e.g. for HW workarounds) */
+ as_present = (1U << kbdev->nr_hw_address_spaces) - 1;
+ kbdev->nr_user_address_spaces = kbdev->nr_hw_address_spaces;
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8987))
+ {
+ mali_bool use_workaround_for_security;
+ use_workaround_for_security = (mali_bool)kbasep_get_config_value( kbdev, kbdev->config_attributes, KBASE_CONFIG_ATTR_SECURE_BUT_LOSS_OF_PERFORMANCE );
+ if ( use_workaround_for_security != MALI_FALSE )
+ {
+ OSK_PRINT(OSK_BASE_JM, "GPU has HW ISSUE 8987, and driver configured for security workaround: 1 address space only");
+ kbdev->nr_user_address_spaces = 1;
+ }
+ }
+#if MALI_DEBUG
+ /* Soft-stop will be disabled on a single context by default unless softstop_always is set */
+ js_devdata->softstop_always = MALI_FALSE;
+#endif /* MALI_DEBUG */
+ js_devdata->nr_all_contexts_running = 0;
+ js_devdata->nr_user_contexts_running = 0;
+ js_devdata->as_free = as_present; /* All ASs initially free */
+ js_devdata->runpool_irq.submit_allowed = 0u; /* No ctx allowed to submit */
+ OSK_MEMSET( js_devdata->runpool_irq.ctx_attr_ref_count, 0, sizeof(js_devdata->runpool_irq.ctx_attr_ref_count) );
+ OSK_MEMSET( js_devdata->runpool_irq.slot_affinities, 0, sizeof( js_devdata->runpool_irq.slot_affinities ) );
+ js_devdata->runpool_irq.slots_blocked_on_affinity = 0u;
+ OSK_MEMSET( js_devdata->runpool_irq.slot_affinity_refcount, 0, sizeof( js_devdata->runpool_irq.slot_affinity_refcount ) );
+
+ /* Config attributes */
+ js_devdata->scheduling_tick_ns = (u32)kbasep_get_config_value( kbdev, kbdev->config_attributes, KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS );
+ js_devdata->soft_stop_ticks = (u32)kbasep_get_config_value( kbdev, kbdev->config_attributes, KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS );
+ js_devdata->hard_stop_ticks_ss = (u32)kbasep_get_config_value( kbdev, kbdev->config_attributes, KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS );
+ js_devdata->hard_stop_ticks_nss = (u32)kbasep_get_config_value( kbdev, kbdev->config_attributes, KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS );
+ js_devdata->gpu_reset_ticks_ss = (u32)kbasep_get_config_value( kbdev, kbdev->config_attributes, KBASE_CONFIG_ATTR_JS_RESET_TICKS_SS );
+ js_devdata->gpu_reset_ticks_nss = (u32)kbasep_get_config_value( kbdev, kbdev->config_attributes, KBASE_CONFIG_ATTR_JS_RESET_TICKS_NSS );
+ js_devdata->ctx_timeslice_ns = (u32)kbasep_get_config_value( kbdev, kbdev->config_attributes, KBASE_CONFIG_ATTR_JS_CTX_TIMESLICE_NS );
+ js_devdata->cfs_ctx_runtime_init_slices = (u32)kbasep_get_config_value( kbdev, kbdev->config_attributes, KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_INIT_SLICES );
+ js_devdata->cfs_ctx_runtime_min_slices = (u32)kbasep_get_config_value( kbdev, kbdev->config_attributes, KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_MIN_SLICES );
+
+ OSK_PRINT_INFO( OSK_BASE_JM, "JS Config Attribs: " );
+ OSK_PRINT_INFO( OSK_BASE_JM, "\tjs_devdata->scheduling_tick_ns:%u", js_devdata->scheduling_tick_ns );
+ OSK_PRINT_INFO( OSK_BASE_JM, "\tjs_devdata->soft_stop_ticks:%u", js_devdata->soft_stop_ticks );
+ OSK_PRINT_INFO( OSK_BASE_JM, "\tjs_devdata->hard_stop_ticks_ss:%u", js_devdata->hard_stop_ticks_ss );
+ OSK_PRINT_INFO( OSK_BASE_JM, "\tjs_devdata->hard_stop_ticks_nss:%u", js_devdata->hard_stop_ticks_nss );
+ OSK_PRINT_INFO( OSK_BASE_JM, "\tjs_devdata->gpu_reset_ticks_ss:%u", js_devdata->gpu_reset_ticks_ss );
+ OSK_PRINT_INFO( OSK_BASE_JM, "\tjs_devdata->gpu_reset_ticks_nss:%u", js_devdata->gpu_reset_ticks_nss );
+ OSK_PRINT_INFO( OSK_BASE_JM, "\tjs_devdata->ctx_timeslice_ns:%u", js_devdata->ctx_timeslice_ns );
+ OSK_PRINT_INFO( OSK_BASE_JM, "\tjs_devdata->cfs_ctx_runtime_init_slices:%u", js_devdata->cfs_ctx_runtime_init_slices );
+ OSK_PRINT_INFO( OSK_BASE_JM, "\tjs_devdata->cfs_ctx_runtime_min_slices:%u", js_devdata->cfs_ctx_runtime_min_slices );
+
+#if MALI_BACKEND_KERNEL /* Only output on real kernel modules, otherwise it fills up multictx testing output */
+#if KBASE_DISABLE_SCHEDULING_SOFT_STOPS != 0
+ OSK_PRINT( OSK_BASE_JM,
+ "Job Scheduling Policy Soft-stops disabled, ignoring value for soft_stop_ticks==%u at %uns per tick. Other soft-stops may still occur.",
+ js_devdata->soft_stop_ticks,
+ js_devdata->scheduling_tick_ns );
+#endif
+#if KBASE_DISABLE_SCHEDULING_HARD_STOPS != 0
+ OSK_PRINT( OSK_BASE_JM,
+ "Job Scheduling Policy Hard-stops disabled, ignoring values for hard_stop_ticks_ss==%d and hard_stop_ticks_nss==%u at %uns per tick. Other hard-stops may still occur.",
+ js_devdata->hard_stop_ticks_ss,
+ js_devdata->hard_stop_ticks_nss,
+ js_devdata->scheduling_tick_ns );
+#endif
+#if KBASE_DISABLE_SCHEDULING_SOFT_STOPS != 0 && KBASE_DISABLE_SCHEDULING_HARD_STOPS != 0
+ OSK_PRINT( OSK_BASE_JM, "Note: The JS policy's tick timer (if coded) will still be run, but do nothing." );
+#endif
+#endif /* MALI_BACKEND_KERNEL */
+
+	/* Set up the number of IRQ throttle cycles based on the given time */
+ {
+ u32 irq_throttle_time_us = kbdev->gpu_props.irq_throttle_time_us;
+ u32 irq_throttle_cycles = kbasep_js_convert_us_to_gpu_ticks_max_freq(kbdev, irq_throttle_time_us);
+ osk_atomic_set( &kbdev->irq_throttle_cycles, irq_throttle_cycles);
+ }
+
+ /* Clear the AS data, including setting NULL pointers */
+ OSK_MEMSET( &js_devdata->runpool_irq.per_as_data[0], 0, sizeof(js_devdata->runpool_irq.per_as_data) );
+
+ for ( i = 0; i < kbdev->gpu_props.num_job_slots; ++i )
+ {
+ js_devdata->js_reqs[i] = core_reqs_from_jsn_features( kbdev->gpu_props.props.raw_props.js_features[i] );
+ }
+ js_devdata->init_status |= JS_DEVDATA_INIT_CONSTANTS;
+
+	/* On error, we can continue: provided none of the resources below
+	 * rely on the ones above */
+
+ osk_err = osk_mutex_init( &js_devdata->runpool_mutex, OSK_LOCK_ORDER_JS_RUNPOOL );
+ if ( osk_err == OSK_ERR_NONE )
+ {
+ js_devdata->init_status |= JS_DEVDATA_INIT_RUNPOOL_MUTEX;
+ }
+
+ osk_err = osk_mutex_init( &js_devdata->queue_mutex, OSK_LOCK_ORDER_JS_QUEUE );
+ if ( osk_err == OSK_ERR_NONE )
+ {
+ js_devdata->init_status |= JS_DEVDATA_INIT_QUEUE_MUTEX;
+ }
+
+ osk_err = osk_spinlock_irq_init( &js_devdata->runpool_irq.lock, OSK_LOCK_ORDER_JS_RUNPOOL_IRQ );
+ if ( osk_err == OSK_ERR_NONE )
+ {
+ js_devdata->init_status |= JS_DEVDATA_INIT_RUNPOOL_IRQ_LOCK;
+ }
+
+ err = kbasep_js_policy_init( kbdev );
+ if ( err == MALI_ERROR_NONE)
+ {
+ js_devdata->init_status |= JS_DEVDATA_INIT_POLICY;
+ }
+
+ /* On error, do no cleanup; this will be handled by the caller(s), since
+ * we've designed this resource to be safe to terminate on init-fail */
+ if ( js_devdata->init_status != JS_DEVDATA_INIT_ALL)
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ return MALI_ERROR_NONE;
+}
+
+void kbasep_js_devdata_halt( kbase_device *kbdev )
+{
+ CSTD_UNUSED(kbdev);
+}
+
+void kbasep_js_devdata_term( kbase_device *kbdev )
+{
+ kbasep_js_device_data *js_devdata;
+
+ OSK_ASSERT( kbdev != NULL );
+
+ js_devdata = &kbdev->js_data;
+
+ if ( (js_devdata->init_status & JS_DEVDATA_INIT_CONSTANTS) )
+ {
+ s8 zero_ctx_attr_ref_count[KBASEP_JS_CTX_ATTR_COUNT] = { 0, };
+ /* The caller must de-register all contexts before calling this */
+ OSK_ASSERT( js_devdata->nr_all_contexts_running == 0 );
+ OSK_ASSERT( OSK_MEMCMP( js_devdata->runpool_irq.ctx_attr_ref_count, zero_ctx_attr_ref_count, sizeof(js_devdata->runpool_irq.ctx_attr_ref_count)) == 0 );
+ CSTD_UNUSED( zero_ctx_attr_ref_count );
+ }
+ if ( (js_devdata->init_status & JS_DEVDATA_INIT_POLICY) )
+ {
+ kbasep_js_policy_term( &js_devdata->policy );
+ }
+ if ( (js_devdata->init_status & JS_DEVDATA_INIT_RUNPOOL_IRQ_LOCK) )
+ {
+ osk_spinlock_irq_term( &js_devdata->runpool_irq.lock );
+ }
+ if ( (js_devdata->init_status & JS_DEVDATA_INIT_QUEUE_MUTEX) )
+ {
+ osk_mutex_term( &js_devdata->queue_mutex );
+ }
+ if ( (js_devdata->init_status & JS_DEVDATA_INIT_RUNPOOL_MUTEX) )
+ {
+ osk_mutex_term( &js_devdata->runpool_mutex );
+ }
+
+ js_devdata->init_status = JS_DEVDATA_INIT_NONE;
+}
+
+
+mali_error kbasep_js_kctx_init( kbase_context *kctx )
+{
+ kbase_device *kbdev;
+ kbasep_js_kctx_info *js_kctx_info;
+ mali_error err;
+ osk_error osk_err;
+
+ OSK_ASSERT( kctx != NULL );
+
+ kbdev = kctx->kbdev;
+ OSK_ASSERT( kbdev != NULL );
+
+ js_kctx_info = &kctx->jctx.sched_info;
+ OSK_ASSERT( js_kctx_info->init_status == JS_KCTX_INIT_NONE );
+
+ js_kctx_info->ctx.nr_jobs = 0;
+ js_kctx_info->ctx.is_scheduled = MALI_FALSE;
+ js_kctx_info->ctx.is_dying = MALI_FALSE;
+ OSK_MEMSET( js_kctx_info->ctx.ctx_attr_ref_count, 0, sizeof(js_kctx_info->ctx.ctx_attr_ref_count) );
+
+ /* Initially, the context is disabled from submission until the create flags are set */
+ js_kctx_info->ctx.flags = KBASE_CTX_FLAG_SUBMIT_DISABLED;
+
+ js_kctx_info->init_status |= JS_KCTX_INIT_CONSTANTS;
+
+	/* On error, we can continue: provided none of the resources below
+	 * rely on the ones above */
+ osk_err = osk_mutex_init( &js_kctx_info->ctx.jsctx_mutex, OSK_LOCK_ORDER_JS_CTX );
+ if ( osk_err == OSK_ERR_NONE )
+ {
+ js_kctx_info->init_status |= JS_KCTX_INIT_JSCTX_MUTEX;
+ }
+
+ osk_err = osk_waitq_init( &js_kctx_info->ctx.scheduled_waitq );
+ if ( osk_err == OSK_ERR_NONE )
+ {
+ js_kctx_info->init_status |= JS_KCTX_INIT_JSCTX_WAITQ_SCHED;
+ }
+
+ osk_err = osk_waitq_init( &js_kctx_info->ctx.not_scheduled_waitq );
+ if ( osk_err == OSK_ERR_NONE )
+ {
+ js_kctx_info->init_status |= JS_KCTX_INIT_JSCTX_WAITQ_NSCHED;
+ }
+
+ err = kbasep_js_policy_init_ctx( kbdev, kctx );
+ if ( err == MALI_ERROR_NONE )
+ {
+ js_kctx_info->init_status |= JS_KCTX_INIT_POLICY;
+ }
+
+ /* On error, do no cleanup; this will be handled by the caller(s), since
+ * we've designed this resource to be safe to terminate on init-fail */
+ if ( js_kctx_info->init_status != JS_KCTX_INIT_ALL)
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ /* Initially, the context is not scheduled */
+ osk_waitq_clear( &js_kctx_info->ctx.scheduled_waitq );
+ osk_waitq_set( &js_kctx_info->ctx.not_scheduled_waitq );
+
+ return MALI_ERROR_NONE;
+}
+
+void kbasep_js_kctx_term( kbase_context *kctx )
+{
+ kbase_device *kbdev;
+ kbasep_js_kctx_info *js_kctx_info;
+ kbasep_js_policy *js_policy;
+
+ OSK_ASSERT( kctx != NULL );
+
+ kbdev = kctx->kbdev;
+ OSK_ASSERT( kbdev != NULL );
+
+ js_policy = &kbdev->js_data.policy;
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ if ( (js_kctx_info->init_status & JS_KCTX_INIT_CONSTANTS) )
+ {
+ /* The caller must de-register all jobs before calling this */
+ OSK_ASSERT( js_kctx_info->ctx.is_scheduled == MALI_FALSE );
+ OSK_ASSERT( js_kctx_info->ctx.nr_jobs == 0 );
+ /* Only certain Ctx Attributes will be zero (others can have a non-zero value for the life of the context) */
+ OSK_ASSERT( kbasep_js_ctx_attr_count_on_runpool( kbdev, KBASEP_JS_CTX_ATTR_NSS ) == 0 );
+ }
+
+ if ( (js_kctx_info->init_status & JS_KCTX_INIT_JSCTX_WAITQ_SCHED) )
+ {
+ osk_waitq_term( &js_kctx_info->ctx.scheduled_waitq );
+ }
+
+ if ( (js_kctx_info->init_status & JS_KCTX_INIT_JSCTX_WAITQ_NSCHED) )
+ {
+ osk_waitq_term( &js_kctx_info->ctx.not_scheduled_waitq );
+ }
+
+ if ( (js_kctx_info->init_status & JS_KCTX_INIT_JSCTX_MUTEX) )
+ {
+ osk_mutex_term( &js_kctx_info->ctx.jsctx_mutex );
+ }
+
+ if ( (js_kctx_info->init_status & JS_KCTX_INIT_POLICY) )
+ {
+ kbasep_js_policy_term_ctx( js_policy, kctx );
+ }
+
+ js_kctx_info->init_status = JS_KCTX_INIT_NONE;
+}
+
+/* Evict jobs from the NEXT registers
+ *
+ * The caller must hold:
+ * - kbasep_js_kctx_info::ctx::jsctx_mutex
+ * - kbasep_js_device_data::runpool_mutex
+ */
+STATIC void kbasep_js_runpool_evict_next_jobs( kbase_device *kbdev )
+{
+ int js;
+ kbasep_js_device_data *js_devdata;
+ u16 saved_submit_mask;
+
+ js_devdata = &kbdev->js_data;
+
+ /* Prevent contexts in the runpool from submitting jobs */
+ osk_spinlock_irq_lock(&js_devdata->runpool_irq.lock);
+ {
+ saved_submit_mask = js_devdata->runpool_irq.submit_allowed;
+ js_devdata->runpool_irq.submit_allowed = 0;
+ }
+ osk_spinlock_irq_unlock(&js_devdata->runpool_irq.lock);
+
+ /* Evict jobs from the NEXT registers */
+ for (js = 0; js < kbdev->gpu_props.num_job_slots; js++)
+ {
+ kbase_jm_slot *slot;
+ kbase_jd_atom *tail;
+
+ kbase_job_slot_lock(kbdev, js);
+
+ if (!kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_COMMAND_NEXT), NULL))
+ {
+ /* No job in the NEXT register */
+ kbase_job_slot_unlock(kbdev, js);
+ continue;
+ }
+
+ slot = &kbdev->jm_slots[js];
+ tail = kbasep_jm_peek_idx_submit_slot(slot, slot->submitted_nr-1);
+
+		/* Clear the job from the NEXT registers */
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_COMMAND_NEXT), JSn_COMMAND_NOP, NULL);
+
+ /* Check to see if we did remove a job from the next registers */
+ if (kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_LO), NULL) != 0 ||
+ kbase_reg_read(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_HI), NULL) != 0)
+ {
+ /* The job was successfully cleared from the next registers, requeue it */
+ slot->submitted_nr--;
+
+ /* Set the next registers to NULL */
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_LO), 0, NULL);
+ kbase_reg_write(kbdev, JOB_SLOT_REG(js, JSn_HEAD_NEXT_HI), 0, NULL);
+
+ tail->event.event_code = BASE_JD_EVENT_REMOVED_FROM_NEXT;
+
+ /* Complete the job, indicate that it took no time, and start_new_jobs==MALI_FALSE */
+ kbase_jd_done(tail, js, NULL, MALI_FALSE);
+ }
+
+ kbase_job_slot_unlock(kbdev, js);
+ }
+
+ /* Allow contexts in the runpool to submit jobs again */
+ osk_spinlock_irq_lock(&js_devdata->runpool_irq.lock);
+ {
+ js_devdata->runpool_irq.submit_allowed = saved_submit_mask;
+ }
+ osk_spinlock_irq_unlock(&js_devdata->runpool_irq.lock);
+}
+
+/**
+ * Fast-start a higher priority job.
+ * If the runpool is full, a lower priority context with no running jobs
+ * will be evicted from the runpool.
+ *
+ * If \a kctx_new is NULL, the first context with no running jobs will be evicted
+ *
+ * The following locking conditions are made on the caller:
+ * - The caller must \b not hold \a kctx_new's
+ * kbasep_js_kctx_info::ctx::jsctx_mutex, or that mutex of any ctx in the
+ * runpool. This is because \a kctx_new's jsctx_mutex and one of the other
+ * scheduled ctx's jsctx_mutex will be obtained internally.
+ * - it must \em not hold kbasep_js_device_data::runpool_irq::lock (as this will be
+ * obtained internally)
+ * - it must \em not hold kbasep_js_device_data::runpool_mutex (as this will be
+ * obtained internally)
+ * - it must \em not hold kbasep_js_device_data::queue_mutex (again, it's used
+ * internally).
+ */
+STATIC void kbasep_js_runpool_attempt_fast_start_ctx( kbase_device *kbdev, kbase_context *kctx_new )
+{
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_kctx_info *js_kctx_new;
+ kbasep_js_policy *js_policy;
+ kbasep_js_per_as_data *js_per_as_data;
+ int evict_as_nr;
+
+ OSK_ASSERT(kbdev != NULL);
+
+ js_devdata = &kbdev->js_data;
+ js_policy = &kbdev->js_data.policy;
+
+ if (kctx_new != NULL)
+ {
+ js_kctx_new = &kctx_new->jctx.sched_info;
+ osk_mutex_lock( &js_kctx_new->ctx.jsctx_mutex );
+ }
+ else
+ {
+ js_kctx_new = NULL;
+ CSTD_UNUSED(js_kctx_new);
+ }
+
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+
+ /* If the runpool is full, attempt to fast start our context */
+ if (check_is_runpool_full(kbdev, kctx_new) != MALI_FALSE)
+ {
+ /* No free address spaces - attempt to evict non-running lower priority context */
+ osk_spinlock_irq_lock(&js_devdata->runpool_irq.lock);
+ for(evict_as_nr = 0; evict_as_nr < kbdev->nr_hw_address_spaces; evict_as_nr++)
+ {
+ kbase_context *kctx_evict;
+ js_per_as_data = &js_devdata->runpool_irq.per_as_data[evict_as_nr];
+ kctx_evict = js_per_as_data->kctx;
+
+ /* Look for the AS which is not currently running */
+ if(0 == js_per_as_data->as_busy_refcount && kctx_evict != NULL)
+ {
+				/* Compare the priority of the scheduled context we are considering
+				 * evicting with the new ctx's priority, taking into account whether the
+				 * scheduled context is running under a realtime policy.
+				 * Note that the lower the number, the higher the priority
+ */
+ if((kctx_new == NULL) || kbasep_js_policy_ctx_has_priority(js_policy, kctx_evict, kctx_new))
+ {
+ mali_bool retain_result;
+ kbasep_js_release_result release_result;
+ KBASE_TRACE_ADD( kbdev, JS_FAST_START_EVICTS_CTX, kctx_evict, NULL, 0u, (u32)kctx_new );
+
+ /* Retain the ctx to work on it - this shouldn't be able to fail */
+ retain_result = kbasep_js_runpool_retain_ctx_nolock( kbdev, kctx_evict );
+ OSK_ASSERT( retain_result != MALI_FALSE );
+ CSTD_UNUSED( retain_result );
+
+ /* This will cause the context to be scheduled out on the next runpool_release_ctx(),
+ * and also stop its refcount increasing */
+ kbasep_js_clear_submit_allowed(js_devdata, kctx_evict);
+
+ osk_spinlock_irq_unlock(&js_devdata->runpool_irq.lock);
+ osk_mutex_unlock(&js_devdata->runpool_mutex);
+ if (kctx_new != NULL)
+ {
+ osk_mutex_unlock( &js_kctx_new->ctx.jsctx_mutex );
+ }
+
+ /* Stop working on the target context, start working on the kctx_evict context */
+
+ osk_mutex_lock( &kctx_evict->jctx.sched_info.ctx.jsctx_mutex );
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+ release_result = kbasep_js_runpool_release_ctx_internal( kbdev, kctx_evict );
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+ /* Only requeue if actually descheduled, which is more robust in case
+ * something else retains it (e.g. two high priority contexts racing
+ * to evict the same lower priority context) */
+ if ( (release_result & KBASEP_JS_RELEASE_RESULT_WAS_DESCHEDULED) != 0u )
+ {
+ kbasep_js_runpool_requeue_or_kill_ctx( kbdev, kctx_evict );
+ }
+ osk_mutex_unlock( &kctx_evict->jctx.sched_info.ctx.jsctx_mutex );
+
+					/* release_result isn't propagated further:
+ * - the caller will be scheduling in a context anyway
+ * - which will also cause new jobs to run */
+
+ /* ctx fast start has taken place */
+ return;
+ }
+ }
+ }
+ osk_spinlock_irq_unlock(&js_devdata->runpool_irq.lock);
+ }
+
+ /* ctx fast start has not taken place */
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+ if (kctx_new != NULL)
+ {
+ osk_mutex_unlock( &js_kctx_new->ctx.jsctx_mutex );
+ }
+}
+
+mali_bool kbasep_js_add_job( kbase_context *kctx, kbase_jd_atom *atom )
+{
+ kbasep_js_kctx_info *js_kctx_info;
+ kbase_device *kbdev;
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_policy *js_policy;
+
+ mali_bool policy_queue_updated = MALI_FALSE;
+
+ OSK_ASSERT( kctx != NULL );
+ OSK_ASSERT( atom != NULL );
+
+ kbdev = kctx->kbdev;
+ js_devdata = &kbdev->js_data;
+ js_policy = &kbdev->js_data.policy;
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ osk_mutex_lock( &js_kctx_info->ctx.jsctx_mutex );
+ /* Policy-specific initialization of atoms (which cannot fail). Anything that
+ * could've failed must've been done at kbasep_jd_policy_init_job() time. */
+ kbasep_js_policy_register_job( js_policy, kctx, atom );
+
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+ {
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, JS_ADD_JOB, kctx, atom->user_atom, atom->jc,
+ kbasep_js_trace_get_refcnt(kbdev, kctx));
+ }
+
+ /* Refcount ctx.nr_jobs */
+ OSK_ASSERT( js_kctx_info->ctx.nr_jobs < U32_MAX );
+ ++(js_kctx_info->ctx.nr_jobs);
+
+ /* Setup any scheduling information */
+ kbasep_js_clear_job_retry_submit( atom );
+
+ /*
+ * Begin Runpool_irq transaction
+ */
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+ {
+ /* Context Attribute Refcounting */
+ kbasep_js_ctx_attr_ctx_retain_atom( kbdev, kctx, atom );
+
+ /* Enqueue the job in the policy, causing it to be scheduled if the
+ * parent context gets scheduled */
+ kbasep_js_policy_enqueue_job( js_policy, atom );
+ }
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+ /* End runpool_irq transaction */
+
+ if ( js_kctx_info->ctx.is_scheduled != MALI_FALSE )
+ {
+ /* Handle an already running context - try to run the new job, in case it
+ * matches requirements that aren't matched by any other job in the Run
+ * Pool */
+ kbasep_js_try_run_next_job( kbdev );
+ }
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+
+ if ( js_kctx_info->ctx.is_scheduled == MALI_FALSE && js_kctx_info->ctx.nr_jobs == 1 )
+ {
+ /* Handle Refcount going from 0 to 1: schedule the context on the Policy Queue */
+ OSK_ASSERT( js_kctx_info->ctx.is_scheduled == MALI_FALSE );
+
+ OSK_PRINT_INFO(OSK_BASE_JM, "JS: Enqueue Context %p", kctx );
+
+ /* This context is becoming active */
+ kbase_pm_context_active(kctx->kbdev);
+
+ osk_mutex_lock( &js_devdata->queue_mutex );
+ kbasep_js_policy_enqueue_ctx( js_policy, kctx );
+ osk_mutex_unlock( &js_devdata->queue_mutex );
+		/* If the runpool is full and this context has a higher priority than a
+		 * non-running context in the runpool, evict that context so this higher
+		 * priority context starts faster */
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+
+ /* Fast-starting requires the jsctx_mutex to be dropped, because it works on multiple ctxs */
+ kbasep_js_runpool_attempt_fast_start_ctx( kbdev, kctx );
+
+ /* NOTE: Potentially, we can make the scheduling of the head context
+ * happen in a work-queue if we need to wait for the PM to power
+ * up. Also need logic to submit nothing until PM really has completed
+ * powering up. */
+
+ /* Policy Queue was updated - caller must try to schedule the head context */
+ policy_queue_updated = MALI_TRUE;
+ }
+ else
+ {
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+ }
+
+ return policy_queue_updated;
+}
+
+void kbasep_js_remove_job( kbase_context *kctx, kbase_jd_atom *atom )
+{
+ kbasep_js_kctx_info *js_kctx_info;
+ kbase_device *kbdev;
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_policy *js_policy;
+ mali_bool attr_state_changed;
+
+ OSK_ASSERT( kctx != NULL );
+ OSK_ASSERT( atom != NULL );
+
+ kbdev = kctx->kbdev;
+ js_devdata = &kbdev->js_data;
+ js_policy = &kbdev->js_data.policy;
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ {
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, JS_REMOVE_JOB, kctx, atom->user_atom, atom->jc,
+ kbasep_js_trace_get_refcnt(kbdev, kctx));
+ }
+
+ /* De-refcount ctx.nr_jobs */
+ OSK_ASSERT( js_kctx_info->ctx.nr_jobs > 0 );
+ --(js_kctx_info->ctx.nr_jobs);
+
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+ attr_state_changed = kbasep_js_ctx_attr_ctx_release_atom( kbdev, kctx, atom );
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+
+ /* De-register the job from the system */
+ kbasep_js_policy_deregister_job( js_policy, kctx, atom );
+
+ if ( attr_state_changed != MALI_FALSE )
+ {
+ /* A change in runpool ctx attributes might mean we can run more jobs
+ * than before. */
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+ kbasep_js_try_run_next_job( kbdev );
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+ }
+
+}
+
+
+mali_bool kbasep_js_runpool_retain_ctx( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ mali_bool result;
+ OSK_ASSERT( kbdev != NULL );
+ js_devdata = &kbdev->js_data;
+
+ /* KBASE_TRACE_ADD_REFCOUNT( kbdev, JS_RETAIN_CTX, kctx, NULL, 0,
+ kbasep_js_trace_get_refcnt(kbdev, kctx)); */
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+ result = kbasep_js_runpool_retain_ctx_nolock( kbdev, kctx );
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+
+ return result;
+}
+
+
+kbase_context* kbasep_js_runpool_lookup_ctx( kbase_device *kbdev, int as_nr )
+{
+ kbasep_js_device_data *js_devdata;
+ kbase_context *found_kctx = NULL;
+ kbasep_js_per_as_data *js_per_as_data;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( 0 <= as_nr && as_nr < BASE_MAX_NR_AS );
+ js_devdata = &kbdev->js_data;
+ js_per_as_data = &js_devdata->runpool_irq.per_as_data[as_nr];
+
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+
+ found_kctx = js_per_as_data->kctx;
+
+ if ( found_kctx != NULL )
+ {
+ ++(js_per_as_data->as_busy_refcount);
+ }
+
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+
+ return found_kctx;
+}
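+
+/*
+ * Illustrative refcounting pattern: a successful lookup takes a reference on
+ * the context behind the address space, which must later be dropped with
+ * kbasep_js_runpool_release_ctx():
+ *
+ *   kbase_context *kctx = kbasep_js_runpool_lookup_ctx( kbdev, as_nr );
+ *   if ( kctx != NULL )
+ *   {
+ *       ...kctx cannot be scheduled out whilst the refcount is held...
+ *       kbasep_js_runpool_release_ctx( kbdev, kctx );
+ *   }
+ */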
+
+/**
+ * Internal function to release the reference on a ctx, only taking the runpool and as transaction mutexes
+ *
+ * This does none of the followup actions for scheduling:
+ * - It does not schedule in a new context
+ * - It does not start more jobs running in the case of an ctx-attribute state change
+ * - It does not requeue or handle dying contexts
+ *
+ * For those tasks, just call kbasep_js_runpool_release_ctx() instead
+ *
+ * Requires:
+ * - Context is scheduled in, and kctx->as_nr matches kctx_as_nr
+ * - Context has a non-zero refcount
+ * - Caller holds js_kctx_info->ctx.jsctx_mutex
+ * - Caller holds js_devdata->runpool_mutex
+ */
+STATIC kbasep_js_release_result kbasep_js_runpool_release_ctx_internal( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_kctx_info *js_kctx_info;
+ kbasep_js_policy *js_policy;
+ kbasep_js_per_as_data *js_per_as_data;
+
+ kbasep_js_release_result release_result = 0u;
+ int kctx_as_nr;
+ kbase_as *current_as;
+ int new_ref_count;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+ js_kctx_info = &kctx->jctx.sched_info;
+ js_devdata = &kbdev->js_data;
+ js_policy = &kbdev->js_data.policy;
+
+ /* Ensure context really is scheduled in */
+ OSK_ASSERT( js_kctx_info->ctx.is_scheduled != MALI_FALSE );
+
+ /* kctx->as_nr and js_per_as_data are only read from here. The caller's
+ * js_ctx_mutex provides a barrier that ensures they are up-to-date.
+ *
+ * They will not change whilst we're reading them, because the refcount
+ * is non-zero (and we ASSERT on that last fact).
+ */
+ kctx_as_nr = kctx->as_nr;
+ OSK_ASSERT( kctx_as_nr != KBASEP_AS_NR_INVALID );
+ js_per_as_data = &js_devdata->runpool_irq.per_as_data[kctx_as_nr];
+ OSK_ASSERT( js_per_as_data->as_busy_refcount > 0 );
+
+ /*
+ * Transaction begins on AS and runpool_irq
+ *
+	 * Assert about our calling contract
+ */
+ current_as = &kbdev->as[kctx_as_nr];
+ osk_mutex_lock( ¤t_as->transaction_mutex );
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+ OSK_ASSERT( kctx_as_nr == kctx->as_nr );
+ OSK_ASSERT( js_per_as_data->as_busy_refcount > 0 );
+
+ /* Update refcount */
+ new_ref_count = --(js_per_as_data->as_busy_refcount);
+
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, JS_RELEASE_CTX, kctx, NULL, 0u,
+ new_ref_count);
+
+ if ( new_ref_count == 1 && kctx->jctx.sched_info.ctx.flags & KBASE_CTX_FLAG_PRIVILEGED )
+ {
+ /* Context is kept scheduled into an address space even when there are no jobs, in this case we have
+ * to handle the situation where all jobs have been evicted from the GPU and submission is disabled.
+ *
+ * At this point we re-enable submission to allow further jobs to be executed
+ */
+ kbasep_js_set_submit_allowed( js_devdata, kctx );
+ }
+
+ /* Make a set of checks to see if the context should be scheduled out */
+ if ( new_ref_count == 0
+ && ( kctx->jctx.sched_info.ctx.nr_jobs == 0
+ || kbasep_js_is_submit_allowed( js_devdata, kctx ) == MALI_FALSE ) )
+ {
+ /* Last reference, and we've been told to remove this context from the Run Pool */
+ OSK_PRINT_INFO(OSK_BASE_JM, "JS: RunPool Remove Context %p because as_busy_refcount=%d, jobs=%d, allowed=%d",
+ kctx,
+ new_ref_count,
+ js_kctx_info->ctx.nr_jobs,
+ kbasep_js_is_submit_allowed( js_devdata, kctx ) );
+
+ kbasep_js_policy_runpool_remove_ctx( js_policy, kctx );
+
+		/* Stop any more refcounts occurring on the context */
+ js_per_as_data->kctx = NULL;
+
+ /* Ensure we prevent the context from submitting any new jobs
+ * e.g. from kbasep_js_try_run_next_job_on_slot_irq_nolock() */
+ kbasep_js_clear_submit_allowed( js_devdata, kctx );
+
+ /* Disable the MMU on the affected address space, and indicate it's invalid */
+ kbase_mmu_disable( kctx );
+
+#if MALI_GATOR_SUPPORT
+ kbase_trace_mali_mmu_as_released(kctx->as_nr);
+#endif
+
+ kctx->as_nr = KBASEP_AS_NR_INVALID;
+
+ /* Ctx Attribute handling */
+ if ( kbasep_js_ctx_attr_runpool_release_ctx( kbdev, kctx ) != MALI_FALSE )
+ {
+ release_result |= KBASEP_JS_RELEASE_RESULT_CTX_ATTR_CHANGE;
+ }
+
+ /*
+ * Transaction ends on AS and runpool_irq:
+ *
+ * By this point, the AS-related data is now clear and ready for re-use.
+ *
+ * Since releases only occur once for each previous successful retain, and no more
+ * retains are allowed on this context, no other thread will be operating in this
+ * code whilst we are
+ */
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+ osk_mutex_unlock( ¤t_as->transaction_mutex );
+
+ /* Free up the address space */
+ js_devdata->as_free |= ((u16)(1u << kctx_as_nr));
+ /* Note: Don't reuse kctx_as_nr now */
+
+ /* update book-keeping info */
+ runpool_dec_context_count( kbdev, kctx );
+ js_kctx_info->ctx.is_scheduled = MALI_FALSE;
+ /* Signal any waiter that the context is not scheduled, so is safe for
+ * termination - once the jsctx_mutex is also dropped, and jobs have
+ * finished. */
+ osk_waitq_set( &js_kctx_info->ctx.not_scheduled_waitq );
+ osk_waitq_clear( &js_kctx_info->ctx.scheduled_waitq );
+
+ /* Queue an action to occur after we've dropped the lock */
+ release_result |= KBASEP_JS_RELEASE_RESULT_WAS_DESCHEDULED;
+
+ }
+ else
+ {
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+ osk_mutex_unlock( ¤t_as->transaction_mutex );
+ }
+
+ return release_result;
+}
+
+void kbasep_js_runpool_requeue_or_kill_ctx( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_policy *js_policy;
+ kbasep_js_kctx_info *js_kctx_info;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+ js_kctx_info = &kctx->jctx.sched_info;
+ js_policy = &kbdev->js_data.policy;
+ js_devdata = &kbdev->js_data;
+
+	/* This is called if and only if you've detached the context from
+ * the Runpool or the Policy Queue, and not added it back to the Runpool */
+ OSK_ASSERT( js_kctx_info->ctx.is_scheduled == MALI_FALSE );
+
+ if ( js_kctx_info->ctx.is_dying != MALI_FALSE )
+ {
+ /* Dying: kill and idle the context */
+
+ /* Notify PM that a context has gone idle */
+ OSK_PRINT_INFO(OSK_BASE_JM, "JS: Idling Context %p (not requeued)", kctx );
+ kbase_pm_context_idle(kbdev);
+
+ /* The killing happens asynchronously */
+ OSK_PRINT_INFO(OSK_BASE_JM, "JS: ** Killing Context %p on RunPool Remove **", kctx );
+ kbasep_js_policy_kill_all_ctx_jobs( js_policy, kctx );
+ }
+ else if ( js_kctx_info->ctx.nr_jobs > 0 )
+ {
+ /* Not dying, has jobs: add back to the queue */
+ OSK_PRINT_INFO(OSK_BASE_JM, "JS: Requeue Context %p", kctx );
+ osk_mutex_lock( &js_devdata->queue_mutex );
+ kbasep_js_policy_enqueue_ctx( js_policy, kctx );
+ osk_mutex_unlock( &js_devdata->queue_mutex );
+ }
+ else
+ {
+ /* Not dying, no jobs: PM-idle the context, don't add back to the queue */
+ OSK_PRINT_INFO(OSK_BASE_JM, "JS: Idling Context %p (not requeued)", kctx );
+ kbase_pm_context_idle(kbdev);
+ }
+}
+
+void kbasep_js_runpool_release_ctx( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_kctx_info *js_kctx_info;
+
+ kbasep_js_release_result release_result;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+ js_kctx_info = &kctx->jctx.sched_info;
+ js_devdata = &kbdev->js_data;
+
+ osk_mutex_lock( &js_kctx_info->ctx.jsctx_mutex );
+
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+ release_result = kbasep_js_runpool_release_ctx_internal( kbdev, kctx );
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+
+ /* Do we have an action queued whilst the lock was held? */
+ if ( (release_result & KBASEP_JS_RELEASE_RESULT_WAS_DESCHEDULED) != 0u )
+ {
+		kbasep_js_runpool_requeue_or_kill_ctx( kbdev, kctx );
+ }
+ /* We've finished with this context for now, so drop the lock for it. */
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+
+ if ( (release_result & KBASEP_JS_RELEASE_RESULT_WAS_DESCHEDULED) != 0u )
+ {
+ /* We've freed up an address space, so let's try to schedule in another
+ * context
+ *
+ * Note: if there's a context to schedule in, then it also tries to run
+ * another job, in case the new context has jobs satisfying requirements
+ * that no other context/job in the runpool does */
+ kbasep_js_try_schedule_head_ctx( kbdev );
+ }
+
+ if ( (release_result & KBASEP_JS_RELEASE_RESULT_CTX_ATTR_CHANGE) != 0u )
+ {
+ /* A change in runpool ctx attributes might mean we can run more jobs
+ * than before - and this needs to be done when the above
+ * try_schedule_head_ctx() had no contexts to run */
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+ kbasep_js_try_run_next_job( kbdev );
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+ }
+
+}
+
+/**
+ * @brief Handle retaining cores for power management and affinity management,
+ * ensuring that cores are powered up and won't violate affinity restrictions.
+ *
+ * This function enters at the following @ref kbase_atom_coreref_state states:
+ *
+ * - NO_CORES_REQUESTED,
+ * - WAITING_FOR_REQUESTED_CORES,
+ * - RECHECK_AFFINITY,
+ *
+ * The transitions are as follows:
+ * - NO_CORES_REQUESTED -> WAITING_FOR_REQUESTED_CORES
+ * - WAITING_FOR_REQUESTED_CORES -> ( WAITING_FOR_REQUESTED_CORES or RECHECK_AFFINITY )
+ * - RECHECK_AFFINITY -> ( WAITING_FOR_REQUESTED_CORES or CHECK_AFFINITY_VIOLATIONS )
+ * - CHECK_AFFINITY_VIOLATIONS -> ( RECHECK_AFFINITY or READY )
+ *
+ * The caller must hold:
+ * - kbasep_js_device_data::runpool_irq::lock
+ *
+ * @return MALI_FALSE when the function makes a transition to the same or lower state, indicating
+ * that the cores are not ready.
+ * @return MALI_TRUE once READY state is reached, indicating that the cores are 'ready' and won't
+ * violate affinity restrictions.
+ *
+ */
+STATIC mali_bool kbasep_js_job_check_ref_cores(kbase_device *kbdev, int js, kbase_jd_atom *katom)
+{
+ u64 tiler_affinity = 0;
+ /* The most recently checked affinity. Having this at this scope allows us
+ * to guarantee that we've checked the affinity in this function call. */
+ u64 recently_chosen_affinity = 0;
+
+ if (katom->core_req & BASE_JD_REQ_T)
+ {
+ tiler_affinity = kbdev->tiler_present_bitmap;
+ }
+
+ /* NOTE: The following uses a number of FALLTHROUGHs to optimize the
+ * calls to this function. Ending of the function is indicated by BREAK OUT */
+ switch ( katom->coreref_state )
+ {
+ /* State when job is first attempted to be run */
+ case KBASE_ATOM_COREREF_STATE_NO_CORES_REQUESTED:
+ OSK_ASSERT( katom->affinity == 0 );
+ /* Compute affinity */
+ kbase_js_choose_affinity( &recently_chosen_affinity, kbdev, katom, js );
+
+ /* Request the cores */
+ if (MALI_ERROR_NONE != kbase_pm_request_cores( kbdev, recently_chosen_affinity, tiler_affinity ))
+ {
+ /* Failed to request cores, don't set the affinity so we try again and return */
+ KBASE_TRACE_ADD_SLOT_INFO( kbdev, JS_CORE_REF_REQUEST_CORES_FAILED, katom->kctx, katom->user_atom, katom->jc, js, (u32)recently_chosen_affinity );
+ /* *** BREAK OUT: No state transition *** */
+ break;
+ }
+
+ katom->affinity = recently_chosen_affinity;
+ /* Proceed to next state */
+ katom->coreref_state = KBASE_ATOM_COREREF_STATE_WAITING_FOR_REQUESTED_CORES;
+
+ /* ***FALLTHROUGH: TRANSITION TO HIGHER STATE*** */
+
+ case KBASE_ATOM_COREREF_STATE_WAITING_FOR_REQUESTED_CORES:
+ {
+ mali_bool cores_ready;
+ OSK_ASSERT( katom->affinity != 0 );
+
+ cores_ready = kbase_pm_register_inuse_cores( kbdev, katom->affinity, tiler_affinity );
+ if ( !cores_ready )
+ {
+ /* Stay in this state and return, to retry at this state later */
+ KBASE_TRACE_ADD_SLOT_INFO( kbdev, JS_CORE_REF_REGISTER_INUSE_FAILED, katom->kctx, katom->user_atom, katom->jc, js, (u32)katom->affinity );
+ /* *** BREAK OUT: No state transition *** */
+ break;
+ }
+ /* Proceed to next state */
+ katom->coreref_state = KBASE_ATOM_COREREF_STATE_RECHECK_AFFINITY;
+ }
+
+ /* ***FALLTHROUGH: TRANSITION TO HIGHER STATE*** */
+
+ case KBASE_ATOM_COREREF_STATE_RECHECK_AFFINITY:
+ OSK_ASSERT( katom->affinity != 0 );
+
+ /* Optimize out choosing the affinity twice in the same function call */
+ if ( recently_chosen_affinity == 0 )
+ {
+ /* See if the affinity changed since a previous call. */
+ kbase_js_choose_affinity( &recently_chosen_affinity, kbdev, katom, js );
+ }
+
+ /* Now see if this requires a different set of cores */
+ if ( recently_chosen_affinity != katom->affinity )
+ {
+ if (MALI_ERROR_NONE != kbase_pm_request_cores( kbdev, recently_chosen_affinity, tiler_affinity ))
+ {
+				/* Failed to request cores, so roll back the previously gained set.
+				 * That also resets the state to NO_CORES_REQUESTED */
+ kbasep_js_job_check_deref_cores( kbdev, katom );
+ KBASE_TRACE_ADD_SLOT_INFO( kbdev, JS_CORE_REF_REQUEST_ON_RECHECK_FAILED, katom->kctx, katom->user_atom, katom->jc, js, (u32)recently_chosen_affinity );
+ /* *** BREAK OUT: Transition to lower state *** */
+ break;
+ }
+ else
+ {
+ mali_bool cores_ready;
+				/* Register new cores whilst we still hold the old ones, to minimize power transitions */
+ cores_ready = kbase_pm_register_inuse_cores( kbdev, recently_chosen_affinity, tiler_affinity );
+ kbasep_js_job_check_deref_cores( kbdev, katom );
+
+ /* Fixup the state that was reduced by deref_cores: */
+ katom->coreref_state = KBASE_ATOM_COREREF_STATE_RECHECK_AFFINITY;
+ katom->affinity = recently_chosen_affinity;
+ /* Now might be waiting for powerup again, with a new affinity */
+ if ( !cores_ready )
+ {
+ /* Return to previous state */
+ katom->coreref_state = KBASE_ATOM_COREREF_STATE_WAITING_FOR_REQUESTED_CORES;
+ KBASE_TRACE_ADD_SLOT_INFO( kbdev, JS_CORE_REF_REGISTER_ON_RECHECK_FAILED, katom->kctx, katom->user_atom, katom->jc, js, (u32)katom->affinity );
+ /* *** BREAK OUT: Transition to lower state *** */
+ break;
+ }
+ }
+ }
+ /* Proceed to next state */
+ katom->coreref_state = KBASE_ATOM_COREREF_STATE_CHECK_AFFINITY_VIOLATIONS;
+
+ /* ***FALLTHROUGH: TRANSITION TO HIGHER STATE*** */
+ case KBASE_ATOM_COREREF_STATE_CHECK_AFFINITY_VIOLATIONS:
+ OSK_ASSERT( katom->affinity != 0 );
+ OSK_ASSERT( katom->affinity == recently_chosen_affinity );
+
+ /* Note: this is where the caller must've taken the runpool_irq.lock */
+
+ /* Check for affinity violations - if there are any, then we just ask
+ * the caller to requeue and try again later */
+ if ( kbase_js_affinity_would_violate( kbdev, js, katom->affinity ) != MALI_FALSE )
+ {
+ /* Cause a re-attempt to submit from this slot on the next job complete */
+ kbase_js_affinity_slot_blocked_an_atom( kbdev, js );
+ /* Return to previous state */
+ katom->coreref_state = KBASE_ATOM_COREREF_STATE_RECHECK_AFFINITY;
+ /* *** BREAK OUT: Transition to lower state *** */
+ KBASE_TRACE_ADD_SLOT_INFO( kbdev, JS_CORE_REF_AFFINITY_WOULD_VIOLATE, katom->kctx, katom->user_atom, katom->jc, js, (u32)katom->affinity );
+ break;
+ }
+
+ /* No affinity violations would result, so the cores are ready */
+ katom->coreref_state = KBASE_ATOM_COREREF_STATE_READY;
+ /* *** BREAK OUT: Cores Ready *** */
+ break;
+
+ default:
+ OSK_ASSERT_MSG( MALI_FALSE, "Unhandled kbase_atom_coreref_state %d", katom->coreref_state );
+ break;
+ }
+
+ return (katom->coreref_state == KBASE_ATOM_COREREF_STATE_READY);
+}
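+
+/*
+ * Illustrative sketch of how a submission path drives the state machine
+ * above: call it on each attempt until it reports READY, requeueing the atom
+ * in the meantime (mirroring kbasep_js_try_run_next_job_on_slot_irq_nolock()):
+ *
+ *   if ( kbasep_js_job_check_ref_cores( kbdev, js, katom ) == MALI_FALSE )
+ *   {
+ *       ...cores not ready: requeue katom and retry on a later attempt...
+ *       kbasep_js_policy_enqueue_job( &kbdev->js_data.policy, katom );
+ *   }
+ */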
+
+void kbasep_js_job_check_deref_cores(kbase_device *kbdev, struct kbase_jd_atom *katom)
+{
+ u64 tiler_affinity = 0;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( katom != NULL );
+
+ if (katom->core_req & BASE_JD_REQ_T)
+ {
+ tiler_affinity = kbdev->tiler_present_bitmap;
+ }
+
+ switch ( katom->coreref_state )
+ {
+ case KBASE_ATOM_COREREF_STATE_READY:
+ /* State where atom was submitted to the HW - just proceed to power-down */
+ OSK_ASSERT( katom->affinity != 0 );
+
+ /* *** FALLTHROUGH *** */
+
+ case KBASE_ATOM_COREREF_STATE_RECHECK_AFFINITY:
+ /* State where cores were registered */
+ OSK_ASSERT( katom->affinity != 0 );
+ kbase_pm_release_cores(kbdev, katom->affinity, tiler_affinity);
+
+ /* Note: We do not clear the state for kbase_js_affinity_slot_blocked_an_atom().
+ * That is handled after finishing the job. This might be slightly
+ * suboptimal for some corner cases, but is otherwise not a problem
+ * (and resolves itself after the next job completes). */
+
+ break;
+
+ case KBASE_ATOM_COREREF_STATE_WAITING_FOR_REQUESTED_CORES:
+ /* State where cores were requested, but not registered */
+ OSK_ASSERT( katom->affinity != 0 );
+ kbase_pm_unrequest_cores(kbdev, katom->affinity, tiler_affinity);
+ break;
+
+ case KBASE_ATOM_COREREF_STATE_NO_CORES_REQUESTED:
+ /* Initial state - nothing required */
+ OSK_ASSERT( katom->affinity == 0 );
+ break;
+
+ default:
+ OSK_ASSERT_MSG( MALI_FALSE, "Unhandled coreref_state: %d", katom->coreref_state );
+ break;
+ }
+
+ katom->affinity = 0;
+ katom->coreref_state = KBASE_ATOM_COREREF_STATE_NO_CORES_REQUESTED;
+}
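+
+/* Illustrative sketch (not part of the patch): within the scheduler,
+ * kbasep_js_job_check_ref_cores() and kbasep_js_job_check_deref_cores() form
+ * a bracket around an atom's use of the cores. Assuming a dequeued atom
+ * 'katom' destined for slot 'js':
+ *
+ *   if ( kbasep_js_job_check_ref_cores( kbdev, js, katom ) == MALI_FALSE )
+ *   {
+ *       kbasep_js_policy_enqueue_job( &kbdev->js_data.policy, katom );
+ *   }
+ *
+ * (cores not yet powered/registered, so the atom is requeued); otherwise the
+ * atom is submitted, and once it completes or is evicted the references are
+ * dropped with kbasep_js_job_check_deref_cores( kbdev, katom ).
+ */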
+
+
+
+/*
+ * Note: this function is quite similar to kbasep_js_try_run_next_job_on_slot()
+ */
+mali_bool kbasep_js_try_run_next_job_on_slot_irq_nolock( kbase_device *kbdev, int js, s8 *submit_count )
+{
+ kbasep_js_device_data *js_devdata;
+ mali_bool tried_to_dequeue_jobs_but_failed = MALI_FALSE;
+ mali_bool cores_ready;
+
+ OSK_ASSERT( kbdev != NULL );
+
+ js_devdata = &kbdev->js_data;
+
+ /* The caller of this function may not be aware of Ctx Attribute state changes so we
+ * must recheck if the given slot is still valid. Otherwise do not try to run.
+ */
+ if (kbase_js_can_run_job_on_slot_no_lock( kbdev, js))
+ {
+ /* Keep submitting while there's space to run a job on this job-slot,
+ * and there are jobs to get that match its requirements (see 'break'
+ * statement below) */
+ while ( *submit_count < KBASE_JS_MAX_JOB_SUBMIT_PER_SLOT_PER_IRQ
+ && kbasep_jm_is_submit_slots_free( kbdev, js, NULL ) != MALI_FALSE )
+ {
+ kbase_jd_atom *dequeued_atom;
+ mali_bool has_job = MALI_FALSE;
+
+ /* Dequeue a job that matches the requirements */
+ has_job = kbasep_js_policy_dequeue_job_irq( kbdev, js, &dequeued_atom );
+
+ if ( has_job != MALI_FALSE )
+ {
+ /* NOTE: since the runpool_irq lock is currently held and acts across
+ * all address spaces, any context whose busy refcount has reached
+ * zero won't yet be scheduled out whilst we're trying to run jobs
+ * from it */
+ kbase_context *parent_ctx = dequeued_atom->kctx;
+ mali_bool retain_success;
+
+ /* Retain/power up the cores it needs, check if cores are ready */
+ cores_ready = kbasep_js_job_check_ref_cores( kbdev, js, dequeued_atom );
+
+ if ( cores_ready != MALI_TRUE )
+ {
+ /* The job can't be submitted until the cores are ready, requeue the job */
+ kbasep_js_policy_enqueue_job( &kbdev->js_data.policy, dequeued_atom );
+ break;
+ }
+
+ /* ASSERT that the Policy picked a job from an allowed context */
+ OSK_ASSERT( kbasep_js_is_submit_allowed( js_devdata, parent_ctx) );
+
+ /* Retain the context to stop it from being scheduled out
+ * This is released when the job finishes */
+ retain_success = kbasep_js_runpool_retain_ctx_nolock( kbdev, parent_ctx );
+ OSK_ASSERT( retain_success != MALI_FALSE );
+ CSTD_UNUSED( retain_success );
+
+ /* Retain the affinity on the slot */
+ kbase_js_affinity_retain_slot_cores( kbdev, js, dequeued_atom->affinity );
+
+ /* Check if this job needs the cycle counter enabled before submission */
+ kbasep_js_ref_permon_check_and_enable_cycle_counter( kbdev, dequeued_atom );
+
+ /* Submit the job */
+ kbase_job_submit_nolock( kbdev, dequeued_atom, js );
+
+ ++(*submit_count);
+ }
+ else
+ {
+ tried_to_dequeue_jobs_but_failed = MALI_TRUE;
+ /* No more jobs - stop submitting for this slot */
+ break;
+ }
+ }
+ }
+
+ /* Indicate whether submission should be retried using a different
+ * dequeue function. These are the reasons why it *must* happen:
+ *
+ * - kbasep_js_policy_dequeue_job_irq() couldn't get any jobs. In this case,
+ * kbasep_js_policy_dequeue_job() might be able to get jobs (must be done
+ * outside of IRQ)
+ * - kbasep_js_policy_dequeue_job_irq() got some jobs, but failed to get a
+ * job in the last call to it. Again, kbasep_js_policy_dequeue_job()
+ * might be able to get jobs.
+ * - the KBASE_JS_MAX_JOB_SUBMIT_PER_SLOT_PER_IRQ threshold was reached
+ * and new scheduling must be performed outside of IRQ mode.
+ *
+ * Failure to indicate this correctly could stop further jobs being processed.
+ *
+ * However, we do not _need_ to indicate a retry for the following:
+ * - kbasep_jm_is_submit_slots_free() was MALI_FALSE, indicating jobs were
+ * already running. When those jobs complete, that will still cause events
+ * that cause us to resume job submission.
+ * - kbase_js_can_run_job_on_slot_no_lock() was MALI_FALSE - this is for
+ * Ctx Attribute handling. That _can_ change outside of IRQ context, but
+ * is handled explicitly by kbasep_js_remove_job() and
+ * kbasep_js_runpool_release_ctx().
+ */
+ return (mali_bool)(tried_to_dequeue_jobs_but_failed || *submit_count >= KBASE_JS_MAX_JOB_SUBMIT_PER_SLOT_PER_IRQ);
+}
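+
+/* Illustrative sketch (not part of the patch): an IRQ-context caller would
+ * typically act on the return value by deferring to a workqueue:
+ *
+ *   if ( kbasep_js_try_run_next_job_on_slot_irq_nolock( kbdev, js,
+ *            &kbdev->slot_submit_count_irq[js] ) != MALI_FALSE )
+ *   {
+ *       kbasep_js_set_job_retry_submit_slot( katom, js );
+ *   }
+ *
+ * so that the worker later retries with kbasep_js_try_run_next_job_on_slot(),
+ * which can perform Run Pool maintenance outside of IRQ context.
+ */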
+
+void kbasep_js_try_run_next_job_on_slot( kbase_device *kbdev, int js )
+{
+ kbasep_js_device_data *js_devdata;
+ mali_bool has_job;
+ mali_bool cores_ready;
+
+ OSK_ASSERT( kbdev != NULL );
+
+ js_devdata = &kbdev->js_data;
+
+ if (js_devdata->nr_user_contexts_running == 0)
+ {
+ /* There are no contexts with jobs so return early */
+ return;
+ }
+
+ kbase_job_slot_lock(kbdev, js);
+
+ /* Keep submitting while there's space to run a job on this job-slot,
+ * and there are jobs to get that match its requirements (see 'break'
+ * statement below) */
+ if ( kbasep_jm_is_submit_slots_free( kbdev, js, NULL ) != MALI_FALSE )
+ {
+ /* Only lock the Run Pool whilst there's work worth doing */
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+
+ /* The caller of this function may not be aware of Ctx Attribute state changes so we
+ * must recheck if the given slot is still valid. Otherwise do not try to run.
+ */
+ if (kbase_js_can_run_job_on_slot_no_lock( kbdev, js))
+ {
+ do {
+ kbase_jd_atom *dequeued_atom;
+
+ /* Dequeue a job that matches the requirements */
+ has_job = kbasep_js_policy_dequeue_job( kbdev, js, &dequeued_atom );
+
+ if ( has_job != MALI_FALSE )
+ {
+ /* NOTE: since the runpool_irq lock is currently held and acts across
+ * all address spaces, any context whose busy refcount has reached
+ * zero won't yet be scheduled out whilst we're trying to run jobs
+ * from it */
+ kbase_context *parent_ctx = dequeued_atom->kctx;
+ mali_bool retain_success;
+
+ /* Retain/power up the cores it needs, check if cores are ready */
+ cores_ready = kbasep_js_job_check_ref_cores( kbdev, js, dequeued_atom );
+
+ if ( cores_ready != MALI_TRUE )
+ {
+ /* The job can't be submitted until the cores are ready, requeue the job */
+ kbasep_js_policy_enqueue_job( &kbdev->js_data.policy, dequeued_atom );
+ break;
+ }
+ /* ASSERT that the Policy picked a job from an allowed context */
+ OSK_ASSERT( kbasep_js_is_submit_allowed( js_devdata, parent_ctx) );
+
+ /* Retain the context to stop it from being scheduled out
+ * This is released when the job finishes */
+ retain_success = kbasep_js_runpool_retain_ctx_nolock( kbdev, parent_ctx );
+ OSK_ASSERT( retain_success != MALI_FALSE );
+ CSTD_UNUSED( retain_success );
+
+ /* Retain the affinity on the slot */
+ kbase_js_affinity_retain_slot_cores( kbdev, js, dequeued_atom->affinity );
+
+ /* Check if this job needs the cycle counter enabled before submission */
+ kbasep_js_ref_permon_check_and_enable_cycle_counter( kbdev, dequeued_atom );
+
+ /* Submit the job */
+ kbase_job_submit_nolock( kbdev, dequeued_atom, js );
+ }
+
+ } while ( kbasep_jm_is_submit_slots_free( kbdev, js, NULL ) != MALI_FALSE
+ && has_job != MALI_FALSE );
+ }
+
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+ }
+ kbase_job_slot_unlock(kbdev, js);
+}
+
+void kbasep_js_try_schedule_head_ctx( kbase_device *kbdev )
+{
+ kbasep_js_device_data *js_devdata;
+ mali_bool has_kctx;
+ kbase_context *head_kctx;
+ kbasep_js_kctx_info *js_kctx_info;
+ mali_bool is_runpool_full;
+
+ OSK_ASSERT( kbdev != NULL );
+
+ js_devdata = &kbdev->js_data;
+
+ /* Make a speculative check on the Run Pool - this MUST be repeated once
+ * we've obtained a context from the queue and reobtained the Run Pool
+ * lock */
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+ is_runpool_full = check_is_runpool_full(kbdev, NULL);
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+
+ if ( is_runpool_full != MALI_FALSE )
+ {
+ /* No free address spaces - nothing to do */
+ return;
+ }
+
+ /* Grab the context off head of queue - if there is one */
+ osk_mutex_lock( &js_devdata->queue_mutex );
+ has_kctx = kbasep_js_policy_dequeue_head_ctx( &js_devdata->policy, &head_kctx );
+ osk_mutex_unlock( &js_devdata->queue_mutex );
+
+ if ( has_kctx == MALI_FALSE )
+ {
+ /* No ctxs to run - nothing to do */
+ return;
+ }
+ js_kctx_info = &head_kctx->jctx.sched_info;
+
+ OSK_PRINT_INFO(OSK_BASE_JM, "JS: Dequeue Context %p", head_kctx );
+
+ /*
+ * Atomic transaction on the Context and Run Pool begins
+ */
+ osk_mutex_lock( &js_kctx_info->ctx.jsctx_mutex );
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+
+ /* Re-check to see if the Run Pool is full
+ * Not just to preserve atomicity, but to check against a specific context
+ * (some contexts are allowed in whereas others may not, due to HW workarounds) */
+ is_runpool_full = check_is_runpool_full(kbdev, head_kctx);
+ if ( is_runpool_full != MALI_FALSE )
+ {
+ /* No free address spaces - roll back the transaction so far and return */
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+
+ kbasep_js_runpool_requeue_or_kill_ctx( kbdev, head_kctx );
+
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+ return;
+ }
+
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, JS_TRY_SCHEDULE_HEAD_CTX, head_kctx, NULL, 0u,
+ kbasep_js_trace_get_refcnt(kbdev, head_kctx));
+
+
+ /* update book-keeping info */
+ js_kctx_info->ctx.is_scheduled = MALI_TRUE;
+
+#if MALI_CUSTOMER_RELEASE == 0
+ if ( js_devdata->nr_user_contexts_running == 0 )
+ {
+ /* Only when there are no other contexts submitting jobs:
+ * Latch in run-time job scheduler timeouts that were set through js_timeouts sysfs file */
+ if (kbdev->js_soft_stop_ticks != 0)
+ {
+ js_devdata->soft_stop_ticks = kbdev->js_soft_stop_ticks;
+ }
+ if (kbdev->js_hard_stop_ticks_ss != 0)
+ {
+ js_devdata->hard_stop_ticks_ss = kbdev->js_hard_stop_ticks_ss;
+ }
+ if (kbdev->js_hard_stop_ticks_nss != 0)
+ {
+ js_devdata->hard_stop_ticks_nss = kbdev->js_hard_stop_ticks_nss;
+ }
+ if (kbdev->js_reset_ticks_ss != 0)
+ {
+ js_devdata->gpu_reset_ticks_ss = kbdev->js_reset_ticks_ss;
+ }
+ if (kbdev->js_reset_ticks_nss != 0)
+ {
+ js_devdata->gpu_reset_ticks_nss = kbdev->js_reset_ticks_nss;
+ }
+ }
+#endif
+
+ runpool_inc_context_count( kbdev, head_kctx );
+ /* Cause any future waiter-on-termination to wait until the context is
+ * descheduled */
+ osk_waitq_clear( &js_kctx_info->ctx.not_scheduled_waitq );
+ osk_waitq_set( &js_kctx_info->ctx.scheduled_waitq );
+
+ /* Do all the necessaries to pick the address space (inc. update book-keeping info)
+ * Add the context to the Run Pool, and allow it to run jobs */
+ assign_and_activate_kctx_addr_space( kbdev, head_kctx );
+
+ if ( (js_kctx_info->ctx.flags & KBASE_CTX_FLAG_PRIVILEGED) != 0 )
+ {
+ /* We need to retain it to keep the corresponding address space */
+ kbasep_js_runpool_retain_ctx(kbdev, head_kctx);
+ }
+
+ /* Try to run the next job, in case this context has jobs that match the
+ * job slot requirements, but none of the other currently running contexts
+ * do */
+ kbasep_js_try_run_next_job( kbdev );
+
+ /* Transaction complete */
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+ /* Note: after this point, the context could potentially get scheduled out immediately */
+}
+
+void kbasep_js_schedule_privileged_ctx( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_kctx_info *js_kctx_info;
+ kbasep_js_device_data *js_devdata;
+ mali_bool is_scheduled;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ js_devdata = &kbdev->js_data;
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ osk_mutex_lock( &js_kctx_info->ctx.jsctx_mutex );
+ /* Mark the context as privileged */
+ js_kctx_info->ctx.flags |= KBASE_CTX_FLAG_PRIVILEGED;
+
+ is_scheduled = js_kctx_info->ctx.is_scheduled;
+ if ( is_scheduled == MALI_FALSE )
+ {
+ mali_bool is_runpool_full;
+
+ /* Add the context to the runpool */
+ osk_mutex_lock( &js_devdata->queue_mutex );
+ kbasep_js_policy_enqueue_ctx( &js_devdata->policy, kctx );
+ osk_mutex_unlock( &js_devdata->queue_mutex );
+
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+ {
+ is_runpool_full = check_is_runpool_full( kbdev, kctx);
+ if ( is_runpool_full != MALI_FALSE )
+ {
+ /* Evict jobs from the NEXT registers to free an AS asap */
+ kbasep_js_runpool_evict_next_jobs( kbdev );
+ }
+ }
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+ /* Fast-starting requires the jsctx_mutex to be dropped, because it works on multiple ctxs */
+
+ if ( is_runpool_full != MALI_FALSE )
+ {
+ /* Evict non-running contexts from the runpool */
+ kbasep_js_runpool_attempt_fast_start_ctx( kbdev, NULL );
+ }
+ /* Try to schedule the context in */
+ kbasep_js_try_schedule_head_ctx( kbdev );
+
+ /* Wait for the context to be scheduled in */
+ osk_waitq_wait(&kctx->jctx.sched_info.ctx.scheduled_waitq);
+ }
+ else
+ {
+ /* Already scheduled in - We need to retain it to keep the corresponding address space */
+ kbasep_js_runpool_retain_ctx(kbdev, kctx);
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+
+ }
+}
+
+void kbasep_js_release_privileged_ctx( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_kctx_info *js_kctx_info;
+ OSK_ASSERT( kctx != NULL );
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ /* We don't need to use the address space anymore */
+ osk_mutex_lock( &js_kctx_info->ctx.jsctx_mutex );
+ js_kctx_info->ctx.flags &= (~KBASE_CTX_FLAG_PRIVILEGED);
+ osk_mutex_unlock( &js_kctx_info->ctx.jsctx_mutex );
+
+ /* Release the context - it will be scheduled out if there are no pending jobs */
+ kbasep_js_runpool_release_ctx(kbdev, kctx);
+}
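+
+/* Illustrative sketch (not part of the patch): privileged scheduling brackets
+ * work that must run regardless of context priority:
+ *
+ *   kbasep_js_schedule_privileged_ctx( kbdev, kctx );   (blocks until scheduled in)
+ *   ... submit jobs on kctx and wait for them ...
+ *   kbasep_js_release_privileged_ctx( kbdev, kctx );    (AS may now be reclaimed)
+ */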
+
+
+void kbasep_js_job_done_slot_irq( kbase_jd_atom *katom, int slot_nr, kbasep_js_tick *end_timestamp, mali_bool start_new_jobs )
+{
+ kbase_device *kbdev;
+ kbasep_js_policy *js_policy;
+ kbasep_js_device_data *js_devdata;
+ mali_bool submit_retry_needed = MALI_TRUE; /* If we don't start jobs here, start them from the workqueue */
+ kbasep_js_tick tick_diff;
+ u32 microseconds_spent = 0u;
+ kbase_context *parent_ctx;
+
+ OSK_ASSERT(katom);
+ parent_ctx = katom->kctx;
+ OSK_ASSERT(parent_ctx);
+ kbdev = parent_ctx->kbdev;
+ OSK_ASSERT(kbdev);
+
+ js_devdata = &kbdev->js_data;
+ js_policy = &kbdev->js_data.policy;
+
+ /*
+ * Release resources before submitting new jobs (bounds the refcount of
+ * the resource to BASE_JM_SUBMIT_SLOTS)
+ */
+#if MALI_GATOR_SUPPORT
+ kbase_trace_mali_job_slots_event(GATOR_MAKE_EVENT(GATOR_JOB_SLOT_STOP, slot_nr));
+#endif
+
+ if (katom->poking && kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8316))
+ {
+ OSK_ASSERT(parent_ctx->as_nr != KBASEP_AS_NR_INVALID);
+ kbase_as_poking_timer_release(&kbdev->as[parent_ctx->as_nr]);
+ katom->poking = 0;
+ }
+
+ /* Check if submitted jobs no longer require the cycle counter to be enabled */
+ kbasep_js_deref_permon_check_and_disable_cycle_counter( kbdev, katom );
+
+ /* Release the affinity from the slot - must happen before next submission to this slot */
+ kbase_js_affinity_release_slot_cores( kbdev, slot_nr, katom->affinity );
+ kbase_js_debug_log_current_affinities( kbdev );
+ /* Calculate the job's time used */
+ if ( end_timestamp != NULL )
+ {
+ /* Only calculated for jobs that really ran on the HW (jobs evicted
+ * from the NEXT registers never actually ran, so really did take zero time) */
+ tick_diff = *end_timestamp - katom->start_timestamp;
+ microseconds_spent = kbasep_js_convert_js_ticks_to_us( tick_diff );
+ /* Round up time spent to the minimum timer resolution */
+ if (microseconds_spent < KBASEP_JS_TICK_RESOLUTION_US)
+ {
+ microseconds_spent = KBASEP_JS_TICK_RESOLUTION_US;
+ }
+ }
+
+ /* Lock the runpool_irq for modifying the runpool_irq data */
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+
+ /* Log the result of the job (completion status, and time spent). */
+ kbasep_js_policy_log_job_result( js_policy, katom, microseconds_spent );
+ /* Determine whether the parent context's timeslice is up */
+ if ( kbasep_js_policy_should_remove_ctx( js_policy, parent_ctx ) != MALI_FALSE )
+ {
+ kbasep_js_clear_submit_allowed( js_devdata, parent_ctx );
+ }
+
+ if ( start_new_jobs != MALI_FALSE )
+ {
+ /* Submit a new job (if there is one) to help keep the GPU's HEAD and NEXT registers full */
+ KBASE_TRACE_ADD_SLOT( kbdev, JS_JOB_DONE_TRY_RUN_NEXT_JOB, parent_ctx, katom->user_atom, katom->jc, slot_nr);
+
+ submit_retry_needed = kbasep_js_try_run_next_job_on_slot_irq_nolock(
+ kbdev,
+ slot_nr,
+ &kbdev->slot_submit_count_irq[slot_nr] );
+ }
+
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+ /* We've finished modifying runpool_irq data, so the lock is dropped */
+
+ if ( submit_retry_needed != MALI_FALSE || katom->event.event_code == BASE_JD_EVENT_STOPPED )
+ {
+ /* The extra condition on STOPPED jobs is needed because they may be
+ * the only job present, but they won't get re-run until the JD work
+ * queue activates. Crucially, work queues can run items out of order
+ * e.g. on different CPUs, so being able to submit from the IRQ handler
+ * is not a good indication that we don't need to run jobs; the
+ * submitted job could be processed on the work-queue *before* the
+ * stopped job, even though it was submitted after.
+ *
+ * Therefore, we must try to run it, otherwise it might not get run at
+ * all after this. */
+
+ KBASE_TRACE_ADD_SLOT( kbdev, JS_JOB_DONE_RETRY_NEEDED, parent_ctx, katom->user_atom, katom->jc, slot_nr);
+ kbasep_js_set_job_retry_submit_slot( katom, slot_nr );
+ }
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_js.h
+ * Job Scheduler APIs.
+ */
+
+#ifndef _KBASE_JS_H_
+#define _KBASE_JS_H_
+
+#include <malisw/mali_malisw.h>
+#include <osk/mali_osk.h>
+
+#include "mali_kbase_js_defs.h"
+#include "mali_kbase_js_policy.h"
+#include "mali_kbase_defs.h"
+
+#include "mali_kbase_js_ctx_attr.h"
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_kbase_api
+ * @{
+ */
+
+/**
+ * @addtogroup kbase_js Job Scheduler Internal APIs
+ * @{
+ *
+ * These APIs are Internal to KBase and are available for use by the
+ * @ref kbase_js_policy "Job Scheduler Policy APIs"
+ */
+
+/**
+ * @brief Initialize the Job Scheduler
+ *
+ * The kbasep_js_device_data sub-structure of \a kbdev must be zero
+ * initialized before passing to the kbasep_js_devdata_init() function. This is
+ * to give efficient error path code.
+ */
+mali_error kbasep_js_devdata_init( kbase_device *kbdev );
+
+/**
+ * @brief Halt the Job Scheduler.
+ *
+ * It is safe to call this on \a kbdev even if the kbasep_js_device_data
+ * sub-structure was never initialized or failed initialization, to give
+ * efficient error-path code.
+ *
+ * For this to work, the kbasep_js_device_data sub-structure of \a kbdev must
+ * be zero initialized before passing to the kbasep_js_devdata_init()
+ * function. This is to give efficient error path code.
+ *
+ * It is a Programming Error to call this whilst there are still kbase_context
+ * structures registered with this scheduler.
+ *
+ */
+void kbasep_js_devdata_halt( kbase_device * kbdev);
+
+/**
+ * @brief Terminate the Job Scheduler
+ *
+ * It is safe to call this on \a kbdev even if the kbasep_js_device_data
+ * sub-structure was never initialized or failed initialization, to give
+ * efficient error-path code.
+ *
+ * For this to work, the kbasep_js_device_data sub-structure of \a kbdev must
+ * be zero initialized before passing to the kbasep_js_devdata_init()
+ * function. This is to give efficient error path code.
+ *
+ * It is a Programming Error to call this whilst there are still kbase_context
+ * structures registered with this scheduler.
+ */
+void kbasep_js_devdata_term( kbase_device *kbdev );
+
+
+/**
+ * @brief Initialize the Scheduling Component of a kbase_context on the Job Scheduler.
+ *
+ * This effectively registers a kbase_context with a Job Scheduler.
+ *
+ * It does not register any jobs owned by the kbase_context with the scheduler.
+ * Those must be separately registered by kbasep_js_add_job().
+ *
+ * The kbase_context must be zero initialized before passing to the
+ * kbase_js_init() function. This is to give efficient error path code.
+ */
+mali_error kbasep_js_kctx_init( kbase_context *kctx );
+
+/**
+ * @brief Terminate the Scheduling Component of a kbase_context on the Job Scheduler
+ *
+ * This effectively de-registers a kbase_context from its Job Scheduler
+ *
+ * It is safe to call this on a kbase_context that has never had or failed
+ * initialization of its jctx.sched_info member, to give efficient error-path
+ * code.
+ *
+ * For this to work, the kbase_context must be zero initialized before passing
+ * to the kbase_js_init() function.
+ *
+ * It is a Programming Error to call this whilst there are still jobs
+ * registered with this context.
+ */
+void kbasep_js_kctx_term( kbase_context *kctx );
+
+/**
+ * @brief Add a job chain to the Job Scheduler, and take necessary actions to
+ * schedule the context/run the job.
+ *
+ * This atomically does the following:
+ * - Update the job-count bookkeeping information (including NSS state changes)
+ * - Add the job to the run pool if necessary (part of init_job)
+ *
+ * Once this is done, then an appropriate action is taken:
+ * - If the ctx is scheduled, it attempts to start the next job (which might be
+ * this added job)
+ * - Otherwise, and if this is the first job on the context, it enqueues it on
+ * the Policy Queue
+ *
+ * The Policy's Queue can be updated by this in the following ways:
+ * - In the above case that this is the first job on the context
+ * - If the job is high priority and the context is not scheduled, then it
+ * could cause the Policy to schedule out a low-priority context, allowing
+ * this context to be scheduled in.
+ *
+ * If the context is already scheduled on the RunPool, then adding a job to it
+ * is guaranteed not to update the Policy Queue. And so, the caller is
+ * guaranteed to not need to try scheduling a context from the Run Pool - it
+ * can safely assert that the result is MALI_FALSE.
+ *
+ * It is a programming error to have more than U32_MAX jobs in flight at a time.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must \em not hold kbasep_js_kctx_info::ctx::jsctx_mutex.
+ * - it must \em not hold kbasep_js_device_data::runpool_irq::lock (as this will be
+ * obtained internally)
+ * - it must \em not hold kbasep_js_device_data::runpool_mutex (as this will be
+ * obtained internally)
+ * - it must \em not hold kbasep_js_device_data::queue_mutex (again, it's used internally).
+ *
+ * @return MALI_TRUE indicates that the Policy Queue was updated, and so the
+ * caller will need to try scheduling a context onto the Run Pool.
+ * @return MALI_FALSE indicates that no updates were made to the Policy Queue,
+ * so no further action is required from the caller. This is \b always returned
+ * when the context is currently scheduled.
+ */
+mali_bool kbasep_js_add_job( kbase_context *kctx, kbase_jd_atom *atom );
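+
+/* Illustrative sketch (not part of the patch): a typical caller acts on the
+ * return value as follows:
+ *
+ *   if ( kbasep_js_add_job( kctx, atom ) != MALI_FALSE )
+ *   {
+ *       kbasep_js_try_schedule_head_ctx( kbdev );
+ *   }
+ *
+ * i.e. only when the Policy Queue was updated is it worth trying to schedule
+ * a context onto the Run Pool.
+ */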
+
+/**
+ * @brief Remove a job chain from the Job Scheduler
+ *
+ * Removing a job from the Scheduler can cause an NSS/SS state transition. In
+ * this case, slots that previously could not accept job submissions might now
+ * become submittable. For this reason, an NSS/SS state transition will cause
+ * the Scheduler to try to submit new jobs on the jm_slots.
+ *
+ * It is a programming error to call this when:
+ * - \a atom is not a job belonging to kctx.
+ * - \a atom has already been removed from the Job Scheduler.
+ * - \a atom is still in the runpool:
+ * - it has not been killed with kbasep_js_policy_kill_all_ctx_jobs()
+ * - or, it has not been removed with kbasep_js_policy_dequeue_job()
+ * - or, it has not been removed with kbasep_js_policy_dequeue_job_irq()
+ *
+ * The following locking conditions are made on the caller:
+ * - it must hold kbasep_js_kctx_info::ctx::jsctx_mutex.
+ * - it must \em not hold kbasep_js_device_data::runpool_irq::lock, (as this will be
+ * obtained internally)
+ * - it must \em not hold kbasep_js_device_data::runpool_mutex (as this could be
+ * obtained internally)
+ * - it must \em not hold kbdev->jm_slots[ \a js ].lock (as this could be
+ * obtained internally)
+ *
+ */
+void kbasep_js_remove_job( kbase_context *kctx, kbase_jd_atom *atom );
+
+/**
+ * @brief Refcount a context as being busy, preventing it from being scheduled
+ * out.
+ *
+ * @note This function can safely be called from IRQ context.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must \em not hold the kbasep_js_device_data::runpool_irq::lock, because
+ * it will be used internally.
+ *
+ * @return value != MALI_FALSE if the retain succeeded, and the context will not be scheduled out.
+ * @return MALI_FALSE if the retain failed (because the context is being/has been scheduled out).
+ */
+mali_bool kbasep_js_runpool_retain_ctx( kbase_device *kbdev, kbase_context *kctx );
+
+/**
+ * @brief Refcount a context as being busy, preventing it from being scheduled
+ * out.
+ *
+ * @note This function can safely be called from IRQ context.
+ *
+ * The following locks must be held by the caller:
+ * - kbasep_js_device_data::runpool_irq::lock
+ *
+ * @return value != MALI_FALSE if the retain succeeded, and the context will not be scheduled out.
+ * @return MALI_FALSE if the retain failed (because the context is being/has been scheduled out).
+ */
+mali_bool kbasep_js_runpool_retain_ctx_nolock( kbase_device *kbdev, kbase_context *kctx );
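+
+/* Illustrative sketch (not part of the patch): the retain/release calls form
+ * a refcounting bracket that pins a context in the Run Pool:
+ *
+ *   if ( kbasep_js_runpool_retain_ctx( kbdev, kctx ) != MALI_FALSE )
+ *   {
+ *       ... the context cannot be scheduled out here ...
+ *       kbasep_js_runpool_release_ctx( kbdev, kctx );
+ *   }
+ *
+ * Note that the release (unlike the retain) must not be called from IRQ
+ * context.
+ */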
+
+/**
+ * @brief Lookup a context in the Run Pool based upon its current address space
+ * and ensure that it stays scheduled in.
+ *
+ * The context is refcounted as being busy to prevent it from scheduling
+ * out. It must be released with kbasep_js_runpool_release_ctx() when it is no
+ * longer required to stay scheduled in.
+ *
+ * @note This function can safely be called from IRQ context.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must \em not hold the kbasep_js_device_data::runpool_irq::lock, because
+ * it will be used internally.
+ *
+ * @return a valid kbase_context on success, which has been refcounted as being busy.
+ * @return NULL on failure, indicating that no context was found in \a as_nr
+ */
+kbase_context* kbasep_js_runpool_lookup_ctx( kbase_device *kbdev, int as_nr );
+
+/**
+ * @brief Handling the requeuing/killing of a context that was evicted from the
+ * policy queue or runpool.
+ *
+ * This should be used whenever handing off a context that has been evicted
+ * from the policy queue or the runpool:
+ * - If the context is not dying and has jobs, it gets re-added to the policy
+ * queue
+ * - Otherwise, it is not added (but PM is informed that it is idle)
+ *
+ * In addition, if the context is dying the jobs are killed asynchronously.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must hold kbasep_js_kctx_info::ctx::jsctx_mutex.
+ * - it must \em not hold kbasep_js_device_data::queue_mutex (as this will be
+ * obtained internally)
+ */
+void kbasep_js_runpool_requeue_or_kill_ctx( kbase_device *kbdev, kbase_context *kctx );
+
+/**
+ * @brief Release a refcount of a context being busy, allowing it to be
+ * scheduled out.
+ *
+ * When the refcount reaches zero, the context \em might be scheduled out
+ * (depending on whether the Scheduling Policy has deemed it so, or on whether
+ * it has run out of jobs).
+ *
+ * If the context does get scheduled out, then the following actions will be
+ * taken as part of descheduling the context:
+ * - For the context being descheduled:
+ * - If the context is in the process of dying (all the jobs are being
+ * removed from it), then descheduling also kills off any jobs remaining in the
+ * context.
+ * - If the context is not dying, and any jobs remain after descheduling the
+ * context then it is re-enqueued to the Policy's Queue.
+ * - Otherwise, the context is still known to the scheduler, but remains absent
+ * from the Policy Queue until a job is next added to it.
+ * - Once the context is descheduled, this also handles scheduling in a new
+ * context (if one is available), and if necessary, running a job from that new
+ * context.
+ *
+ * Unlike retaining a context in the runpool, this function \b cannot be called
+ * from IRQ context.
+ *
+ * It is a programming error to call this on a \a kctx that is not currently
+ * scheduled, or that already has a zero refcount.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must \em not hold the kbasep_js_device_data::runpool_irq::lock, because
+ * it will be used internally.
+ * - it must \em not hold kbasep_js_kctx_info::ctx::jsctx_mutex.
+ * - it must \em not hold kbasep_js_device_data::runpool_mutex (as this will be
+ * obtained internally)
+ * - it must \em not hold the kbase_device::as[n].transaction_mutex (as this will be obtained internally)
+ * - it must \em not hold kbasep_js_device_data::queue_mutex (as this will be
+ * obtained internally)
+ *
+ */
+void kbasep_js_runpool_release_ctx( kbase_device *kbdev, kbase_context *kctx );
+
+/**
+ * @brief Try to submit the next job on a \b particular slot whilst in IRQ
+ * context, and whilst the caller already holds the job-slot IRQ spinlock.
+ *
+ * \a *submit_count will be checked against
+ * KBASE_JS_MAX_JOB_SUBMIT_PER_SLOT_PER_IRQ to see whether too many jobs have
+ * been submitted. This is to prevent the IRQ handler looping over lots of GPU
+ * NULL jobs, which may complete whilst the IRQ handler is still processing. \a
+ * submit_count itself should point to kbase_device::slot_submit_count_irq[ \a js ],
+ * which is initialized to zero on entry to the IRQ handler.
+ *
+ * The following locks must be held by the caller:
+ * - kbasep_js_device_data::runpool_irq::lock
+ * - kbdev->jm_slots[ \a js ].lock
+ *
+ * @return truthful (i.e. != MALI_FALSE) if there was space to submit in the
+ * GPU, but we couldn't get a job from the Run Pool. This may be because the
+ * Run Pool needs maintenance outside of IRQ context. Therefore, this indicates
+ * that submission should be retried from a work-queue, by using
+ * kbasep_js_try_run_next_job_on_slot().
+ * @return MALI_FALSE if submission had no problems: the GPU is either already
+ * full of jobs in the HEAD and NEXT registers, or we were able to get enough
+ * jobs from the Run Pool to fill the GPU's HEAD and NEXT registers.
+ */
+mali_bool kbasep_js_try_run_next_job_on_slot_irq_nolock( kbase_device *kbdev, int js, s8 *submit_count );
+
+/**
+ * @brief Try to submit the next job on a particular slot, outside of IRQ context
+ *
+ * This obtains the Job Slot lock for the duration of the call only.
+ *
+ * Unlike kbasep_js_try_run_next_job_on_slot_irq_nolock(), there is no limit on
+ * submission, because eventually IRQ_THROTTLE will kick in to prevent us
+ * getting stuck in a loop of submitting GPU NULL jobs. This is because the IRQ
+ * handler will be delayed, and so this function will eventually fill up the
+ * space in our software 'submitted' slot (kbase_jm_slot::submitted).
+ *
+ * In addition, there's no return value - we'll run the maintenance functions
+ * on the Policy's Run Pool, but if there's nothing there after that, then the
+ * Run Pool is truly empty, and so no more action need be taken.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must hold kbasep_js_device_data::runpool_mutex
+ * - it must \em not hold kbasep_js_device_data::runpool_irq::lock (as this will be
+ * obtained internally)
+ * - it must \em not hold kbdev->jm_slots[ \a js ].lock (as this will be
+ * obtained internally)
+ *
+ * @note The caller \em might be holding one of the
+ * kbasep_js_kctx_info::ctx::jsctx_mutex locks.
+ *
+ */
+void kbasep_js_try_run_next_job_on_slot( kbase_device *kbdev, int js );
+
+/**
+ * @brief Try to submit the next job for each slot in the system, outside of IRQ context
+ *
+ * This will internally call kbasep_js_try_run_next_job_on_slot(), so the same
+ * locking conditions on the caller are required.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must hold kbasep_js_device_data::runpool_mutex
+ * - it must \em not hold kbasep_js_device_data::runpool_irq::lock (as this will be
+ * obtained internally)
+ * - it must \em not hold kbdev->jm_slots[ \a js ].lock (as this will be
+ * obtained internally)
+ *
+ * @note The caller \em might be holding one of the
+ * kbasep_js_kctx_info::ctx::jsctx_mutex locks.
+ *
+ */
+void kbasep_js_try_run_next_job( kbase_device *kbdev );
+
+/**
+ * @brief Try to schedule the next context onto the Run Pool
+ *
+ * This checks whether there's space in the Run Pool to accommodate a new
+ * context. If so, it attempts to dequeue a context from the Policy Queue, and
+ * submit this to the Run Pool.
+ *
+ * If the scheduling succeeds, then it also makes a call to
+ * kbasep_js_try_run_next_job(), in case the new context has jobs matching the
+ * job slot requirements, but no other currently scheduled context has such
+ * jobs.
+ *
+ * If any of these actions fail (Run Pool Full, Policy Queue empty, etc) then
+ * the function just returns normally.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must \em not hold the kbasep_js_device_data::runpool_irq::lock, because
+ * it will be used internally.
+ * - it must \em not hold kbasep_js_device_data::runpool_mutex (as this will be
+ * obtained internally)
+ * - it must \em not hold the kbase_device::as[n].transaction_mutex (as this will be obtained internally)
+ * - it must \em not hold kbasep_js_device_data::queue_mutex (again, it's used internally).
+ * - it must \em not hold kbasep_js_kctx_info::ctx::jsctx_mutex, because it will
+ * be used internally.
+ * - it must \em not hold kbdev->jm_slots[ \a js ].lock (as this will be
+ * obtained internally)
+ *
+ */
+void kbasep_js_try_schedule_head_ctx( kbase_device *kbdev );
+
+/**
+ * @brief Schedule in a privileged context
+ *
+ * This schedules a context in regardless of the context priority.
+ * If the runpool is full, a context will be forced out of the runpool and the function will wait
+ * for the new context to be scheduled in.
+ * The context will be kept scheduled in (and the corresponding address space reserved) until
+ * kbasep_js_release_privileged_ctx() is called.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must \em not hold the kbasep_js_device_data::runpool_irq::lock, because
+ * it will be used internally.
+ * - it must \em not hold kbasep_js_device_data::runpool_mutex (as this will be
+ * obtained internally)
+ * - it must \em not hold the kbase_device::as[n].transaction_mutex (as this will be obtained internally)
+ * - it must \em not hold kbasep_js_device_data::queue_mutex (again, it's used internally).
+ * - it must \em not hold kbasep_js_kctx_info::ctx::jsctx_mutex, because it will
+ * be used internally.
+ * - it must \em not hold kbdev->jm_slots[ \a js ].lock (as this will be
+ * obtained internally)
+ *
+ */
+void kbasep_js_schedule_privileged_ctx( kbase_device *kbdev, kbase_context *kctx );
+
+/**
+ * @brief Release a privileged context, allowing it to be scheduled out.
+ *
+ * See kbasep_js_runpool_release_ctx for potential side effects.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must \em not hold the kbasep_js_device_data::runpool_irq::lock, because
+ * it will be used internally.
+ * - it must \em not hold kbasep_js_kctx_info::ctx::jsctx_mutex.
+ * - it must \em not hold kbasep_js_device_data::runpool_mutex (as this will be
+ * obtained internally)
+ * - it must \em not hold the kbase_device::as[n].transaction_mutex (as this will be obtained internally)
+ *
+ */
+void kbasep_js_release_privileged_ctx( kbase_device *kbdev, kbase_context *kctx );
+
+/**
+ * @brief Handle the Job Scheduler component for the IRQ of a job finishing
+ *
+ * This does the following:
+ * -# Releases resources held by the atom
+ * -# if \a end_timestamp != NULL, updates the runpool's notion of time spent by a running ctx
+ * -# determines whether a context should be marked for scheduling out
+ * -# if start_new_jobs is true, tries to submit the next job on the slot
+ * (picking from all ctxs in the runpool)
+ *
+ * In addition, if submission didn't happen (the submit-from-IRQ function
+ * failed or start_new_jobs == MALI_FALSE), then this sets a message on katom
+ * that submission needs to be retried from the worker thread.
+ *
+ * Normally, the time calculated from end_timestamp is rounded up to the
+ * minimum time precision. Therefore, to ensure the job is recorded as not
+ * spending any time, then set end_timestamp to NULL. For example, this is necessary when
+ * evicting jobs from JSn_HEAD_NEXT (because they didn't actually run).
+ *
+ * NOTE: It's possible to move steps (2) and (3) (including calculating the
+ * job's time used) into the worker (outside of IRQ context), but this may allow a context
+ * to use up to twice as much timeslice as is allowed by the policy. For
+ * policies that order by time spent, this is not a problem for overall
+ * 'fairness', but can still increase latency between contexts.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must \em not hold the kbasep_js_device_data::runpool_irq::lock, because
+ * it will be used internally.
+ * - it must hold kbdev->jm_slots[ \a slot_nr ].lock
+ */
+void kbasep_js_job_done_slot_irq( kbase_jd_atom *katom, int slot_nr, kbasep_js_tick *end_timestamp, mali_bool start_new_jobs );
+
+/**
+ * @brief Try to submit the next job on each slot
+ *
+ * The following locks may be used:
+ * - kbasep_js_device_data::runpool_mutex
+ * - kbasep_js_device_data::runpool_irq::lock
+ * - kbdev->jm_slots[ \a js ].lock
+ */
+void kbase_js_try_run_jobs( kbase_device *kbdev );
+
+/**
+ * @brief Handle releasing cores for power management and affinity management,
+ * ensuring that cores are powered down and affinity tracking is updated.
+ *
+ * This must only be called on an atom that is not currently running, and has
+ * not been re-queued onto the context (and so does not need locking)
+ *
+ * This function enters at the following @ref kbase_atom_coreref_state states:
+ * - NO_CORES_REQUESTED
+ * - WAITING_FOR_REQUESTED_CORES
+ * - RECHECK_AFFINITY
+ * - READY
+ *
+ * It transitions the above states back to NO_CORES_REQUESTED by the end of the
+ * function call (possibly via intermediate states).
+ *
+ * No locks need be held by the caller, since this takes the necessary Power
+ * Management locks itself. The runpool_irq.lock is not taken (the work that
+ * requires it is handled by kbase_js_affinity_submit_to_blocked_slots() ).
+ *
+ * @note The corresponding kbasep_js_job_check_ref_cores() is private to the
+ * Job Scheduler, and is called automatically when running the next job.
+ */
+void kbasep_js_job_check_deref_cores(kbase_device *kbdev, struct kbase_jd_atom *katom);
+
+/*
+ * Helpers follow
+ */
+
+/**
+ * @brief Check that a context is allowed to submit jobs on this policy
+ *
+ * The purpose of this abstraction is to hide the underlying data size, and wrap up
+ * the long repeated line of code.
+ *
+ * As with any mali_bool, never test the return value with MALI_TRUE.
+ *
+ * The caller must hold kbasep_js_device_data::runpool_irq::lock.
+ */
+static INLINE mali_bool kbasep_js_is_submit_allowed( kbasep_js_device_data *js_devdata, kbase_context *kctx )
+{
+ u16 test_bit;
+
+ /* Ensure context really is scheduled in */
+ OSK_ASSERT( kctx->as_nr != KBASEP_AS_NR_INVALID );
+ OSK_ASSERT( kctx->jctx.sched_info.ctx.is_scheduled != MALI_FALSE );
+
+ test_bit = (u16)(1u << kctx->as_nr);
+
+ return (mali_bool)(js_devdata->runpool_irq.submit_allowed & test_bit);
+}
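+
+/* Illustrative sketch (not part of the patch): the return value is the raw
+ * masked bit, not MALI_TRUE, so only ever compare it against MALI_FALSE:
+ *
+ *   if ( kbasep_js_is_submit_allowed( js_devdata, kctx ) != MALI_FALSE )  correct
+ *   if ( kbasep_js_is_submit_allowed( js_devdata, kctx ) == MALI_TRUE )   wrong
+ */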
+
+/**
+ * @brief Allow a context to submit jobs on this policy
+ *
+ * The purpose of this abstraction is to hide the underlying data size, and wrap up
+ * the long repeated line of code.
+ *
+ * The caller must hold kbasep_js_device_data::runpool_irq::lock.
+ */
+static INLINE void kbasep_js_set_submit_allowed( kbasep_js_device_data *js_devdata, kbase_context *kctx )
+{
+ u16 set_bit;
+
+ /* Ensure context really is scheduled in */
+ OSK_ASSERT( kctx->as_nr != KBASEP_AS_NR_INVALID );
+ OSK_ASSERT( kctx->jctx.sched_info.ctx.is_scheduled != MALI_FALSE );
+
+ set_bit = (u16)(1u << kctx->as_nr);
+
+ OSK_PRINT_INFO(OSK_BASE_JM, "JS: Setting Submit Allowed on %p (as=%d)", kctx, kctx->as_nr );
+
+ js_devdata->runpool_irq.submit_allowed |= set_bit;
+}
+
+/**
+ * @brief Prevent a context from submitting more jobs on this policy
+ *
+ * The purpose of this abstraction is to hide the underlying data size, and wrap up
+ * the long repeated line of code.
+ *
+ * The caller must hold kbasep_js_device_data::runpool_irq::lock.
+ */
+static INLINE void kbasep_js_clear_submit_allowed( kbasep_js_device_data *js_devdata, kbase_context *kctx )
+{
+ u16 clear_bit;
+ u16 clear_mask;
+
+ /* Ensure context really is scheduled in */
+ OSK_ASSERT( kctx->as_nr != KBASEP_AS_NR_INVALID );
+ OSK_ASSERT( kctx->jctx.sched_info.ctx.is_scheduled != MALI_FALSE );
+
+ clear_bit = (u16)(1u << kctx->as_nr);
+ clear_mask = ~clear_bit;
+
+ OSK_PRINT_INFO(OSK_BASE_JM, "JS: Clearing Submit Allowed on %p (as=%d)", kctx, kctx->as_nr );
+
+ js_devdata->runpool_irq.submit_allowed &= clear_mask;
+}
+
+/**
+ * @brief Manage the 'retry_submit_on_slot' part of a kbase_jd_atom
+ */
+static INLINE void kbasep_js_clear_job_retry_submit( kbase_jd_atom *atom )
+{
+ atom->retry_submit_on_slot = -1;
+}
+
+static INLINE mali_bool kbasep_js_get_job_retry_submit_slot( kbase_jd_atom *atom, int *res )
+{
+ int js = atom->retry_submit_on_slot;
+ *res = js;
+ return (mali_bool)( js >= 0 );
+}
+
+static INLINE void kbasep_js_set_job_retry_submit_slot( kbase_jd_atom *atom, int js )
+{
+ OSK_ASSERT( 0 <= js && js < BASE_JM_MAX_NR_SLOTS );
+
+ atom->retry_submit_on_slot = js;
+}
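+
+/* Illustrative sketch (not part of the patch): the retry-submit helpers hand
+ * a slot number from IRQ context to the worker that completes the job:
+ *
+ *   int js;
+ *   if ( kbasep_js_get_job_retry_submit_slot( atom, &js ) != MALI_FALSE )
+ *   {
+ *       kbasep_js_clear_job_retry_submit( atom );
+ *       kbasep_js_try_run_next_job_on_slot( kbdev, js );
+ *   }
+ */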
+
+#if OSK_DISABLE_ASSERTS == 0
+/**
+ * Debug Check the refcount of a context. Only use within ASSERTs
+ *
+ * Obtains kbasep_js_device_data::runpool_irq::lock
+ *
+ * @return negative value if the context is not scheduled in
+ * @return current refcount of the context if it is scheduled in. The refcount
+ * is not guaranteed to be kept constant.
+ */
+static INLINE int kbasep_js_debug_check_ctx_refcount( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ int result = -1;
+ int as_nr;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+ js_devdata = &kbdev->js_data;
+
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+ as_nr = kctx->as_nr;
+ if ( as_nr != KBASEP_AS_NR_INVALID )
+ {
+ result = js_devdata->runpool_irq.per_as_data[as_nr].as_busy_refcount;
+ }
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+
+ return result;
+}
+#endif /* OSK_DISABLE_ASSERTS == 0 */
+
+/**
+ * @brief Variant of kbasep_js_runpool_lookup_ctx() that can be used when the
+ * context is guaranteed to have been previously retained.
+ *
+ * It is a programming error to supply the \a as_nr of a context that has not
+ * been previously retained/has a busy refcount of zero. The only exception is
+ * when there is no ctx in \a as_nr (NULL returned).
+ *
+ * The following locking conditions are made on the caller:
+ * - it must \em not hold the kbasep_js_device_data::runpool_irq::lock, because
+ * it will be used internally.
+ *
+ * @return a valid kbase_context on success, with a refcount that is guaranteed
+ * to be non-zero and unmodified by this function.
+ * @return NULL on failure, indicating that no context was found in \a as_nr
+ */
+static INLINE kbase_context* kbasep_js_runpool_lookup_ctx_noretain( kbase_device *kbdev, int as_nr )
+{
+ kbasep_js_device_data *js_devdata;
+ kbase_context *found_kctx;
+ kbasep_js_per_as_data *js_per_as_data;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( 0 <= as_nr && as_nr < BASE_MAX_NR_AS );
+ js_devdata = &kbdev->js_data;
+ js_per_as_data = &js_devdata->runpool_irq.per_as_data[as_nr];
+
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+
+ found_kctx = js_per_as_data->kctx;
+ OSK_ASSERT( found_kctx == NULL || js_per_as_data->as_busy_refcount > 0 );
+
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+
+ return found_kctx;
+}
+
+
+/**
+ * @note MIDBASE-769: OSK to add high resolution timer
+ */
+static INLINE kbasep_js_tick kbasep_js_get_js_ticks( void )
+{
+ return osk_time_now();
+}
+
+/**
+ * Supports about an hour worth of time difference, allows the underlying
+ * clock to be more/less accurate than microseconds
+ *
+ * @note MIDBASE-769: OSK to add high resolution timer
+ */
+static INLINE u32 kbasep_js_convert_js_ticks_to_us( kbasep_js_tick js_tick )
+{
+ return (js_tick * 10000u) / osk_time_mstoticks(10u);
+}
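+
+/* Worked example (not part of the patch): on a 10kHz tick source,
+ * osk_time_mstoticks(10) == 100, so 5 ticks convert to
+ * (5 * 10000u) / 100 == 500us - i.e. 100us per tick, as expected. */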
+
+/**
+ * Supports about an hour worth of time difference, allows the underlying
+ * clock to be more/less accurate than microseconds
+ *
+ * @note MIDBASE-769: OSK to add high resolution timer
+ */
+static INLINE kbasep_js_tick kbasep_js_convert_js_us_to_ticks( u32 us )
+{
+ return (us * (kbasep_js_tick)osk_time_mstoticks(1000u)) / 1000000u;
+}
+/**
+ * Determine if ticka comes after tickb
+ *
+ * @note MIDBASE-769: OSK to add high resolution timer
+ */
+static INLINE mali_bool kbasep_js_ticks_after( kbasep_js_tick ticka, kbasep_js_tick tickb )
+{
+ kbasep_js_tick tick_diff = ticka - tickb;
+ const kbasep_js_tick wrapvalue = ((kbasep_js_tick)1u) << ((sizeof(kbasep_js_tick)*8)-1);
+
+ return (mali_bool)(tick_diff < wrapvalue);
+}
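+
+/* Worked example (not part of the patch): with a hypothetical 8-bit tick type
+ * the wrap value would be 0x80. For ticka == 0x01 and tickb == 0xFF the
+ * difference is 0x02 < 0x80, so 0x01 is correctly treated as coming after
+ * 0xFF despite being numerically smaller - the same modular-arithmetic trick
+ * as the kernel's time_after(). */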
+
+/**
+ * This will provide a conversion from time (us) to ticks of the gpu clock
+ * based on the minimum available gpu frequency.
+ * This is usually good to compute best/worst case (where the use of current
+ * frequency is not valid due to DVFS).
+ * e.g.: when you need the number of cycles to guarantee you won't wait for
+ * longer than 'us' time (you might have a shorter wait).
+ */
+static INLINE kbasep_js_gpu_tick kbasep_js_convert_us_to_gpu_ticks_min_freq( kbase_device *kbdev, u32 us )
+{
+ u32 gpu_freq = kbdev->gpu_props.props.core_props.gpu_freq_khz_min;
+ OSK_ASSERT( 0 != gpu_freq );
+ return (us * (gpu_freq / 1000));
+}
+
+/**
+ * This will provide a conversion from time (us) to ticks of the gpu clock
+ * based on the maximum available gpu frequency.
+ * This is usually good to compute best/worst case (where the use of current
+ * frequency is not valid due to DVFS).
+ * e.g.: When you need the number of cycles to guarantee you'll wait at least
+ * 'us' amount of time (but you might wait longer).
+ */
+static INLINE kbasep_js_gpu_tick kbasep_js_convert_us_to_gpu_ticks_max_freq( kbase_device *kbdev, u32 us )
+{
+ u32 gpu_freq = kbdev->gpu_props.props.core_props.gpu_freq_khz_max;
+ OSK_ASSERT( 0 != gpu_freq );
+ return (us * (kbasep_js_gpu_tick)(gpu_freq / 1000));
+}
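+
+/* Worked example (not part of the patch): with gpu_freq_khz_max == 500000
+ * (500MHz), 100us converts to 100 * (500000 / 1000) == 50000 GPU ticks; a
+ * wait of 50000 cycles is then guaranteed to last at least 100us at any
+ * lower DVFS operating point. */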
+
+/**
+ * This will provide a conversion from ticks of the gpu clock to time (us)
+ * based on the minimum available gpu frequency.
+ * This is usually good to compute best/worst case (where the use of current
+ * frequency is not valid due to DVFS).
+ * e.g.: When you need to know the worst-case wait that 'ticks' cycles will
+ * take (you guarantee that you won't wait any longer than this, but it may
+ * be shorter).
+ */
+static INLINE u32 kbasep_js_convert_gpu_ticks_to_us_min_freq( kbase_device *kbdev, kbasep_js_gpu_tick ticks )
+{
+ u32 gpu_freq = kbdev->gpu_props.props.core_props.gpu_freq_khz_min;
+ OSK_ASSERT( 0 != gpu_freq );
+ return (ticks / gpu_freq * 1000);
+}
+
+/**
+ * This will provide a conversion from ticks of the gpu clock to time (us)
+ * based on the maximum available gpu frequency.
+ * This is usually good to compute best/worst case (where the use of current
+ * frequency is not valid due to DVFS).
+ * e.g.: When you need to know the best-case wait for 'tick' cycles (you
+ * guarantee to be waiting for at least this long, but it may be longer).
+ */
+static INLINE u32 kbasep_js_convert_gpu_ticks_to_us_max_freq( kbase_device *kbdev, kbasep_js_gpu_tick ticks )
+{
+ u32 gpu_freq = kbdev->gpu_props.props.core_props.gpu_freq_khz_max;
+ OSK_ASSERT( 0 != gpu_freq );
+ return (ticks / gpu_freq * 1000);
+}
+/** @} */ /* end group kbase_js */
+/** @} */ /* end group base_kbase_api */
+/** @} */ /* end group base_api */
+
+#endif /* _KBASE_JS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_js_affinity.c
+ * Base kernel affinity manager APIs
+ */
+
+#include <kbase/src/common/mali_kbase.h>
+#include "mali_kbase_js_affinity.h"
+
+#if MALI_DEBUG && 0 /* disabled to avoid compilation warnings */
+
+STATIC void debug_get_binary_string(const u64 n, char *buff, const int size)
+{
+ unsigned int i;
+ for (i = 0; i < size; i++)
+ {
+ buff[i] = ((n >> i) & 1) ? '*' : '-';
+ }
+ buff[size] = '\0';
+}
+
+#define N_CORES 8
+STATIC void debug_print_affinity_info(const kbase_device *kbdev, const kbase_jd_atom *katom, int js, u64 affinity)
+{
+ char buff[N_CORES + 1];
+ char buff2[N_CORES + 1];
+ base_jd_core_req core_req = katom->core_req;
+ u8 nr_nss_ctxs_running = kbdev->js_data.runpool_irq.ctx_attr_ref_count[KBASEP_JS_CTX_ATTR_NSS];
+ u64 shader_present_bitmap = kbdev->shader_present_bitmap;
+
+ debug_get_binary_string(shader_present_bitmap, buff, N_CORES);
+ debug_get_binary_string(affinity, buff2, N_CORES);
+
+ OSK_PRINT_INFO(OSK_BASE_JM, "Job: NSS COH FS CS T CF V JS | NSS_ctx | GPU:12345678 | AFF:12345678");
+ OSK_PRINT_INFO(OSK_BASE_JM, " %s %s %s %s %s %s %s %u | %u | %s | %s",
+ core_req & BASE_JD_REQ_NSS ? "*" : "-",
+ core_req & BASE_JD_REQ_COHERENT_GROUP ? "*" : "-",
+ core_req & BASE_JD_REQ_FS ? "*" : "-",
+ core_req & BASE_JD_REQ_CS ? "*" : "-",
+ core_req & BASE_JD_REQ_T ? "*" : "-",
+ core_req & BASE_JD_REQ_CF ? "*" : "-",
+ core_req & BASE_JD_REQ_V ? "*" : "-",
+ js, nr_nss_ctxs_running, buff, buff2);
+}
+
+#endif /* MALI_DEBUG */
+
+OSK_STATIC_INLINE mali_bool affinity_job_uses_high_cores( kbase_device *kbdev, kbase_jd_atom *katom )
+{
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8987))
+ {
+ kbase_context *kctx;
+ kbase_context_flags ctx_flags;
+
+ kctx = katom->kctx;
+ ctx_flags = kctx->jctx.sched_info.ctx.flags;
+
+ /* In this HW Workaround, compute-only jobs/contexts use the high cores
+ * during a core-split, all other contexts use the low cores. */
+ return (mali_bool)((katom->core_req & BASE_JD_REQ_ONLY_COMPUTE) != 0
+ || (ctx_flags & KBASE_CTX_FLAG_HINT_ONLY_COMPUTE) != 0);
+ }
+ else
+ {
+ base_jd_core_req core_req = katom->core_req;
+ /* NSS-ness determines whether the high cores in a core split are used */
+ return (mali_bool)(core_req & BASE_JD_REQ_NSS);
+ }
+}
+
+
+/**
+ * @brief Decide whether a split in core affinity is required across job slots
+ *
+ * The following locking conditions are made on the caller:
+ * - it must hold kbasep_js_device_data::runpool_irq::lock
+ *
+ * @param kbdev The kbase device structure of the device
+ * @return MALI_FALSE if a core split is not required
+ * @return != MALI_FALSE if a core split is required.
+ */
+OSK_STATIC_INLINE mali_bool kbase_affinity_requires_split(kbase_device *kbdev)
+{
+ kbasep_js_device_data *js_devdata;
+
+ OSK_ASSERT( kbdev != NULL );
+ js_devdata = &kbdev->js_data;
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8987))
+ {
+ s8 nr_compute_ctxs = kbasep_js_ctx_attr_count_on_runpool( kbdev, KBASEP_JS_CTX_ATTR_COMPUTE );
+ s8 nr_noncompute_ctxs = kbasep_js_ctx_attr_count_on_runpool( kbdev, KBASEP_JS_CTX_ATTR_NON_COMPUTE );
+
+ /* In this case, a mix of Compute+Non-Compute determines whether a
+ * core-split is required, to ensure jobs with different numbers of RMUs
+ * don't use the same cores.
+ *
+ * When it's entirely compute, or entirely non-compute, then no split is
+ * required.
+ *
+ * A context can be both Compute and Non-compute, in which case this will
+ * correctly decide that a core-split is required. */
+
+ return (mali_bool)( nr_compute_ctxs > 0 && nr_noncompute_ctxs > 0 );
+ }
+ else
+ {
+ /* NSS/SS state determines whether a core-split is required */
+ return kbasep_js_ctx_attr_is_attr_on_runpool( kbdev, KBASEP_JS_CTX_ATTR_NSS );
+ }
+}
+
+
+
+
+mali_bool kbase_js_can_run_job_on_slot_no_lock( kbase_device *kbdev, int js )
+{
+ /*
+ * Here are the reasons for using job slot 2:
+ * - BASE_HW_ISSUE_8987 (slot 2 is entirely used for that purpose)
+ * - NSS atoms (in NSS state, slot 2 is used exclusively for them)
+ * - In absence of the above two, then:
+ * - Atoms with BASE_JD_REQ_COHERENT_GROUP
+ * - But, only when there aren't contexts with
+ * KBASEP_JS_CTX_ATTR_COMPUTE_ALL_CORES, because the atoms that run on
+ * all cores on slot 1 could be blocked by those using a coherent group
+ * on slot 2
+ * - And, only when you actually have 2 or more coregroups - if you only
+ * have 1 coregroup, then having jobs for slot 2 implies they'd also be
+ * for slot 1, meaning you'll get interference from them. Jobs able to
+ * run on slot 2 could also block jobs that can only run on slot 1
+ * (tiler jobs)
+ */
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8987))
+ {
+ return MALI_TRUE;
+ }
+
+ if ( js != 2 )
+ {
+ return MALI_TRUE;
+ }
+
+ /* Only deal with js==2 now: */
+ if ( kbasep_js_ctx_attr_is_attr_on_runpool( kbdev, KBASEP_JS_CTX_ATTR_NSS ) != MALI_FALSE )
+ {
+ /* In NSS state, slot 2 is used, and exclusively for NSS jobs (which cause a coresplit) */
+ return MALI_TRUE;
+ }
+
+ if ( kbdev->gpu_props.num_core_groups > 1 )
+ {
+ /* Otherwise, only use slot 2 in the 2+ coregroup case */
+ if ( kbasep_js_ctx_attr_is_attr_on_runpool( kbdev, KBASEP_JS_CTX_ATTR_COMPUTE_ALL_CORES ) == MALI_FALSE )
+ {
+ /* ...But only when we *don't* have atoms that run on all cores */
+
+ /* No specific check for BASE_JD_REQ_COHERENT_GROUP atoms - the policy will sort that out */
+ return MALI_TRUE;
+ }
+ }
+
+ /* The above checks failed, so we shouldn't use slot 2 */
+ return MALI_FALSE;
+}
+
+/*
+ * Until the deeper modifications to the job scheduler, power manager and
+ * affinity manager have been decided and implemented, this function is an
+ * intermediate step that assumes:
+ * - all working cores will be powered on when this is called.
+ * - the largest current configuration is a T658 (2x4 cores).
+ * - it has been decided not to have hardcoded values, so the low
+ *   and high cores in a core split will be evenly distributed.
+ * - odd combinations of core requirements have been filtered out
+ *   and do not get to this function (e.g. CS+T+NSS is not
+ *   supported here).
+ * - this function is frequently called and can be optimized
+ *   (see notes in loops), but as the functionality will likely
+ *   be modified, optimization has not been addressed.
+ */
+void kbase_js_choose_affinity(u64 *affinity, kbase_device *kbdev, kbase_jd_atom *katom, int js)
+{
+ base_jd_core_req core_req = katom->core_req;
+ u64 shader_present_bitmap = kbdev->shader_present_bitmap;
+ unsigned int num_core_groups = kbdev->gpu_props.num_core_groups;
+
+ OSK_ASSERT(0 != shader_present_bitmap);
+ OSK_ASSERT( js >= 0 );
+
+ if (1 == kbdev->gpu_props.num_cores)
+ {
+ /* Trivial case: only one core, nothing to do */
+ *affinity = shader_present_bitmap;
+ }
+ else if ( kbase_affinity_requires_split( kbdev ) == MALI_FALSE )
+ {
+ if ( (core_req & (BASE_JD_REQ_COHERENT_GROUP | BASE_JD_REQ_SPECIFIC_COHERENT_GROUP)) )
+ {
+ if ( js == 0 || num_core_groups == 1 )
+ {
+ /* js[0] and single-core-group systems just get the first core group */
+ *affinity = kbdev->gpu_props.props.coherency_info.group[0].core_mask;
+ }
+ else
+ {
+ /* js[1], js[2] use core groups 0, 1 for dual-core-group systems */
+ u32 core_group_idx = ((u32)js) - 1;
+ OSK_ASSERT( core_group_idx < num_core_groups );
+ *affinity = kbdev->gpu_props.props.coherency_info.group[core_group_idx].core_mask;
+ }
+ }
+ else
+ {
+ /* All cores are available when no core split is required */
+ *affinity = shader_present_bitmap;
+ }
+ }
+ else
+ {
+ /* Core split required - divide cores in two non-overlapping groups */
+ u64 low_bitmap, high_bitmap;
+ int n_high_cores = kbdev->gpu_props.num_cores >> 1;
+ OSK_ASSERT(0 != n_high_cores);
+
+ /* compute the reserved high cores bitmap */
+ high_bitmap = ~0;
+ /* note: this can take a while, optimization desirable */
+ while (n_high_cores != osk_count_set_bits(high_bitmap & shader_present_bitmap))
+ {
+ high_bitmap = high_bitmap << 1;
+ }
+ high_bitmap &= shader_present_bitmap;
+
+		/* now decide between 4 different situations depending on the low or
+		 * high set of cores and on whether a coherent group is required */
+ if (affinity_job_uses_high_cores( kbdev, katom ))
+ {
+ OSK_ASSERT(0 != num_core_groups);
+
+ if ( (core_req & (BASE_JD_REQ_COHERENT_GROUP | BASE_JD_REQ_SPECIFIC_COHERENT_GROUP))
+ && (1 != num_core_groups))
+ {
+			/* high set of cores requiring coherency, and coherency matters
+			 * because we have more than one core group */
+ u64 group1_mask = kbdev->gpu_props.props.coherency_info.group[1].core_mask;
+ *affinity = high_bitmap & group1_mask;
+ }
+ else
+ {
+			/* high set of cores not requiring coherency, or coherency is
+			 * assured as we only have one core group */
+ *affinity = high_bitmap;
+ }
+ }
+ else
+ {
+ low_bitmap = shader_present_bitmap ^ high_bitmap;
+
+ if ( core_req & (BASE_JD_REQ_COHERENT_GROUP | BASE_JD_REQ_SPECIFIC_COHERENT_GROUP))
+ {
+ /* low set of cores and req coherent group */
+ u64 group0_mask = kbdev->gpu_props.props.coherency_info.group[0].core_mask;
+ u64 low_coh_bitmap = low_bitmap & group0_mask;
+ *affinity = low_coh_bitmap;
+ }
+ else
+ {
+ /* low set of cores and does not req coherent group */
+ *affinity = low_bitmap;
+ }
+ }
+ }
+
+ OSK_ASSERT(*affinity != 0);
+}
+
+OSK_STATIC_INLINE mali_bool kbase_js_affinity_is_violating( kbase_device *kbdev, u64 *affinities )
+{
+ /* This implementation checks whether:
+ * - the two slots involved in Generic thread creation have intersecting affinity
+ * - Cores for the fragment slot (slot 0) would compete with cores for slot 2 when NSS atoms are in use.
+	 * - This is due to micro-architectural issues where a job in slot A targeting
+ * cores used by slot B could prevent the job in slot B from making progress
+ * until the job in slot A has completed.
+ * - In our case, when slot 2 is used for batch/NSS atoms, the affinity
+ * intersecting with slot 0 would cause fragment atoms to be delayed by the batch/NSS
+ * atoms.
+ *
+ * @note It just so happens that these restrictions also allow
+ * BASE_HW_ISSUE_8987 to be worked around by placing on job slot 2 the
+ * atoms from ctxs with KBASE_CTX_FLAG_HINT_ONLY_COMPUTE flag set
+ */
+ u64 affinity_set_left;
+ u64 affinity_set_right;
+ u64 intersection;
+ OSK_ASSERT( affinities != NULL );
+
+ affinity_set_left = affinities[1];
+
+ if ( kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8987)
+ || kbasep_js_ctx_attr_is_attr_on_runpool(kbdev, KBASEP_JS_CTX_ATTR_NSS) != MALI_FALSE )
+ {
+ /* The left set also includes those on the Fragment slot when:
+ * - We are using the HW workaround for BASE_HW_ISSUE_8987
+ * - We're in NSS state - to prevent NSS atoms using the same cores as Fragment atoms */
+ affinity_set_left |= affinities[0];
+ }
+
+ affinity_set_right = affinities[2];
+
+ /* A violation occurs when any bit in the left_set is also in the right_set */
+ intersection = affinity_set_left & affinity_set_right;
+
+ return (mali_bool)( intersection != (u64)0u );
+}
+
+mali_bool kbase_js_affinity_would_violate( kbase_device *kbdev, int js, u64 affinity )
+{
+ kbasep_js_device_data *js_devdata;
+ u64 new_affinities[BASE_JM_MAX_NR_SLOTS];
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( js < BASE_JM_MAX_NR_SLOTS );
+ js_devdata = &kbdev->js_data;
+
+ OSK_MEMCPY( new_affinities, js_devdata->runpool_irq.slot_affinities, sizeof(js_devdata->runpool_irq.slot_affinities) );
+
+ new_affinities[ js ] |= affinity;
+
+ return kbase_js_affinity_is_violating( kbdev, new_affinities );
+}
+
+void kbase_js_affinity_retain_slot_cores( kbase_device *kbdev, int js, u64 affinity )
+{
+ kbasep_js_device_data *js_devdata;
+ u64 cores;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( js < BASE_JM_MAX_NR_SLOTS );
+ js_devdata = &kbdev->js_data;
+
+ OSK_ASSERT( kbase_js_affinity_would_violate( kbdev, js, affinity ) == MALI_FALSE );
+
+ cores = affinity;
+ while (cores)
+ {
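+		/* Pick off the highest set bit first: osk_clz_64() counts leading
+		 * zeros, so (63 - clz) is the index of the top set core bit */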
+ int bitnum = 63 - osk_clz_64(cores);
+ u64 bit = 1ULL << bitnum;
+ s8 cnt;
+
+ OSK_ASSERT( js_devdata->runpool_irq.slot_affinity_refcount[ js ][bitnum] < BASE_JM_SUBMIT_SLOTS );
+
+ cnt = ++(js_devdata->runpool_irq.slot_affinity_refcount[ js ][bitnum]);
+
+ if ( cnt == 1 )
+ {
+ js_devdata->runpool_irq.slot_affinities[js] |= bit;
+ }
+
+ cores &= ~bit;
+ }
+
+}
+
+void kbase_js_affinity_release_slot_cores( kbase_device *kbdev, int js, u64 affinity )
+{
+ kbasep_js_device_data *js_devdata;
+ u64 cores;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( js < BASE_JM_MAX_NR_SLOTS );
+ js_devdata = &kbdev->js_data;
+
+ cores = affinity;
+ while (cores)
+ {
+ int bitnum = 63 - osk_clz_64(cores);
+ u64 bit = 1ULL << bitnum;
+ s8 cnt;
+
+ OSK_ASSERT( js_devdata->runpool_irq.slot_affinity_refcount[ js ][bitnum] > 0 );
+
+ cnt = --(js_devdata->runpool_irq.slot_affinity_refcount[ js ][bitnum]);
+
+ if (0 == cnt)
+ {
+ js_devdata->runpool_irq.slot_affinities[js] &= ~bit;
+ }
+
+ cores &= ~bit;
+ }
+
+}
+
+void kbase_js_affinity_slot_blocked_an_atom( kbase_device *kbdev, int js )
+{
+ kbasep_js_device_data *js_devdata;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( js < BASE_JM_MAX_NR_SLOTS );
+ js_devdata = &kbdev->js_data;
+
+ js_devdata->runpool_irq.slots_blocked_on_affinity |= 1u << js;
+}
+
+void kbase_js_affinity_submit_to_blocked_slots( kbase_device *kbdev )
+{
+ kbasep_js_device_data *js_devdata;
+ u16 slots;
+
+ OSK_ASSERT( kbdev != NULL );
+ js_devdata = &kbdev->js_data;
+
+ osk_mutex_lock( &js_devdata->runpool_mutex );
+
+ /* Must take a copy because submitting jobs will update this member
+ * We don't take a lock here - a data barrier was issued beforehand */
+ slots = js_devdata->runpool_irq.slots_blocked_on_affinity;
+ while (slots)
+ {
+ int bitnum = 31 - osk_clz(slots);
+ u16 bit = 1u << bitnum;
+ slots &= ~bit;
+
+ KBASE_TRACE_ADD_SLOT( kbdev, JS_AFFINITY_SUBMIT_TO_BLOCKED, NULL, NULL, 0u, bitnum);
+
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+		/* must update this before we submit, in case it's set again */
+ js_devdata->runpool_irq.slots_blocked_on_affinity &= ~bit;
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+
+ kbasep_js_try_run_next_job_on_slot( kbdev, bitnum );
+
+ /* Don't re-read slots_blocked_on_affinity after this - it could loop for a long time */
+ }
+ osk_mutex_unlock( &js_devdata->runpool_mutex );
+
+}
+
+#if MALI_DEBUG != 0
+void kbase_js_debug_log_current_affinities( kbase_device *kbdev )
+{
+ kbasep_js_device_data *js_devdata;
+ int slot_nr;
+
+ OSK_ASSERT( kbdev != NULL );
+ js_devdata = &kbdev->js_data;
+
+ for ( slot_nr = 0; slot_nr < 3 ; ++slot_nr )
+ {
+ KBASE_TRACE_ADD_SLOT_INFO( kbdev, JS_AFFINITY_CURRENT, NULL, NULL, 0u, slot_nr, (u32)js_devdata->runpool_irq.slot_affinities[slot_nr] );
+ }
+}
+#endif /* MALI_DEBUG != 0 */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_js_affinity.h
+ * Affinity Manager internal APIs.
+ */
+
+#ifndef _KBASE_JS_AFFINITY_H_
+#define _KBASE_JS_AFFINITY_H_
+
+
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_kbase_api
+ * @{
+ */
+
+
+/**
+ * @addtogroup kbase_js_affinity Affinity Manager internal APIs.
+ * @{
+ *
+ */
+
+
+/**
+ * @brief Decide whether it is possible to submit a job to a particular job slot in the current state
+ *
+ * Checks whether submitting to the given job slot is allowed in the current
+ * state. For example, using job slot 2 whilst in soft-stoppable state with
+ * only 1 coregroup is not allowed by the policy. This function should be
+ * called prior to submitting a job to a slot, to make sure the policy rules
+ * are not violated.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must hold kbasep_js_device_data::runpool_irq::lock
+ *
+ * @param kbdev The kbase device structure of the device
+ * @param js    Job slot number to check for allowance
+ */
+mali_bool kbase_js_can_run_job_on_slot_no_lock( kbase_device *kbdev, int js );
+
+/**
+ * @brief Compute the affinity for a given job.
+ *
+ * Currently assumes an all-on/all-off power management policy.
+ * Also assumes there is at least one core with a tiler available.
+ * Will try to produce an even distribution of cores for SS and
+ * NSS jobs. SS jobs will be given cores starting from core-group
+ * 0 forward to n. NSS jobs will be given cores from core-group n
+ * backwards to 0. This way, for example on a T658, SS jobs will
+ * tend to run on cores from core-group 0 and NSS jobs will tend
+ * to run on cores from core-group 1.
+ * An assertion is raised if the computed affinity is 0.
+ *
+ * @param[out] affinity Affinity bitmap computed
+ * @param kbdev    The kbase device structure of the device
+ * @param katom    Job chain for which the affinity is to be found
+ * @param js       Slot the job chain is being submitted to
+ */
+void kbase_js_choose_affinity( u64 *affinity, kbase_device *kbdev, kbase_jd_atom *katom, int js );
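+
+/* Illustrative sketch only (not part of this API): the typical call pattern,
+ * assuming 'katom' is about to be submitted to slot 'js':
+ *
+ * @code
+ * u64 affinity;
+ * kbase_js_choose_affinity( &affinity, kbdev, katom, js );
+ * // 'affinity' now holds the core mask to submit the atom with
+ * @endcode
+ */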
+
+/**
+ * @brief Determine whether a proposed \a affinity on job slot \a js would
+ * cause a violation of affinity restrictions.
+ *
+ * The following locks must be held by the caller:
+ * - kbasep_js_device_data::runpool_irq::lock
+ */
+mali_bool kbase_js_affinity_would_violate( kbase_device *kbdev, int js, u64 affinity );
+
+/**
+ * @brief Affinity tracking: retain cores used by a slot
+ *
+ * The following locks must be held by the caller:
+ * - kbasep_js_device_data::runpool_irq::lock
+ */
+void kbase_js_affinity_retain_slot_cores( kbase_device *kbdev, int js, u64 affinity );
+
+/**
+ * @brief Affinity tracking: release cores used by a slot
+ *
+ * Cores \b must be released as soon as a job is dequeued from a slot's 'submit
+ * slots', and before another job is submitted to those slots. Otherwise, the
+ * refcount could exceed the maximum number submittable to a slot,
+ * BASE_JM_SUBMIT_SLOTS.
+ *
+ * The following locks must be held by the caller:
+ * - kbasep_js_device_data::runpool_irq::lock
+ */
+void kbase_js_affinity_release_slot_cores( kbase_device *kbdev, int js, u64 affinity );
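+
+/* A minimal sketch of the expected pairing of the affinity-tracking calls,
+ * assuming the caller holds kbasep_js_device_data::runpool_irq::lock and
+ * 'affinity' came from kbase_js_choose_affinity():
+ *
+ * @code
+ * if ( kbase_js_affinity_would_violate( kbdev, js, affinity ) == MALI_FALSE )
+ * {
+ *     kbase_js_affinity_retain_slot_cores( kbdev, js, affinity );
+ *     // ... submit the atom to slot 'js' ...
+ * }
+ * // later, as soon as the atom is dequeued from the slot's submit slots:
+ * kbase_js_affinity_release_slot_cores( kbdev, js, affinity );
+ * @endcode
+ */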
+
+/**
+ * @brief Register a slot as blocking atoms due to affinity violations
+ *
+ * Once a slot has been registered, we must check after every atom completion
+ * (including those on different slots) to see if the slot can be
+ * unblocked. This is done by calling
+ * kbase_js_affinity_submit_to_blocked_slots(), which will also deregister the
+ * slot if it no longer blocks atoms due to affinity violations.
+ *
+ * The following locks must be held by the caller:
+ * - kbasep_js_device_data::runpool_irq::lock
+ */
+void kbase_js_affinity_slot_blocked_an_atom( kbase_device *kbdev, int js );
+
+/**
+ * @brief Submit to job slots that have registered that an atom was blocked on
+ * the slot previously due to affinity violations.
+ *
+ * This submits to all slots registered by
+ * kbase_js_affinity_slot_blocked_an_atom(). If submission succeeded, then the
+ * slot is deregistered as having blocked atoms due to affinity
+ * violations. Otherwise it stays registered, and the next atom to complete
+ * must attempt to submit to the blocked slots again.
+ *
+ * The following locking conditions are made on the caller:
+ * - it must \em not hold kbasep_js_device_data::runpool_mutex (as this will be
+ * obtained internally)
+ * - it must \em not hold kbdev->jm_slots[ \a js ].lock (as this will be
+ * obtained internally)
+ * - it must \em not hold kbasep_js_device_data::runpool_irq::lock, (as this will be
+ * obtained internally)
+ */
+void kbase_js_affinity_submit_to_blocked_slots( kbase_device *kbdev );
+
+/**
+ * @brief Output to the Trace log the current tracked affinities on all slots
+ */
+#if MALI_DEBUG != 0
+void kbase_js_debug_log_current_affinities( kbase_device *kbdev );
+#else /* MALI_DEBUG != 0 */
+OSK_STATIC_INLINE void kbase_js_debug_log_current_affinities( kbase_device *kbdev )
+{
+}
+#endif /* MALI_DEBUG != 0 */
+
+/** @} */ /* end group kbase_js_affinity */
+/** @} */ /* end group base_kbase_api */
+/** @} */ /* end group base_api */
+
+
+
+
+
+#endif /* _KBASE_JS_AFFINITY_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+#include <kbase/src/common/mali_kbase.h>
+
+/*
+ * Private functions follow
+ */
+
+/**
+ * @brief Check whether a ctx has a certain attribute, and if so, retain that
+ * attribute on the runpool.
+ *
+ * Requires:
+ * - jsctx mutex
+ * - runpool_irq spinlock
+ * - ctx is scheduled on the runpool
+ *
+ * @return MALI_TRUE indicates a change in ctx attributes state of the runpool.
+ * In this state, the scheduler might be able to submit more jobs than
+ * previously, and so the caller should ensure kbasep_js_try_run_next_job() is
+ * called sometime later.
+ * @return MALI_FALSE indicates no change in ctx attributes state of the runpool.
+ */
+STATIC mali_bool kbasep_js_ctx_attr_runpool_retain_attr( kbase_device *kbdev,
+ kbase_context *kctx,
+ kbasep_js_ctx_attr attribute )
+{
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_kctx_info *js_kctx_info;
+ mali_bool runpool_state_changed = MALI_FALSE;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+ OSK_ASSERT( attribute < KBASEP_JS_CTX_ATTR_COUNT );
+ js_devdata = &kbdev->js_data;
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ OSK_ASSERT( js_kctx_info->ctx.is_scheduled != MALI_FALSE );
+
+ if ( kbasep_js_ctx_attr_is_attr_on_ctx( kctx, attribute ) != MALI_FALSE )
+ {
+ OSK_ASSERT( js_devdata->runpool_irq.ctx_attr_ref_count[attribute] < S8_MAX );
+ ++(js_devdata->runpool_irq.ctx_attr_ref_count[attribute]);
+
+ if ( js_devdata->runpool_irq.ctx_attr_ref_count[attribute] == 1 )
+ {
+ /* First refcount indicates a state change */
+ runpool_state_changed = MALI_TRUE;
+ KBASE_TRACE_ADD( kbdev, JS_CTX_ATTR_NOW_ON_RUNPOOL, kctx, NULL, 0u, attribute );
+ }
+ }
+
+ return runpool_state_changed;
+}
+
+/**
+ * @brief Check whether a ctx has a certain attribute, and if so, release that
+ * attribute on the runpool.
+ *
+ * Requires:
+ * - jsctx mutex
+ * - runpool_irq spinlock
+ * - ctx is scheduled on the runpool
+ *
+ * @return MALI_TRUE indicates a change in ctx attributes state of the runpool.
+ * In this state, the scheduler might be able to submit more jobs than
+ * previously, and so the caller should ensure kbasep_js_try_run_next_job() is
+ * called sometime later.
+ * @return MALI_FALSE indicates no change in ctx attributes state of the runpool.
+ */
+STATIC mali_bool kbasep_js_ctx_attr_runpool_release_attr( kbase_device *kbdev,
+ kbase_context *kctx,
+ kbasep_js_ctx_attr attribute )
+{
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_kctx_info *js_kctx_info;
+ mali_bool runpool_state_changed = MALI_FALSE;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+ OSK_ASSERT( attribute < KBASEP_JS_CTX_ATTR_COUNT );
+ js_devdata = &kbdev->js_data;
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ OSK_ASSERT( js_kctx_info->ctx.is_scheduled != MALI_FALSE );
+
+ if ( kbasep_js_ctx_attr_is_attr_on_ctx( kctx, attribute ) != MALI_FALSE )
+ {
+ OSK_ASSERT( js_devdata->runpool_irq.ctx_attr_ref_count[attribute] > 0 );
+ --(js_devdata->runpool_irq.ctx_attr_ref_count[attribute]);
+
+ if ( js_devdata->runpool_irq.ctx_attr_ref_count[attribute] == 0 )
+ {
+ /* Last de-refcount indicates a state change */
+ runpool_state_changed = MALI_TRUE;
+ KBASE_TRACE_ADD( kbdev, JS_CTX_ATTR_NOW_OFF_RUNPOOL, kctx, NULL, 0u, attribute );
+ }
+ }
+
+ return runpool_state_changed;
+}
+
+/**
+ * @brief Retain a certain attribute on a ctx, also retaining it on the runpool
+ * if the context is scheduled.
+ *
+ * Requires:
+ * - jsctx mutex
+ * - If the context is scheduled, then runpool_irq spinlock must also be held
+ *
+ * @return MALI_TRUE indicates a change in ctx attributes state of the runpool.
+ * This may allow the scheduler to submit more jobs than previously.
+ * @return MALI_FALSE indicates no change in ctx attributes state of the runpool.
+ */
+STATIC mali_bool kbasep_js_ctx_attr_ctx_retain_attr( kbase_device *kbdev,
+ kbase_context *kctx,
+ kbasep_js_ctx_attr attribute )
+{
+ kbasep_js_kctx_info *js_kctx_info;
+ mali_bool runpool_state_changed = MALI_FALSE;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+ OSK_ASSERT( attribute < KBASEP_JS_CTX_ATTR_COUNT );
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ OSK_ASSERT( js_kctx_info->ctx.ctx_attr_ref_count[ attribute ] < U32_MAX );
+
+ ++(js_kctx_info->ctx.ctx_attr_ref_count[ attribute ]);
+
+ if ( js_kctx_info->ctx.is_scheduled != MALI_FALSE
+ && js_kctx_info->ctx.ctx_attr_ref_count[ attribute ] == 1 )
+ {
+		/* Only ref-count the attribute on the runpool the first time this context sees this attribute */
+ KBASE_TRACE_ADD( kbdev, JS_CTX_ATTR_NOW_ON_CTX, kctx, NULL, 0u, attribute );
+ runpool_state_changed = kbasep_js_ctx_attr_runpool_retain_attr( kbdev, kctx, attribute );
+ }
+
+ return runpool_state_changed;
+}
+
+/**
+ * @brief Release a certain attribute on a ctx, also releasing it from the runpool
+ * if the context is scheduled.
+ *
+ * Requires:
+ * - jsctx mutex
+ * - If the context is scheduled, then runpool_irq spinlock must also be held
+ *
+ * @return MALI_TRUE indicates a change in ctx attributes state of the runpool.
+ * This may allow the scheduler to submit more jobs than previously.
+ * @return MALI_FALSE indicates no change in ctx attributes state of the runpool.
+ */
+STATIC mali_bool kbasep_js_ctx_attr_ctx_release_attr( kbase_device *kbdev,
+ kbase_context *kctx,
+ kbasep_js_ctx_attr attribute )
+{
+ kbasep_js_kctx_info *js_kctx_info;
+ mali_bool runpool_state_changed = MALI_FALSE;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+ OSK_ASSERT( attribute < KBASEP_JS_CTX_ATTR_COUNT );
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ OSK_ASSERT( js_kctx_info->ctx.ctx_attr_ref_count[ attribute ] > 0 );
+
+ if ( js_kctx_info->ctx.is_scheduled != MALI_FALSE
+ && js_kctx_info->ctx.ctx_attr_ref_count[ attribute ] == 1 )
+ {
+ /* Only de-ref-count the attribute on the runpool when this is the last ctx-reference to it */
+ runpool_state_changed = kbasep_js_ctx_attr_runpool_release_attr( kbdev, kctx, attribute );
+ KBASE_TRACE_ADD( kbdev, JS_CTX_ATTR_NOW_OFF_CTX, kctx, NULL, 0u, attribute );
+ }
+
+	/* De-ref must happen afterwards, because kbasep_js_ctx_attr_runpool_release_attr() needs to check it too */
+ --(js_kctx_info->ctx.ctx_attr_ref_count[ attribute ]);
+
+ return runpool_state_changed;
+}
+
+
+/*
+ * More commonly used public functions
+ */
+
+void kbasep_js_ctx_attr_set_initial_attrs( kbase_device *kbdev,
+ kbase_context *kctx )
+{
+ kbasep_js_kctx_info *js_kctx_info;
+ mali_bool runpool_state_changed = MALI_FALSE;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ if ( (js_kctx_info->ctx.flags & KBASE_CTX_FLAG_SUBMIT_DISABLED) != MALI_FALSE )
+ {
+ /* This context never submits, so don't track any scheduling attributes */
+ return;
+ }
+
+ /* Transfer attributes held in the context flags for contexts that have submit enabled */
+
+ if ( (js_kctx_info->ctx.flags & KBASE_CTX_FLAG_HINT_ONLY_COMPUTE) != MALI_FALSE )
+ {
+ /* Compute context */
+ runpool_state_changed |= kbasep_js_ctx_attr_ctx_retain_attr( kbdev, kctx, KBASEP_JS_CTX_ATTR_COMPUTE );
+ }
+ /* NOTE: Whether this is a non-compute context depends on the jobs being
+ * run, e.g. it might be submitting jobs with BASE_JD_REQ_ONLY_COMPUTE */
+
+ /* ... More attributes can be added here ... */
+
+ /* The context should not have been scheduled yet, so ASSERT if this caused
+ * runpool state changes (note that other threads *can't* affect the value
+ * of runpool_state_changed, due to how it's calculated) */
+ OSK_ASSERT( runpool_state_changed == MALI_FALSE );
+ CSTD_UNUSED( runpool_state_changed );
+}
+
+void kbasep_js_ctx_attr_runpool_retain_ctx( kbase_device *kbdev,
+ kbase_context *kctx )
+{
+ mali_bool runpool_state_changed;
+ int i;
+
+ /* Retain any existing attributes */
+ for ( i = 0 ; i < KBASEP_JS_CTX_ATTR_COUNT; ++i )
+ {
+ if ( kbasep_js_ctx_attr_is_attr_on_ctx( kctx, (kbasep_js_ctx_attr)i ) != MALI_FALSE )
+ {
+ /* The context is being scheduled in, so update the runpool with the new attributes */
+ runpool_state_changed = kbasep_js_ctx_attr_runpool_retain_attr( kbdev, kctx, (kbasep_js_ctx_attr)i );
+
+			/* We don't need to know whether the state changed, because retaining a
+ * context occurs on scheduling it, and that itself will also try
+ * to run new atoms */
+ CSTD_UNUSED( runpool_state_changed );
+ }
+ }
+}
+
+mali_bool kbasep_js_ctx_attr_runpool_release_ctx( kbase_device *kbdev,
+ kbase_context *kctx )
+{
+ mali_bool runpool_state_changed = MALI_FALSE;
+ int i;
+
+ /* Release any existing attributes */
+ for ( i = 0 ; i < KBASEP_JS_CTX_ATTR_COUNT; ++i )
+ {
+ if ( kbasep_js_ctx_attr_is_attr_on_ctx( kctx, (kbasep_js_ctx_attr)i ) != MALI_FALSE )
+ {
+ /* The context is being scheduled out, so update the runpool on the removed attributes */
+ runpool_state_changed |= kbasep_js_ctx_attr_runpool_release_attr( kbdev, kctx, (kbasep_js_ctx_attr)i );
+ }
+ }
+
+ return runpool_state_changed;
+}
+
+void kbasep_js_ctx_attr_ctx_retain_atom( kbase_device *kbdev,
+ kbase_context *kctx,
+ kbase_jd_atom *katom )
+{
+ mali_bool runpool_state_changed = MALI_FALSE;
+ base_jd_core_req core_req;
+
+ OSK_ASSERT( katom );
+ core_req = katom->core_req;
+
+ if ( core_req & BASE_JD_REQ_NSS )
+ {
+ runpool_state_changed |= kbasep_js_ctx_attr_ctx_retain_attr( kbdev, kctx, KBASEP_JS_CTX_ATTR_NSS );
+ }
+
+ if ( core_req & BASE_JD_REQ_ONLY_COMPUTE )
+ {
+ runpool_state_changed |= kbasep_js_ctx_attr_ctx_retain_attr( kbdev, kctx, KBASEP_JS_CTX_ATTR_COMPUTE );
+ }
+ else
+ {
+ runpool_state_changed |= kbasep_js_ctx_attr_ctx_retain_attr( kbdev, kctx, KBASEP_JS_CTX_ATTR_NON_COMPUTE );
+ }
+
+ if ( (core_req & ( BASE_JD_REQ_CS | BASE_JD_REQ_ONLY_COMPUTE | BASE_JD_REQ_T)) != 0
+ && (core_req & ( BASE_JD_REQ_COHERENT_GROUP | BASE_JD_REQ_SPECIFIC_COHERENT_GROUP )) == 0 )
+ {
+ /* Atom that can run on slot1 or slot2, and can use all cores */
+ runpool_state_changed |= kbasep_js_ctx_attr_ctx_retain_attr( kbdev, kctx, KBASEP_JS_CTX_ATTR_COMPUTE_ALL_CORES );
+ }
+
+	/* We don't need to know whether the state changed, because retaining an
+ * atom occurs on adding it, and that itself will also try to run
+ * new atoms */
+ CSTD_UNUSED( runpool_state_changed );
+}
+
+mali_bool kbasep_js_ctx_attr_ctx_release_atom( kbase_device *kbdev,
+ kbase_context *kctx,
+ kbase_jd_atom *katom )
+{
+ mali_bool runpool_state_changed = MALI_FALSE;
+ base_jd_core_req core_req;
+
+ OSK_ASSERT( katom );
+ core_req = katom->core_req;
+
+ if ( core_req & BASE_JD_REQ_NSS )
+ {
+ runpool_state_changed |= kbasep_js_ctx_attr_ctx_release_attr( kbdev, kctx, KBASEP_JS_CTX_ATTR_NSS );
+ }
+
+ if ( core_req & BASE_JD_REQ_ONLY_COMPUTE )
+ {
+ runpool_state_changed |= kbasep_js_ctx_attr_ctx_release_attr( kbdev, kctx, KBASEP_JS_CTX_ATTR_COMPUTE );
+ }
+ else
+ {
+ runpool_state_changed |= kbasep_js_ctx_attr_ctx_release_attr( kbdev, kctx, KBASEP_JS_CTX_ATTR_NON_COMPUTE );
+ }
+
+ if ( (core_req & ( BASE_JD_REQ_CS | BASE_JD_REQ_ONLY_COMPUTE | BASE_JD_REQ_T )) != 0
+ && (core_req & ( BASE_JD_REQ_COHERENT_GROUP | BASE_JD_REQ_SPECIFIC_COHERENT_GROUP )) == 0 )
+ {
+ /* Atom that can run on slot1 or slot2, and can use all cores */
+ runpool_state_changed |= kbasep_js_ctx_attr_ctx_release_attr( kbdev, kctx, KBASEP_JS_CTX_ATTR_COMPUTE_ALL_CORES );
+ }
+
+
+ return runpool_state_changed;
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_js_ctx_attr.h
+ * Job Scheduler Context Attribute APIs
+ */
+
+
+#ifndef _KBASE_JS_CTX_ATTR_H_
+#define _KBASE_JS_CTX_ATTR_H_
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_kbase_api
+ * @{
+ */
+
+/**
+ * @addtogroup kbase_js
+ * @{
+ */
+
+/**
+ * Set the initial attributes of a context (when context create flags are set)
+ *
+ * Requires:
+ * - Hold the jsctx_mutex
+ */
+void kbasep_js_ctx_attr_set_initial_attrs( kbase_device *kbdev,
+ kbase_context *kctx );
+
+/**
+ * Retain all attributes of a context
+ *
+ * This occurs on scheduling in the context on the runpool (but after
+ * is_scheduled is set)
+ *
+ * Requires:
+ * - jsctx mutex
+ * - runpool_irq spinlock
+ * - ctx->is_scheduled is true
+ */
+void kbasep_js_ctx_attr_runpool_retain_ctx( kbase_device *kbdev,
+ kbase_context *kctx );
+
+/**
+ * Release all attributes of a context
+ *
+ * This occurs on scheduling out the context from the runpool (but before
+ * is_scheduled is cleared)
+ *
+ * Requires:
+ * - jsctx mutex
+ * - runpool_irq spinlock
+ * - ctx->is_scheduled is true
+ *
+ * @return MALI_TRUE indicates a change in ctx attributes state of the runpool.
+ * In this state, the scheduler might be able to submit more jobs than
+ * previously, and so the caller should ensure kbasep_js_try_run_next_job() is
+ * called sometime later.
+ * @return MALI_FALSE indicates no change in ctx attributes state of the runpool.
+ */
+mali_bool kbasep_js_ctx_attr_runpool_release_ctx( kbase_device *kbdev,
+ kbase_context *kctx );
+
+/**
+ * Retain all attributes of an atom
+ *
+ * This occurs on adding an atom to a context
+ *
+ * Requires:
+ * - jsctx mutex
+ * - If the context is scheduled, then runpool_irq spinlock must also be held
+ */
+void kbasep_js_ctx_attr_ctx_retain_atom( kbase_device *kbdev,
+ kbase_context *kctx,
+ kbase_jd_atom *katom );
+
+/**
+ * Release all attributes of an atom
+ *
+ * This occurs on (permanently) removing an atom from a context
+ *
+ * Requires:
+ * - jsctx mutex
+ * - If the context is scheduled, then runpool_irq spinlock must also be held
+ *
+ * @return MALI_TRUE indicates a change in ctx attributes state of the runpool.
+ * In this state, the scheduler might be able to submit more jobs than
+ * previously, and so the caller should ensure kbasep_js_try_run_next_job() is
+ * called sometime later.
+ * @return MALI_FALSE indicates no change in ctx attributes state of the runpool.
+ */
+mali_bool kbasep_js_ctx_attr_ctx_release_atom( kbase_device *kbdev,
+ kbase_context *kctx,
+ kbase_jd_atom *katom );
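+
+/* Illustrative sketch (assuming the locking requirements above are met):
+ * attribute refcounts stay balanced by pairing the retain on atom-add with a
+ * release on atom-removal:
+ *
+ * @code
+ * kbasep_js_ctx_attr_ctx_retain_atom( kbdev, kctx, katom );    // on add
+ * // ... the atom runs and completes ...
+ * if ( kbasep_js_ctx_attr_ctx_release_atom( kbdev, kctx, katom ) != MALI_FALSE )
+ * {
+ *     // runpool attribute state changed: more jobs might now be submittable,
+ *     // so ensure kbasep_js_try_run_next_job() is called sometime later
+ * }
+ * @endcode
+ */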
+
+/**
+ * Requires:
+ * - runpool_irq spinlock
+ */
+OSK_STATIC_INLINE s8 kbasep_js_ctx_attr_count_on_runpool( kbase_device *kbdev, kbasep_js_ctx_attr attribute )
+{
+ kbasep_js_device_data *js_devdata;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( attribute < KBASEP_JS_CTX_ATTR_COUNT );
+ js_devdata = &kbdev->js_data;
+
+ return js_devdata->runpool_irq.ctx_attr_ref_count[attribute];
+}
+
+
+/**
+ * Requires:
+ * - runpool_irq spinlock
+ */
+OSK_STATIC_INLINE mali_bool kbasep_js_ctx_attr_is_attr_on_runpool( kbase_device *kbdev, kbasep_js_ctx_attr attribute )
+{
+ /* In general, attributes are 'on' when they have a non-zero refcount (note: the refcount will never be < 0) */
+ return (mali_bool)kbasep_js_ctx_attr_count_on_runpool( kbdev, attribute );
+}
+
+/**
+ * Requires:
+ * - jsctx mutex
+ */
+OSK_STATIC_INLINE mali_bool kbasep_js_ctx_attr_is_attr_on_ctx( kbase_context *kctx, kbasep_js_ctx_attr attribute )
+{
+ kbasep_js_kctx_info *js_kctx_info;
+
+ OSK_ASSERT( kctx != NULL );
+ OSK_ASSERT( attribute < KBASEP_JS_CTX_ATTR_COUNT );
+ js_kctx_info = &kctx->jctx.sched_info;
+
+ /* In general, attributes are 'on' when they have a refcount (which should never be < 0) */
+	return (mali_bool)( js_kctx_info->ctx.ctx_attr_ref_count[ attribute ] != 0 );
+}
+
+/** @} */ /* end group kbase_js */
+/** @} */ /* end group base_kbase_api */
+/** @} */ /* end group base_api */
+
+#endif /* _KBASE_JS_CTX_ATTR_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_js_defs.h
+ * Job Scheduler Type Definitions
+ */
+
+
+#ifndef _KBASE_JS_DEFS_H_
+#define _KBASE_JS_DEFS_H_
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_kbase_api
+ * @{
+ */
+
+/**
+ * @addtogroup kbase_js
+ * @{
+ */
+
+/* Types used by the policies must go here */
+enum
+{
+ /** Context has had its creation flags set */
+ KBASE_CTX_FLAG_CREATE_FLAGS_SET = (1u << 0),
+
+ /** Context will not submit any jobs */
+ KBASE_CTX_FLAG_SUBMIT_DISABLED = (1u << 1),
+
+ /** Set if the context uses an address space and should be kept scheduled in */
+ KBASE_CTX_FLAG_PRIVILEGED = (1u << 2),
+
+ /** Kernel-side equivalent of BASE_CONTEXT_HINT_ONLY_COMPUTE. Non-mutable after creation flags set */
+	KBASE_CTX_FLAG_HINT_ONLY_COMPUTE = (1u << 3)
+
+ /* NOTE: Add flags for other things, such as 'is scheduled', and 'is dying' */
+};
+
+typedef u32 kbase_context_flags;
+
+typedef struct kbasep_atom_req
+{
+ base_jd_core_req core_req;
+ kbase_context_flags ctx_req;
+ u32 device_nr;
+} kbasep_atom_req;
+
+#include "mali_kbase_js_policy_cfs.h"
+
+
+/* Wrapper Interface - doxygen is elsewhere */
+typedef union kbasep_js_policy
+{
+#ifdef KBASE_JS_POLICY_AVAILABLE_FCFS
+ kbasep_js_policy_fcfs fcfs;
+#endif
+#ifdef KBASE_JS_POLICY_AVAILABLE_CFS
+ kbasep_js_policy_cfs cfs;
+#endif
+} kbasep_js_policy;
+
+/* Wrapper Interface - doxygen is elsewhere */
+typedef union kbasep_js_policy_ctx_info
+{
+#ifdef KBASE_JS_POLICY_AVAILABLE_FCFS
+ kbasep_js_policy_fcfs_ctx fcfs;
+#endif
+#ifdef KBASE_JS_POLICY_AVAILABLE_CFS
+ kbasep_js_policy_cfs_ctx cfs;
+#endif
+} kbasep_js_policy_ctx_info;
+
+/* Wrapper Interface - doxygen is elsewhere */
+typedef union kbasep_js_policy_job_info
+{
+#ifdef KBASE_JS_POLICY_AVAILABLE_FCFS
+ kbasep_js_policy_fcfs_job fcfs;
+#endif
+#ifdef KBASE_JS_POLICY_AVAILABLE_CFS
+ kbasep_js_policy_cfs_job cfs;
+#endif
+} kbasep_js_policy_job_info;
+
+/**
+ * @brief Maximum number of jobs that can be submitted to a job slot whilst
+ * inside the IRQ handler.
+ *
+ * This is important because GPU NULL jobs can complete whilst the IRQ handler
+ * is running. Without this limit, an unlimited number of GPU NULL jobs could
+ * be submitted inside the IRQ handler, which would increase IRQ latency.
+ */
+#define KBASE_JS_MAX_JOB_SUBMIT_PER_SLOT_PER_IRQ 2
+
+/**
+ * @brief the IRQ_THROTTLE time in microseconds
+ *
+ * This will be converted via the GPU's clock frequency into a cycle-count.
+ *
+ * @note we can make an estimate of the GPU's frequency by periodically
+ * sampling its CYCLE_COUNT register
+ */
+#define KBASE_JS_IRQ_THROTTLE_TIME_US 20
+
+/**
+ * @brief Context attributes
+ *
+ * Each context attribute can be thought of as a boolean value that caches some
+ * state information about either the runpool, or the context:
+ * - In the case of the runpool, it is a cache of "Do any contexts owned by
+ * the runpool have attribute X?"
+ * - In the case of a context, it is a cache of "Do any atoms owned by the
+ * context have attribute X?"
+ *
+ * The boolean value of the context attributes often affect scheduling
+ * decisions, such as affinities to use and job slots to use.
+ *
+ * To accommodate changes of state in the context, each attribute is refcounted
+ * in the context, and in the runpool for all running contexts. Specifically:
+ * - The runpool holds a refcount of how many contexts in the runpool have this
+ * attribute.
+ * - The context holds a refcount of how many atoms have this attribute.
+ *
+ * Examples of use:
+ * - Finding out when NSS jobs are in the runpool
+ * - Finding out when there is a mix of @ref BASE_CONTEXT_HINT_ONLY_COMPUTE
+ * and ! @ref BASE_CONTEXT_HINT_ONLY_COMPUTE contexts in the runpool
+ */
+typedef enum
+{
+ /** Attribute indicating an NSS context */
+ KBASEP_JS_CTX_ATTR_NSS,
+
+ /** Attribute indicating a context that contains Compute jobs. That is,
+ * @ref BASE_CONTEXT_HINT_COMPUTE is \b set and/or the context has jobs of type
+ * @ref BASE_JD_REQ_ONLY_COMPUTE
+ *
+ * @note A context can be both 'Compute' and 'Non Compute' if it contains
+ * both types of jobs.
+ */
+ KBASEP_JS_CTX_ATTR_COMPUTE,
+
+ /** Attribute indicating a context that contains Non-Compute jobs. That is,
+ * the context has some jobs that are \b not of type @ref
+ * BASE_JD_REQ_ONLY_COMPUTE. The context usually has
+ * BASE_CONTEXT_HINT_COMPUTE \b clear, but this depends on the HW
+ * workarounds in use in the Job Scheduling Policy.
+ *
+ * @note A context can be both 'Compute' and 'Non Compute' if it contains
+ * both types of jobs.
+ */
+ KBASEP_JS_CTX_ATTR_NON_COMPUTE,
+
+ /** Attribute indicating that a context contains compute-job atoms that
+ * aren't restricted to a coherent group, and can run on all cores.
+ *
+ * Specifically, this is when the atom's \a core_req satisfy:
+ * - (\a core_req & (BASE_JD_REQ_CS | BASE_JD_REQ_ONLY_COMPUTE | BASE_JD_REQ_T)) // uses slot 1 or slot 2
+ * - && !(\a core_req & BASE_JD_REQ_COHERENT_GROUP) // not restricted to coherent groups
+ *
+ * Such atoms could be blocked from running if one of the coherent groups
+ * is being used by another job slot, so tracking this context attribute
+ * allows us to prevent such situations.
+ *
+ * @note This doesn't take into account the 1-coregroup case, where all
+ * compute atoms would effectively be able to run on 'all cores', but
+ * contexts will still not always get marked with this attribute. Instead,
+ * it is the caller's responsibility to take into account the number of
+ * coregroups when interpreting this attribute.
+ *
+ * @note Whilst Tiler atoms are normally combined with
+ * BASE_JD_REQ_COHERENT_GROUP, it is possible to send such atoms without
+ * BASE_JD_REQ_COHERENT_GROUP set. This is an unlikely case, but it's easy
+ * enough to handle anyway.
+ */
+ KBASEP_JS_CTX_ATTR_COMPUTE_ALL_CORES,
+
+ /** Must be the last in the enum */
+ KBASEP_JS_CTX_ATTR_COUNT
+} kbasep_js_ctx_attr;
+
+
+/**
+ * Data used by the scheduler that is unique for each Address Space.
+ *
+ * This is used in IRQ context and kbasep_js_device_data::runpool_irq::lock
+ * must be held whilst accessing this data (including reads and atomic
+ * decisions based on the read).
+ */
+typedef struct kbasep_js_per_as_data
+{
+ /**
+ * Ref count of whether this AS is busy, and must not be scheduled out
+ *
+ * When jobs are running this is always positive. However, it can still be
+ * positive when no jobs are running. If all you need is a heuristic to
+ * tell you whether jobs might be running, this should be sufficient.
+ */
+ int as_busy_refcount;
+
+ /** Pointer to the current context on this address space, or NULL for no context */
+ kbase_context *kctx;
+} kbasep_js_per_as_data;
+
+/**
+ * @brief KBase Device Data Job Scheduler sub-structure
+ *
+ * This encapsulates the current context of the Job Scheduler on a particular
+ * device. This context is global to the device, and is not tied to any
+ * particular kbase_context running on the device.
+ *
+ * nr_contexts_running, nr_nss_ctxs_running and as_free are
+ * optimized for packing together (by making them smaller types than u32). The
+ * operations on them should rarely involve masking. The use of signed types for
+ * arithmetic indicates to the compiler that the value will not roll over
+ * (which would be undefined behavior), and so the compiler is free to make
+ * optimizations based on that (i.e. to remove masking).
+ */
+typedef struct kbasep_js_device_data
+{
+ /** Sub-structure to collect together Job Scheduling data used in IRQ context */
+ struct
+ {
+ /**
+ * Lock for accessing Job Scheduling data used in IRQ context
+ *
+ * This lock must be held whenever this data is accessed (read, or
+ * write). Even for read-only access, memory barriers would be needed.
+ * In any case, it is likely that decisions based on only reading must
+ * also be atomic with respect to data held here and elsewhere in the
+ * Job Scheduler.
+ *
+ * This lock must also be held for accessing:
+ * - kbase_context::as_nr
+ * - Parts of the kbasep_js_policy, dependent on the policy (refer to
+ * the policy in question for more information)
+ * - Parts of kbasep_js_policy_ctx_info, dependent on the policy (refer to
+ * the policy in question for more information)
+ *
+ * If accessing a job slot at the same time, the slot's IRQ lock must
+ * be obtained first to respect lock ordering.
+ */
+ osk_spinlock_irq lock;
+
+ /** Bitvector indicating whether a currently scheduled context is allowed to submit jobs.
+ * When bit 'N' is set in this, it indicates whether the context bound to address space
+ * 'N' (per_as_data[N].kctx) is allowed to submit jobs.
+ *
+ * It is placed here because it's much more memory efficient than having a mali_bool8 in
+ * kbasep_js_per_as_data to store this flag */
+ u16 submit_allowed;
+
+ /** Context Attributes:
+ * Each is large enough to hold a refcount of the number of contexts
+ * that can fit into the runpool. This is currently BASE_MAX_NR_AS
+ *
+ * Note that when BASE_MAX_NR_AS==16 we need 5 bits (not 4) to store
+ * the refcount. Hence, it's not worthwhile reducing this to
+ * bit-manipulation on u32s to save space (where in contrast, 4 bit
+ * sub-fields would be easy to do and would save space).
+ *
+ * Whilst this must not become negative, the sign bit is used for:
+ * - error detection in debug builds
+ * - Optimization: it is undefined for a signed int to overflow, and so
+ * the compiler can optimize for that never happening (thus, no masking
+ * is required on updating the variable) */
+ s8 ctx_attr_ref_count[KBASEP_JS_CTX_ATTR_COUNT];
+
+ /** Data that is unique for each AS */
+ kbasep_js_per_as_data per_as_data[BASE_MAX_NR_AS];
+
+ /*
+ * Affinity management and tracking
+ */
+ /** Bitvector to aid affinity checking. Element 'n' bit 'i' indicates
+ * that slot 'n' is using core i (i.e. slot_affinity_refcount[n][i] > 0) */
+ u64 slot_affinities[BASE_JM_MAX_NR_SLOTS];
+ /** Bitvector indicating which slots \em might have atoms blocked on
+ * them because otherwise they'd violate affinity restrictions */
+ u16 slots_blocked_on_affinity;
+ /** Refcount for each core owned by each slot. Used to generate the
+ * slot_affinities array of bitvectors
+ *
+ * The value of the refcount will not exceed BASE_JM_SUBMIT_SLOTS,
+ * because it is refcounted only when a job is definitely about to be
+ * submitted to a slot, and is de-refcounted immediately after a job
+ * finishes */
+ s8 slot_affinity_refcount[BASE_JM_MAX_NR_SLOTS][64];
+
+ } runpool_irq;
+
+ /**
+ * Run Pool mutex, for managing contexts within the runpool.
+ * You must hold this lock whilst accessing any members that follow
+ *
+ * In addition, this is used to access:
+ * - the kbasep_js_kctx_info::runpool substructure
+ */
+ osk_mutex runpool_mutex;
+
+ /**
+ * Queue Lock, used to access the Policy's queue of contexts independently
+ * of the Run Pool.
+ *
+ * Of course, you don't need the Run Pool lock to access this.
+ */
+ osk_mutex queue_mutex;
+
+ u16 as_free; /**< Bitpattern of free Address Spaces */
+
+ /** Number of currently scheduled user contexts (excluding ones that are not submitting jobs) */
+ s8 nr_user_contexts_running;
+ /** Number of currently scheduled contexts (including ones that are not submitting jobs) */
+ s8 nr_all_contexts_running;
+
+ /**
+ * Policy-specific information.
+ *
+ * Refer to the structure defined by the current policy to determine which
+ * locks must be held when accessing this.
+ */
+ kbasep_js_policy policy;
+
+	/** Core Requirements to match up with base_js_atom's core_req member
+ * @note This is a write-once member, and so no locking is required to read */
+ base_jd_core_req js_reqs[BASE_JM_MAX_NR_SLOTS];
+
+ u32 scheduling_tick_ns; /**< Value for KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS */
+ u32 soft_stop_ticks; /**< Value for KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS */
+ u32 hard_stop_ticks_ss; /**< Value for KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS */
+ u32 hard_stop_ticks_nss; /**< Value for KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS */
+ u32 gpu_reset_ticks_ss; /**< Value for KBASE_CONFIG_ATTR_JS_RESET_TICKS_SS */
+ u32 gpu_reset_ticks_nss; /**< Value for KBASE_CONFIG_ATTR_JS_RESET_TICKS_NSS */
+ u32 ctx_timeslice_ns; /**< Value for KBASE_CONFIG_ATTR_JS_CTX_TIMESLICE_NS */
+ u32 cfs_ctx_runtime_init_slices; /**< Value for KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_INIT_SLICES */
+ u32 cfs_ctx_runtime_min_slices; /**< Value for KBASE_CONFIG_ATTR_JS_CFS_CTX_RUNTIME_MIN_SLICES */
+#if MALI_DEBUG != 0
+	/* Support soft-stop on a single context */
+	mali_bool softstop_always;
+#endif /* MALI_DEBUG != 0 */
+	/** The initialized-flag is placed at the end, to avoid cache-pollution (we should
+ * only be using this during init/term paths).
+ * @note This is a write-once member, and so no locking is required to read */
+ int init_status;
+} kbasep_js_device_data;
+
+
+/**
+ * @brief KBase Context Job Scheduling information structure
+ *
+ * This is a substructure in the kbase_context that encapsulates all the
+ * scheduling information.
+ */
+typedef struct kbasep_js_kctx_info
+{
+ /**
+ * Runpool substructure. This must only be accessed whilst the Run Pool
+ * mutex ( kbasep_js_device_data::runpool_mutex ) is held.
+ *
+ * In addition, the kbasep_js_device_data::runpool_irq::lock may need to be
+ * held for certain sub-members.
+ *
+ * @note some of the members could be moved into kbasep_js_device_data for
+ * improved d-cache/tlb efficiency.
+ */
+ struct
+ {
+ kbasep_js_policy_ctx_info policy_ctx; /**< Policy-specific context */
+ } runpool;
+
+ /**
+ * Job Scheduler Context information sub-structure. These members are
+ * accessed regardless of whether the context is:
+ * - In the Policy's Run Pool
+ * - In the Policy's Queue
+ * - Not queued nor in the Run Pool.
+ *
+ * You must obtain the jsctx_mutex before accessing any other members of
+ * this substructure.
+ *
+ * You may not access any of these members from IRQ context.
+ */
+ struct
+ {
+ osk_mutex jsctx_mutex; /**< Job Scheduler Context lock */
+
+ /** Number of jobs <b>ready to run</b> - does \em not include the jobs waiting in
+ * the dispatcher, and dependency-only jobs. See kbase_jd_context::job_nr
+		 * for such jobs */
+ u32 nr_jobs;
+
+ /** Context Attributes:
+ * Each is large enough to hold a refcount of the number of atoms on
+		 * the context. */
+ u32 ctx_attr_ref_count[KBASEP_JS_CTX_ATTR_COUNT];
+
+ /**
+ * Waitq that reflects whether the context is not scheduled on the run-pool.
+ * This is clear when is_scheduled is true, and set when is_scheduled
+ * is false.
+ *
+ * This waitq can be waited upon to find out when a context is no
+ * longer in the run-pool, and is used in combination with
+ * kbasep_js_policy_try_evict_ctx() to determine when it can be
+ * terminated. However, it should only be terminated once all its jobs
+ * are also terminated (see kbase_jd_context::zero_jobs_waitq).
+ *
+ * Since the waitq is only set under jsctx_mutex, the waiter should
+		 * also briefly obtain and drop jsctx_mutex to guarantee that the
+ * setter has completed its work on the kbase_context.
+ */
+ osk_waitq not_scheduled_waitq;
+
+ /**
+ * Waitq that reflects whether the context is scheduled on the run-pool.
+ * This is set when is_scheduled is true, and clear when is_scheduled
+ * is false.
+ */
+ osk_waitq scheduled_waitq;
+
+ kbase_context_flags flags;
+ /* NOTE: Unify the following flags into kbase_context_flags */
+ /**
+ * Is the context scheduled on the Run Pool?
+ *
+ * This is only ever updated whilst the jsctx_mutex is held.
+ */
+ mali_bool is_scheduled;
+ mali_bool is_dying; /**< Is the context in the process of being evicted? */
+ } ctx;
+
+	/* The initialized-flag is placed at the end, to avoid cache-pollution (we should
+ * only be using this during init/term paths) */
+ int init_status;
+} kbasep_js_kctx_info;
+
+
+/**
+ * @brief The JS timer resolution, in microseconds
+ *
+ * Any non-zero difference in time will be at least this size.
+ */
+#define KBASEP_JS_TICK_RESOLUTION_US (1000000u/osk_time_mstoticks(1000))
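+
+/* For example (illustrative values only): with a 100 Hz OS tick,
+ * osk_time_mstoticks(1000) == 100, so the resolution is
+ * 1000000u/100 == 10000 us (i.e. 10 ms per tick). */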
+
+/**
+ * @note MIDBASE-769: OSK to add high resolution timer
+ *
+ * The underlying tick is an unsigned integral type
+ */
+typedef osk_ticks kbasep_js_tick;
+
+/**
+ * GPU clock ticks.
+ */
+typedef osk_ticks kbasep_js_gpu_tick;
+
+
+/** @} */ /* end group kbase_js */
+/** @} */ /* end group base_kbase_api */
+/** @} */ /* end group base_api */
+
+#endif /* _KBASE_JS_DEFS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_js_policy.h
+ * Job Scheduler Policy APIs.
+ */
+
+#ifndef _KBASE_JS_POLICY_H_
+#define _KBASE_JS_POLICY_H_
+
+/**
+ * @page page_kbase_js_policy Job Scheduling Policies
+ * The Job Scheduling system is described in the following:
+ * - @subpage page_kbase_js_policy_overview
+ * - @subpage page_kbase_js_policy_operation
+ *
+ * The API details are as follows:
+ * - @ref kbase_jm
+ * - @ref kbase_js
+ * - @ref kbase_js_policy
+ */
+
+/**
+ * @page page_kbase_js_policy_overview Overview of the Policy System
+ *
+ * The Job Scheduler Policy manages:
+ * - The assigning of KBase Contexts to GPU Address Spaces (\em ASs)
+ * - The choosing of Job Chains (\em Jobs) from a KBase context, to run on the
+ * GPU's Job Slots (\em JSs).
+ * - The amount of \em time a context is assigned to (<em>scheduled on</em>) an
+ * Address Space
+ * - The amount of \em time a Job spends running on the GPU
+ *
+ * The Policy implements this management via 2 components:
+ * - A Policy Queue, which manages a set of contexts that are ready to run,
+ * but not currently running.
+ * - A Policy Run Pool, which manages the currently running contexts (one per Address
+ * Space) and the jobs to run on the Job Slots.
+ *
+ * Each Graphics Process in the system has at least one KBase Context. Therefore,
+ * the Policy Queue can be seen as a queue of Processes waiting to run Jobs on
+ * the GPU.
+ *
+ * <!-- The following needs to be all on one line, due to doxygen's parser -->
+ * @dotfile policy_overview.dot "Diagram showing a very simplified overview of the Policy System. IRQ handling, soft/hard-stopping, contexts re-entering the system and Policy details are omitted"
+ *
+ * The main operations on the queue are:
+ * - Enqueuing a Context to it
+ * - Dequeuing a Context from it, to run it.
+ * - Note: requeuing a context is much the same as enqueuing a context, but
+ * occurs when a context is scheduled out of the system to allow other contexts
+ * to run.
+ *
+ * These operations have much the same meaning for the Run Pool - Jobs are
+ * dequeued to run on a Jobslot, and requeued when they are scheduled out of
+ * the GPU.
+ *
+ * @note This is an over-simplification of the Policy APIs - there are more
+ * operations than 'Enqueue'/'Dequeue', and a Dequeue from the Policy Queue
+ * takes at least two function calls: one to Dequeue from the Queue, one to add
+ * to the Run Pool.
+ *
+ * As indicated on the diagram, Jobs permanently leave the scheduling system
+ * when they are completed, otherwise they get dequeued/requeued until this
+ * happens. Similarly, Contexts leave the scheduling system when their jobs
+ * have all completed. However, Contexts may later return to the scheduling
+ * system (not shown on the diagram) if more Bags of Jobs are submitted to
+ * them.
+ */
+
+/**
+ * @page page_kbase_js_policy_operation Policy Operation
+ *
+ * We describe the actions that the Job Scheduler Core takes on the Policy in
+ * the following cases:
+ * - The IRQ Path
+ * - The Job Submission Path
+ * - The High Priority Job Submission Path
+ *
+ * This shows how the Policy APIs will be used by the Job Scheduler core.
+ *
+ * The following diagram shows an example Policy that contains a Low Priority
+ * queue, and a Real-time (High Priority) Queue. The RT queue is examined
+ * before the LowP one on dequeuing from the head. The Low Priority Queue is
+ * ordered by time, and the RT queue is ordered by RT-priority, and then by
+ * time. In addition, it shows that the Job Scheduler Core will start a
+ * Soft-Stop Timer (SS-Timer) when it dequeues and submits a job. The
+ * Soft-Stop time is set by a global configuration value, and must be a value
+ * appropriate for the policy. For example, this could include "don't run a
+ * soft-stop timer" for a First-Come-First-Served (FCFS) policy.
+ *
+ * <!-- The following needs to be all on one line, due to doxygen's parser -->
+ * @dotfile policy_operation_diagram.dot "Diagram showing the objects managed by an Example Policy, and the operations made upon these objects by the Job Scheduler Core."
+ *
+ * @section sec_kbase_js_policy_operation_prio Dealing with Priority
+ *
+ * Priority applies both to a context as a whole, and to the jobs within a
+ * context. The jobs specify a priority in the base_jd_atom::prio member, which
+ * is relative to that of the context. A positive setting indicates a reduction
+ * in priority, whereas a negative setting indicates a boost in priority. Of
+ * course, the boost in priority should only be honoured when the originating
+ * process has sufficient privileges, and should be ignored for unprivileged
+ * processes. The meaning of the combined priority value is up to the policy
+ * itself, and could be a logarithmic scale instead of a linear scale (e.g. the
+ * policy could specify that an increase/decrease in priority by 1 results in
+ * an increase/decrease in the \em proportion of time spent scheduled in by
+ * 25%, an effective change in timeslice by 11%).
+ *
+ * It is up to the policy whether a boost in priority boosts the priority of
+ * the entire context (e.g. to such an extent where it may pre-empt other
+ * running contexts). If it chooses to do this, the Policy must make sure that
+ * only the high-priority jobs are run, and that the context is scheduled out
+ * once only low priority jobs remain. This ensures that the low priority jobs
+ * within the context do not gain from the priority boost, yet they still get
+ * scheduled correctly with respect to other low priority contexts.
+ *
+ *
+ * @section sec_kbase_js_policy_operation_irq IRQ Path
+ *
+ * The following happens on the IRQ path from the Job Scheduler Core:
+ * - Note the slot that completed (for later)
+ * - Log the time spent by the job (and implicitly, the time spent by the
+ * context)
+ * - call kbasep_js_policy_log_job_result() <em>in the context of the irq
+ * handler.</em>
+ * - This must happen regardless of whether the job completed successfully or
+ * not (otherwise the context gets away with DoS'ing the system with faulty jobs)
+ * - What was the result of the job?
+ * - If Completed: job is just removed from the system
+ * - If Hard-stop or failure: job is removed from the system
+ * - If Soft-stop: queue the book-keeping work onto a work-queue: have a
+ * work-queue call kbasep_js_policy_enqueue_job()
+ * - Check the timeslice used by the owning context
+ * - call kbasep_js_policy_should_remove_ctx() <em>in the context of the irq
+ * handler.</em>
+ * - If this returns true, clear the "allowed" flag.
+ * - Check the ctx's flags for "allowed", "has jobs to run" and "is running
+ * jobs"
+ * - And so, should the context stay scheduled in?
+ * - If No, push onto a work-queue the work of scheduling out the old context,
+ * and getting a new one. That is:
+ * - kbasep_js_policy_runpool_remove_ctx() on old_ctx
+ * - kbasep_js_policy_enqueue_ctx() on old_ctx
+ * - kbasep_js_policy_dequeue_head_ctx() to get new_ctx
+ * - kbasep_js_policy_runpool_add_ctx() on new_ctx
+ * - (all of this work is deferred on a work-queue to keep the IRQ handler quick)
+ * - If there is space in the completed job slot's HEAD/NEXT registers, run the next job:
+ * - kbasep_js_policy_dequeue_job_irq() <em>in the context of the irq
+ * handler</em> with core_req set to that of the completing slot
+ * - if this returned MALI_TRUE, submit the job to the completed slot.
+ * - This is repeated until kbasep_js_policy_dequeue_job_irq() returns
+ * MALI_FALSE, or the job slot has a job queued on both the HEAD and NEXT registers.
+ * - If kbasep_js_policy_dequeue_job_irq() returned false, submit some work to
+ * the work-queue to retry from outside of IRQ context (calling
+ * kbasep_js_policy_dequeue_job() from a work-queue).
+ *
+ * Since the IRQ handler submits new jobs \em and re-checks the IRQ_RAWSTAT,
+ * this sequence could loop a large number of times: this could happen if
+ * the jobs submitted completed on the GPU very quickly (in a few cycles), such
+ * as GPU NULL jobs. Then, the HEAD/NEXT registers will always be free to take
+ * more jobs, causing us to loop until we run out of jobs.
+ *
+ * To mitigate this, we must limit the number of jobs submitted per slot during
+ * the IRQ handler - for example, no more than 2 jobs per slot per IRQ should
+ * be sufficient (to fill up the HEAD + NEXT registers in normal cases). For
+ * Mali-T600 with 3 job slots, this means that up to 6 jobs could be submitted
+ * per IRQ. Note that IRQ Throttling can make this situation commonplace: 6 jobs
+ * could complete but the IRQ for each of them is delayed by the throttling. By
+ * the time you get the IRQ, all 6 jobs could've completed, meaning you can
+ * submit jobs to fill all 6 HEAD+NEXT registers again.
+ *
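+ * A minimal sketch of the per-slot IRQ submission limit described above; the
+ * signature of kbasep_js_policy_dequeue_job_irq() and the 'slot_has_space'
+ * helper are assumed here purely for illustration:
+ * @code
+ * kbase_jd_atom *katom;
+ * int nr_submitted = 0;
+ * while ( nr_submitted < KBASE_JS_MAX_JOB_SUBMIT_PER_SLOT_PER_IRQ
+ *         && slot_has_space( kbdev, js )
+ *         && kbasep_js_policy_dequeue_job_irq( kbdev, js, &katom ) != MALI_FALSE )
+ * {
+ *     // submit 'katom' to slot 'js', counting it against the per-IRQ budget
+ *     ++nr_submitted;
+ * }
+ * @endcode
+ *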
+ * @note As much work is deferred as possible, which includes the scheduling
+ * out of a context and scheduling in a new context. However, we can still make
+ * starting a single high-priority context quick despite this:
+ * - On Mali-T600 family, there is one more AS than JSs.
+ * - This means we can very quickly schedule out one AS, no matter what the
+ * situation (because there will always be one AS that's not currently running
+ * on the job slot - it can only have a job in the NEXT register).
+ * - Even with this scheduling out, fair-share can still be guaranteed e.g. by
+ * a timeline-based Completely Fair Scheduler.
+ * - When our high-priority context comes in, we can do this quick-scheduling
+ * out immediately, and then schedule in the high-priority context without having to block.
+ * - This all assumes that the context to schedule out is of lower
+ * priority. Otherwise, we will have to block waiting for some other low
+ * priority context to finish its jobs. Note that it's likely (but not
+ * impossible) that the high-priority context \b is running jobs, by virtue of
+ * it being high priority.
+ * - Therefore, we can give a high likelihood that on Mali-T600 at least one
+ * high-priority context can be started very quickly. For the general case, we
+ * can guarantee starting (no. ASs) - (no. JSs) high priority contexts
+ * quickly. In any case, there is a high likelihood that we're able to start
+ * more than one high priority context quickly.
+ *
+ * In terms of the functions used in the IRQ handler directly, these are the
+ * performance considerations:
+ * - kbasep_js_policy_log_job_result():
+ * - This is just adding to a 64-bit value (possibly even a 32-bit value if we
+ * only store the time the job has recently spent - see below on 'priority weighting')
+ * - For priority weighting, a divide operation ('div') could happen, but
+ * this can happen in a deferred context (outside of IRQ) when scheduling out
+ * the ctx; as per our Engineering Specification, the contexts of different
+ * priority still stay scheduled in for the same timeslice, but higher priority
+ * ones are scheduled back in more often.
+ * - That is, the weighted and unweighted times must be stored separately, and
+ * the weighted time is only updated \em outside of IRQ context.
+ * - Of course, this divide is more likely to be a 'multiply by inverse of the
+ * weight', assuming that the weight (priority) doesn't change.
+ * - kbasep_js_policy_should_remove_ctx():
+ * - This is usually just a comparison of the stored time value against some
+ * maximum value.
+ * - kbasep_js_policy_dequeue_job_irq():
+ * - For very fast operation, it can keep a very small buffer of 1 element per
+ * job-slot that allows the job at the head of the runpool for each job-slot
+ * to be retrieved very quickly (O(1) time). This is complicated by high
+ * priority jobs that may 'jump' the queue, but could be eased by having a
+ * second buffer for high priority jobs. This assumes the requirement is only to
+ * run any high priority job quickly, not to run the highest high priority job
+ * quickly.
+ * - Of course, if a job slot completes two jobs in quick succession, then
+ * kbasep_js_policy_dequeue_job_irq() can return MALI_FALSE on the second call
+ * (because the small quick-access buffer is already exhausted).
+ * - The quick-access buffer must be refilled by the other Policy Job
+ * Management APIs that are called outside of IRQ context.
+ * - This scheme guarantees that we keep every jobslot busy with at least one
+ * job - good utilization.
+ * - As a side effect, processes that try to submit too many quick-running
+ * jobs (to increase IRQ rate to cause a DoS attack) will be limited to the
+ * rate at which the kernel work-queue can be serviced. This can be seen as a
+ * benefit.
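+ *
+ * For the 'priority weighting' point above, a minimal sketch of keeping the
+ * weighted and unweighted times separate (ctx_info_t and its unweighted_us /
+ * weighted_us fields are hypothetical; weight() stands in for the policy's
+ * weighting helper, e.g. the CFS policy's priority_weight()):
+ * @code
+ * void log_result_in_irq( ctx_info_t *ctx_info, u32 time_spent_us )
+ * {
+ *     ctx_info->unweighted_us += time_spent_us;
+ * }
+ *
+ * void apply_weight_on_schedule_out( ctx_info_t *ctx_info )
+ * {
+ *     ctx_info->weighted_us += weight( ctx_info, ctx_info->unweighted_us );
+ *     ctx_info->unweighted_us = 0;
+ * }
+ * @endcode
+ * Only the cheap accumulation runs in IRQ context; the divide/multiply implied
+ * by the weighting is deferred to the schedule-out path.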
+ *
+ * @note all deferred work can be wrapped up into one call - we usually need to
+ * indicate that a job/bag is done outside of IRQ context anyway.
+ *
+ *
+ *
+ * @section sec_kbase_js_policy_operation_submit Submission path
+ *
+ * Start with a Context with no jobs present, and assume equal priority of all
+ * contexts in the system. The following work all happens outside of IRQ
+ * Context :
+ * - As soon as a job is made 'ready to run', it must be registered with the Job
+ * Scheduler Policy:
+ * - 'Ready to run' means it has satisfied its dependencies in the
+ * Kernel-side Job Dispatch system.
+ * - Call kbasep_js_policy_enqueue_job()
+ * - This indicates that the job should be scheduled (it is ready to run).
+ * - As soon as a ctx changes from having 0 jobs 'ready to run' to >0 jobs
+ * 'ready to run', we enqueue the context on the policy queue:
+ * - Call kbasep_js_policy_enqueue_ctx()
+ * - This indicates that the \em ctx should be scheduled (it is ready to run)
+ *
+ * Next, we need to handle adding a context to the Run Pool - if it's sensible
+ * to do so. This can happen due to two reasons:
+ * -# A context is enqueued as above, and there are ASs free for it to run on
+ * (e.g. it is the first context to be run, in which case it can be added to
+ * the Run Pool immediately after enqueuing on the Policy Queue)
+ * -# A previous IRQ caused another ctx to be scheduled out, requiring that the
+ * context at the head of the queue be scheduled in. Such steps would happen in
+ * a work queue (work deferred from the IRQ context).
+ *
+ * In both cases, we'd handle it as follows:
+ * - Get the context at the Head of the Policy Queue:
+ * - Call kbasep_js_policy_dequeue_head_ctx()
+ * - Assign the Context an Address Space (Assert that there will be one free,
+ * given the above two reasons)
+ * - Add this context to the Run Pool:
+ * - Call kbasep_js_policy_runpool_add_ctx()
+ * - Now see if a job should be run:
+ * - Mostly, this will be done in the IRQ handler at the completion of a
+ * previous job.
+ * - However, there are two cases where this cannot be done: a) The first job
+ * enqueued to the system (there is no previous IRQ to act upon) b) When jobs
+ * are submitted at a low enough rate to not fill up all Job Slots (or, not to
+ * fill both the 'HEAD' and 'NEXT' registers in the job-slots)
+ * - Hence, on each ctx <b>and job</b> submission we should try to see if we
+ * can run a job:
+ * - For each job slot that has free space (in NEXT or HEAD+NEXT registers):
+ * - Call kbasep_js_policy_dequeue_job() with core_req set to that of the
+ * slot
+ * - if we got one, submit it to the job slot.
+ * - This is repeated until kbasep_js_policy_dequeue_job() returns
+ * MALI_FALSE, or the job slot has a job queued on both the HEAD and NEXT
+ * registers (a sketch of this loop follows below).
+ *
+ * The above case shows that we should attempt to run jobs in cases where a) a ctx
+ * has been added to the Run Pool, and b) new jobs have been added to a context
+ * in the Run Pool:
+ * - In the latter case, the context is in the runpool because it's got a job
+ * ready to run, or is already running a job
+ * - We could just wait until the IRQ handler fires, but for certain types of
+ * jobs this can take a comparatively long time to complete, e.g. GLES FS jobs
+ * generally take much longer to run than GLES CS jobs, which are vertex shader
+ * jobs. Even worse are NSS jobs, which may run for seconds/minutes.
+ * - Therefore, when a new job appears in the ctx, we must check the job-slots
+ * to see if they're free, and run the jobs as before.
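+ *
+ * A sketch of this 'try to run a job' step (slot_has_space() and
+ * submit_to_slot() are hypothetical helpers standing in for the Job Manager
+ * calls, which are outside the scope of this interface):
+ * @code
+ * int js;
+ * for ( js = 0; js < nr_job_slots; ++js )
+ * {
+ *     kbase_jd_atom *katom;
+ *     while ( slot_has_space( kbdev, js ) != MALI_FALSE
+ *             && kbasep_js_policy_dequeue_job( kbdev, js, &katom ) != MALI_FALSE )
+ *     {
+ *         submit_to_slot( kbdev, js, katom );
+ *     }
+ * }
+ * @endcode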
+ *
+ *
+ *
+ * @section sec_kbase_js_policy_operation_submit_hipri Submission path for High Priority Contexts
+ *
+ * For High Priority Contexts on Mali-T600, we can make sure that at least 1 of
+ * them can be scheduled in immediately to start high priority jobs. In general,
+ * (no. ASs) - (no. JSs) high priority contexts may be started immediately. The
+ * following describes how this happens:
+ *
+ * Similar to the previous section, consider what happens with a high-priority
+ * context (a context with a priority higher than that of any in the Run Pool)
+ * that starts out with no jobs:
+ * - A job becomes ready to run on the context, and so we enqueue the context
+ * on the Policy's Queue.
+ * - However, we'd like to schedule in this context immediately, instead of
+ * waiting for one of the Run Pool contexts' timeslice to expire
+ * - The policy's Enqueue function must detect this (because it is the policy
+ * that embodies the concept of priority), and take appropriate action
+ * - That is, kbasep_js_policy_enqueue_ctx() should check the Policy's Run
+ * Pool to see if a lower priority context should be scheduled out, and then
+ * schedule in the High Priority context (a sketch follows this list).
+ * - For Mali-T600, we can always pick a context to schedule out immediately
+ * (because there are more ASs than JSs), and so scheduling out a victim context
+ * and scheduling in the high priority context can happen immediately.
+ * - If a policy implements fair-sharing, then this can still ensure the
+ * victim later on gets a fair share of the GPU.
+ * - As a note, consider whether the victim can be of equal/higher priority
+ * than the incoming context:
+ * - Usually, higher priority contexts will be the ones currently running
+ * jobs, and so the context with the lowest priority is usually not running
+ * jobs.
+ * - This makes it likely that the victim context is low priority, but
+ * it's not impossible for it to be a high priority one:
+ * - Suppose 3 high priority contexts are submitting only FS jobs, and one low
+ * priority context is submitting CS jobs. Then, the context not running jobs will
+ * be one of the high priority contexts (because only 2 FS jobs can be
+ * queued/running on the GPU HW for Mali-T600).
+ * - The problem can be mitigated by extra action, but it's questionable
+ * whether we need to: we already have a high likelihood that there's at least
+ * one high priority context - that should be good enough.
+ * - And so, this method makes sure that at least one high priority context
+ * can be started very quickly, but additional high priority contexts could be
+ * delayed (up to one timeslice).
+ * - To improve this, use a GPU with a higher number of Address Spaces vs Job
+ * Slots.
+ * - At this point, let's assume this high priority context has been scheduled
+ * in immediately. The next step is to ensure it can start some jobs quickly.
+ * - It must do this by Soft-Stopping jobs on any of the Job Slots that it can
+ * submit to.
+ * - The rest of the logic for starting the jobs is taken care of by the IRQ
+ * handler. All the policy needs to do is ensure that
+ * kbasep_js_policy_dequeue_job() will return the jobs from the high priority
+ * context.
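+ *
+ * A sketch of the decision made inside kbasep_js_policy_enqueue_ctx()
+ * (pick_victim_ctx(), schedule_out() and schedule_in() are hypothetical
+ * helpers; in reality the policy acts through the Job Scheduler Core):
+ * @code
+ * kbase_context *victim = pick_victim_ctx( js_policy );
+ * if ( victim != NULL
+ *      && kbasep_js_policy_ctx_has_priority( js_policy, victim, new_kctx ) != MALI_FALSE )
+ * {
+ *     schedule_out( victim );
+ *     schedule_in( new_kctx );
+ * }
+ * @endcode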
+ *
+ * @note In SS state, we currently only use 2 job-slots (even for T608, but
+ * this might change in future). In this case, it's always possible to schedule
+ * out 2 ASs quickly (their jobs won't be in the HEAD registers). At the same
+ * time, this maximizes usage of the job-slots (only 2 are in use), because
+ * jobs from the High Priority contexts can still be started immediately.
+ *
+ *
+ *
+ * @section sec_kbase_js_policy_operation_notes Notes
+ *
+ * - In this design, a separate 'init' is needed from dequeue/requeue, so that
+ * information can be retained between the dequeue/requeue calls. For example,
+ * the total time spent for a context/job could be logged between
+ * dequeue/requeuing, to implement Fair Sharing. In this case, 'init' just
+ * initializes that information to some known state.
+ *
+ *
+ *
+ */
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_kbase_api
+ * @{
+ */
+
+/**
+ * @addtogroup kbase_js_policy Job Scheduler Policy APIs
+ * @{
+ *
+ * <b>Refer to @ref page_kbase_js_policy for an overview and detailed operation of
+ * the Job Scheduler Policy and its use from the Job Scheduler Core.</b>
+ */
+
+/**
+ * @brief Job Scheduler Policy structure
+ */
+union kbasep_js_policy;
+
+/**
+ * @brief Initialize the Job Scheduler Policy
+ */
+mali_error kbasep_js_policy_init( kbase_device *kbdev );
+
+/**
+ * @brief Terminate the Job Scheduler Policy
+ */
+void kbasep_js_policy_term( kbasep_js_policy *js_policy );
+
+
+
+/**
+ * @addtogroup kbase_js_policy_ctx Job Scheduler Policy, Context Management API
+ * @{
+ *
+ * <b>Refer to @ref page_kbase_js_policy for an overview and detailed operation of
+ * the Job Scheduler Policy and its use from the Job Scheduler Core.</b>
+ */
+
+
+/**
+ * @brief Job Scheduler Policy Ctx Info structure
+ *
+ * This structure is embedded in the kbase_context structure. It is used to:
+ * - track information needed for the policy to schedule the context (e.g. time
+ * used, OS priority etc.)
+ * - link together kbase_contexts into a queue, so that a kbase_context can be
+ * obtained as the container of the policy ctx info. This allows the API to
+ * return what "the next context" should be.
+ * - obtain other information already stored in the kbase_context for
+ * scheduling purposes (e.g. process ID to get the priority of the originating
+ * process)
+ */
+union kbasep_js_policy_ctx_info;
+
+/**
+ * @brief Initialize a ctx for use with the Job Scheduler Policy
+ *
+ * This effectively initializes the kbasep_js_policy_ctx_info structure within
+ * the kbase_context (itself located within the kctx->jctx.sched_info structure).
+ */
+mali_error kbasep_js_policy_init_ctx( kbase_device *kbdev, kbase_context *kctx );
+
+/**
+ * @brief Terminate resources associated with using a ctx in the Job Scheduler
+ * Policy.
+ */
+void kbasep_js_policy_term_ctx( kbasep_js_policy *js_policy, kbase_context *kctx );
+
+/**
+ * @brief Enqueue a context onto the Job Scheduler Policy Queue
+ *
+ * If the context enqueued has a priority higher than any in the Run Pool, then
+ * it is the Policy's responsibility to decide whether to schedule out a low
+ * priority context from the Run Pool to allow the high priority context to be
+ * scheduled in.
+ *
+ * If the context has the privileged flag set, it will always be kept at the
+ * head of the queue.
+ *
+ * The caller will be holding kbasep_js_kctx_info::ctx::jsctx_mutex.
+ * The caller will be holding kbasep_js_device_data::queue_mutex.
+ */
+void kbasep_js_policy_enqueue_ctx( kbasep_js_policy *js_policy, kbase_context *kctx );
+
+/**
+ * @brief Dequeue a context from the Head of the Job Scheduler Policy Queue
+ *
+ * The caller will be holding kbasep_js_device_data::queue_mutex.
+ *
+ * @return MALI_TRUE if a context was available, and *kctx_ptr points to
+ * the kctx dequeued.
+ * @return MALI_FALSE if no contexts were available.
+ */
+mali_bool kbasep_js_policy_dequeue_head_ctx( kbasep_js_policy *js_policy, kbase_context **kctx_ptr );
+
+/**
+ * @brief Evict a context from the Job Scheduler Policy Queue
+ *
+ * This is only called as part of destroying a kbase_context.
+ *
+ * There are many reasons why this might fail during the lifetime of a
+ * context. For example, the context is in the process of being scheduled. In
+ * that case a thread doing the scheduling might have a pointer to it, but the
+ * context is neither in the Policy Queue, nor is it in the Run
+ * Pool. Crucially, neither the Policy Queue, Run Pool, or the Context itself
+ * are locked.
+ *
+ * Hence to find out where in the system the context is, it is important to do
+ * more than just check the kbasep_js_kctx_info::ctx::is_scheduled member.
+ *
+ * The caller will be holding kbasep_js_device_data::queue_mutex.
+ *
+ * @return MALI_TRUE if the context was evicted from the Policy Queue
+ * @return MALI_FALSE if the context was not found in the Policy Queue
+ */
+mali_bool kbasep_js_policy_try_evict_ctx( kbasep_js_policy *js_policy, kbase_context *kctx );
+
+/**
+ * @brief Remove all jobs belonging to a non-queued, non-running context.
+ *
+ * This must call kbase_jd_cancel() on each job belonging to the context, which
+ * causes all necessary job cleanup actions to occur on a workqueue.
+ *
+ * At the time of the call, the context is guaranteed to be not-currently
+ * scheduled on the Run Pool (is_scheduled == MALI_FALSE), and not present in
+ * the Policy Queue. This is because one of the following functions was used
+ * recently on the context:
+ * - kbasep_js_policy_evict_ctx()
+ * - kbasep_js_policy_runpool_remove_ctx()
+ *
+ * In both cases, no subsequent call was made on the context to any of:
+ * - kbasep_js_policy_runpool_add_ctx()
+ * - kbasep_js_policy_enqueue_ctx()
+ *
+ * This is only called as part of destroying a kbase_context.
+ *
+ * The locking conditions on the caller are as follows:
+ * - it will be holding kbasep_js_kctx_info::ctx::jsctx_mutex.
+ */
+void kbasep_js_policy_kill_all_ctx_jobs( kbasep_js_policy *js_policy, kbase_context *kctx );
+
+/**
+ * @brief Add a context to the Job Scheduler Policy's Run Pool
+ *
+ * If the context added has a priority higher than any in the Run Pool, then
+ * it is the Policy's responsibility to decide whether to schedule out low
+ * priority jobs that are currently running on the GPU.
+ *
+ * The number of contexts present in the Run Pool will never be more than the
+ * number of Address Spaces.
+ *
+ * The following guarantees are made about the state of the system when this
+ * is called:
+ * - kctx->as_nr member is valid
+ * - the context has its submit_allowed flag set
+ * - kbasep_js_device_data::runpool_irq::per_as_data[kctx->as_nr] is valid
+ * - The refcount of the context is guaranteed to be zero.
+ * - kbasep_js_kctx_info::ctx::is_scheduled will be MALI_TRUE.
+ *
+ * The locking conditions on the caller are as follows:
+ * - it will be holding kbasep_js_kctx_info::ctx::jsctx_mutex.
+ * - it will be holding kbasep_js_device_data::runpool_mutex.
+ * - it will be holding kbasep_js_device_data::runpool_irq::lock (a spinlock)
+ *
+ * Due to a spinlock being held, this function must not call any APIs that sleep.
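+ *
+ * A sketch of the caller's lock acquisition order around this call
+ * (mutex_lock()/mutex_unlock() abbreviate the relevant mutex calls, which are
+ * not named by this interface):
+ * @code
+ * mutex_lock( &js_kctx_info->ctx.jsctx_mutex );
+ * mutex_lock( &js_devdata->runpool_mutex );
+ * osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+ * kbasep_js_policy_runpool_add_ctx( js_policy, kctx );
+ * osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+ * @endcode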
+ */
+void kbasep_js_policy_runpool_add_ctx( kbasep_js_policy *js_policy, kbase_context *kctx );
+
+/**
+ * @brief Remove a context from the Job Scheduler Policy's Run Pool
+ *
+ * The kctx->as_nr member is valid and the context has its submit_allowed flag
+ * set when this is called. The state of
+ * kbasep_js_device_data::runpool_irq::per_as_data[kctx->as_nr] is also
+ * valid. The refcount of the context is guaranteed to be zero.
+ *
+ * The locking conditions on the caller are as follows:
+ * - it will be holding kbasep_js_kctx_info::ctx::jsctx_mutex.
+ * - it will be holding kbasep_js_device_data::runpool_mutex.
+ * - it will be holding kbasep_js_device_data::runpool_irq::lock (a spinlock)
+ *
+ * Due to a spinlock being held, this function must not call any APIs that sleep.
+ */
+void kbasep_js_policy_runpool_remove_ctx( kbasep_js_policy *js_policy, kbase_context *kctx );
+
+/**
+ * @brief Indicate whether a context should be removed from the Run Pool
+ * (should be scheduled out).
+ *
+ * The kbasep_js_device_data::runpool_irq::lock will be held by the caller.
+ *
+ * @note This API is called from IRQ context.
+ */
+mali_bool kbasep_js_policy_should_remove_ctx( kbasep_js_policy *js_policy, kbase_context *kctx );
+
+/**
+ * @brief Indicate whether a new context has a higher priority than the current context.
+ *
+ * The caller has the following conditions on locking:
+ * - kbasep_js_kctx_info::ctx::jsctx_mutex will be held for \a new_ctx
+ *
+ * This function must not sleep, because an IRQ spinlock might be held whilst
+ * this is called.
+ *
+ * @note There is nothing to stop the priority of \a current_ctx changing
+ * during or immediately after this function is called (because its jsctx_mutex
+ * cannot be held). Therefore, this function should only be seen as a heuristic
+ * guide as to whether \a new_ctx is higher priority than \a current_ctx
+ */
+mali_bool kbasep_js_policy_ctx_has_priority( kbasep_js_policy *js_policy, kbase_context *current_ctx, kbase_context *new_ctx );
+
+
+/** @} */ /* end group kbase_js_policy_ctx */
+
+/**
+ * @addtogroup kbase_js_policy_job Job Scheduler Policy, Job Chain Management API
+ * @{
+ *
+ * <b>Refer to @ref page_kbase_js_policy for an overview and detailed operation of
+ * the Job Scheduler Policy and its use from the Job Scheduler Core.</b>
+ */
+
+
+
+/**
+ * @brief Job Scheduler Policy Job Info structure
+ *
+ * This structure is embedded in the kbase_jd_atom structure. It is used to:
+ * - track information needed for the policy to schedule the job (e.g. time
+ * used, OS priority etc.)
+ * - link together jobs into a queue/buffer, so that a kbase_jd_atom can be
+ * obtained as the container of the policy job info. This allows the API to
+ * return what "the next job" should be.
+ * - obtain other information already stored in the kbase_context for
+ * scheduling purposes (e.g. user-side relative priority)
+ */
+union kbasep_js_policy_job_info;
+
+/**
+ * @brief Initialize a job for use with the Job Scheduler Policy
+ *
+ * This function initializes the kbasep_js_policy_job_info structure within the
+ * kbase_jd_atom. It will only initialize/allocate resources that are specific
+ * to the job.
+ *
+ * That is, this function makes \b no attempt to:
+ * - initialize any context/policy-wide information
+ * - enqueue the job on the policy.
+ *
+ * At some later point, the following functions must be called on the job, in this order:
+ * - kbasep_js_policy_register_job() to register the job and initialize policy/context wide data.
+ * - kbasep_js_policy_enqueue_job() to enqueue the job
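+ *
+ * A sketch of that required ordering (error handling and the locking
+ * requirements of the later calls are elided):
+ * @code
+ * if ( kbasep_js_policy_init_job( js_policy, kctx, katom ) != MALI_ERROR_NONE )
+ * {
+ *     return MALI_ERROR_FUNCTION_FAILED;
+ * }
+ * kbasep_js_policy_register_job( js_policy, kctx, katom );
+ * kbasep_js_policy_enqueue_job( js_policy, katom );
+ * @endcode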
+ *
+ * A job must only ever be initialized on the Policy once, and must be
+ * terminated on the Policy before the job is freed.
+ *
+ * The caller will not be holding any locks, and so this function will not
+ * modify any information in \a kctx or \a js_policy.
+ *
+ * @return MALI_ERROR_NONE if initialization was correct.
+ */
+mali_error kbasep_js_policy_init_job( const kbasep_js_policy *js_policy, const kbase_context *kctx, kbase_jd_atom *katom );
+
+/**
+ * @brief Terminate resources associated with using a job in the Job Scheduler
+ * Policy.
+ *
+ * kbasep_js_policy_deregister_job() must have been called on \a katom before
+ * calling this.
+ *
+ * The caller will not be holding any locks, and so this function will not
+ * modify any information in \a kctx or \a js_policy.
+ */
+void kbasep_js_policy_term_job( const kbasep_js_policy *js_policy, const kbase_context *kctx, kbase_jd_atom *katom );
+
+/**
+ * @brief Register context/policy-wide information for a job on the Job Scheduler Policy.
+ *
+ * Registers the job with the policy. This is used to track the job before it
+ * has been enqueued/requeued by kbasep_js_policy_enqueue_job(). Specifically,
+ * it is used to update information under a lock that could not be updated at
+ * kbasep_js_policy_init_job() time (such as context/policy-wide data).
+ *
+ * @note This function will not fail, and hence does not allocate any
+ * resources. Any failures that could occur on registration will be caught
+ * during kbasep_js_policy_init_job() instead.
+ *
+ * A job must only ever be registered on the Policy once, and must be
+ * deregistered on the Policy on completion (whether or not that completion was
+ * success/failure).
+ *
+ * The caller has the following conditions on locking:
+ * - kbasep_js_kctx_info::ctx::jsctx_mutex will be held.
+ */
+void kbasep_js_policy_register_job( kbasep_js_policy *js_policy, kbase_context *kctx, kbase_jd_atom *katom );
+
+/**
+ * @brief De-register context/policy-wide information for a job on the Job Scheduler Policy.
+ *
+ * This must be used before terminating the resources associated with using a
+ * job in the Job Scheduler Policy. That is, it must be called before
+ * kbasep_js_policy_term_job(). This function does not itself terminate any
+ * resources, at most it just updates information in the policy and context.
+ *
+ * The caller has the following conditions on locking:
+ * - kbasep_js_kctx_info::ctx::jsctx_mutex will be held.
+ */
+void kbasep_js_policy_deregister_job( kbasep_js_policy *js_policy, kbase_context *kctx, kbase_jd_atom *katom );
+
+
+/**
+ * @brief Dequeue a Job for a job slot from the Job Scheduler Policy Run Pool
+ *
+ * The job returned by the policy will match at least one of the bits in the
+ * job slot's core requirements (but it may match more than one, or all @ref
+ * base_jd_core_req bits supported by the job slot).
+ *
+ * In addition, the requirements of the job returned will be a subset of those
+ * requested - the job returned will not have requirements that \a job_slot_idx
+ * cannot satisfy.
+ *
+ * The caller will submit the job to the GPU as soon as the GPU's NEXT register
+ * for the corresponding slot is empty. Of course, the GPU will then only run
+ * this new job when the currently executing job (in the jobslot's HEAD
+ * register) has completed.
+ *
+ * @return MALI_TRUE if a job was available, and *katom_ptr points to
+ * the atom dequeued.
+ * @return MALI_FALSE if no jobs were available among all ctxs in the Run Pool.
+ *
+ * @note base_jd_core_req is currently a u8 - beware of type conversion.
+ *
+ * @note This API is not called from IRQ context outside of the policy
+ * itself, and so need not operate in O(1) time. Refer to
+ * kbasep_js_policy_dequeue_job_irq() for dequeuing from IRQ context.
+ *
+ * As a result of kbasep_js_policy_dequeue_job_irq(), this function might need to
+ * carry out work to maintain its internal queues both before and after a job
+ * is dequeued.
+ *
+ * The caller has the following conditions on locking:
+ * - kbasep_js_device_data::runpool_irq::lock will be held.
+ * - kbdev->jm_slots[ job_slot_idx ].lock will be held
+ * - kbasep_js_device_data::runpool_mutex will be held.
+ * - kbasep_js_kctx_info::ctx::jsctx_mutex will be held.
+ */
+mali_bool kbasep_js_policy_dequeue_job( kbase_device *kbdev,
+ int job_slot_idx,
+ kbase_jd_atom **katom_ptr );
+
+/**
+ * @brief IRQ Context Fast equivalent of kbasep_js_policy_dequeue_job()
+ *
+ * This is a 'fast' variant of kbasep_js_policy_dequeue_job() that will be
+ * called from IRQ context.
+ *
+ * It is recommended that this is coded to be O(1). It must be capable of
+ * returning at least one job per job-slot to IRQ context. If IRQs occur in
+ * quick succession without any work done in non-irq context, then this
+ * function is allowed to return MALI_FALSE even if there are jobs available
+ * that satisfy the requirements.
+ *
+ * This relaxation of correct dequeuing allows O(1) execution with bounded
+ * memory requirements. For example, in addition to the ctxs' job queues the run
+ * pool can have a buffer that can contain a single job for 'quick access' per job
+ * slot, but this buffer is only refilled from the job queue outside of IRQ
+ * context.
+ *
+ * Therefore, all other Job Scheduler Policy Job Management APIs can be
+ * implemented to refill this buffer/maintain the Run Pool's job queues outside
+ * of IRQ context.
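+ *
+ * A sketch of the O(1) fast path this allows (quick_access[] is a hypothetical
+ * single-element, per-job-slot buffer that is refilled outside of IRQ context):
+ * @code
+ * kbase_jd_atom *katom = run_pool->quick_access[job_slot_idx];
+ * if ( katom == NULL )
+ * {
+ *     return MALI_FALSE;
+ * }
+ * run_pool->quick_access[job_slot_idx] = NULL;
+ * *katom_ptr = katom;
+ * return MALI_TRUE;
+ * @endcode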
+ *
+ * The caller has the following conditions on locking:
+ * - kbasep_js_device_data::runpool_irq::lock will be held.
+ * - kbdev->jm_slots[ job_slot_idx ].lock will be held
+ *
+ * @note The caller \em might be holding one of the
+ * kbasep_js_kctx_info::ctx::jsctx_mutex locks, if this code is called from
+ * outside of IRQ context.
+ */
+mali_bool kbasep_js_policy_dequeue_job_irq( kbase_device *kbdev,
+ int job_slot_idx,
+ kbase_jd_atom **katom_ptr );
+
+
+/**
+ * @brief Requeue a Job back into the Job Scheduler Policy Run Pool
+ *
+ * This will be used to enqueue a job after its creation and also to requeue
+ * a job into the Run Pool that was previously dequeued (running). It notifies
+ * the policy that the job should be run again at some point later.
+ *
+ * As a result of kbasep_js_policy_dequeue_job_irq(), this function might need to
+ * carry out work to maintain its internal queues both before and after a job
+ * is requeued.
+ *
+ * The caller has the following conditions on locking:
+ * - kbasep_js_device_data::runpool_irq::lock (a spinlock) will be held.
+ * - kbasep_js_device_data::runpool_mutex will be held.
+ * - kbasep_js_kctx_info::ctx::jsctx_mutex will be held.
+ */
+void kbasep_js_policy_enqueue_job( kbasep_js_policy *js_policy, kbase_jd_atom *katom );
+
+
+/**
+ * @brief Log the result of a job: the time spent on a job/context, and whether
+ * the job failed or not.
+ *
+ * Since a kbase_jd_atom contains a pointer to the kbase_context owning it,
+ * then this can also be used to log time on either/both the job and the
+ * containing context.
+ *
+ * The completion state of the job can be found by examining \a katom->event.event_code
+ *
+ * If the Job failed and the policy is implementing fair-sharing, then the
+ * policy must penalize the failing job/context:
+ * - At the very least, it should penalize the time taken by the amount of
+ * time spent processing the IRQ in SW. This is because a job in the NEXT slot
+ * waiting to run will be delayed until the failing job has had the IRQ
+ * cleared.
+ * - \b Optionally, the policy could apply other penalties. For example, based
+ * on a threshold of a number of failing jobs, after which a large penalty is
+ * applied.
+ *
+ * The kbasep_js_device_data::runpool_mutex will be held by the caller.
+ *
+ * @note This API is called from IRQ context.
+ *
+ * The caller has the following conditions on locking:
+ * - kbasep_js_device_data::runpool_irq::lock will be held.
+ *
+ * @param js_policy job scheduler policy
+ * @param katom job dispatch atom
+ * @param time_spent_us the time spent by the job, in microseconds (10^-6 seconds).
+ */
+void kbasep_js_policy_log_job_result( kbasep_js_policy *js_policy, kbase_jd_atom *katom, u32 time_spent_us );
+
+/** @} */ /* end group kbase_js_policy_job */
+
+
+
+/** @} */ /* end group kbase_js_policy */
+/** @} */ /* end group base_kbase_api */
+/** @} */ /* end group base_api */
+
+#endif /* _KBASE_JS_POLICY_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/*
+ * Job Scheduler: Completely Fair Policy Implementation
+ */
+
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_kbase_jm.h>
+#include <kbase/src/common/mali_kbase_js.h>
+#include <kbase/src/common/mali_kbase_js_policy_cfs.h>
+
+/**
+ * Define for when dumping is enabled.
+ * This should not be based on the instrumentation level as whether dumping is enabled for a particular level is down to the integrator.
+ * However this is being used for now as otherwise the cinstr headers would be needed.
+ */
+#define CINSTR_DUMPING_ENABLED ( 2 == MALI_INSTRUMENTATION_LEVEL )
+
+/** Fixed point constants used for runtime weight calculations */
+#define WEIGHT_FIXEDPOINT_SHIFT 10
+#define WEIGHT_TABLE_SIZE 40
+#define WEIGHT_0_NICE (WEIGHT_TABLE_SIZE/2)
+#define WEIGHT_0_VAL (1 << WEIGHT_FIXEDPOINT_SHIFT)
+
+#define LOOKUP_VARIANT_MASK ((1u<<KBASEP_JS_MAX_NR_CORE_REQ_VARIANTS) - 1u)
+
+/** Core requirements that all the variants support */
+#define JS_CORE_REQ_ALL_OTHERS \
+ ( BASE_JD_REQ_CF | BASE_JD_REQ_V | BASE_JD_REQ_PERMON | BASE_JD_REQ_EXTERNAL_RESOURCES )
+
+/** Context requirements that all the variants support */
+
+/* In HW issue 8987 workaround, restrict Compute-only contexts and Compute jobs onto job slot[2],
+ * which will ensure their affinity does not intersect GLES jobs */
+#define JS_CTX_REQ_ALL_OTHERS_8987 \
+ ( KBASE_CTX_FLAG_CREATE_FLAGS_SET | KBASE_CTX_FLAG_PRIVILEGED )
+#define JS_CORE_REQ_COMPUTE_SLOT_8987 \
+ ( BASE_JD_REQ_CS )
+#define JS_CORE_REQ_ONLY_COMPUTE_SLOT_8987 \
+ ( BASE_JD_REQ_ONLY_COMPUTE )
+
+/* Otherwise, compute-only contexts/compute jobs can use any job slot */
+#define JS_CTX_REQ_ALL_OTHERS \
+ ( KBASE_CTX_FLAG_CREATE_FLAGS_SET | KBASE_CTX_FLAG_PRIVILEGED | KBASE_CTX_FLAG_HINT_ONLY_COMPUTE)
+#define JS_CORE_REQ_COMPUTE_SLOT \
+ ( BASE_JD_REQ_CS | BASE_JD_REQ_ONLY_COMPUTE )
+
+/* core_req variants are ordered by least restrictive first, so that our
+ * algorithm in cached_variant_idx_init picks the least restrictive variant for
+ * each job. Note that the coherent_group requirement is added to all CS variants as the
+ * selection of job-slot does not depend on the coherency requirement. */
+static const kbasep_atom_req core_req_variants[] = {
+ {
+ /* 0: Fragment variant */
+ (JS_CORE_REQ_ALL_OTHERS | BASE_JD_REQ_FS | BASE_JD_REQ_COHERENT_GROUP),
+ (JS_CTX_REQ_ALL_OTHERS),
+ 0
+ },
+ {
+ /* 1: Compute variant, can use all coregroups */
+ (JS_CORE_REQ_ALL_OTHERS | JS_CORE_REQ_COMPUTE_SLOT),
+ (JS_CTX_REQ_ALL_OTHERS),
+ 0
+ },
+ {
+ /* 2: Compute variant, uses only coherent coregroups */
+ (JS_CORE_REQ_ALL_OTHERS | JS_CORE_REQ_COMPUTE_SLOT | BASE_JD_REQ_COHERENT_GROUP ),
+ (JS_CTX_REQ_ALL_OTHERS),
+ 0
+ },
+ {
+ /* 3: Compute variant, might only use coherent coregroup, and must use tiling */
+ (JS_CORE_REQ_ALL_OTHERS | JS_CORE_REQ_COMPUTE_SLOT | BASE_JD_REQ_COHERENT_GROUP | BASE_JD_REQ_T),
+ (JS_CTX_REQ_ALL_OTHERS),
+ 0
+ },
+
+ {
+ /* 4: Variant guaranteed to support NSS atoms.
+ *
+ * In the case of a context that's specified as 'Only Compute', it'll
+ * not allow Tiler or Fragment atoms, and so those get rejected */
+ (JS_CORE_REQ_ALL_OTHERS | JS_CORE_REQ_COMPUTE_SLOT | BASE_JD_REQ_COHERENT_GROUP | BASE_JD_REQ_NSS ),
+ (JS_CTX_REQ_ALL_OTHERS),
+ 0
+ },
+
+ {
+ /* 5: Compute variant for specific-coherent-group targeting CoreGroup 0 */
+ (JS_CORE_REQ_ALL_OTHERS | JS_CORE_REQ_COMPUTE_SLOT | BASE_JD_REQ_COHERENT_GROUP | BASE_JD_REQ_SPECIFIC_COHERENT_GROUP ),
+ (JS_CTX_REQ_ALL_OTHERS),
+ 0 /* device_nr */
+ },
+ {
+ /* 6: Compute variant for specific-coherent-group targeting CoreGroup 1 */
+ (JS_CORE_REQ_ALL_OTHERS | JS_CORE_REQ_COMPUTE_SLOT | BASE_JD_REQ_COHERENT_GROUP | BASE_JD_REQ_SPECIFIC_COHERENT_GROUP ),
+ (JS_CTX_REQ_ALL_OTHERS),
+ 1 /* device_nr */
+ },
+
+ /* Unused core_req variants, to bring the total up to a power of 2 */
+ {
+ /* 7 */
+ 0,
+ 0,
+ 0
+ },
+};
+
+static const kbasep_atom_req core_req_variants_8987[] = {
+ {
+ /* 0: Fragment variant */
+ (JS_CORE_REQ_ALL_OTHERS | BASE_JD_REQ_FS | BASE_JD_REQ_COHERENT_GROUP),
+ (JS_CTX_REQ_ALL_OTHERS_8987),
+ 0
+ },
+ {
+ /* 1: Compute variant, can use all coregroups */
+ (JS_CORE_REQ_ALL_OTHERS | JS_CORE_REQ_COMPUTE_SLOT_8987),
+ (JS_CTX_REQ_ALL_OTHERS_8987),
+ 0
+ },
+ {
+ /* 2: Compute variant, uses only coherent coregroups */
+ (JS_CORE_REQ_ALL_OTHERS | JS_CORE_REQ_COMPUTE_SLOT_8987 | BASE_JD_REQ_COHERENT_GROUP ),
+ (JS_CTX_REQ_ALL_OTHERS_8987),
+ 0
+ },
+ {
+ /* 3: Compute variant, might only use coherent coregroup, and must use tiling */
+ (JS_CORE_REQ_ALL_OTHERS | JS_CORE_REQ_COMPUTE_SLOT_8987 | BASE_JD_REQ_COHERENT_GROUP | BASE_JD_REQ_T),
+ (JS_CTX_REQ_ALL_OTHERS_8987),
+ 0
+ },
+
+ {
+ /* 4: Variant guaranteed to support Compute contexts/atoms
+ *
+ * In the case of a context that's specified as 'Only Compute', it'll
+ * not allow Tiler or Fragment atoms, and so those get rejected
+ *
+ * NOTE: The NSS flag cannot be supported, so it is cleared on bag
+ * submit */
+ (JS_CORE_REQ_ALL_OTHERS | JS_CORE_REQ_ONLY_COMPUTE_SLOT_8987 | BASE_JD_REQ_COHERENT_GROUP ),
+ (JS_CTX_REQ_ALL_OTHERS_8987 | KBASE_CTX_FLAG_HINT_ONLY_COMPUTE),
+ 0
+ },
+
+ {
+ /* 5: Compute variant for specific-coherent-group targeting CoreGroup 0
+ * Specifically, this only allows 'Only Compute' contexts/atoms */
+ (JS_CORE_REQ_ALL_OTHERS | JS_CORE_REQ_ONLY_COMPUTE_SLOT_8987 | BASE_JD_REQ_COHERENT_GROUP | BASE_JD_REQ_SPECIFIC_COHERENT_GROUP ),
+ (JS_CTX_REQ_ALL_OTHERS_8987 | KBASE_CTX_FLAG_HINT_ONLY_COMPUTE),
+ 0 /* device_nr */
+ },
+ {
+ /* 6: Compute variant for specific-coherent-group targeting CoreGroup 1
+ * Specifically, this only allows 'Only Compute' contexts/atoms */
+ (JS_CORE_REQ_ALL_OTHERS | JS_CORE_REQ_ONLY_COMPUTE_SLOT_8987 | BASE_JD_REQ_COHERENT_GROUP | BASE_JD_REQ_SPECIFIC_COHERENT_GROUP ),
+ (JS_CTX_REQ_ALL_OTHERS_8987 | KBASE_CTX_FLAG_HINT_ONLY_COMPUTE),
+ 1 /* device_nr */
+ },
+ /* Unused core_req variants, to bring the total up to a power of 2 */
+ {
+ /* 7 */
+ 0,
+ 0,
+ 0
+ },
+};
+
+#define CORE_REQ_VARIANT_FRAGMENT 0
+#define CORE_REQ_VARIANT_COMPUTE_ALL_CORES 1
+#define CORE_REQ_VARIANT_COMPUTE_ONLY_COHERENT_GROUP 2
+#define CORE_REQ_VARIANT_COMPUTE_OR_TILING 3
+#define CORE_REQ_VARIANT_COMPUTE_NSS 4
+#define CORE_REQ_VARIANT_COMPUTE_SPECIFIC_COHERENT_0 5
+#define CORE_REQ_VARIANT_COMPUTE_SPECIFIC_COHERENT_1 6
+
+#define CORE_REQ_VARIANT_ONLY_COMPUTE_8987 4
+#define CORE_REQ_VARIANT_ONLY_COMPUTE_8987_SPECIFIC_COHERENT_0 5
+#define CORE_REQ_VARIANT_ONLY_COMPUTE_8987_SPECIFIC_COHERENT_1 6
+
+
+#define NUM_CORE_REQ_VARIANTS NELEMS(core_req_variants)
+#define NUM_CORE_REQ_VARIANTS_8987 NELEMS(core_req_variants_8987)
+
+/** Mappings between job slot and variant lists for Soft-Stoppable State */
+static const u32 variants_supported_ss_state[] =
+{
+ /* js[0] uses Fragment only */
+ (1u << CORE_REQ_VARIANT_FRAGMENT),
+
+ /* js[1] uses: Compute-all-cores, Compute-only-coherent, Compute-or-Tiling,
+ * compute-specific-coregroup-0 */
+ (1u << CORE_REQ_VARIANT_COMPUTE_ALL_CORES)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_ONLY_COHERENT_GROUP)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_OR_TILING)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_SPECIFIC_COHERENT_0),
+
+ /* js[2] uses: Compute-only-coherent, compute-specific-coregroup-1 */
+ (1u << CORE_REQ_VARIANT_COMPUTE_ONLY_COHERENT_GROUP)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_SPECIFIC_COHERENT_1)
+};
+
+/** Mappings between job slot and variant lists for Soft-Stoppable State, when
+ * we have atoms that can use all the cores (KBASEP_JS_CTX_ATTR_COMPUTE_ALL_CORES)
+ * and there's more than one coregroup */
+static const u32 variants_supported_ss_allcore_state[] =
+{
+ /* js[0] uses Fragment only */
+ (1u << CORE_REQ_VARIANT_FRAGMENT),
+
+ /* js[1] uses: Compute-all-cores, Compute-only-coherent, Compute-or-Tiling,
+ * compute-specific-coregroup-0, compute-specific-coregroup-1 */
+ (1u << CORE_REQ_VARIANT_COMPUTE_ALL_CORES)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_ONLY_COHERENT_GROUP)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_OR_TILING)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_SPECIFIC_COHERENT_0)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_SPECIFIC_COHERENT_1),
+
+ /* js[2] not used */
+ 0
+};
+
+/** Mappings between job slot and variant lists for Soft-Stoppable State for
+ * BASE_HW_ISSUE_8987
+ *
+ * @note There is no 'allcores' variant of this, because this HW issue forces all
+ * atoms with BASE_JD_REQ_SPECIFIC_COHERENT_GROUP to use slot 2 anyway -
+ * hence regardless of whether a specific coregroup is targeted, those atoms
+ * still make progress. */
+static const u32 variants_supported_ss_state_8987[] =
+{
+ /* js[0] uses Fragment only */
+ (1u << CORE_REQ_VARIANT_FRAGMENT),
+
+ /* js[1] uses: Compute-all-cores, Compute-only-coherent, Compute-or-Tiling*/
+ (1u << CORE_REQ_VARIANT_COMPUTE_ALL_CORES)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_ONLY_COHERENT_GROUP)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_OR_TILING),
+
+ /* js[2] uses: All Only-compute atoms (including those targeting a
+ * specific coregroup), and nothing else. This is because their affinity
+ * must not intersect with non-only-compute atoms.
+ *
+ * As a side effect, this causes the 'device_nr' for atoms targeting a
+ * specific coregroup to be ignored */
+ (1u << CORE_REQ_VARIANT_ONLY_COMPUTE_8987)
+ | (1u << CORE_REQ_VARIANT_ONLY_COMPUTE_8987_SPECIFIC_COHERENT_0)
+ | (1u << CORE_REQ_VARIANT_ONLY_COMPUTE_8987_SPECIFIC_COHERENT_1)
+};
+
+/** Mappings between job slot and variant lists for Non-Soft-Stoppable State
+ *
+ * @note There is no 'allcores' variant of this, because NSS state forces all
+ * atoms with BASE_JD_REQ_SPECIFIC_COHERENT_GROUP to use slot 1 anyway -
+ * hence regardless of whether a specific coregroup is targeted, those atoms
+ * still make progress.
+ *
+ * @note This is effectively not used during BASE_HW_ISSUE_8987, because the
+ * NSS flag is cleared from all atoms */
+static const u32 variants_supported_nss_state[] =
+{
+ /* js[0] uses Fragment only */
+ (1u << CORE_REQ_VARIANT_FRAGMENT),
+
+ /* js[1] uses: Compute-all-cores, Compute-only-coherent, Compute-or-Tiling,
+ * Compute-targeting-specific-coregroup
+ *
+ * Due to NSS atoms, this causes the 'device_nr' for atoms targeting a
+ * specific coregroup to be ignored (otherwise the Non-NSS atoms targeting
+ * a coregroup would be unreasonably delayed) */
+ (1u << CORE_REQ_VARIANT_COMPUTE_ALL_CORES)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_ONLY_COHERENT_GROUP)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_OR_TILING)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_SPECIFIC_COHERENT_0)
+ | (1u << CORE_REQ_VARIANT_COMPUTE_SPECIFIC_COHERENT_1),
+
+ /* js[2] uses: NSS only */
+ (1u << CORE_REQ_VARIANT_COMPUTE_NSS)
+};
+
+/* Defines for easy asserts 'is scheduled'/'is queued'/'is neither queued nor scheduled' */
+#define KBASEP_JS_CHECKFLAG_QUEUED (1u << 0) /**< Check the queued state */
+#define KBASEP_JS_CHECKFLAG_SCHEDULED (1u << 1) /**< Check the scheduled state */
+#define KBASEP_JS_CHECKFLAG_IS_QUEUED (1u << 2) /**< Expect queued state to be set */
+#define KBASEP_JS_CHECKFLAG_IS_SCHEDULED (1u << 3) /**< Expect scheduled state to be set */
+
+enum
+{
+ KBASEP_JS_CHECK_NOTQUEUED = KBASEP_JS_CHECKFLAG_QUEUED,
+ KBASEP_JS_CHECK_NOTSCHEDULED = KBASEP_JS_CHECKFLAG_SCHEDULED,
+ KBASEP_JS_CHECK_QUEUED = KBASEP_JS_CHECKFLAG_QUEUED | KBASEP_JS_CHECKFLAG_IS_QUEUED,
+ KBASEP_JS_CHECK_SCHEDULED = KBASEP_JS_CHECKFLAG_SCHEDULED | KBASEP_JS_CHECKFLAG_IS_SCHEDULED
+};
+
+typedef u32 kbasep_js_check;
+
+/*
+ * Private Functions
+ */
+
+/* Table autogenerated using util built from: kbase/scripts/gen_cfs_weight_of_prio.c */
+
+/* weight = 1.25 */
+static const int weight_of_priority[] =
+{
+ /* -20 */ 11, 14, 18, 23,
+ /* -16 */ 29, 36, 45, 56,
+ /* -12 */ 70, 88, 110, 137,
+ /* -8 */ 171, 214, 268, 335,
+ /* -4 */ 419, 524, 655, 819,
+ /* 0 */ 1024, 1280, 1600, 2000,
+ /* 4 */ 2500, 3125, 3906, 4883,
+ /* 8 */ 6104, 7630, 9538, 11923,
+ /* 12 */ 14904, 18630, 23288, 29110,
+ /* 16 */ 36388, 45485, 56856, 71070
+};
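+
+/* Worked example of how the table above is applied by priority_weight()
+ * below: for a clamped priority of +4, weight_of_priority[WEIGHT_0_NICE + 4]
+ * is 2500, so time_us = 1000 becomes (1000 * 2500) >> WEIGHT_FIXEDPOINT_SHIFT
+ * = 2441us, i.e. time_us * 1.25^4 truncated to an integer. */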
+
+/**
+ * @note There is nothing to stop the priority of the ctx containing \a
+ * ctx_info changing during or immediately after this function is called
+ * (because its jsctx_mutex cannot be held during IRQ). Therefore, this
+ * function should only be seen as a heuristic guide as to the priority weight
+ * of the context.
+ */
+STATIC u64 priority_weight(kbasep_js_policy_cfs_ctx *ctx_info, u32 time_us)
+{
+ u64 time_delta_us;
+ int priority;
+ priority = ctx_info->process_priority + ctx_info->bag_priority;
+
+ /* Adjust runtime_us using priority weight if required */
+ if(priority != 0 && time_us != 0)
+ {
+ int clamped_priority;
+
+ /* Clamp values to min..max weights */
+ if(priority > OSK_PROCESS_PRIORITY_MAX)
+ {
+ clamped_priority = OSK_PROCESS_PRIORITY_MAX;
+ }
+ else if(priority < OSK_PROCESS_PRIORITY_MIN)
+ {
+ clamped_priority = OSK_PROCESS_PRIORITY_MIN;
+ }
+ else
+ {
+ clamped_priority = priority;
+ }
+
+ /* Fixed point multiplication */
+ time_delta_us = ((u64)time_us * weight_of_priority[WEIGHT_0_NICE + clamped_priority]);
+ /* Remove fraction */
+ time_delta_us = time_delta_us >> WEIGHT_FIXEDPOINT_SHIFT;
+ /* Make sure the time always increases */
+ if(0 == time_delta_us)
+ {
+ time_delta_us++;
+ }
+ }
+ else
+ {
+ time_delta_us = time_us;
+ }
+
+ return time_delta_us;
+}
+
+#if KBASE_TRACE_ENABLE != 0
+STATIC int kbasep_js_policy_trace_get_refcnt_nolock( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ int as_nr;
+ int refcnt = 0;
+
+ js_devdata = &kbdev->js_data;
+
+ as_nr = kctx->as_nr;
+ if ( as_nr != KBASEP_AS_NR_INVALID )
+ {
+ kbasep_js_per_as_data *js_per_as_data;
+ js_per_as_data = &js_devdata->runpool_irq.per_as_data[as_nr];
+
+ refcnt = js_per_as_data->as_busy_refcount;
+ }
+
+ return refcnt;
+}
+
+STATIC INLINE int kbasep_js_policy_trace_get_refcnt( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ int refcnt = 0;
+
+ js_devdata = &kbdev->js_data;
+
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+ refcnt = kbasep_js_policy_trace_get_refcnt_nolock( kbdev, kctx );
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+
+ return refcnt;
+}
+#else /* KBASE_TRACE_ENABLE != 0 */
+STATIC int kbasep_js_policy_trace_get_refcnt_nolock( kbase_device *kbdev, kbase_context *kctx )
+{
+ CSTD_UNUSED( kbdev );
+ CSTD_UNUSED( kctx );
+ return 0;
+}
+
+STATIC INLINE int kbasep_js_policy_trace_get_refcnt( kbase_device *kbdev, kbase_context *kctx )
+{
+ CSTD_UNUSED( kbdev );
+ CSTD_UNUSED( kctx );
+ return 0;
+}
+#endif /* KBASE_TRACE_ENABLE != 0 */
+
+
+#if MALI_DEBUG != 0
+STATIC void kbasep_js_debug_check( kbasep_js_policy_cfs *policy_info, kbase_context *kctx, kbasep_js_check check_flag )
+{
+ /* This function uses the ternary operator and non-explicit comparisons,
+ * because it makes for much shorter, easier to read code */
+
+ if ( check_flag & KBASEP_JS_CHECKFLAG_QUEUED )
+ {
+ mali_bool is_queued;
+ mali_bool expect_queued;
+ is_queued = ( OSK_DLIST_MEMBER_OF( &policy_info->ctx_queue_head,
+ kctx,
+ jctx.sched_info.runpool.policy_ctx.cfs.list ) )? MALI_TRUE: MALI_FALSE;
+
+ if(!is_queued)
+ {
+ is_queued = ( OSK_DLIST_MEMBER_OF( &policy_info->ctx_rt_queue_head,
+ kctx,
+ jctx.sched_info.runpool.policy_ctx.cfs.list ) )? MALI_TRUE: MALI_FALSE;
+ }
+
+ expect_queued = ( check_flag & KBASEP_JS_CHECKFLAG_IS_QUEUED ) ? MALI_TRUE : MALI_FALSE;
+
+ OSK_ASSERT_MSG( expect_queued == is_queued,
+ "Expected context %p to be %s but it was %s\n",
+ kctx,
+ (expect_queued) ?"queued":"not queued",
+ (is_queued) ?"queued":"not queued" );
+
+ }
+
+ if ( check_flag & KBASEP_JS_CHECKFLAG_SCHEDULED )
+ {
+ mali_bool is_scheduled;
+ mali_bool expect_scheduled;
+ is_scheduled = ( OSK_DLIST_MEMBER_OF( &policy_info->scheduled_ctxs_head,
+ kctx,
+ jctx.sched_info.runpool.policy_ctx.cfs.list ) )? MALI_TRUE: MALI_FALSE;
+
+ expect_scheduled = ( check_flag & KBASEP_JS_CHECKFLAG_IS_SCHEDULED ) ? MALI_TRUE : MALI_FALSE;
+ OSK_ASSERT_MSG( expect_scheduled == is_scheduled,
+ "Expected context %p to be %s but it was %s\n",
+ kctx,
+ (expect_scheduled)?"scheduled":"not scheduled",
+ (is_scheduled) ?"scheduled":"not scheduled" );
+
+ }
+
+}
+#else /* MALI_DEBUG != 0 */
+STATIC void kbasep_js_debug_check( kbasep_js_policy_cfs *policy_info, kbase_context *kctx, kbasep_js_check check_flag )
+{
+ CSTD_UNUSED( policy_info );
+ CSTD_UNUSED( kctx );
+ CSTD_UNUSED( check_flag );
+ return;
+}
+#endif /* MALI_DEBUG != 0 */
+
+STATIC INLINE void set_slot_to_variant_lookup( u32 *bit_array, u32 slot_idx, u32 variants_supported )
+{
+ u32 overall_bit_idx = slot_idx * KBASEP_JS_MAX_NR_CORE_REQ_VARIANTS;
+ u32 word_idx = overall_bit_idx / 32;
+ u32 bit_idx = overall_bit_idx % 32;
+
+ OSK_ASSERT( slot_idx < BASE_JM_MAX_NR_SLOTS );
+ OSK_ASSERT( (variants_supported & ~LOOKUP_VARIANT_MASK) == 0 );
+
+ bit_array[word_idx] |= variants_supported << bit_idx;
+}
+
+
+STATIC INLINE u32 get_slot_to_variant_lookup( u32 *bit_array, u32 slot_idx )
+{
+ u32 overall_bit_idx = slot_idx * KBASEP_JS_MAX_NR_CORE_REQ_VARIANTS;
+ u32 word_idx = overall_bit_idx / 32;
+ u32 bit_idx = overall_bit_idx % 32;
+
+ u32 res;
+
+ OSK_ASSERT( slot_idx < BASE_JM_MAX_NR_SLOTS );
+
+ res = bit_array[word_idx] >> bit_idx;
+ res &= LOOKUP_VARIANT_MASK;
+
+ return res;
+}
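+
+/* Illustration of the packing used by the two helpers above, assuming
+ * KBASEP_JS_MAX_NR_CORE_REQ_VARIANTS == 8 (the variant tables above are padded
+ * to a power of 2 for this reason): slot 2's variant mask occupies bits 16..23
+ * of bit_array[0], so get_slot_to_variant_lookup( bit_array, 2 ) evaluates to
+ * (bit_array[0] >> 16) & LOOKUP_VARIANT_MASK. */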
+
+/* Check the core_req_variants: make sure that every job slot is satisfied by
+ * one of the variants. This checks that cached_variant_idx_init will produce a
+ * valid result for jobs that make maximum use of the job slots.
+ *
+ * @note The checks are limited to the job slots - this does not check that
+ * every context requirement is covered (because some are intentionally not
+ * supported, such as KBASE_CTX_FLAG_SUBMIT_DISABLED) */
+#if MALI_DEBUG
+STATIC void debug_check_core_req_variants( kbase_device *kbdev, kbasep_js_policy_cfs *policy_info )
+{
+ kbasep_js_device_data *js_devdata;
+ u32 i;
+ int j;
+
+ js_devdata = &kbdev->js_data;
+
+ for ( j = 0 ; j < kbdev->gpu_props.num_job_slots ; ++j )
+ {
+ base_jd_core_req job_core_req;
+ mali_bool found = MALI_FALSE;
+
+ job_core_req = js_devdata->js_reqs[j];
+ for ( i = 0; i < policy_info->num_core_req_variants ; ++i )
+ {
+ base_jd_core_req var_core_req;
+ var_core_req = policy_info->core_req_variants[i].core_req;
+
+ if ( (var_core_req & job_core_req) == job_core_req )
+ {
+ found = MALI_TRUE;
+ break;
+ }
+ }
+
+ /* Early-out on any failure */
+ OSK_ASSERT_MSG( found != MALI_FALSE,
+ "Job slot %d features 0x%x not matched by core_req_variants. "
+ "Rework core_req_variants and vairants_supported_<...>_state[] to match\n",
+ j,
+ job_core_req );
+ }
+}
+#endif
+
+STATIC void build_core_req_variants( kbase_device *kbdev, kbasep_js_policy_cfs *policy_info )
+{
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( policy_info != NULL );
+ CSTD_UNUSED( kbdev );
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8987))
+ {
+ OSK_ASSERT( NUM_CORE_REQ_VARIANTS_8987 <= KBASEP_JS_MAX_NR_CORE_REQ_VARIANTS );
+
+ /* Assume a static set of variants */
+ OSK_MEMCPY( policy_info->core_req_variants, core_req_variants_8987, sizeof(core_req_variants_8987) );
+
+ policy_info->num_core_req_variants = NUM_CORE_REQ_VARIANTS_8987;
+ }
+ else
+ {
+ OSK_ASSERT( NUM_CORE_REQ_VARIANTS <= KBASEP_JS_MAX_NR_CORE_REQ_VARIANTS );
+
+ /* Assume a static set of variants */
+ OSK_MEMCPY( policy_info->core_req_variants, core_req_variants, sizeof(core_req_variants) );
+
+ policy_info->num_core_req_variants = NUM_CORE_REQ_VARIANTS;
+ }
+
+ OSK_DEBUG_CODE( debug_check_core_req_variants( kbdev, policy_info ) );
+}
+
+
+STATIC void build_slot_lookups( kbase_device *kbdev, kbasep_js_policy_cfs *policy_info )
+{
+ u8 i;
+ const u32 *variants_supported_ss_for_this_hw = variants_supported_ss_state;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( policy_info != NULL );
+
+ OSK_ASSERT( kbdev->gpu_props.num_job_slots <= NELEMS(variants_supported_ss_state) );
+ OSK_ASSERT( kbdev->gpu_props.num_job_slots <= NELEMS(variants_supported_ss_allcore_state) );
+ OSK_ASSERT( kbdev->gpu_props.num_job_slots <= NELEMS(variants_supported_ss_state_8987) );
+ OSK_ASSERT( kbdev->gpu_props.num_job_slots <= NELEMS(variants_supported_nss_state) );
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8987))
+ {
+ variants_supported_ss_for_this_hw = variants_supported_ss_state_8987;
+ }
+
+ /* Given the static set of variants, provide a static set of lookups */
+ for ( i = 0; i < kbdev->gpu_props.num_job_slots; ++i )
+ {
+ set_slot_to_variant_lookup( policy_info->slot_to_variant_lookup_ss_state,
+ i,
+ variants_supported_ss_for_this_hw[i] );
+
+ set_slot_to_variant_lookup( policy_info->slot_to_variant_lookup_ss_allcore_state,
+ i,
+ variants_supported_ss_allcore_state[i] );
+
+ set_slot_to_variant_lookup( policy_info->slot_to_variant_lookup_nss_state,
+ i,
+ variants_supported_nss_state[i] );
+ }
+
+}
+
+STATIC mali_error cached_variant_idx_init( const kbasep_js_policy_cfs *policy_info, const kbase_context *kctx, kbase_jd_atom *atom )
+{
+ kbasep_js_policy_cfs_job *job_info;
+ u32 i;
+ base_jd_core_req job_core_req;
+ u32 job_device_nr;
+ kbase_context_flags ctx_flags;
+ const kbasep_js_kctx_info *js_kctx_info;
+ const kbase_device *kbdev;
+
+ OSK_ASSERT( policy_info != NULL );
+ OSK_ASSERT( kctx != NULL );
+ OSK_ASSERT( atom != NULL );
+
+ kbdev = CONTAINER_OF(policy_info, const kbase_device, js_data.policy.cfs);
+ job_info = &atom->sched_info.cfs;
+ job_core_req = atom->core_req;
+ job_device_nr = atom->device_nr;
+ js_kctx_info = &kctx->jctx.sched_info;
+ ctx_flags = js_kctx_info->ctx.flags;
+
+ /* Initial check for atoms targeting a specific coregroup */
+ if ( (job_core_req & BASE_JD_REQ_SPECIFIC_COHERENT_GROUP) != MALI_FALSE
+ && job_device_nr >= kbdev->gpu_props.num_core_groups )
+ {
+ /* device_nr exceeds the number of coregroups - not allowed by
+ * @ref base_jd_atom API contract */
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ /* Pick a core_req variant that matches us. Since they're ordered by least
+ * restrictive first, it picks the least restrictive variant */
+ for ( i = 0; i < policy_info->num_core_req_variants ; ++i )
+ {
+ base_jd_core_req var_core_req;
+ kbase_context_flags var_ctx_req;
+ u32 var_device_nr;
+ var_core_req = policy_info->core_req_variants[i].core_req;
+ var_ctx_req = policy_info->core_req_variants[i].ctx_req;
+ var_device_nr = policy_info->core_req_variants[i].device_nr;
+
+ if ( (var_core_req & job_core_req) == job_core_req
+ && (var_ctx_req & ctx_flags) == ctx_flags
+ && ((var_core_req & BASE_JD_REQ_SPECIFIC_COHERENT_GROUP)==MALI_FALSE || var_device_nr == job_device_nr ) )
+ {
+ job_info->cached_variant_idx = i;
+ return MALI_ERROR_NONE;
+ }
+ }
+
+ /* Could not find a matching requirement; this should only be caused by an
+ * attempt to attack the driver. */
+ return MALI_ERROR_FUNCTION_FAILED;
+}
+
+STATIC mali_bool dequeue_job( kbase_device *kbdev,
+ kbase_context *kctx,
+ u32 variants_supported,
+ kbase_jd_atom **katom_ptr,
+ int job_slot_idx)
+{
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_policy_cfs *policy_info;
+ kbasep_js_policy_cfs_ctx *ctx_info;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( katom_ptr != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ js_devdata = &kbdev->js_data;
+ policy_info = &js_devdata->policy.cfs;
+ ctx_info = &kctx->jctx.sched_info.runpool.policy_ctx.cfs;
+
+ /* Only submit jobs from contexts that are allowed */
+ if ( kbasep_js_is_submit_allowed( js_devdata, kctx ) != MALI_FALSE )
+ {
+ /* Check each variant in turn */
+ while ( variants_supported != 0 )
+ {
+ long variant_idx;
+ osk_dlist *job_list;
+ variant_idx = osk_find_first_set_bit( variants_supported );
+ job_list = &ctx_info->job_list_head[variant_idx];
+
+ if ( OSK_DLIST_IS_EMPTY( job_list ) == MALI_FALSE )
+ {
+ /* Found a context with a matching job */
+ {
+ kbase_jd_atom *front_atom = OSK_DLIST_FRONT( job_list, kbase_jd_atom, sched_info.cfs.list );
+ KBASE_TRACE_ADD_SLOT( kbdev, JS_POLICY_DEQUEUE_JOB, front_atom->kctx, front_atom->user_atom,
+ front_atom->jc, job_slot_idx );
+ }
+ *katom_ptr = OSK_DLIST_POP_FRONT( job_list, kbase_jd_atom, sched_info.cfs.list );
+
+ (*katom_ptr)->sched_info.cfs.ticks = 0;
+
+ /* Put this context at the back of the Run Pool */
+ OSK_DLIST_REMOVE( &policy_info->scheduled_ctxs_head,
+ kctx,
+ jctx.sched_info.runpool.policy_ctx.cfs.list );
+ OSK_DLIST_PUSH_BACK( &policy_info->scheduled_ctxs_head,
+ kctx,
+ kbase_context,
+ jctx.sched_info.runpool.policy_ctx.cfs.list );
+
+ return MALI_TRUE;
+ }
+
+ variants_supported &= ~(1u << variant_idx);
+ }
+ /* All variants checked by here */
+ }
+
+ /* The context does not have a matching job */
+
+ return MALI_FALSE;
+}
+
+/**
+ * The caller must hold the runpool_irq spinlock when calling this function
+ */
+OSK_STATIC_INLINE mali_bool timer_callback_should_run( kbase_device *kbdev )
+{
+ kbasep_js_device_data *js_devdata;
+ s8 nr_running_ctxs;
+
+ OSK_ASSERT(kbdev != NULL);
+ js_devdata = &kbdev->js_data;
+
+ /* nr_user_contexts_running is updated with the runpool_mutex. However, the
+ * locking in the caller gives us a barrier that ensures
+ * nr_user_contexts_running is up-to-date for reading */
+ nr_running_ctxs = js_devdata->nr_user_contexts_running;
+
+#if MALI_DEBUG
+ if(js_devdata->softstop_always)
+ {
+ /* Debug support for allowing soft-stop on a single context */
+ return MALI_TRUE;
+ }
+#endif /* MALI_DEBUG */
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_9435))
+ {
+ /* Timeouts would have to be 4x longer (due to micro-architectural design)
+ * to support OpenCL conformance tests, so only run the timer when there's:
+ * - 2 or more CL contexts, or
+ * - 1 or more GLES contexts
+ *
+ * NOTE: A context that has both Compute and Non-Compute jobs will be
+ * treated as an OpenCL context (hence, we don't check
+ * KBASEP_JS_CTX_ATTR_NON_COMPUTE).
+ */
+ {
+ s8 nr_compute_ctxs = kbasep_js_ctx_attr_count_on_runpool( kbdev, KBASEP_JS_CTX_ATTR_COMPUTE );
+ s8 nr_noncompute_ctxs = nr_running_ctxs - nr_compute_ctxs;
+
+ return (mali_bool)( nr_compute_ctxs >= 2 || nr_noncompute_ctxs > 0 );
+ }
+ }
+ else
+ {
+ /* Run the timer callback whenever you have at least 1 context */
+ return (mali_bool)(nr_running_ctxs > 0);
+ }
+}
+
+static void timer_callback(void *data)
+{
+ kbase_device *kbdev = (kbase_device*)data;
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_policy_cfs *policy_info;
+ int s;
+ osk_error osk_err;
+ mali_bool reset_needed = MALI_FALSE;
+
+ OSK_ASSERT(kbdev != NULL);
+
+ js_devdata = &kbdev->js_data;
+ policy_info = &js_devdata->policy.cfs;
+
+ /* Loop through the slots */
+ for(s=0; s<kbdev->gpu_props.num_job_slots; s++)
+ {
+ kbase_jm_slot *slot = kbase_job_slot_lock(kbdev, s);
+ kbase_jd_atom *atom = NULL;
+
+ if (kbasep_jm_nr_jobs_submitted(slot) > 0)
+ {
+ atom = kbasep_jm_peek_idx_submit_slot(slot, 0);
+ OSK_ASSERT( atom != NULL );
+
+ if ( kbasep_jm_is_dummy_workaround_job( kbdev, atom ) != MALI_FALSE )
+ {
+ /* Prevent further use of the atom - never cause a soft-stop, hard-stop, or a GPU reset due to it. */
+ atom = NULL;
+ }
+ }
+
+ if ( atom != NULL )
+ {
+ /* The current version of the model doesn't support Soft-Stop */
+ if (!kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_5736))
+ {
+ u32 ticks = atom->sched_info.cfs.ticks ++;
+
+#if !CINSTR_DUMPING_ENABLED
+ if ( (atom->core_req & BASE_JD_REQ_NSS) == 0 )
+ {
+ /* Job is Soft-Stoppable */
+ if (ticks == js_devdata->soft_stop_ticks)
+ {
+ /* Job has been scheduled for at least js_devdata->soft_stop_ticks ticks.
+ * Soft stop the slot so we can run other jobs.
+ */
+ OSK_PRINT_INFO( OSK_BASE_JM, "Soft-stop" );
+
+#if KBASE_DISABLE_SCHEDULING_SOFT_STOPS == 0
+ kbase_job_slot_softstop(kbdev, s, atom);
+#endif
+ }
+ else if (ticks == js_devdata->hard_stop_ticks_ss)
+ {
+ /* Job has been scheduled for at least js_devdata->hard_stop_ticks_ss ticks.
+ * It should have been soft-stopped by now. Hard stop the slot.
+ */
+#if KBASE_DISABLE_SCHEDULING_HARD_STOPS == 0
+ OSK_PRINT_WARN(OSK_BASE_JM, "JS: Job Hard-Stopped (took more than %lu ticks at %lu ms/tick)", ticks, js_devdata->scheduling_tick_ns/1000000u );
+ kbase_job_slot_hardstop(atom->kctx, s, atom);
+#endif
+ }
+ else if (ticks == js_devdata->gpu_reset_ticks_ss)
+ {
+ /* Job has been scheduled for at least js_devdata->gpu_reset_ticks_ss ticks.
+ * It should have left the GPU by now. Signal that the GPU needs to be reset.
+ */
+ reset_needed = MALI_TRUE;
+ }
+ }
+ else
+#endif /* !CINSTR_DUMPING_ENABLED */
+ /* NOTE: During CINSTR_DUMPING_ENABLED, we use the NSS-timeouts for *all* atoms,
+ * which makes the hard-stop and GPU reset timeout much longer. We also ensure
+ * that we don't soft-stop at all.
+ *
+ * Otherwise, this next block is only used for NSS-atoms */
+ {
+ /* Job is Non Soft-Stoppable */
+ if (ticks == js_devdata->soft_stop_ticks)
+ {
+ /* Job has been scheduled for at least js_devdata->soft_stop_ticks.
+ * Let's try to soft-stop it even if it's supposed to be NSS.
+ */
+ OSK_PRINT_INFO( OSK_BASE_JM, "Soft-stop" );
+
+#if (KBASE_DISABLE_SCHEDULING_SOFT_STOPS == 0) && (CINSTR_DUMPING_ENABLED == 0)
+ kbase_job_slot_softstop(kbdev, s, atom);
+#endif
+ }
+ else if (ticks == js_devdata->hard_stop_ticks_nss)
+ {
+ /* Job has been scheduled for at least js_devdata->hard_stop_ticks_nss ticks.
+ * Hard stop the slot.
+ */
+#if KBASE_DISABLE_SCHEDULING_HARD_STOPS == 0
+ OSK_PRINT_WARN(OSK_BASE_JM, "JS: Job Hard-Stopped (took more than %u ticks at %u ms/tick)", ticks, js_devdata->scheduling_tick_ns/1000000u );
+ kbase_job_slot_hardstop(atom->kctx, s, atom);
+#endif
+ }
+ else if (ticks == js_devdata->gpu_reset_ticks_nss)
+ {
+ /* Job has been scheduled for at least js_devdata->gpu_reset_ticks_nss ticks.
+ * It should have left the GPU by now. Signal that the GPU needs to be reset.
+ */
+ reset_needed = MALI_TRUE;
+ }
+ }
+ }
+ }
+ kbase_job_slot_unlock(kbdev, s);
+ }
+
+ if (reset_needed)
+ {
+ OSK_PRINT_WARN(OSK_BASE_JM, "JS: Job has been on the GPU for too long");
+ if (kbase_prepare_to_reset_gpu(kbdev))
+ {
+ kbase_reset_gpu(kbdev);
+ }
+ }
+
+ /* The timer is re-issued if there are contexts in the run-pool */
+ osk_spinlock_irq_lock(&js_devdata->runpool_irq.lock);
+
+ if (timer_callback_should_run(kbdev) != MALI_FALSE)
+ {
+ osk_err = osk_timer_start_ns(&policy_info->timer, js_devdata->scheduling_tick_ns);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ policy_info->timer_running = MALI_FALSE;
+ }
+ }
+ else
+ {
+ KBASE_TRACE_ADD( kbdev, JS_POLICY_TIMER_END, NULL, NULL, 0u, 0u );
+ policy_info->timer_running = MALI_FALSE;
+ }
+
+ osk_spinlock_irq_unlock(&js_devdata->runpool_irq.lock);
+}
+
+/*
+ * Non-private functions
+ */
+
+mali_error kbasep_js_policy_init( kbase_device *kbdev )
+{
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_policy_cfs *policy_info;
+
+ OSK_ASSERT( kbdev != NULL );
+ js_devdata = &kbdev->js_data;
+ policy_info = &js_devdata->policy.cfs;
+
+ OSK_DLIST_INIT( &policy_info->ctx_queue_head );
+ OSK_DLIST_INIT( &policy_info->scheduled_ctxs_head );
+ OSK_DLIST_INIT( &policy_info->ctx_rt_queue_head );
+
+ if (osk_timer_init(&policy_info->timer) != OSK_ERR_NONE)
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ osk_timer_callback_set( &policy_info->timer, timer_callback, kbdev );
+
+ policy_info->timer_running = MALI_FALSE;
+
+ policy_info->head_runtime_us = 0;
+
+ /* Build up the core_req variants */
+ build_core_req_variants( kbdev, policy_info );
+ /* Build the slot to variant lookups */
+ build_slot_lookups(kbdev, policy_info );
+
+ return MALI_ERROR_NONE;
+}
+
+void kbasep_js_policy_term( kbasep_js_policy *js_policy )
+{
+ kbasep_js_policy_cfs *policy_info;
+
+ OSK_ASSERT( js_policy != NULL );
+ policy_info = &js_policy->cfs;
+
+ /* ASSERT that there are no contexts queued */
+ OSK_ASSERT( OSK_DLIST_IS_EMPTY( &policy_info->ctx_queue_head ) != MALI_FALSE );
+ /* ASSERT that there are no contexts scheduled */
+ OSK_ASSERT( OSK_DLIST_IS_EMPTY( &policy_info->scheduled_ctxs_head ) != MALI_FALSE );
+
+ /* ASSERT that there are no contexts queued */
+ OSK_ASSERT( OSK_DLIST_IS_EMPTY( &policy_info->ctx_rt_queue_head ) != MALI_FALSE );
+
+ osk_timer_stop(&policy_info->timer);
+ osk_timer_term(&policy_info->timer);
+}
+
+mali_error kbasep_js_policy_init_ctx( kbase_device *kbdev, kbase_context *kctx )
+{
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_policy_cfs_ctx *ctx_info;
+ kbasep_js_policy_cfs *policy_info;
+ osk_process_priority prio;
+ u32 i;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ js_devdata = &kbdev->js_data;
+ policy_info = &kbdev->js_data.policy.cfs;
+ ctx_info = &kctx->jctx.sched_info.runpool.policy_ctx.cfs;
+
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, JS_POLICY_INIT_CTX, kctx, NULL, 0u,
+ kbasep_js_policy_trace_get_refcnt( kbdev, kctx ));
+
+ for ( i = 0 ; i < policy_info->num_core_req_variants ; ++i )
+ {
+ OSK_DLIST_INIT( &ctx_info->job_list_head[i] );
+ }
+
+ osk_get_process_priority(&prio);
+ ctx_info->process_rt_policy = prio.is_realtime;
+ ctx_info->process_priority = prio.priority;
+ ctx_info->bag_total_priority = 0;
+ ctx_info->bag_total_nr_atoms = 0;
+
+ /* Initial runtime (relative to least-run context runtime)
+ *
+ * This uses the Policy Queue's most up-to-date head_runtime_us by using the
+ * queue mutex to issue memory barriers - also ensure future updates to
+ * head_runtime_us occur strictly after this context is initialized */
+ osk_mutex_lock( &js_devdata->queue_mutex );
+
+ /* No need to hold the runpool_irq.lock here, because we're initializing
+ * the value, and the context is definitely not being updated in the
+ * runpool at this point. The queue_mutex ensures the memory barrier. */
+ ctx_info->runtime_us = policy_info->head_runtime_us +
+ priority_weight(ctx_info,
+ (u64)js_devdata->cfs_ctx_runtime_init_slices * (u64)(js_devdata->ctx_timeslice_ns/1000u));
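+ /* Illustrative example (values hypothetical): with a 50ms ctx_timeslice_ns
+ * and cfs_ctx_runtime_init_slices == 3, a context with default priority
+ * weighting starts 150000us of weighted runtime ahead of the least-run
+ * context, so a newly created context cannot immediately starve contexts
+ * already in the queue. */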
+
+ osk_mutex_unlock( &js_devdata->queue_mutex );
+
+ return MALI_ERROR_NONE;
+}
+
+void kbasep_js_policy_term_ctx( kbasep_js_policy *js_policy, kbase_context *kctx )
+{
+ kbasep_js_policy_cfs_ctx *ctx_info;
+ kbasep_js_policy_cfs *policy_info;
+ u32 i;
+
+ OSK_ASSERT( js_policy != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ policy_info = &js_policy->cfs;
+ ctx_info = &kctx->jctx.sched_info.runpool.policy_ctx.cfs;
+
+ {
+ kbase_device *kbdev = CONTAINER_OF( js_policy, kbase_device, js_data.policy );
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, JS_POLICY_TERM_CTX, kctx, NULL, 0u,
+ kbasep_js_policy_trace_get_refcnt( kbdev, kctx ));
+ }
+
+ /* ASSERT that no jobs are present */
+ for ( i = 0 ; i < policy_info->num_core_req_variants ; ++i )
+ {
+ OSK_ASSERT( OSK_DLIST_IS_EMPTY( &ctx_info->job_list_head[i] ) != MALI_FALSE );
+ }
+
+ /* No work to do */
+}
+
+
+/*
+ * Context Management
+ */
+
+void kbasep_js_policy_enqueue_ctx( kbasep_js_policy *js_policy, kbase_context *kctx )
+{
+ kbasep_js_policy_cfs *policy_info;
+ kbasep_js_policy_cfs_ctx *ctx_info;
+ kbase_context *list_kctx = NULL;
+ kbasep_js_device_data *js_devdata;
+ osk_dlist *queue_head;
+
+ OSK_ASSERT( js_policy != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ policy_info = &js_policy->cfs;
+ ctx_info = &kctx->jctx.sched_info.runpool.policy_ctx.cfs;
+ js_devdata = CONTAINER_OF( js_policy, kbasep_js_device_data, policy );
+
+ {
+ kbase_device *kbdev = CONTAINER_OF( js_policy, kbase_device, js_data.policy );
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, JS_POLICY_ENQUEUE_CTX, kctx, NULL, 0u,
+ kbasep_js_policy_trace_get_refcnt( kbdev, kctx ));
+ }
+
+ /* ASSERT about scheduled-ness/queued-ness */
+ kbasep_js_debug_check( policy_info, kctx, KBASEP_JS_CHECK_NOTQUEUED );
+
+ /* Clamp the runtime to prevent DoS attacks through "stored-up" runtime */
+ if (policy_info->head_runtime_us > ctx_info->runtime_us
+ + (u64)js_devdata->cfs_ctx_runtime_min_slices * (u64)(js_devdata->ctx_timeslice_ns/1000u))
+ {
+ /* No need to hold the runpool_irq.lock here, because we're essentially
+ * initializing the value, and the context is definitely not being updated in the
+ * runpool at this point. The queue_mutex held by the caller ensures the memory
+ * barrier. */
+ ctx_info->runtime_us = policy_info->head_runtime_us
+ - (u64)js_devdata->cfs_ctx_runtime_min_slices * (u64)(js_devdata->ctx_timeslice_ns/1000u);
+ }
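+ /* Illustrative: if head_runtime_us is 1000000us, the timeslice is 50000us
+ * and cfs_ctx_runtime_min_slices == 2 (hypothetical values), a context
+ * re-enqueued with runtime_us below 900000us is brought up to 900000us,
+ * bounding how much "stored-up" runtime it can spend at the head. */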
+
+ /* Find the position where the context should be enqueued */
+ if(ctx_info->process_rt_policy)
+ {
+ queue_head = &policy_info->ctx_rt_queue_head;
+ }
+ else
+ {
+ queue_head = &policy_info->ctx_queue_head;
+ }
+
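+ /* The queue is kept ordered by weighted runtime: a privileged context is
+ * inserted at the front of the queue, while an ordinary context is
+ * inserted before the first non-privileged context that has accumulated
+ * more runtime. */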
+ OSK_DLIST_FOREACH( queue_head,
+ kbase_context,
+ jctx.sched_info.runpool.policy_ctx.cfs.list,
+ list_kctx )
+ {
+ kbasep_js_policy_cfs_ctx *list_ctx_info;
+ list_ctx_info = &list_kctx->jctx.sched_info.runpool.policy_ctx.cfs;
+
+ if ( (kctx->jctx.sched_info.ctx.flags & KBASE_CTX_FLAG_PRIVILEGED) != 0 )
+ {
+ break;
+ }
+
+ if ( (list_ctx_info->runtime_us > ctx_info->runtime_us) &&
+ ((list_kctx->jctx.sched_info.ctx.flags & KBASE_CTX_FLAG_PRIVILEGED) == 0) )
+ {
+ break;
+ }
+ }
+
+ /* Add the context to the queue */
+ if (OSK_DLIST_IS_VALID( list_kctx, jctx.sched_info.runpool.policy_ctx.cfs.list ) == MALI_TRUE)
+ {
+ OSK_DLIST_INSERT_BEFORE( queue_head,
+ kctx,
+ list_kctx,
+ kbase_context,
+ jctx.sched_info.runpool.policy_ctx.cfs.list );
+ }
+ else
+ {
+ OSK_DLIST_PUSH_BACK( queue_head,
+ kctx,
+ kbase_context,
+ jctx.sched_info.runpool.policy_ctx.cfs.list );
+ }
+}
+
+mali_bool kbasep_js_policy_dequeue_head_ctx( kbasep_js_policy *js_policy, kbase_context **kctx_ptr )
+{
+ kbasep_js_policy_cfs *policy_info;
+ kbase_context *head_ctx;
+ osk_dlist *queue_head;
+
+ OSK_ASSERT( js_policy != NULL );
+ OSK_ASSERT( kctx_ptr != NULL );
+
+ policy_info = &js_policy->cfs;
+
+ /* Attempt to dequeue from the 'realtime' queue first */
+ if ( OSK_DLIST_IS_EMPTY( &policy_info->ctx_rt_queue_head ) != MALI_FALSE )
+ {
+ if ( OSK_DLIST_IS_EMPTY( &policy_info->ctx_queue_head ) != MALI_FALSE )
+ {
+ /* Nothing to dequeue */
+ return MALI_FALSE;
+ }
+ else
+ {
+ queue_head = &policy_info->ctx_queue_head;
+ }
+ }
+ else
+ {
+ queue_head = &policy_info->ctx_rt_queue_head;
+ }
+
+ /* Contexts are dequeued from the front of the queue */
+ *kctx_ptr = OSK_DLIST_POP_FRONT( queue_head,
+ kbase_context,
+ jctx.sched_info.runpool.policy_ctx.cfs.list );
+
+ {
+ kbase_device *kbdev = CONTAINER_OF( js_policy, kbase_device, js_data.policy );
+ kbase_context *kctx = *kctx_ptr;
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, JS_POLICY_DEQUEUE_HEAD_CTX, kctx, NULL, 0u,
+ kbasep_js_policy_trace_get_refcnt( kbdev, kctx ));
+ }
+
+
+ /* Update the head runtime */
+ head_ctx = OSK_DLIST_FRONT( queue_head,
+ kbase_context,
+ jctx.sched_info.runpool.policy_ctx.cfs.list );
+ if (OSK_DLIST_IS_VALID( head_ctx, jctx.sched_info.runpool.policy_ctx.cfs.list ) == MALI_TRUE)
+ {
+ /* No need to hold the runpool_irq.lock here for reading - the
+ * context is definitely not being updated in the runpool at this
+ * point. The queue_mutex held by the caller ensures the memory barrier. */
+ u64 head_runtime = head_ctx->jctx.sched_info.runpool.policy_ctx.cfs.runtime_us;
+
+ if (head_runtime > policy_info->head_runtime_us)
+ {
+ policy_info->head_runtime_us = head_runtime;
+ }
+ }
+
+ return MALI_TRUE;
+}
+
+mali_bool kbasep_js_policy_try_evict_ctx( kbasep_js_policy *js_policy, kbase_context *kctx )
+{
+ kbasep_js_policy_cfs_ctx *ctx_info;
+ kbasep_js_policy_cfs *policy_info;
+ mali_bool is_present;
+ osk_dlist *queue_head;
+
+ OSK_ASSERT( js_policy != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ policy_info = &js_policy->cfs;
+ ctx_info = &kctx->jctx.sched_info.runpool.policy_ctx.cfs;
+
+ if(ctx_info->process_rt_policy)
+ {
+ queue_head = &policy_info->ctx_rt_queue_head;
+ }
+ else
+ {
+ queue_head = &policy_info->ctx_queue_head;
+ }
+ is_present = OSK_DLIST_MEMBER_OF( queue_head,
+ kctx,
+ jctx.sched_info.runpool.policy_ctx.cfs.list );
+
+ {
+ kbase_device *kbdev = CONTAINER_OF( js_policy, kbase_device, js_data.policy );
+ KBASE_TRACE_ADD_REFCOUNT_INFO( kbdev, JS_POLICY_TRY_EVICT_CTX, kctx, NULL, 0u,
+ kbasep_js_policy_trace_get_refcnt( kbdev, kctx ), is_present);
+ }
+
+ if ( is_present != MALI_FALSE )
+ {
+ kbase_context *head_ctx;
+ /* Remove the context */
+ OSK_DLIST_REMOVE( queue_head,
+ kctx,
+ jctx.sched_info.runpool.policy_ctx.cfs.list );
+
+ /* Update the head runtime */
+ head_ctx = OSK_DLIST_FRONT( queue_head,
+ kbase_context,
+ jctx.sched_info.runpool.policy_ctx.cfs.list );
+ if (OSK_DLIST_IS_VALID( head_ctx, jctx.sched_info.runpool.policy_ctx.cfs.list ) == MALI_TRUE)
+ {
+ /* No need to hold the runpool_irq.lock here for reading - the
+ * context is definitely not being updated in the runpool at this
+ * point. The queue_mutex held by the caller ensures the memory barrier. */
+ u64 head_runtime = head_ctx->jctx.sched_info.runpool.policy_ctx.cfs.runtime_us;
+
+ if (head_runtime > policy_info->head_runtime_us)
+ {
+ policy_info->head_runtime_us = head_runtime;
+ }
+ }
+ }
+
+ return is_present;
+}
+
+void kbasep_js_policy_kill_all_ctx_jobs( kbasep_js_policy *js_policy, kbase_context *kctx )
+{
+ kbasep_js_policy_cfs *policy_info;
+ kbasep_js_policy_cfs_ctx *ctx_info;
+ u32 i;
+
+ OSK_ASSERT( js_policy != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ policy_info = &js_policy->cfs;
+ ctx_info = &kctx->jctx.sched_info.runpool.policy_ctx.cfs;
+
+ {
+ kbase_device *kbdev = CONTAINER_OF( js_policy, kbase_device, js_data.policy );
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, JS_POLICY_KILL_ALL_CTX_JOBS, kctx, NULL, 0u,
+ kbasep_js_policy_trace_get_refcnt( kbdev, kctx ));
+ }
+
+ /* Kill jobs on each variant in turn */
+ for ( i = 0; i < policy_info->num_core_req_variants; ++i )
+ {
+ osk_dlist *job_list;
+ job_list = &ctx_info->job_list_head[i];
+
+ /* Call kbase_jd_cancel() on all kbase_jd_atoms in this list, whilst removing them from the list */
+ OSK_DLIST_EMPTY_LIST( job_list, kbase_jd_atom, sched_info.cfs.list, kbase_jd_cancel );
+ }
+
+}
+
+void kbasep_js_policy_runpool_add_ctx( kbasep_js_policy *js_policy, kbase_context *kctx )
+{
+ kbasep_js_policy_cfs *policy_info;
+ kbasep_js_device_data *js_devdata;
+ kbase_device *kbdev;
+ osk_error osk_err;
+
+ OSK_ASSERT( js_policy != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ policy_info = &js_policy->cfs;
+ js_devdata = CONTAINER_OF( js_policy, kbasep_js_device_data, policy );
+ kbdev = CONTAINER_OF( js_policy, kbase_device, js_data.policy );
+
+ {
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, JS_POLICY_RUNPOOL_ADD_CTX, kctx, NULL, 0u,
+ kbasep_js_policy_trace_get_refcnt_nolock( kbdev, kctx ));
+ }
+
+ /* ASSERT about scheduled-ness/queued-ness */
+ kbasep_js_debug_check( policy_info, kctx, KBASEP_JS_CHECK_NOTSCHEDULED );
+
+ /* All enqueued contexts go to the back of the runpool */
+ OSK_DLIST_PUSH_BACK( &policy_info->scheduled_ctxs_head,
+ kctx,
+ kbase_context,
+ jctx.sched_info.runpool.policy_ctx.cfs.list );
+
+ if ( timer_callback_should_run(kbdev) != MALI_FALSE
+ && policy_info->timer_running == MALI_FALSE )
+ {
+ osk_err = osk_timer_start_ns(&policy_info->timer, js_devdata->scheduling_tick_ns);
+ if (OSK_ERR_NONE == osk_err)
+ {
+ kbase_device *kbdev = CONTAINER_OF( js_policy, kbase_device, js_data.policy );
+ KBASE_TRACE_ADD( kbdev, JS_POLICY_TIMER_START, NULL, NULL, 0u, 0u );
+ policy_info->timer_running = MALI_TRUE;
+ }
+ }
+}
+
+void kbasep_js_policy_runpool_remove_ctx( kbasep_js_policy *js_policy, kbase_context *kctx )
+{
+ kbasep_js_policy_cfs *policy_info;
+
+ OSK_ASSERT( js_policy != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ policy_info = &js_policy->cfs;
+
+ {
+ kbase_device *kbdev = CONTAINER_OF( js_policy, kbase_device, js_data.policy );
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, JS_POLICY_RUNPOOL_REMOVE_CTX, kctx, NULL, 0u,
+ kbasep_js_policy_trace_get_refcnt_nolock( kbdev, kctx ));
+ }
+
+ /* ASSERT about scheduled-ness/queued-ness */
+ kbasep_js_debug_check( policy_info, kctx, KBASEP_JS_CHECK_SCHEDULED );
+
+ /* No searching or significant list maintenance required to remove this context */
+ OSK_DLIST_REMOVE( &policy_info->scheduled_ctxs_head,
+ kctx,
+ jctx.sched_info.runpool.policy_ctx.cfs.list );
+}
+
+mali_bool kbasep_js_policy_should_remove_ctx( kbasep_js_policy *js_policy, kbase_context *kctx )
+{
+ kbasep_js_policy_cfs_ctx *ctx_info;
+ kbasep_js_policy_cfs *policy_info;
+ kbase_context *head_ctx;
+ kbasep_js_device_data *js_devdata;
+ osk_dlist *queue_head;
+
+ OSK_ASSERT( js_policy != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ policy_info = &js_policy->cfs;
+ ctx_info = &kctx->jctx.sched_info.runpool.policy_ctx.cfs;
+ js_devdata = CONTAINER_OF( js_policy, kbasep_js_device_data, policy );
+
+ if(ctx_info->process_rt_policy)
+ {
+ queue_head = &policy_info->ctx_rt_queue_head;
+ }
+ else
+ {
+ queue_head = &policy_info->ctx_queue_head;
+ }
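+ /* A context should be scheduled out once it has run more than one
+ * priority-weighted timeslice beyond the least-run queued context. */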
+
+ head_ctx = OSK_DLIST_FRONT( queue_head,
+ kbase_context,
+ jctx.sched_info.runpool.policy_ctx.cfs.list );
+ if (OSK_DLIST_IS_VALID( head_ctx, jctx.sched_info.runpool.policy_ctx.cfs.list ) == MALI_TRUE)
+ {
+ u64 head_runtime_us = head_ctx->jctx.sched_info.runpool.policy_ctx.cfs.runtime_us;
+
+ if ((head_runtime_us + priority_weight(ctx_info, (u64)(js_devdata->ctx_timeslice_ns/1000u)))
+ < ctx_info->runtime_us)
+ {
+ /* The context is scheduled out if it's not the least-run context anymore.
+ * The "real" head runtime is used instead of the cached runtime so the current
+ * context is not scheduled out when there are fewer contexts than address spaces.
+ */
+ return MALI_TRUE;
+ }
+ }
+
+ return MALI_FALSE;
+}
+
+/*
+ * Job Chain Management
+ */
+
+mali_error kbasep_js_policy_init_job( const kbasep_js_policy *js_policy, const kbase_context *kctx, kbase_jd_atom *katom )
+{
+ const kbasep_js_policy_cfs *policy_info;
+
+ OSK_ASSERT( js_policy != NULL );
+ OSK_ASSERT( katom != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ policy_info = &js_policy->cfs;
+
+ /* Determine the job's index into the job list head; an error is returned
+ * (and reported) if the atom is malformed. */
+ return cached_variant_idx_init( policy_info, kctx, katom );
+}
+
+void kbasep_js_policy_term_job( const kbasep_js_policy *js_policy, const kbase_context *kctx, kbase_jd_atom *katom )
+{
+ kbasep_js_policy_cfs_job *job_info;
+ const kbasep_js_policy_cfs_ctx *ctx_info;
+
+ OSK_ASSERT( js_policy != NULL );
+ CSTD_UNUSED(js_policy);
+ OSK_ASSERT( katom != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ job_info = &katom->sched_info.cfs;
+ ctx_info = &kctx->jctx.sched_info.runpool.policy_ctx.cfs;
+
+ /* We need not do anything, so we just ASSERT that this job was correctly removed from the relevant lists */
+ OSK_ASSERT( OSK_DLIST_MEMBER_OF( &ctx_info->job_list_head[job_info->cached_variant_idx],
+ katom,
+ sched_info.cfs.list ) == MALI_FALSE );
+}
+
+void kbasep_js_policy_register_job( kbasep_js_policy *js_policy, kbase_context *kctx, kbase_jd_atom *katom )
+{
+ kbasep_js_policy_cfs_ctx *ctx_info;
+
+ OSK_ASSERT( js_policy != NULL );
+ OSK_ASSERT( katom != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ ctx_info = &kctx->jctx.sched_info.runpool.policy_ctx.cfs;
+
+ /* Adjust context priority to include the new job */
+ ctx_info->bag_total_nr_atoms++;
+ ctx_info->bag_total_priority += katom->nice_prio;
+
+ /* Get average priority and convert to NICE range -20..19 */
+ if(ctx_info->bag_total_nr_atoms)
+ {
+ ctx_info->bag_priority = (ctx_info->bag_total_priority / ctx_info->bag_total_nr_atoms) - 20;
+ }
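+ /* Illustrative: nice_prio is assumed biased into 0..39 here, so e.g. two
+ * atoms with nice_prio 10 and 30 average to 20, i.e. a bag_priority of 0. */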
+}
+
+void kbasep_js_policy_deregister_job( kbasep_js_policy *js_policy, kbase_context *kctx, kbase_jd_atom *katom )
+{
+ kbasep_js_policy_cfs_ctx *ctx_info;
+
+ OSK_ASSERT( js_policy != NULL );
+ CSTD_UNUSED(js_policy);
+ OSK_ASSERT( katom != NULL );
+ OSK_ASSERT( kctx != NULL );
+
+ ctx_info = &kctx->jctx.sched_info.runpool.policy_ctx.cfs;
+
+ /* Adjust context priority to no longer include removed job */
+ OSK_ASSERT(ctx_info->bag_total_nr_atoms > 0);
+ ctx_info->bag_total_nr_atoms--;
+ ctx_info->bag_total_priority -= katom->nice_prio;
+ OSK_ASSERT(ctx_info->bag_total_priority >= 0);
+
+ /* Get average priority and convert to NICE range -20..19 */
+ if(ctx_info->bag_total_nr_atoms)
+ {
+ ctx_info->bag_priority = (ctx_info->bag_total_priority / ctx_info->bag_total_nr_atoms) - 20;
+ }
+}
+KBASE_EXPORT_TEST_API(kbasep_js_policy_deregister_job)
+
+mali_bool kbasep_js_policy_dequeue_job( kbase_device *kbdev,
+ int job_slot_idx,
+ kbase_jd_atom **katom_ptr )
+{
+ kbasep_js_device_data *js_devdata;
+ kbasep_js_policy_cfs *policy_info;
+ kbase_context *kctx;
+ u32 variants_supported;
+
+ OSK_ASSERT( kbdev != NULL );
+ OSK_ASSERT( katom_ptr != NULL );
+ OSK_ASSERT( job_slot_idx < BASE_JM_MAX_NR_SLOTS );
+
+ js_devdata = &kbdev->js_data;
+ policy_info = &js_devdata->policy.cfs;
+
+ /* Get the variants for this slot */
+ if ( kbasep_js_ctx_attr_is_attr_on_runpool( kbdev, KBASEP_JS_CTX_ATTR_NSS ) != MALI_FALSE )
+ {
+ /* NSS-state */
+ variants_supported = get_slot_to_variant_lookup( policy_info->slot_to_variant_lookup_nss_state, job_slot_idx );
+ }
+ else if ( kbdev->gpu_props.num_core_groups > 1
+ && kbasep_js_ctx_attr_is_attr_on_runpool( kbdev, KBASEP_JS_CTX_ATTR_COMPUTE_ALL_CORES ) != MALI_FALSE )
+ {
+ /* SS-allcore state, and there's more than one coregroup */
+ variants_supported = get_slot_to_variant_lookup( policy_info->slot_to_variant_lookup_ss_allcore_state, job_slot_idx );
+ }
+ else
+ {
+ /* SS-state */
+ variants_supported = get_slot_to_variant_lookup( policy_info->slot_to_variant_lookup_ss_state, job_slot_idx );
+ }
+
+ /* First pass through the runpool we consider the realtime priority jobs */
+ OSK_DLIST_FOREACH( &policy_info->scheduled_ctxs_head,
+ kbase_context,
+ jctx.sched_info.runpool.policy_ctx.cfs.list,
+ kctx )
+ {
+ if(kctx->jctx.sched_info.runpool.policy_ctx.cfs.process_rt_policy)
+ {
+ if(dequeue_job(kbdev, kctx, variants_supported, katom_ptr, job_slot_idx))
+ {
+ /* Realtime policy job matched */
+ return MALI_TRUE;
+ }
+ }
+ }
+
+ /* Second pass through the runpool we consider the non-realtime priority jobs */
+ OSK_DLIST_FOREACH( &policy_info->scheduled_ctxs_head,
+ kbase_context,
+ jctx.sched_info.runpool.policy_ctx.cfs.list,
+ kctx )
+ {
+ if(kctx->jctx.sched_info.runpool.policy_ctx.cfs.process_rt_policy == MALI_FALSE)
+ {
+ if(dequeue_job(kbdev, kctx, variants_supported, katom_ptr, job_slot_idx))
+ {
+ /* Non-realtime policy job matched */
+ return MALI_TRUE;
+ }
+ }
+ }
+
+ /* By this point, no contexts had a matching job */
+ return MALI_FALSE;
+}
+
+mali_bool kbasep_js_policy_dequeue_job_irq( kbase_device *kbdev,
+ int job_slot_idx,
+ kbase_jd_atom **katom_ptr )
+{
+ /* IRQ and non-IRQ variants of this are the same (though, the IRQ variant could be made faster) */
+
+ /* KBASE_TRACE_ADD_SLOT( kbdev, JS_POLICY_DEQUEUE_JOB_IRQ, NULL, NULL, 0u,
+ job_slot_idx); */
+ return kbasep_js_policy_dequeue_job( kbdev, job_slot_idx, katom_ptr );
+}
+
+
+void kbasep_js_policy_enqueue_job( kbasep_js_policy *js_policy, kbase_jd_atom *katom )
+{
+ kbasep_js_policy_cfs_job *job_info;
+ kbasep_js_policy_cfs_ctx *ctx_info;
+ kbase_context *parent_ctx;
+
+ OSK_ASSERT( js_policy != NULL );
+ OSK_ASSERT( katom != NULL );
+ parent_ctx = katom->kctx;
+ OSK_ASSERT( parent_ctx != NULL );
+
+ job_info = &katom->sched_info.cfs;
+ ctx_info = &parent_ctx->jctx.sched_info.runpool.policy_ctx.cfs;
+
+ {
+ kbase_device *kbdev = CONTAINER_OF( js_policy, kbase_device, js_data.policy );
+ KBASE_TRACE_ADD( kbdev, JS_POLICY_ENQUEUE_JOB, katom->kctx, katom->user_atom, katom->jc,
+ 0 );
+ }
+
+ OSK_DLIST_PUSH_BACK( &ctx_info->job_list_head[job_info->cached_variant_idx],
+ katom,
+ kbase_jd_atom,
+ sched_info.cfs.list );
+}
+
+void kbasep_js_policy_log_job_result( kbasep_js_policy *js_policy, kbase_jd_atom *katom, u32 time_spent_us )
+{
+ kbasep_js_policy_cfs_ctx *ctx_info;
+ kbase_context *parent_ctx;
+ OSK_ASSERT( js_policy != NULL );
+ OSK_ASSERT( katom != NULL );
+ CSTD_UNUSED( js_policy );
+
+ parent_ctx = katom->kctx;
+ OSK_ASSERT( parent_ctx != NULL );
+
+ ctx_info = &parent_ctx->jctx.sched_info.runpool.policy_ctx.cfs;
+
+ ctx_info->runtime_us += priority_weight(ctx_info, time_spent_us);
+}
+
+mali_bool kbasep_js_policy_ctx_has_priority( kbasep_js_policy *js_policy, kbase_context *current_ctx, kbase_context *new_ctx )
+{
+ kbasep_js_policy_cfs_ctx *current_ctx_info;
+ kbasep_js_policy_cfs_ctx *new_ctx_info;
+
+ OSK_ASSERT( current_ctx != NULL );
+ OSK_ASSERT( new_ctx != NULL );
+ CSTD_UNUSED(js_policy);
+
+ current_ctx_info = &current_ctx->jctx.sched_info.runpool.policy_ctx.cfs;
+ new_ctx_info = &new_ctx->jctx.sched_info.runpool.policy_ctx.cfs;
+
+ if((current_ctx_info->process_rt_policy == MALI_FALSE) &&
+ (new_ctx_info->process_rt_policy == MALI_TRUE))
+ {
+ return MALI_TRUE;
+ }
+
+ if((current_ctx_info->process_rt_policy == new_ctx_info->process_rt_policy) &&
+ (current_ctx_info->bag_priority > new_ctx_info->bag_priority))
+ {
+ return MALI_TRUE;
+ }
+
+ return MALI_FALSE;
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_js_policy_cfs.h
+ * Completely Fair Job Scheduler Policy structure definitions
+ */
+
+#ifndef _KBASE_JS_POLICY_CFS_H_
+#define _KBASE_JS_POLICY_CFS_H_
+
+#define KBASE_JS_POLICY_AVAILABLE_CFS
+
+/** @addtogroup base_api
+ * @{ */
+/** @addtogroup base_kbase_api
+ * @{ */
+/** @addtogroup kbase_js_policy
+ * @{ */
+
+/**
+ * Internally, this policy keeps a few internal queues for different variants
+ * of core requirements, which are used to decide how to schedule onto the
+ * different job slots.
+ *
+ * Currently, one extra variant is supported: an NSS variant.
+ *
+ * Must be a power of 2 to keep the lookup math simple
+ */
+#define KBASEP_JS_MAX_NR_CORE_REQ_VARIANTS_LOG2 3
+#define KBASEP_JS_MAX_NR_CORE_REQ_VARIANTS (1u << KBASEP_JS_MAX_NR_CORE_REQ_VARIANTS_LOG2 )
+
+/** Bits needed in the lookup to support all slots */
+#define KBASEP_JS_VARIANT_LOOKUP_BITS_NEEDED (BASE_JM_MAX_NR_SLOTS * KBASEP_JS_MAX_NR_CORE_REQ_VARIANTS)
+/** Number of u32s needed in the lookup array to support all slots */
+#define KBASEP_JS_VARIANT_LOOKUP_WORDS_NEEDED ((KBASEP_JS_VARIANT_LOOKUP_BITS_NEEDED + 31) / 32)
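+/* Illustrative: with KBASEP_JS_MAX_NR_CORE_REQ_VARIANTS == 8 and, say,
+ * BASE_JM_MAX_NR_SLOTS == 3 (value assumed here for the example), 24
+ * lookup bits are needed and they fit into a single u32 word. */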
+
+typedef struct kbasep_js_policy_cfs
+{
+ /** List of all contexts in the context queue. Hold
+ * kbasep_js_device_data::queue_mutex whilst accessing. */
+ osk_dlist ctx_queue_head;
+
+ /** List of all contexts in the realtime (priority) context queue */
+ osk_dlist ctx_rt_queue_head;
+
+ /** List of scheduled contexts. Hold kbasep_jd_device_data::runpool_irq::lock
+ * whilst accessing, which is a spinlock */
+ osk_dlist scheduled_ctxs_head;
+
+ /** Number of valid elements in the core_req_variants member, and the
+ * kbasep_js_policy_rr_ctx::job_list_head array */
+ u32 num_core_req_variants;
+
+ /** Variants of the core requirements */
+ kbasep_atom_req core_req_variants[KBASEP_JS_MAX_NR_CORE_REQ_VARIANTS];
+
+ /* Lookups per job slot against which core_req_variants match it */
+ u32 slot_to_variant_lookup_ss_state[KBASEP_JS_VARIANT_LOOKUP_WORDS_NEEDED];
+ u32 slot_to_variant_lookup_ss_allcore_state[KBASEP_JS_VARIANT_LOOKUP_WORDS_NEEDED];
+ u32 slot_to_variant_lookup_nss_state[KBASEP_JS_VARIANT_LOOKUP_WORDS_NEEDED];
+
+ /* The timer tick used for rescheduling jobs */
+ osk_timer timer;
+
+ /* Is the timer running?
+ *
+ * The kbasep_js_device_data::runpool_irq::lock (a spinlock) must be held
+ * whilst accessing this */
+ mali_bool timer_running;
+
+ /* Number of us the least-run context has been running for
+ *
+ * The kbasep_js_device_data::queue_mutex must be held whilst updating this
+ * Reads are possible without this mutex, but an older value might be read
+ * if no memory barriers are issued beforehand. */
+ u64 head_runtime_us;
+} kbasep_js_policy_cfs;
+
+/**
+ * This policy contains a single linked list of all contexts.
+ */
+typedef struct kbasep_js_policy_cfs_ctx
+{
+ /** Link implementing the Policy's Queue, and Currently Scheduled list */
+ osk_dlist_item list;
+
+ /** Job lists for use when in the Run Pool - only using
+ * kbasep_js_policy_cfs::num_core_req_variants of them. We still need to track
+ * the jobs when we're not in the runpool, so this member is accessed from
+ * outside the policy queue (for the first job), inside the policy queue,
+ * and inside the runpool.
+ *
+ * If the context is in the runpool, then this must only be accessed with
+ * kbasep_js_device_data::runpool_irq::lock held
+ *
+ * Jobs are still added to this list even when the context is not in the
+ * runpool. In that case, the kbasep_js_kctx_info::ctx::jsctx_mutex must be
+ * held before accessing this. */
+ osk_dlist job_list_head[KBASEP_JS_MAX_NR_CORE_REQ_VARIANTS];
+
+ /** Number of us this context has been running for
+ *
+ * The kbasep_js_device_data::runpool_irq::lock (a spinlock) must be held
+ * whilst updating this. Initializing will occur on context init and
+ * context enqueue (which can only occur in one thread at a time), but
+ * multi-thread access only occurs while the context is in the runpool.
+ *
+ * Reads are possible without this spinlock, but an older value might be read
+ * if no memory barriers are issued beforehand */
+ u64 runtime_us;
+
+ /* MALI_TRUE if the calling process uses a realtime scheduling policy, in
+ * which case the context uses the priority (realtime) queue.
+ * Non-mutable after ctx init */
+ mali_bool process_rt_policy;
+ /* Calling process NICE priority */
+ int process_priority;
+ /* Average NICE priority of all atoms in bag:
+ * Hold the kbasep_js_kctx_info::ctx::jsctx_mutex when accessing */
+ int bag_priority;
+ /* Total NICE priority of all atoms in bag
+ * Hold the kbasep_js_kctx_info::ctx::jsctx_mutex when accessing */
+ int bag_total_priority;
+ /* Total number of atoms in the bag
+ * Hold the kbasep_js_kctx_info::ctx::jsctx_mutex when accessing */
+ int bag_total_nr_atoms;
+
+} kbasep_js_policy_cfs_ctx;
+
+/**
+ * In this policy, each Job is part of at most one of the per_corereq lists
+ */
+typedef struct kbasep_js_policy_cfs_job
+{
+ osk_dlist_item list; /**< Link implementing the Run Pool list/Jobs owned by the ctx */
+ u32 cached_variant_idx; /**< Cached index of the list this should be entered into on re-queue */
+
+ /** Number of ticks that this job has been executing for
+ *
+ * To access this, the kbdev->jm_slots[ js ].lock must be held for the slot 'js'
+ * that the atom is running/queued/about to be queued upon */
+ u32 ticks;
+} kbasep_js_policy_cfs_job;
+
+/** @} */ /* end group kbase_js_policy */
+/** @} */ /* end group base_kbase_api */
+/** @} */ /* end group base_api */
+
+#endif
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_mem.c
+ * Base kernel memory APIs
+ */
+#ifdef CONFIG_DMA_SHARED_BUFFER
+#include <linux/dma-buf.h>
+#endif /* CONFIG_DMA_SHARED_BUFFER */
+#include <osk/mali_osk.h>
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_midg_regmap.h>
+#include <kbase/src/common/mali_kbase_cache_policy.h>
+#include <kbase/src/common/mali_kbase_hw.h>
+#include <kbase/src/common/mali_kbase_gator.h>
+
+typedef struct kbasep_memory_region_performance
+{
+ kbase_memory_performance cpu_performance;
+ kbase_memory_performance gpu_performance;
+} kbasep_memory_region_performance;
+
+static mali_bool kbasep_allocator_order_list_create( osk_phy_allocator * allocators,
+ kbasep_memory_region_performance *region_performance,
+ int memory_region_count, osk_phy_allocator ***sorted_allocs, int allocator_order_count);
+
+/*
+ * An iterator which uses one of the orders listed in the kbase_phys_allocator_order enum to iterate over the allocators array.
+ */
+typedef struct kbase_phys_allocator_iterator
+{
+ unsigned int cur_idx;
+ kbase_phys_allocator_array * array;
+ kbase_phys_allocator_order order;
+} kbase_phys_allocator_iterator;
+
+
+mali_error kbase_mem_init(kbase_device * kbdev)
+{
+ CSTD_UNUSED(kbdev);
+ /* nothing to do, zero-inited when kbase_device was created */
+ return MALI_ERROR_NONE;
+}
+
+void kbase_mem_halt(kbase_device * kbdev)
+{
+ CSTD_UNUSED(kbdev);
+}
+
+void kbase_mem_term(kbase_device * kbdev)
+{
+ u32 i;
+ kbasep_mem_device * memdev;
+ OSK_ASSERT(kbdev);
+
+ memdev = &kbdev->memdev;
+
+ for (i = 0; i < memdev->allocators.count; i++)
+ {
+ osk_phy_allocator_term(&memdev->allocators.allocs[i]);
+ }
+ osk_free(memdev->allocators.allocs);
+ osk_free(memdev->allocators.sorted_allocs[0]);
+
+ kbase_mem_usage_term(&memdev->usage);
+}
+KBASE_EXPORT_TEST_API(kbase_mem_term)
+
+static mali_error kbase_phys_it_init(kbase_device * kbdev, kbase_phys_allocator_iterator * it, kbase_phys_allocator_order order)
+{
+ OSK_ASSERT(kbdev);
+ OSK_ASSERT(it);
+
+ if (!kbdev->memdev.allocators.count)
+ {
+ return MALI_ERROR_OUT_OF_MEMORY;
+ }
+
+ it->cur_idx = 0;
+ it->array = &kbdev->memdev.allocators;
+ it->order = order;
+
+#if MALI_DEBUG
+ it->array->it_bound = MALI_TRUE;
+#endif /* MALI_DEBUG */
+
+ return MALI_ERROR_NONE;
+}
+
+static void kbase_phys_it_term(kbase_phys_allocator_iterator * it)
+{
+ OSK_ASSERT(it);
+ it->cur_idx = 0;
+#if MALI_DEBUG
+ it->array->it_bound = MALI_FALSE;
+#endif /* MALI_DEBUG */
+ it->array = NULL;
+ return;
+}
+
+static osk_phy_allocator * kbase_phys_it_deref(kbase_phys_allocator_iterator * it)
+{
+ OSK_ASSERT(it);
+ OSK_ASSERT(it->array);
+
+ if (it->cur_idx < it->array->count)
+ {
+ return it->array->sorted_allocs[it->order][it->cur_idx];
+ }
+ else
+ {
+ return NULL;
+ }
+}
+
+static osk_phy_allocator * kbase_phys_it_deref_and_advance(kbase_phys_allocator_iterator * it)
+{
+ osk_phy_allocator * alloc;
+
+ OSK_ASSERT(it);
+ OSK_ASSERT(it->array);
+
+ alloc = kbase_phys_it_deref(it);
+ if (alloc)
+ {
+ it->cur_idx++;
+ }
+ return alloc;
+}
+
+/*
+ * Page free helper.
+ * Ensures the commit objects track the pages we free
+ */
+static void kbase_free_phy_pages_helper(kbase_va_region * reg, u32 nr_pages);
+
+mali_error kbase_mem_usage_init(kbasep_mem_usage * usage, u32 max_pages)
+{
+ OSK_ASSERT(usage);
+ osk_atomic_set(&usage->cur_pages, 0);
+ /* query the max page count */
+ usage->max_pages = max_pages;
+
+ return MALI_ERROR_NONE;
+}
+
+void kbase_mem_usage_term(kbasep_mem_usage * usage)
+{
+ OSK_ASSERT(usage);
+ /* No memory should be in use now */
+ OSK_ASSERT(0 == osk_atomic_get(&usage->cur_pages));
+ /* So any new alloc requests will fail */
+ usage->max_pages = 0;
+ /* So we assert on double term */
+ osk_atomic_set(&usage->cur_pages, U32_MAX);
+}
+
+mali_error kbase_mem_usage_request_pages(kbasep_mem_usage *usage, u32 nr_pages)
+{
+ u32 cur_pages;
+ u32 old_cur_pages;
+
+ OSK_ASSERT(usage);
+ OSK_ASSERT(nr_pages); /* 0 pages would be an error in the calling code */
+
+ /*
+ * Fetch the initial cur_pages value
+ * each loop iteration below fetches
+ * it as part of the store attempt
+ */
+ cur_pages = osk_atomic_get(&usage->cur_pages);
+
+ /* this check allows the simple if test in the loop below */
+ if (usage->max_pages < nr_pages)
+ {
+ goto usage_cap_exceeded;
+ }
+
+ do
+ {
+ u32 new_cur_pages;
+
+ /* enough pages to fulfill the request? */
+ if (usage->max_pages - nr_pages < cur_pages)
+ {
+usage_cap_exceeded:
+ OSK_PRINT_WARN( OSK_BASE_MEM,
+ "Memory usage cap has been reached:\n"
+ "\t%u pages currently used\n"
+ "\t%u pages usage cap\n"
+ "\t%u new pages requested\n"
+ "\twould result in %u pages over the cap\n",
+ cur_pages,
+ usage->max_pages,
+ nr_pages,
+ cur_pages + nr_pages - usage->max_pages
+ );
+ return MALI_ERROR_OUT_OF_MEMORY;
+ }
+
+ /* try to atomically commit the new count */
+ old_cur_pages = cur_pages;
+ new_cur_pages = cur_pages + nr_pages;
+ cur_pages = osk_atomic_compare_and_swap(&usage->cur_pages, old_cur_pages, new_cur_pages);
+ /* cur_pages will be like old_cur_pages if there was no race */
+ } while (cur_pages != old_cur_pages);
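+ /* osk_atomic_compare_and_swap() returns the value observed before the
+ * attempted swap; if another thread raced us, that value differs from
+ * old_cur_pages and the loop retries with the freshly observed count. */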
+
+#if MALI_GATOR_SUPPORT
+ kbase_trace_mali_total_alloc_pages_change((long long int)nr_pages);
+#endif
+
+ return MALI_ERROR_NONE;
+}
+KBASE_EXPORT_TEST_API(kbase_mem_usage_request_pages)
+
+void kbase_mem_usage_release_pages(kbasep_mem_usage * usage, u32 nr_pages)
+{
+ OSK_ASSERT(usage);
+ OSK_ASSERT(nr_pages <= osk_atomic_get(&usage->cur_pages));
+
+ osk_atomic_sub(&usage->cur_pages, nr_pages);
+#if MALI_GATOR_SUPPORT
+ kbase_trace_mali_total_alloc_pages_change(-((long long int)nr_pages));
+#endif
+}
+KBASE_EXPORT_TEST_API(kbase_mem_usage_release_pages)
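+/* A minimal usage sketch (illustrative only - the surrounding allocation
+ * code is hypothetical):
+ *
+ *   if (MALI_ERROR_NONE != kbase_mem_usage_request_pages(usage, nr_pages))
+ *           return MALI_ERROR_OUT_OF_MEMORY;
+ *   ...allocate and map nr_pages physical pages...
+ *   and if that allocation fails:
+ *           kbase_mem_usage_release_pages(usage, nr_pages);
+ */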
+
+/**
+ * @brief Wait for GPU write flush - only in use for BASE_HW_ISSUE_6367
+ *
+ * Wait 1000 GPU clock cycles. This delay is known to give the GPU time to flush its write buffer.
+ * @note If GPU resets occur, the counters are reset to zero and the delay may not be as expected.
+ */
+#if MALI_NO_MALI
+static void kbase_wait_write_flush(struct kbase_context *kctx) { }
+#else
+static void kbase_wait_write_flush(struct kbase_context *kctx)
+{
+ u32 base_count = 0;
+ kbase_pm_context_active(kctx->kbdev);
+ kbase_pm_request_gpu_cycle_counter(kctx->kbdev);
+ while( MALI_TRUE )
+ {
+ u32 new_count;
+ new_count = kbase_reg_read(kctx->kbdev, GPU_CONTROL_REG(CYCLE_COUNT_LO), NULL);
+ /* First time around, just store the count. */
+ if( base_count == 0 )
+ {
+ base_count = new_count;
+ continue;
+ }
+
+ /* No need to handle wrapping, unsigned maths works for this. */
+ if( (new_count - base_count) > 1000 )
+ {
+ break;
+ }
+ }
+ kbase_pm_release_gpu_cycle_counter(kctx->kbdev);
+ kbase_pm_context_idle(kctx->kbdev);
+}
+#endif
+
+/**
+ * @brief Check the zone compatibility of two regions.
+ */
+STATIC int kbase_match_zone(struct kbase_va_region *reg1, struct kbase_va_region *reg2)
+{
+ return ((reg1->flags & KBASE_REG_ZONE_MASK) == (reg2->flags & KBASE_REG_ZONE_MASK));
+}
+KBASE_EXPORT_TEST_API(kbase_match_zone)
+
+/**
+ * @brief Allocate a free region object.
+ *
+ * The allocated object is not part of any list yet, and is flagged as
+ * KBASE_REG_FREE. No mapping is allocated yet.
+ *
+ * zone is KBASE_REG_ZONE_TMEM, KBASE_REG_ZONE_PMEM, or KBASE_REG_ZONE_EXEC
+ *
+ */
+struct kbase_va_region *kbase_alloc_free_region(struct kbase_context *kctx, u64 start_pfn, u32 nr_pages, u32 zone)
+{
+ struct kbase_va_region *new_reg;
+
+ OSK_ASSERT(kctx != NULL);
+
+ /* zone argument should only contain zone related region flags */
+ OSK_ASSERT((zone & ~KBASE_REG_ZONE_MASK) == 0);
+ OSK_ASSERT(nr_pages > 0);
+ OSK_ASSERT( start_pfn + nr_pages <= (UINT64_MAX / OSK_PAGE_SIZE) ); /* 64-bit address range is the max */
+
+ new_reg = osk_calloc(sizeof(*new_reg));
+ if (!new_reg)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MEM, "calloc failed");
+ return NULL;
+ }
+
+ new_reg->kctx = kctx;
+ new_reg->flags = zone | KBASE_REG_FREE;
+
+ if ( KBASE_REG_ZONE_TMEM == zone || KBASE_REG_ZONE_EXEC == zone )
+ {
+ new_reg->flags |= KBASE_REG_GROWABLE;
+ }
+
+ /* not imported by default */
+ new_reg->imported_type = BASE_TMEM_IMPORT_TYPE_INVALID;
+
+ new_reg->start_pfn = start_pfn;
+ new_reg->nr_pages = nr_pages;
+ OSK_DLIST_INIT(&new_reg->map_list);
+ new_reg->root_commit.allocator = NULL;
+ new_reg->last_commit = &new_reg->root_commit;
+
+ return new_reg;
+}
+KBASE_EXPORT_TEST_API(kbase_alloc_free_region)
+
+/**
+ * @brief Free a region object.
+ *
+ * The described region must be freed of any mapping.
+ *
+ * If the region is not flagged as KBASE_REG_FREE, the destructor
+ * kbase_free_phy_pages() will be called.
+ */
+void kbase_free_alloced_region(struct kbase_va_region *reg)
+{
+ OSK_ASSERT(NULL != reg);
+ OSK_ASSERT(OSK_DLIST_IS_EMPTY(&reg->map_list));
+ if (!(reg->flags & KBASE_REG_FREE))
+ {
+ kbase_free_phy_pages(reg);
+ OSK_DEBUG_CODE(
+ /* To detect use-after-free in debug builds */
+ reg->flags |= KBASE_REG_FREE
+ );
+ }
+ osk_free(reg);
+}
+KBASE_EXPORT_TEST_API(kbase_free_alloced_region)
+
+/**
+ * @brief Insert a region object in the global list.
+ *
+ * The region new_reg is inserted at start_pfn by replacing at_reg
+ * partially or completely. at_reg must be a KBASE_REG_FREE region
+ * that contains start_pfn and at least nr_pages from start_pfn. It
+ * must be called with the context region lock held. Internal use
+ * only.
+ */
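+/* Example (middle split, values illustrative): splitting at_reg
+ * [pfn 0x100, 0x40 pages] by inserting new_reg at pfn 0x110 for 0x10 pages
+ * yields a new free front region [0x100, 0x10 pages], new_reg
+ * [0x110, 0x10 pages], and at_reg shrunk to [0x120, 0x20 pages]. */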
+static mali_error kbase_insert_va_region_nolock(struct kbase_context *kctx,
+ struct kbase_va_region *new_reg,
+ struct kbase_va_region *at_reg,
+ u64 start_pfn, u32 nr_pages)
+{
+ struct kbase_va_region *new_front_reg;
+ mali_error err = MALI_ERROR_NONE;
+
+ /* Must be a free region */
+ OSK_ASSERT((at_reg->flags & KBASE_REG_FREE) != 0);
+ /* start_pfn should be contained within at_reg */
+ OSK_ASSERT((start_pfn >= at_reg->start_pfn) && (start_pfn < at_reg->start_pfn + at_reg->nr_pages));
+ /* at least nr_pages from start_pfn should be contained within at_reg */
+ OSK_ASSERT(start_pfn + nr_pages <= at_reg->start_pfn + at_reg->nr_pages );
+
+ new_reg->start_pfn = start_pfn;
+ new_reg->nr_pages = nr_pages;
+
+ /* Trivial replacement case */
+ if (at_reg->start_pfn == start_pfn && at_reg->nr_pages == nr_pages)
+ {
+ OSK_DLIST_INSERT_BEFORE(&kctx->reg_list, new_reg, at_reg, struct kbase_va_region, link);
+ OSK_DLIST_REMOVE(&kctx->reg_list, at_reg, link);
+ kbase_free_alloced_region(at_reg);
+ }
+ /* Begin case */
+ else if (at_reg->start_pfn == start_pfn)
+ {
+ at_reg->start_pfn += nr_pages;
+ OSK_ASSERT(at_reg->nr_pages >= nr_pages);
+ at_reg->nr_pages -= nr_pages;
+
+ OSK_DLIST_INSERT_BEFORE(&kctx->reg_list, new_reg, at_reg, struct kbase_va_region, link);
+ }
+ /* End case */
+ else if ((at_reg->start_pfn + at_reg->nr_pages) == (start_pfn + nr_pages))
+ {
+ at_reg->nr_pages -= nr_pages;
+
+ OSK_DLIST_INSERT_AFTER(&kctx->reg_list, new_reg, at_reg, struct kbase_va_region, link);
+ }
+ /* Middle of the road... */
+ else
+ {
+ new_front_reg = kbase_alloc_free_region(kctx, at_reg->start_pfn,
+ start_pfn - at_reg->start_pfn,
+ at_reg->flags & KBASE_REG_ZONE_MASK);
+ if (new_front_reg)
+ {
+ at_reg->nr_pages -= nr_pages + new_front_reg->nr_pages;
+ at_reg->start_pfn = start_pfn + nr_pages;
+
+ OSK_DLIST_INSERT_BEFORE(&kctx->reg_list, new_front_reg, at_reg, struct kbase_va_region, link);
+ OSK_DLIST_INSERT_BEFORE(&kctx->reg_list, new_reg, at_reg, struct kbase_va_region, link);
+ }
+ else
+ {
+ err = MALI_ERROR_OUT_OF_MEMORY;
+ }
+ }
+
+ return err;
+}
+
+/**
+ * @brief Remove a region object from the global list.
+ *
+ * The region reg is removed, possibly by merging with other free and
+ * compatible adjacent regions. It must be called with the context
+ * region lock held. The associated memory is not released (see
+ * kbase_free_alloced_region). Internal use only.
+ */
+STATIC mali_error kbase_remove_va_region(struct kbase_context *kctx, struct kbase_va_region *reg)
+{
+ struct kbase_va_region *prev;
+ struct kbase_va_region *next;
+ int merged_front = 0;
+ int merged_back = 0;
+ mali_error err = MALI_ERROR_NONE;
+
+ prev = OSK_DLIST_PREV(reg, struct kbase_va_region, link);
+ if (!OSK_DLIST_IS_VALID(prev, link))
+ {
+ prev = NULL;
+ }
+
+ next = OSK_DLIST_NEXT(reg, struct kbase_va_region, link);
+ OSK_ASSERT(NULL != next);
+ if (!OSK_DLIST_IS_VALID(next, link))
+ {
+ next = NULL;
+ }
+
+ /* Try to merge with front first */
+ if (prev && (prev->flags & KBASE_REG_FREE) && kbase_match_zone(prev, reg))
+ {
+ /* We're compatible with the previous VMA, merge with it */
+ OSK_DLIST_REMOVE(&kctx->reg_list, reg, link);
+ prev->nr_pages += reg->nr_pages;
+ reg = prev;
+ merged_front = 1;
+ }
+
+ /* Try to merge with back next */
+ if (next && (next->flags & KBASE_REG_FREE) && kbase_match_zone(next, reg))
+ {
+ /* We're compatible with the next VMA, merge with it */
+ next->start_pfn = reg->start_pfn;
+ next->nr_pages += reg->nr_pages;
+ OSK_DLIST_REMOVE(&kctx->reg_list, reg, link);
+
+ if (merged_front)
+ {
+ /* we already merged with prev, free it */
+ kbase_free_alloced_region(prev);
+ }
+
+ merged_back = 1;
+ }
+
+ if (!(merged_front || merged_back))
+ {
+ /*
+ * We didn't merge anything. Add a new free
+ * placeholder and remove the original one.
+ */
+ struct kbase_va_region *free_reg;
+
+ free_reg = kbase_alloc_free_region(kctx, reg->start_pfn, reg->nr_pages, reg->flags & KBASE_REG_ZONE_MASK);
+ if (!free_reg)
+ {
+ err = MALI_ERROR_OUT_OF_MEMORY;
+ goto out;
+ }
+
+ OSK_DLIST_INSERT_BEFORE(&kctx->reg_list, free_reg, reg, struct kbase_va_region, link);
+ OSK_DLIST_REMOVE(&kctx->reg_list, reg, link);
+ }
+
+out:
+ return err;
+}
+KBASE_EXPORT_TEST_API(kbase_remove_va_region)
+
+/**
+ * @brief Add a region to the global list.
+ *
+ * Add reg to the global list, according to its zone. If addr is
+ * non-null, this address is used directly (as in the PMEM
+ * case). Alignment can be enforced by specifying a number of pages
+ * (which *must* be a power of 2).
+ *
+ * Context region list lock must be held.
+ *
+ * Mostly used by kbase_gpu_mmap(), but also useful to register the
+ * ring-buffer region.
+ */
+mali_error kbase_add_va_region(struct kbase_context *kctx,
+ struct kbase_va_region *reg,
+ mali_addr64 addr, u32 nr_pages,
+ u32 align)
+{
+ struct kbase_va_region *tmp;
+ u64 gpu_pfn = addr >> OSK_PAGE_SHIFT;
+ mali_error err = MALI_ERROR_NONE;
+
+ OSK_ASSERT(NULL != kctx);
+ OSK_ASSERT(NULL != reg);
+
+ if (!align)
+ {
+ align = 1;
+ }
+
+ /* must be a power of 2 */
+ OSK_ASSERT((align & (align - 1)) == 0);
+ OSK_ASSERT( nr_pages > 0 );
+
+ if (gpu_pfn)
+ {
+ OSK_ASSERT(!(gpu_pfn & (align - 1)));
+
+ /*
+ * So we want a specific address. Parse the list until
+ * we find the enclosing region, which *must* be free.
+ */
+ OSK_DLIST_FOREACH(&kctx->reg_list, struct kbase_va_region, link, tmp)
+ {
+ if (tmp->start_pfn <= gpu_pfn &&
+ (tmp->start_pfn + tmp->nr_pages) >= (gpu_pfn + nr_pages))
+ {
+ /* We have the candidate */
+ if (!kbase_match_zone(tmp, reg))
+ {
+ /* Wrong zone, fail */
+ err = MALI_ERROR_OUT_OF_GPU_MEMORY;
+ OSK_PRINT_WARN(OSK_BASE_MEM, "Zone mismatch: %d != %d", tmp->flags & KBASE_REG_ZONE_MASK, reg->flags & KBASE_REG_ZONE_MASK);
+ goto out;
+ }
+
+ if (!(tmp->flags & KBASE_REG_FREE))
+ {
+ OSK_PRINT_WARN(OSK_BASE_MEM, "!(tmp->flags & KBASE_REG_FREE): tmp->start_pfn=0x%llx tmp->flags=0x%x tmp->nr_pages=0x%x gpu_pfn=0x%llx nr_pages=0x%x\n",
+ tmp->start_pfn, tmp->flags, tmp->nr_pages, gpu_pfn, nr_pages);
+ OSK_PRINT_WARN(OSK_BASE_MEM, "in function %s (%p, %p, 0x%llx, 0x%x, 0x%x)\n", __func__,
+ kctx,reg,addr, nr_pages, align);
+ /* Busy, fail */
+ err = MALI_ERROR_OUT_OF_GPU_MEMORY;
+ goto out;
+ }
+
+ err = kbase_insert_va_region_nolock(kctx, reg, tmp, gpu_pfn, nr_pages);
+ if (err) OSK_PRINT_WARN(OSK_BASE_MEM, "Failed to insert va region");
+ goto out;
+ }
+ }
+
+ err = MALI_ERROR_OUT_OF_GPU_MEMORY;
+ OSK_PRINT_WARN(OSK_BASE_MEM, "Out of mem");
+ goto out;
+ }
+
+ /* Find the first free region that accommodates our requirements */
+ OSK_DLIST_FOREACH(&kctx->reg_list, struct kbase_va_region, link, tmp)
+ {
+ if (tmp->nr_pages >= nr_pages &&
+ (tmp->flags & KBASE_REG_FREE) &&
+ kbase_match_zone(tmp, reg))
+ {
+ /* Check alignment */
+ u64 start_pfn;
+ start_pfn = (tmp->start_pfn + align - 1) & ~(align - 1);
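+ /* e.g. tmp->start_pfn == 0x1003 with align == 0x100 rounds start_pfn up
+ * to 0x1100; align was asserted to be a power of 2 above. */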
+
+ if (
+ (start_pfn >= tmp->start_pfn) &&
+ (start_pfn <= (tmp->start_pfn + (tmp->nr_pages - 1))) &&
+ ((start_pfn + nr_pages - 1) <= (tmp->start_pfn + tmp->nr_pages -1))
+ )
+ {
+ /* It fits, let's use it */
+ err = kbase_insert_va_region_nolock(kctx, reg, tmp, start_pfn, nr_pages);
+ goto out;
+ }
+ }
+ }
+
+ err = MALI_ERROR_OUT_OF_GPU_MEMORY;
+out:
+ return err;
+}
+KBASE_EXPORT_TEST_API(kbase_add_va_region)
+
+void kbase_mmu_update(struct kbase_context *kctx)
+{
+ /* Use GPU implementation-defined caching policy. */
+ u32 memattr = ASn_MEMATTR_IMPL_DEF_CACHE_POLICY;
+ u32 pgd_high;
+
+ OSK_ASSERT(NULL != kctx);
+ /* ASSERT that the context has a valid as_nr, which is only the case
+ * when it's scheduled in.
+ *
+ * as_nr won't change because the caller has the runpool_irq lock */
+ OSK_ASSERT( kctx->as_nr != KBASEP_AS_NR_INVALID );
+
+ kbase_reg_write(kctx->kbdev,
+ MMU_AS_REG(kctx->as_nr, ASn_TRANSTAB_LO),
+ (kctx->pgd & ASn_TRANSTAB_ADDR_SPACE_MASK) | ASn_TRANSTAB_READ_INNER
+ | ASn_TRANSTAB_ADRMODE_TABLE, kctx);
+
+ /* Need to use a conditional expression to avoid a "right shift count >= width of type"
+ * error when using an if statement - although the sizeof condition is evaluated at compile
+ * time, the unused branch is not removed until after it is type-checked and the error
+ * produced.
+ */
+ pgd_high = sizeof(kctx->pgd)>4?(kctx->pgd >> 32):0;
+
+ kbase_reg_write(kctx->kbdev,
+ MMU_AS_REG(kctx->as_nr, ASn_TRANSTAB_HI),
+ pgd_high, kctx);
+
+ kbase_reg_write(kctx->kbdev,
+ MMU_AS_REG(kctx->as_nr, ASn_MEMATTR_LO),
+ memattr, kctx);
+ kbase_reg_write(kctx->kbdev,
+ MMU_AS_REG(kctx->as_nr, ASn_MEMATTR_HI),
+ memattr, kctx);
+ kbase_reg_write(kctx->kbdev,
+ MMU_AS_REG(kctx->as_nr, ASn_COMMAND),
+ ASn_COMMAND_UPDATE, kctx);
+}
+KBASE_EXPORT_TEST_API(kbase_mmu_update)
+
+void kbase_mmu_disable (kbase_context *kctx)
+{
+ OSK_ASSERT(NULL != kctx);
+ /* ASSERT that the context has a valid as_nr, which is only the case
+ * when it's scheduled in.
+ *
+ * as_nr won't change because the caller has the runpool_irq lock */
+ OSK_ASSERT( kctx->as_nr != KBASEP_AS_NR_INVALID );
+
+ kbase_reg_write(kctx->kbdev,
+ MMU_AS_REG(kctx->as_nr, ASn_TRANSTAB_LO),
+ 0, kctx);
+ kbase_reg_write(kctx->kbdev,
+ MMU_AS_REG(kctx->as_nr, ASn_TRANSTAB_HI),
+ 0, kctx);
+ kbase_reg_write(kctx->kbdev,
+ MMU_AS_REG(kctx->as_nr, ASn_COMMAND),
+ ASn_COMMAND_UPDATE, kctx);
+}
+KBASE_EXPORT_TEST_API(kbase_mmu_disable)
+
+mali_error kbase_gpu_mmap(struct kbase_context *kctx,
+ struct kbase_va_region *reg,
+ mali_addr64 addr, u32 nr_pages,
+ u32 align)
+{
+ mali_error err;
+ OSK_ASSERT(NULL != kctx);
+ OSK_ASSERT(NULL != reg);
+
+ err = kbase_add_va_region(kctx, reg, addr, nr_pages, align);
+ if (MALI_ERROR_NONE != err)
+ {
+ return err;
+ }
+
+ err = kbase_mmu_insert_pages(kctx, reg->start_pfn,
+ kbase_get_phy_pages(reg),
+ reg->nr_alloc_pages, reg->flags & ((1 << KBASE_REG_FLAGS_NR_BITS)-1));
+ if(MALI_ERROR_NONE != err)
+ {
+ kbase_remove_va_region(kctx, reg);
+ }
+
+ return err;
+}
+KBASE_EXPORT_TEST_API(kbase_gpu_mmap)
+
+mali_error kbase_gpu_munmap(struct kbase_context *kctx, struct kbase_va_region *reg)
+{
+ mali_error err;
+
+ if(reg->start_pfn == 0 )
+ {
+ return MALI_ERROR_NONE;
+ }
+ err = kbase_mmu_teardown_pages(kctx, reg->start_pfn, reg->nr_alloc_pages);
+ if(MALI_ERROR_NONE != err)
+ {
+ return err;
+ }
+
+ err = kbase_remove_va_region(kctx, reg);
+ return err;
+}
+
+kbase_va_region *kbase_region_lookup(kbase_context *kctx, mali_addr64 gpu_addr)
+{
+ kbase_va_region *tmp;
+ u64 gpu_pfn = gpu_addr >> OSK_PAGE_SHIFT;
+ OSK_ASSERT(NULL != kctx);
+
+ OSK_DLIST_FOREACH(&kctx->reg_list, kbase_va_region, link, tmp)
+ {
+ if (gpu_pfn >= tmp->start_pfn && (gpu_pfn < tmp->start_pfn + tmp->nr_pages))
+ {
+ return tmp;
+ }
+ }
+
+ return NULL;
+}
+KBASE_EXPORT_TEST_API(kbase_region_lookup)
+
+struct kbase_va_region *kbase_validate_region(struct kbase_context *kctx, mali_addr64 gpu_addr)
+{
+ struct kbase_va_region *tmp;
+ u64 gpu_pfn = gpu_addr >> OSK_PAGE_SHIFT;
+ OSK_ASSERT(NULL != kctx);
+
+ OSK_DLIST_FOREACH(&kctx->reg_list, struct kbase_va_region, link, tmp)
+ {
+ if (tmp->start_pfn == gpu_pfn)
+ {
+ return tmp;
+ }
+ }
+
+ return NULL;
+}
+KBASE_EXPORT_TEST_API(kbase_validate_region)
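+/* Note: kbase_region_lookup() matches any region containing gpu_pfn,
+ * whereas kbase_validate_region() only matches a region whose start_pfn
+ * equals gpu_pfn exactly (as required when freeing by base address). */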
+
+/**
+ * @brief Find a mapping keyed with ptr in region reg
+ */
+STATIC struct kbase_cpu_mapping *kbase_find_cpu_mapping(struct kbase_va_region *reg,
+ const void *ptr)
+{
+ struct kbase_cpu_mapping *map;
+ OSK_ASSERT(NULL != reg);
+ OSK_DLIST_FOREACH(&reg->map_list, struct kbase_cpu_mapping, link, map)
+ {
+ if (map->private == ptr)
+ {
+ return map;
+ }
+ }
+
+ return NULL;
+}
+KBASE_EXPORT_TEST_API(kbase_find_cpu_mapping)
+
+STATIC struct kbase_cpu_mapping *kbasep_find_enclosing_cpu_mapping_of_region(
+ const struct kbase_va_region *reg,
+ osk_virt_addr uaddr,
+ size_t size)
+{
+ struct kbase_cpu_mapping *map;
+
+ OSK_ASSERT(NULL != reg);
+
+ if ((uintptr_t)uaddr + size < (uintptr_t)uaddr) /* overflow check */
+ {
+ return NULL;
+ }
+
+ OSK_DLIST_FOREACH(&reg->map_list, struct kbase_cpu_mapping, link, map)
+ {
+ if (map->uaddr <= uaddr &&
+ ((uintptr_t)map->uaddr + (map->nr_pages << OSK_PAGE_SHIFT)) >= ((uintptr_t)uaddr + size))
+ {
+ return map;
+ }
+ }
+
+ return NULL;
+}
+KBASE_EXPORT_TEST_API(kbasep_find_enclosing_cpu_mapping_of_region)
+
+static void kbase_dump_mappings(struct kbase_va_region *reg)
+{
+ struct kbase_cpu_mapping *map;
+
+ OSK_ASSERT(NULL != reg);
+
+ OSK_DLIST_FOREACH(&reg->map_list, struct kbase_cpu_mapping, link, map)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MEM, "uaddr %p nr_pages %d page_off %016llx vma %p\n",
+ map->uaddr, map->nr_pages,
+ map->page_off, map->private);
+ }
+}
+
+/**
+ * @brief Delete a mapping keyed with ptr in region reg
+ */
+mali_error kbase_cpu_free_mapping(struct kbase_va_region *reg, const void *ptr)
+{
+ struct kbase_cpu_mapping *map;
+ mali_error err = MALI_ERROR_NONE;
+ OSK_ASSERT(NULL != reg);
+ map = kbase_find_cpu_mapping(reg, ptr);
+ if (!map)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MEM, "Freeing unknown mapping %p in region %p\n", ptr, (void*)reg);
+ kbase_dump_mappings(reg);
+ err = MALI_ERROR_FUNCTION_FAILED;
+ goto out;
+ }
+
+ OSK_DLIST_REMOVE(&reg->map_list, map, link);
+ osk_free(map);
+out:
+ return err;
+}
+KBASE_EXPORT_TEST_API(kbase_cpu_free_mapping)
+
+struct kbase_cpu_mapping *kbasep_find_enclosing_cpu_mapping(
+ struct kbase_context *kctx,
+ mali_addr64 gpu_addr,
+ osk_virt_addr uaddr,
+ size_t size )
+{
+ struct kbase_cpu_mapping *map = NULL;
+ const struct kbase_va_region *reg;
+
+ OSKP_ASSERT( kctx != NULL );
+
+ kbase_gpu_vm_lock(kctx);
+
+ reg = kbase_region_lookup( kctx, gpu_addr );
+ if ( NULL != reg )
+ {
+ map = kbasep_find_enclosing_cpu_mapping_of_region( reg, uaddr, size);
+ }
+
+ kbase_gpu_vm_unlock(kctx);
+
+ return map;
+}
+KBASE_EXPORT_TEST_API(kbasep_find_enclosing_cpu_mapping)
+
+static mali_error kbase_do_syncset(struct kbase_context *kctx, base_syncset *set,
+ osk_sync_kmem_fn sync_fn)
+{
+ mali_error err = MALI_ERROR_NONE;
+ struct basep_syncset *sset = &set->basep_sset;
+ struct kbase_va_region *reg;
+ struct kbase_cpu_mapping *map;
+ osk_phy_addr *pa;
+ u64 page_off, page_count, size_in_pages;
+ osk_virt_addr start;
+ size_t size;
+ u64 i;
+ u32 offset_within_page;
+ osk_phy_addr base_phy_addr = 0;
+ osk_virt_addr base_virt_addr = 0;
+ size_t area_size = 0;
+
+ kbase_os_mem_map_lock(kctx);
+
+ kbase_gpu_vm_lock(kctx);
+
+ /* find the region where the virtual address is contained */
+ reg = kbase_region_lookup(kctx, sset->mem_handle);
+ if (!reg)
+ {
+ err = MALI_ERROR_FUNCTION_FAILED;
+ goto out_unlock;
+ }
+
+ if (!(reg->flags & KBASE_REG_CPU_CACHED))
+ {
+ goto out_unlock;
+ }
+
+ start = (osk_virt_addr)(uintptr_t)sset->user_addr;
+ size = sset->size;
+
+ map = kbasep_find_enclosing_cpu_mapping_of_region(reg, start, size);
+ if (!map)
+ {
+ err = MALI_ERROR_FUNCTION_FAILED;
+ goto out_unlock;
+ }
+
+ offset_within_page = (uintptr_t)start & (OSK_PAGE_SIZE - 1);
+ size_in_pages = (size + offset_within_page + (OSK_PAGE_SIZE - 1)) & OSK_PAGE_MASK;
+ page_off = map->page_off + (((uintptr_t)start - (uintptr_t)map->uaddr) >> OSK_PAGE_SHIFT);
+ page_count = (size_in_pages >> OSK_PAGE_SHIFT);
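+ /* NOTE: despite its name, size_in_pages holds the page-rounded byte
+ * count; the shift above converts it to a whole-page count. */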
+ pa = kbase_get_phy_pages(reg);
+
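+ /* Coalesce physically- and virtually-contiguous pages into as few
+ * sync_fn() calls as possible: a pending run is flushed whenever
+ * contiguity breaks, and the final run is flushed after the loop. */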
+ for (i = 0; i < page_count; i++)
+ {
+ u32 offset = (uintptr_t)start & (OSK_PAGE_SIZE - 1);
+ osk_phy_addr paddr = pa[page_off + i] + offset;
+ size_t sz = OSK_MIN(((size_t)OSK_PAGE_SIZE - offset), size);
+
+ if (paddr == base_phy_addr + area_size &&
+ start == (osk_virt_addr)((uintptr_t)base_virt_addr + area_size))
+ {
+ area_size += sz;
+ }
+ else if (area_size > 0)
+ {
+ sync_fn(base_phy_addr, base_virt_addr, area_size);
+ area_size = 0;
+ }
+
+ if (area_size == 0)
+ {
+ base_phy_addr = paddr;
+ base_virt_addr = start;
+ area_size = sz;
+ }
+
+ start = (osk_virt_addr)((uintptr_t)start + sz);
+ size -= sz;
+ }
+
+ if (area_size > 0)
+ {
+ sync_fn(base_phy_addr, base_virt_addr, area_size);
+ }
+
+ OSK_ASSERT(size == 0);
+
+out_unlock:
+ kbase_gpu_vm_unlock(kctx);
+ kbase_os_mem_map_unlock(kctx);
+ return err;
+}
+
+static mali_error kbase_sync_to_memory(kbase_context *kctx, base_syncset *syncset)
+{
+ return kbase_do_syncset(kctx, syncset, osk_sync_to_memory);
+}
+
+static mali_error kbase_sync_to_cpu(kbase_context *kctx, base_syncset *syncset)
+{
+ return kbase_do_syncset(kctx, syncset, osk_sync_to_cpu);
+}
+
+mali_error kbase_sync_now(kbase_context *kctx, base_syncset *syncset)
+{
+ mali_error err = MALI_ERROR_FUNCTION_FAILED;
+ struct basep_syncset *sset;
+
+ OSK_ASSERT( NULL != kctx );
+ OSK_ASSERT( NULL != syncset );
+
+ sset = &syncset->basep_sset;
+
+ switch(sset->type)
+ {
+ case BASE_SYNCSET_OP_MSYNC:
+ err = kbase_sync_to_memory(kctx, syncset);
+ break;
+
+ case BASE_SYNCSET_OP_CSYNC:
+ err = kbase_sync_to_cpu(kctx, syncset);
+ break;
+
+ default:
+ OSK_PRINT_WARN(OSK_BASE_MEM, "Unknown msync op %d\n", sset->type);
+ break;
+ }
+
+ return err;
+}
+KBASE_EXPORT_TEST_API(kbase_sync_now)
+
+void kbase_pre_job_sync(kbase_context *kctx, base_syncset *syncsets, u32 nr)
+{
+ u32 i;
+
+ OSK_ASSERT(NULL != kctx);
+ OSK_ASSERT(NULL != syncsets);
+
+ for (i = 0; i < nr; i++)
+ {
+ u8 type = syncsets[i].basep_sset.type;
+
+ switch(type)
+ {
+ case BASE_SYNCSET_OP_MSYNC:
+ kbase_sync_to_memory(kctx, &syncsets[i]);
+ break;
+
+ case BASE_SYNCSET_OP_CSYNC:
+ continue;
+
+ default:
+ OSK_PRINT_WARN(OSK_BASE_MEM, "Unknown msync op %d\n", type);
+ break;
+ }
+ }
+}
+KBASE_EXPORT_TEST_API(kbase_pre_job_sync)
+
+void kbase_post_job_sync(kbase_context *kctx, base_syncset *syncsets, u32 nr)
+{
+ u32 i;
+
+ OSK_ASSERT(NULL != kctx);
+ OSK_ASSERT(NULL != syncsets);
+
+ for (i = 0; i < nr; i++)
+ {
+ struct basep_syncset *sset = &syncsets[i].basep_sset;
+ switch(sset->type)
+ {
+ case BASE_SYNCSET_OP_CSYNC:
+ kbase_sync_to_cpu(kctx, &syncsets[i]);
+ break;
+
+ case BASE_SYNCSET_OP_MSYNC:
+ continue;
+
+ default:
+ OSK_PRINT_WARN(OSK_BASE_MEM, "Unknown msync op %d\n", sset->type);
+ break;
+ }
+ }
+}
+KBASE_EXPORT_TEST_API(kbase_post_job_sync)
+
+/* vm lock must be held */
+mali_error kbase_mem_free_region(struct kbase_context *kctx,
+ kbase_va_region *reg)
+{
+ mali_error err;
+ OSK_ASSERT(NULL != kctx);
+ OSK_ASSERT(NULL != reg);
+
+	if (!OSK_DLIST_IS_EMPTY(&reg->map_list))
+ {
+ /*
+ * We still have mappings, can't free
+ * memory. This also handles the race
+ * condition with the unmap code (see
+ * kbase_cpu_vm_close()).
+ */
+ OSK_PRINT_WARN(OSK_BASE_MEM, "Pending CPU mappings, not freeing memory!\n");
+ err = MALI_ERROR_FUNCTION_FAILED;
+ goto out;
+ }
+
+ err = kbase_gpu_munmap(kctx, reg);
+ if (err)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MEM, "Could not unmap from the GPU...\n");
+ goto out;
+ }
+
+ if (kbase_hw_has_issue(kctx->kbdev, BASE_HW_ISSUE_6367))
+ {
+ /* Wait for GPU to flush write buffer before freeing physical pages */
+ kbase_wait_write_flush(kctx);
+ }
+
+ /* This will also free the physical pages */
+ kbase_free_alloced_region(reg);
+
+out:
+ return err;
+}
+KBASE_EXPORT_TEST_API(kbase_mem_free_region)
+
+/**
+ * @brief Free the region from the GPU and unregister it.
+ *
+ * This function implements the free operation on a memory segment.
+ * It will loudly fail if called with outstanding mappings.
+ */
+mali_error kbase_mem_free(struct kbase_context *kctx, mali_addr64 gpu_addr)
+{
+ mali_error err = MALI_ERROR_NONE;
+ struct kbase_va_region *reg;
+
+ OSK_ASSERT(kctx != NULL);
+
+ if (0 == gpu_addr)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MEM, "gpu_addr 0 is reserved for the ringbuffer and it's an error to try to free it using kbase_mem_free\n");
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+ kbase_gpu_vm_lock(kctx);
+
+ if (gpu_addr < OSK_PAGE_SIZE)
+ {
+ /* an OS specific cookie, ask the OS specific code to validate it */
+ reg = kbase_lookup_cookie(kctx, gpu_addr);
+ if (!reg)
+ {
+ err = MALI_ERROR_FUNCTION_FAILED;
+ goto out_unlock;
+ }
+
+ /* ask to unlink the cookie as we'll free it */
+ kbase_unlink_cookie(kctx, gpu_addr, reg);
+
+ kbase_free_alloced_region(reg);
+ }
+ else
+ {
+ /* A real GPU va */
+
+ /* Validate the region */
+ reg = kbase_validate_region(kctx, gpu_addr);
+ if (!reg)
+ {
+			OSK_ASSERT_MSG(0, "Trying to free nonexistent region 0x%llX\n", gpu_addr);
+ err = MALI_ERROR_FUNCTION_FAILED;
+ goto out_unlock;
+ }
+
+ err = kbase_mem_free_region(kctx, reg);
+ }
+
+out_unlock:
+ kbase_gpu_vm_unlock(kctx);
+ return err;
+}
+KBASE_EXPORT_TEST_API(kbase_mem_free)
+
+void kbase_update_region_flags(struct kbase_va_region *reg, u32 flags, mali_bool is_growable)
+{
+ OSK_ASSERT(NULL != reg);
+ OSK_ASSERT((flags & ~((1 << BASE_MEM_FLAGS_NR_BITS) - 1)) == 0);
+
+ reg->flags |= kbase_cache_enabled(flags, reg->nr_pages);
+
+ if ((flags & BASE_MEM_GROW_ON_GPF) || is_growable)
+ {
+ reg->flags |= KBASE_REG_GROWABLE;
+
+ if (flags & BASE_MEM_GROW_ON_GPF)
+ {
+ reg->flags |= KBASE_REG_PF_GROW;
+ }
+ }
+ else
+ {
+ /* As this region is not growable but the default is growable, we
+ explicitly clear the growable flag. */
+ reg->flags &= ~KBASE_REG_GROWABLE;
+ }
+
+ if (flags & BASE_MEM_PROT_CPU_WR)
+ {
+ reg->flags |= KBASE_REG_CPU_WR;
+ }
+
+ if (flags & BASE_MEM_PROT_CPU_RD)
+ {
+ reg->flags |= KBASE_REG_CPU_RD;
+ }
+
+ if (flags & BASE_MEM_PROT_GPU_WR)
+ {
+ reg->flags |= KBASE_REG_GPU_WR;
+ }
+
+ if (flags & BASE_MEM_PROT_GPU_RD)
+ {
+ reg->flags |= KBASE_REG_GPU_RD;
+ }
+
+ if (0 == (flags & BASE_MEM_PROT_GPU_EX))
+ {
+ reg->flags |= KBASE_REG_GPU_NX;
+ }
+
+ if (flags & BASE_MEM_COHERENT_LOCAL)
+ {
+ reg->flags |= KBASE_REG_SHARE_IN;
+ }
+ else if (flags & BASE_MEM_COHERENT_SYSTEM)
+ {
+ reg->flags |= KBASE_REG_SHARE_BOTH;
+ }
+}
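+
+/* Example (illustrative): for flags == (BASE_MEM_PROT_CPU_RD |
+ * BASE_MEM_PROT_GPU_WR) the region gains KBASE_REG_CPU_RD and
+ * KBASE_REG_GPU_WR, plus KBASE_REG_GPU_NX because BASE_MEM_PROT_GPU_EX
+ * was not requested. */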
+
+static void kbase_free_phy_pages_helper(kbase_va_region * reg, u32 nr_pages_to_free)
+{
+ osk_phy_addr *page_array;
+
+ u32 nr_pages;
+
+ OSK_ASSERT(reg);
+ OSK_ASSERT(reg->kctx);
+
+ /* Can't call this on TB buffers */
+ OSK_ASSERT(0 == (reg->flags & KBASE_REG_IS_TB));
+ /* can't be called on imported types */
+ OSK_ASSERT(BASE_TMEM_IMPORT_TYPE_INVALID == reg->imported_type);
+ /* Free of too many pages attempted! */
+ OSK_ASSERT(reg->nr_alloc_pages >= nr_pages_to_free);
+ /* A complete free is required if not marked as growable */
+ OSK_ASSERT((reg->flags & KBASE_REG_GROWABLE) || (reg->nr_alloc_pages == nr_pages_to_free));
+
+ if (0 == nr_pages_to_free)
+ {
+ /* early out if nothing to free */
+ return;
+ }
+
+ nr_pages = nr_pages_to_free;
+
+ page_array = kbase_get_phy_pages(reg);
+
+ OSK_ASSERT(nr_pages_to_free == 0 || page_array != NULL);
+
+ while (nr_pages)
+ {
+ kbase_mem_commit * commit;
+ commit = reg->last_commit;
+
+ if (nr_pages >= commit->nr_pages)
+ {
+ /* free the whole commit */
+ kbase_phy_pages_free(reg->kctx->kbdev, commit->allocator, commit->nr_pages,
+ page_array + reg->nr_alloc_pages - commit->nr_pages);
+
+ /* update page counts */
+ nr_pages -= commit->nr_pages;
+ reg->nr_alloc_pages -= commit->nr_pages;
+
+ /* free the node (unless it's the root node) */
+			if (commit != &reg->root_commit)
+ {
+ reg->last_commit = commit->prev;
+ osk_free(commit);
+ }
+ else
+ {
+ /* mark the root node as having no commit */
+ commit->nr_pages = 0;
+ OSK_ASSERT(nr_pages == 0);
+ OSK_ASSERT(reg->nr_alloc_pages == 0);
+ break;
+ }
+ }
+ else
+ {
+ /* partial free of this commit */
+ kbase_phy_pages_free(reg->kctx->kbdev, commit->allocator, nr_pages,
+ page_array + reg->nr_alloc_pages - nr_pages);
+ commit->nr_pages -= nr_pages;
+ reg->nr_alloc_pages -= nr_pages;
+ break; /* end the loop */
+ }
+ }
+
+	kbase_mem_usage_release_pages(&reg->kctx->usage, nr_pages_to_free);
+}
+KBASE_EXPORT_TEST_API(kbase_update_region_flags)
+
+u32 kbase_phy_pages_alloc(struct kbase_device *kbdev, osk_phy_allocator *allocator, u32 nr_pages,
+ osk_phy_addr *pages)
+{
+ OSK_ASSERT(kbdev != NULL);
+ OSK_ASSERT(allocator != NULL);
+ OSK_ASSERT(pages != NULL);
+
+ if (allocator->type == OSKP_PHY_ALLOCATOR_OS)
+ {
+ u32 pages_allocated;
+
+		/* Claim pages from the OS shared quota. Note that shared OS memory may be used by different allocators; that's
+		 * why the page request is made here and not on a per-allocator basis */
+ if (MALI_ERROR_NONE != kbase_mem_usage_request_pages(&kbdev->memdev.usage, nr_pages))
+ {
+ return 0;
+ }
+
+ pages_allocated = osk_phy_pages_alloc(allocator, nr_pages, pages);
+
+ if (pages_allocated < nr_pages)
+ {
+ kbase_mem_usage_release_pages(&kbdev->memdev.usage, nr_pages - pages_allocated);
+ }
+ return pages_allocated;
+ }
+ else
+ {
+ /* Dedicated memory is tracked per allocator. Memory limits are checked in osk_phy_pages_alloc function */
+ return osk_phy_pages_alloc(allocator, nr_pages, pages);
+ }
+}
+KBASE_EXPORT_TEST_API(kbase_phy_pages_alloc)
+
+void kbase_phy_pages_free(struct kbase_device *kbdev, osk_phy_allocator *allocator, u32 nr_pages, osk_phy_addr *pages)
+{
+ OSK_ASSERT(kbdev != NULL);
+ OSK_ASSERT(allocator != NULL);
+ OSK_ASSERT(pages != NULL);
+
+ osk_phy_pages_free(allocator, nr_pages, pages);
+
+ if (allocator->type == OSKP_PHY_ALLOCATOR_OS)
+ {
+ /* release pages from OS shared quota */
+ kbase_mem_usage_release_pages(&kbdev->memdev.usage, nr_pages);
+ }
+}
+KBASE_EXPORT_TEST_API(kbase_phy_pages_free)
+
+
+mali_error kbase_alloc_phy_pages_helper(struct kbase_va_region *reg, u32 nr_pages_requested)
+{
+ kbase_phys_allocator_iterator it;
+ osk_phy_addr *page_array;
+ u32 nr_pages_left;
+ u32 num_pages_on_start;
+ u32 pages_committed;
+ kbase_phys_allocator_order order;
+ u32 performance_flags;
+
+ OSK_ASSERT(reg);
+ OSK_ASSERT(reg->kctx);
+
+ /* Can't call this on TB or UMP buffers */
+ OSK_ASSERT(0 == (reg->flags & KBASE_REG_IS_TB));
+ /* can't be called on imported types */
+ OSK_ASSERT(BASE_TMEM_IMPORT_TYPE_INVALID == reg->imported_type);
+	/* Growth of too many pages attempted! (written this way to catch overflow) */
+ OSK_ASSERT(reg->nr_pages - reg->nr_alloc_pages >= nr_pages_requested);
+ /* A complete commit is required if not marked as growable */
+ OSK_ASSERT((reg->flags & KBASE_REG_GROWABLE) || (reg->nr_pages == nr_pages_requested));
+
+ if (0 == nr_pages_requested)
+ {
+ /* early out if nothing to do */
+ return MALI_ERROR_NONE;
+ }
+
+	/* track the number of pages so we can roll back on alloc fail */
+ num_pages_on_start = reg->nr_alloc_pages;
+ nr_pages_left = nr_pages_requested;
+
+ page_array = kbase_get_phy_pages(reg);
+ OSK_ASSERT(page_array);
+
+ /* claim the pages from our per-context quota */
+	if (MALI_ERROR_NONE != kbase_mem_usage_request_pages(&reg->kctx->usage, nr_pages_requested))
+ {
+ return MALI_ERROR_OUT_OF_MEMORY;
+ }
+
+ /* First try to extend the last commit */
+ if (reg->last_commit->allocator)
+ {
+ pages_committed = kbase_phy_pages_alloc(reg->kctx->kbdev, reg->last_commit->allocator, nr_pages_left,
+ page_array + reg->nr_alloc_pages);
+ reg->last_commit->nr_pages += pages_committed;
+ reg->nr_alloc_pages += pages_committed;
+ nr_pages_left -= pages_committed;
+
+ if (!nr_pages_left)
+ {
+ return MALI_ERROR_NONE;
+ }
+ }
+
+ performance_flags = reg->flags & (KBASE_REG_CPU_CACHED | KBASE_REG_GPU_CACHED);
+
+ if (performance_flags == 0)
+ {
+ order = ALLOCATOR_ORDER_CONFIG;
+ }
+ else if (performance_flags == KBASE_REG_CPU_CACHED)
+ {
+ order = ALLOCATOR_ORDER_CPU_PERFORMANCE;
+ }
+ else if (performance_flags == KBASE_REG_GPU_CACHED)
+ {
+ order = ALLOCATOR_ORDER_GPU_PERFORMANCE;
+ }
+ else
+ {
+ order = ALLOCATOR_ORDER_CPU_GPU_PERFORMANCE;
+ }
+
+	/* If not fully committed (or no previous allocator) we need to ask all the allocators */
+
+ /* initialize the iterator we use to loop over the memory providers */
+ if (MALI_ERROR_NONE == kbase_phys_it_init(reg->kctx->kbdev, &it, order))
+ {
+ for (;nr_pages_left && kbase_phys_it_deref(&it); kbase_phys_it_deref_and_advance(&it))
+ {
+ pages_committed = kbase_phy_pages_alloc(reg->kctx->kbdev, kbase_phys_it_deref(&it), nr_pages_left,
+ page_array + reg->nr_alloc_pages);
+
+ OSK_ASSERT(pages_committed <= nr_pages_left);
+
+ if (pages_committed)
+ {
+ /* got some pages, track them */
+ kbase_mem_commit * commit;
+
+ if (reg->last_commit->allocator)
+ {
+ commit = (kbase_mem_commit*)osk_calloc(sizeof(*commit));
+ if (commit == NULL)
+ {
+ kbase_phy_pages_free(reg->kctx->kbdev, kbase_phys_it_deref(&it), pages_committed,
+ page_array + reg->nr_alloc_pages);
+ break;
+ }
+ commit->prev = reg->last_commit;
+ }
+ else
+ {
+ commit = reg->last_commit;
+ }
+
+ commit->allocator = kbase_phys_it_deref(&it);
+ commit->nr_pages = pages_committed;
+
+ reg->last_commit = commit;
+ reg->nr_alloc_pages += pages_committed;
+
+ nr_pages_left -= pages_committed;
+ }
+ }
+
+ /* no need for the iterator any more */
+ kbase_phys_it_term(&it);
+
+ if (nr_pages_left == 0)
+ {
+ return MALI_ERROR_NONE;
+ }
+ }
+
+ /* failed to allocate enough memory, roll back */
+ if (reg->nr_alloc_pages != num_pages_on_start)
+ {
+		/* we need the auxiliary variable below since kbase_free_phy_pages_helper updates reg->nr_alloc_pages */
+ u32 track_nr_alloc_pages = reg->nr_alloc_pages;
+ /* kbase_free_phy_pages_helper implicitly calls kbase_mem_usage_release_pages */
+ kbase_free_phy_pages_helper(reg, reg->nr_alloc_pages - num_pages_on_start);
+ /* Release the remaining pages */
+		kbase_mem_usage_release_pages(&reg->kctx->usage,
+ nr_pages_requested - (track_nr_alloc_pages - num_pages_on_start));
+ }
+ else
+ {
+ kbase_mem_usage_release_pages(®->kctx->usage, nr_pages_requested);
+ }
+ return MALI_ERROR_OUT_OF_MEMORY;
+}
+
+
+/* Frees all allocated pages of a region */
+void kbase_free_phy_pages(struct kbase_va_region *reg)
+{
+ osk_phy_addr *page_array;
+ OSK_ASSERT(NULL != reg);
+
+ page_array = kbase_get_phy_pages(reg);
+
+ if (reg->imported_type != BASE_TMEM_IMPORT_TYPE_INVALID)
+ {
+ switch (reg->imported_type)
+ {
+#if MALI_USE_UMP
+ case BASE_TMEM_IMPORT_TYPE_UMP:
+ {
+ ump_dd_handle umph;
+ umph = (ump_dd_handle)reg->imported_metadata.ump_handle;
+ ump_dd_release(umph);
+ break;
+ }
+#endif /* MALI_USE_UMP */
+#ifdef CONFIG_DMA_SHARED_BUFFER
+ case BASE_TMEM_IMPORT_TYPE_UMM:
+ {
+ dma_buf_detach(reg->imported_metadata.umm.dma_buf, reg->imported_metadata.umm.dma_attachment);
+ dma_buf_put(reg->imported_metadata.umm.dma_buf);
+ break;
+ }
+#endif /* CONFIG_DMA_SHARED_BUFFER */
+ default:
+ /* unsupported types should never reach this point */
+ OSK_ASSERT(0);
+ break;
+ }
+ reg->imported_type = BASE_TMEM_IMPORT_TYPE_INVALID;
+ }
+ else
+ {
+ if (reg->flags & KBASE_REG_IS_TB)
+ {
+ /* trace buffer being freed. Disconnect, then use osk_vfree */
+ /* save tb so we can free it after the disconnect call */
+ void * tb;
+ tb = reg->kctx->jctx.tb;
+ kbase_device_trace_buffer_uninstall(reg->kctx);
+ osk_vfree(tb);
+ }
+ else if (reg->flags & KBASE_REG_IS_RB)
+ {
+ /* nothing to do */
+ }
+ else
+ {
+ kbase_free_phy_pages_helper(reg, reg->nr_alloc_pages);
+ }
+ }
+
+ kbase_set_phy_pages(reg, NULL);
+ osk_vfree(page_array);
+}
+KBASE_EXPORT_TEST_API(kbase_free_phy_pages)
+
+int kbase_alloc_phy_pages(struct kbase_va_region *reg, u32 vsize, u32 size)
+{
+ osk_phy_addr *page_array;
+
+ OSK_ASSERT( NULL != reg );
+ OSK_ASSERT( vsize > 0 );
+
+ /* validate user provided arguments */
+ if (size > vsize || vsize > reg->nr_pages)
+ {
+ goto out_term;
+ }
+
+ /* Prevent vsize*sizeof from wrapping around.
+ * For instance, if vsize is 2**29+1, we'll allocate 1 byte and the alloc won't fail.
+ */
+ if ((size_t)vsize > ((size_t)-1 / sizeof(*page_array)))
+ {
+ goto out_term;
+ }
+
+ page_array = osk_vmalloc(vsize * sizeof(*page_array));
+ if (!page_array)
+ {
+ goto out_term;
+ }
+
+ kbase_set_phy_pages(reg, page_array);
+
+ if (MALI_ERROR_NONE != kbase_alloc_phy_pages_helper(reg, size))
+ {
+ goto out_free;
+ }
+
+ return 0;
+
+out_free:
+ osk_vfree(page_array);
+out_term:
+ return -1;
+}
+KBASE_EXPORT_TEST_API(kbase_alloc_phy_pages)
+
+/** @brief Round to +inf a tmem growable delta in pages */
+STATIC mali_bool kbasep_tmem_growable_round_delta( kbase_device *kbdev, s32 *delta_ptr )
+{
+ s32 delta;
+
+ OSK_ASSERT( delta_ptr != NULL );
+
+ delta = *delta_ptr;
+
+ if (delta >= 0)
+ {
+ u32 new_delta_unsigned = kbasep_tmem_growable_round_size( kbdev, (u32)delta );
+ if ( new_delta_unsigned > S32_MAX )
+ {
+ /* Can't support a delta of this size */
+ return MALI_FALSE;
+ }
+
+ *delta_ptr = (s32)new_delta_unsigned;
+ }
+ else
+ {
+ u32 new_delta_unsigned = (u32)-delta;
+ /* Round down */
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_9630))
+ {
+ new_delta_unsigned = new_delta_unsigned & ~(KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_HW_ISSUE_9630-1);
+ }
+ else if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8316))
+ {
+ new_delta_unsigned = new_delta_unsigned & ~(KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_HW_ISSUE_8316-1);
+ }
+ else
+ {
+ new_delta_unsigned = new_delta_unsigned & ~(KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES-1);
+ }
+
+ *delta_ptr = (s32)-new_delta_unsigned;
+ }
+
+ return MALI_TRUE;
+}
+
+mali_bool kbase_check_alloc_flags(u32 flags)
+{
+	/* At least one flag must be set */
+ if (flags == 0)
+ {
+ return MALI_FALSE;
+ }
+ /* Either the GPU or CPU must be reading from the allocated memory */
+ if ((flags & (BASE_MEM_PROT_CPU_RD | BASE_MEM_PROT_GPU_RD)) == 0)
+ {
+ return MALI_FALSE;
+ }
+ /* Either the GPU or CPU must be writing to the allocated memory */
+ if ((flags & (BASE_MEM_PROT_CPU_WR | BASE_MEM_PROT_GPU_WR)) == 0)
+ {
+ return MALI_FALSE;
+ }
+ /* GPU cannot be writing to GPU executable memory and cannot grow the memory on page fault. */
+ if ((flags & BASE_MEM_PROT_GPU_EX) && (flags & (BASE_MEM_PROT_GPU_WR | BASE_MEM_GROW_ON_GPF)))
+ {
+ return MALI_FALSE;
+ }
+	/* The GPU should have at least read or write access, otherwise there is
+	no reason for allocating pmem/tmem. */
+ if ((flags & (BASE_MEM_PROT_GPU_RD | BASE_MEM_PROT_GPU_WR)) == 0)
+ {
+ return MALI_FALSE;
+ }
+
+ return MALI_TRUE;
+}
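+
+/* Example (illustrative): a shader-code allocation written by the CPU and
+ * executed by the GPU passes every check above:
+ *
+ *   kbase_check_alloc_flags(BASE_MEM_PROT_CPU_WR | BASE_MEM_PROT_GPU_RD |
+ *                           BASE_MEM_PROT_GPU_EX) == MALI_TRUE
+ *
+ * whereas additionally setting BASE_MEM_PROT_GPU_WR would fail the
+ * executable-memory check and return MALI_FALSE. */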
+
+struct kbase_va_region *kbase_tmem_alloc(struct kbase_context *kctx,
+ u32 vsize, u32 psize,
+ u32 extent, u32 flags, mali_bool is_growable)
+{
+ struct kbase_va_region *reg;
+ mali_error err;
+ u32 align = 1;
+ u32 vsize_rounded = vsize;
+ u32 psize_rounded = psize;
+ u32 extent_rounded = extent;
+ u32 zone = KBASE_REG_ZONE_TMEM;
+
+ if ( 0 == vsize )
+ {
+ goto out1;
+ }
+
+ OSK_ASSERT(NULL != kctx);
+
+ if (!kbase_check_alloc_flags(flags))
+ {
+ goto out1;
+ }
+
+ if ((flags & BASE_MEM_GROW_ON_GPF) != MALI_FALSE)
+ {
+ /* Round up the sizes for growable on GPU page fault memory */
+ vsize_rounded = kbasep_tmem_growable_round_size( kctx->kbdev, vsize );
+ psize_rounded = kbasep_tmem_growable_round_size( kctx->kbdev, psize );
+ extent_rounded = kbasep_tmem_growable_round_size( kctx->kbdev, extent );
+
+ if ( vsize_rounded < vsize || psize_rounded < psize || extent_rounded < extent )
+ {
+ /* values too large to round */
+ return NULL;
+ }
+ }
+
+ if (flags & BASE_MEM_PROT_GPU_EX)
+ {
+ zone = KBASE_REG_ZONE_EXEC;
+ }
+
+ if ( extent > 0 && !(flags & BASE_MEM_GROW_ON_GPF))
+ {
+ OSK_PRINT_WARN(OSK_BASE_MEM, "BASE_MEM_GROW_ON_GPF flag not set when extent is greater than 0");
+ goto out1;
+ }
+
+ reg = kbase_alloc_free_region(kctx, 0, vsize_rounded, zone);
+ if (!reg)
+ {
+ goto out1;
+ }
+
+ reg->flags &= ~KBASE_REG_FREE;
+
+ kbase_update_region_flags(reg, flags, is_growable);
+
+ if (kbase_alloc_phy_pages(reg, vsize_rounded, psize_rounded))
+ {
+ goto out2;
+ }
+
+ reg->nr_alloc_pages = psize_rounded;
+ reg->extent = extent_rounded;
+
+ kbase_gpu_vm_lock(kctx);
+ err = kbase_gpu_mmap(kctx, reg, 0, vsize_rounded, align);
+ kbase_gpu_vm_unlock(kctx);
+
+ if (err)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MEM, "kbase_gpu_mmap failed\n");
+ goto out3;
+ }
+
+ return reg;
+
+out3:
+ kbase_free_phy_pages(reg);
+out2:
+ osk_free(reg);
+out1:
+ return NULL;
+}
+KBASE_EXPORT_TEST_API(kbase_tmem_alloc)
+
+mali_error kbase_tmem_resize(struct kbase_context *kctx, mali_addr64 gpu_addr, s32 delta, u32 *size, base_backing_threshold_status * failure_reason)
+{
+ kbase_va_region *reg;
+ mali_error ret = MALI_ERROR_FUNCTION_FAILED;
+#if !( (MALI_INFINITE_CACHE != 0) && !MALI_BACKEND_KERNEL )
+ /* tmem is already mapped to max_pages, so no resizing needed */
+ osk_phy_addr *phy_pages;
+#endif /* !( (MALI_INFINITE_CACHE != 0) && !MALI_BACKEND_KERNEL ) */
+ OSK_ASSERT(NULL != kctx);
+ OSK_ASSERT(size);
+ OSK_ASSERT(failure_reason);
+ OSK_ASSERT(gpu_addr != 0);
+
+ kbase_gpu_vm_lock(kctx);
+
+ /* Validate the region */
+ reg = kbase_validate_region(kctx, gpu_addr);
+ if (!reg || (reg->flags & KBASE_REG_FREE) )
+ {
+ /* not a valid region or is free memory*/
+ *failure_reason = BASE_BACKING_THRESHOLD_ERROR_INVALID_ARGUMENTS;
+ goto out_unlock;
+ }
+
+#if !( (MALI_INFINITE_CACHE != 0) && !MALI_BACKEND_KERNEL )
+ /* tmem is already mapped to max_pages, don't try to resize */
+
+ if (!( (KBASE_REG_ZONE_MASK & reg->flags) == KBASE_REG_ZONE_TMEM ||
+ (KBASE_REG_ZONE_MASK & reg->flags) == KBASE_REG_ZONE_EXEC ) )
+ {
+ /* not a valid region - not tmem or exec region */
+ *failure_reason = BASE_BACKING_THRESHOLD_ERROR_INVALID_ARGUMENTS;
+ goto out_unlock;
+ }
+ if (0 == (reg->flags & KBASE_REG_GROWABLE))
+ {
+ /* not growable */
+ *failure_reason = BASE_BACKING_THRESHOLD_ERROR_NOT_GROWABLE;
+ goto out_unlock;
+ }
+
+	if ( (delta != 0) && !OSK_DLIST_IS_EMPTY(&reg->map_list))
+ {
+ /* We still have mappings */
+ *failure_reason = BASE_BACKING_THRESHOLD_ERROR_MAPPED;
+ goto out_unlock;
+ }
+
+ if ( reg->flags & KBASE_REG_PF_GROW )
+ {
+ /* Apply rounding to +inf on the delta, which may cause a negative delta to become zero */
+ if ( kbasep_tmem_growable_round_delta( kctx->kbdev, &delta ) == MALI_FALSE )
+ {
+ /* Can't round this big a delta */
+ *failure_reason = BASE_BACKING_THRESHOLD_ERROR_INVALID_ARGUMENTS;
+ goto out_unlock;
+ }
+ }
+
+ if (delta < 0 && (u32)-delta > reg->nr_alloc_pages)
+ {
+ /* Underflow */
+ *failure_reason = BASE_BACKING_THRESHOLD_ERROR_INVALID_ARGUMENTS;
+ goto out_unlock;
+ }
+ if (reg->nr_alloc_pages + delta > reg->nr_pages)
+ {
+ /* Would overflow the VA region */
+ *failure_reason = BASE_BACKING_THRESHOLD_ERROR_INVALID_ARGUMENTS;
+ goto out_unlock;
+ }
+
+ phy_pages = kbase_get_phy_pages(reg);
+
+ if (delta > 0)
+ {
+ mali_error err;
+
+ /* Allocate some more pages */
+ if (MALI_ERROR_NONE != kbase_alloc_phy_pages_helper(reg, delta))
+ {
+ *failure_reason = BASE_BACKING_THRESHOLD_ERROR_OOM;
+ goto out_unlock;
+ }
+ err = kbase_mmu_insert_pages(kctx, reg->start_pfn + reg->nr_alloc_pages - delta,
+ phy_pages + reg->nr_alloc_pages - delta,
+ delta, reg->flags);
+ if(MALI_ERROR_NONE != err)
+ {
+ kbase_free_phy_pages_helper(reg, delta);
+ *failure_reason = BASE_BACKING_THRESHOLD_ERROR_OOM;
+ goto out_unlock;
+ }
+ }
+ else if (delta < 0)
+ {
+ mali_error err;
+ /* Free some pages */
+
+ /* Get the absolute value of delta. Note that we have to add one before and after the negation to avoid
+ * overflowing when delta is INT_MIN */
+ u32 num_pages = (u32)(-(delta+1))+1;
+
+ err = kbase_mmu_teardown_pages(kctx, reg->start_pfn + reg->nr_alloc_pages - num_pages, num_pages);
+ if(MALI_ERROR_NONE != err)
+ {
+ *failure_reason = BASE_BACKING_THRESHOLD_ERROR_OOM;
+ goto out_unlock;
+ }
+
+ if (kbase_hw_has_issue(kctx->kbdev, BASE_HW_ISSUE_6367))
+ {
+ /* Wait for GPU to flush write buffer before freeing physical pages */
+ kbase_wait_write_flush(kctx);
+ }
+
+ kbase_free_phy_pages_helper(reg, num_pages);
+ }
+ /* else just a size query */
+
+#endif /* !( (MALI_INFINITE_CACHE != 0) && !MALI_BACKEND_KERNEL ) */
+
+ *size = reg->nr_alloc_pages;
+
+ ret = MALI_ERROR_NONE;
+
+out_unlock:
+ kbase_gpu_vm_unlock(kctx);
+ return ret;
+}
+KBASE_EXPORT_TEST_API(kbase_tmem_resize)
+
+#if MALI_USE_UMP
+
+static struct kbase_va_region *kbase_tmem_from_ump(struct kbase_context *kctx, ump_secure_id id, u64 * const pages)
+{
+ struct kbase_va_region *reg;
+ mali_error err;
+ ump_dd_handle umph;
+ u64 vsize;
+ u64 block_count;
+ const ump_dd_physical_block_64 * block_array;
+ osk_phy_addr *page_array;
+ u64 i, j;
+ int page = 0;
+ ump_alloc_flags ump_flags;
+ ump_alloc_flags cpu_flags;
+ ump_alloc_flags gpu_flags;
+
+ OSK_ASSERT(NULL != pages);
+
+ umph = ump_dd_from_secure_id(id);
+ if (UMP_DD_INVALID_MEMORY_HANDLE == umph)
+ {
+ return NULL;
+ }
+
+ ump_flags = ump_dd_allocation_flags_get(umph);
+ cpu_flags = (ump_flags >> UMP_DEVICE_CPU_SHIFT) & UMP_DEVICE_MASK;
+ gpu_flags = (ump_flags >> kctx->kbdev->memdev.ump_device_id) & UMP_DEVICE_MASK;
+
+ vsize = ump_dd_size_get_64(umph);
+ vsize >>= OSK_PAGE_SHIFT;
+
+ reg = kbase_alloc_free_region(kctx, 0, vsize, KBASE_REG_ZONE_TMEM);
+ if (!reg)
+ {
+ goto out1;
+ }
+
+ reg->flags &= ~KBASE_REG_FREE;
+ reg->flags |= KBASE_REG_GPU_NX; /* UMP is always No eXecute */
+ reg->flags &= ~KBASE_REG_GROWABLE; /* UMP cannot be grown */
+
+ reg->imported_type = BASE_TMEM_IMPORT_TYPE_UMP;
+
+ reg->imported_metadata.ump_handle = umph;
+
+ if ((cpu_flags & (UMP_HINT_DEVICE_RD|UMP_HINT_DEVICE_WR)) == (UMP_HINT_DEVICE_RD|UMP_HINT_DEVICE_WR))
+ {
+ reg->flags |= KBASE_REG_CPU_CACHED;
+ }
+
+ if (cpu_flags & UMP_PROT_DEVICE_WR)
+ {
+ reg->flags |= KBASE_REG_CPU_WR;
+ }
+
+ if (cpu_flags & UMP_PROT_DEVICE_RD)
+ {
+ reg->flags |= KBASE_REG_CPU_RD;
+ }
+
+
+ if ((gpu_flags & (UMP_HINT_DEVICE_RD|UMP_HINT_DEVICE_WR)) == (UMP_HINT_DEVICE_RD|UMP_HINT_DEVICE_WR))
+ {
+ reg->flags |= KBASE_REG_GPU_CACHED;
+ }
+
+ if (gpu_flags & UMP_PROT_DEVICE_WR)
+ {
+ reg->flags |= KBASE_REG_GPU_WR;
+ }
+
+ if (gpu_flags & UMP_PROT_DEVICE_RD)
+ {
+ reg->flags |= KBASE_REG_GPU_RD;
+ }
+
+ /* ump phys block query */
+ ump_dd_phys_blocks_get_64(umph, &block_count, &block_array);
+
+ page_array = osk_vmalloc(vsize * sizeof(*page_array));
+ if (!page_array)
+ {
+ goto out2;
+ }
+
+ for (i = 0; i < block_count; i++)
+ {
+ for (j = 0; j < (block_array[i].size >> OSK_PAGE_SHIFT); j++)
+ {
+ page_array[page] = block_array[i].addr + (j << OSK_PAGE_SHIFT);
+ page++;
+ }
+ }
+
+ kbase_set_phy_pages(reg, page_array);
+
+ reg->nr_alloc_pages = vsize;
+ reg->extent = vsize;
+
+ kbase_gpu_vm_lock(kctx);
+ err = kbase_gpu_mmap(kctx, reg, 0, vsize, 1/* no alignment */);
+ kbase_gpu_vm_unlock(kctx);
+
+ if (err)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MEM, "kbase_gpu_mmap failed\n");
+ goto out3;
+ }
+
+ *pages = vsize;
+
+ return reg;
+
+out3:
+ osk_vfree(page_array);
+out2:
+ osk_free(reg);
+out1:
+ ump_dd_release(umph);
+ return NULL;
+}
+
+#endif /* MALI_USE_UMP */
+
+#ifdef CONFIG_DMA_SHARED_BUFFER
+static struct kbase_va_region *kbase_tmem_from_umm(struct kbase_context *kctx, int fd, u64 * const pages)
+{
+ struct kbase_va_region *reg;
+ struct dma_buf * dma_buf;
+ struct dma_buf_attachment * dma_attachment;
+ osk_phy_addr *page_array;
+ unsigned long nr_pages;
+ mali_error err;
+
+ dma_buf = dma_buf_get(fd);
+ if (IS_ERR_OR_NULL(dma_buf))
+ goto no_buf;
+
+ dma_attachment = dma_buf_attach(dma_buf, kctx->kbdev->osdev.dev);
+ if (!dma_attachment)
+ goto no_attachment;
+
+ nr_pages = (dma_buf->size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+
+	reg = kbase_alloc_free_region(kctx, 0, nr_pages, KBASE_REG_ZONE_TMEM);
+ if (!reg)
+ goto no_region;
+
+ reg->flags &= ~KBASE_REG_FREE;
+ reg->flags |= KBASE_REG_GPU_NX; /* UMM is always No eXecute */
+ reg->flags &= ~KBASE_REG_GROWABLE; /* UMM cannot be grown */
+	reg->flags |= KBASE_REG_NO_CPU_MAP; /* UMM can't be mapped on the CPU. TODO (GP): remove this flag to allow CPU mapping */
+
+ reg->flags |= KBASE_REG_GPU_CACHED;
+
+ /* no read or write permission given on import, only on run do we give the right permissions */
+
+ reg->imported_type = BASE_TMEM_IMPORT_TYPE_UMM;
+
+ reg->imported_metadata.umm.st = NULL;
+ reg->imported_metadata.umm.dma_buf = dma_buf;
+ reg->imported_metadata.umm.dma_attachment = dma_attachment;
+ reg->imported_metadata.umm.current_mapping_usage_count = 0;
+
+ page_array = osk_vmalloc(nr_pages * sizeof(*page_array));
+ if (!page_array)
+ goto no_page_array;
+
+ OSK_MEMSET(page_array, 0, nr_pages * sizeof(*page_array));
+
+ kbase_set_phy_pages(reg, page_array);
+
+ reg->nr_alloc_pages = nr_pages;
+ reg->extent = nr_pages;
+
+ kbase_gpu_vm_lock(kctx);
+ err = kbase_add_va_region(kctx, reg, 0, nr_pages, 1);
+ kbase_gpu_vm_unlock(kctx);
+ if (err != MALI_ERROR_NONE)
+ goto no_addr_reserve;
+
+ *pages = nr_pages;
+
+ return reg;
+
+no_addr_reserve:
+ osk_vfree(page_array);
+no_page_array:
+ osk_free(reg);
+no_region:
+ dma_buf_detach(dma_buf, dma_attachment);
+no_attachment:
+ dma_buf_put(dma_buf);
+no_buf:
+ return NULL;
+}
+#endif /* CONFIG_DMA_SHARED_BUFFER */
+
+struct kbase_va_region *kbase_tmem_import(struct kbase_context *kctx, base_tmem_import_type type, int handle, u64 * const pages)
+{
+ switch (type)
+ {
+#if MALI_USE_UMP
+ case BASE_TMEM_IMPORT_TYPE_UMP:
+ {
+ ump_secure_id id;
+ id = (ump_secure_id)handle;
+ return kbase_tmem_from_ump(kctx, id, pages);
+ }
+#endif /* MALI_USE_UMP */
+#ifdef CONFIG_DMA_SHARED_BUFFER
+ case BASE_TMEM_IMPORT_TYPE_UMM:
+ return kbase_tmem_from_umm(kctx, handle, pages);
+#endif /* CONFIG_DMA_SHARED_BUFFER */
+ default:
+ return NULL;
+ }
+}
+
+/**
+ * @brief Acquire the per-context region list lock
+ */
+void kbase_gpu_vm_lock(struct kbase_context *kctx)
+{
+ OSK_ASSERT(kctx != NULL);
+ osk_mutex_lock(&kctx->reg_lock);
+}
+KBASE_EXPORT_TEST_API(kbase_gpu_vm_lock)
+
+/**
+ * @brief Release the per-context region list lock
+ */
+void kbase_gpu_vm_unlock(struct kbase_context *kctx)
+{
+ OSK_ASSERT(kctx != NULL);
+ osk_mutex_unlock(&kctx->reg_lock);
+}
+KBASE_EXPORT_TEST_API(kbase_gpu_vm_unlock)
+
+/* will be called during init time only */
+mali_error kbase_register_memory_regions(kbase_device * kbdev, const kbase_attribute *attributes)
+{
+ int total_regions;
+ int dedicated_regions;
+ int allocators_initialized;
+ osk_phy_allocator * allocs;
+ kbase_memory_performance shared_memory_performance;
+ kbasep_memory_region_performance *region_performance;
+ kbase_memory_resource *resource;
+ const kbase_attribute *current_attribute;
+ u32 max_shared_memory;
+ kbasep_mem_device * memdev;
+
+ OSK_ASSERT(kbdev);
+ OSK_ASSERT(attributes);
+
+ memdev = &kbdev->memdev;
+
+ /* Programming error to register memory after we've started using the iterator interface */
+#if MALI_DEBUG
+ OSK_ASSERT(memdev->allocators.it_bound == MALI_FALSE);
+#endif /* MALI_DEBUG */
+
+ max_shared_memory = (u32) kbasep_get_config_value(kbdev, attributes, KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_MAX);
+ shared_memory_performance =
+ (kbase_memory_performance)kbasep_get_config_value(kbdev, attributes, KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_PERF_GPU);
+ /* count dedicated_memory_regions */
+ dedicated_regions = kbasep_get_config_attribute_count_by_id(attributes, KBASE_CONFIG_ATTR_MEMORY_RESOURCE);
+
+ total_regions = dedicated_regions;
+ if (max_shared_memory > 0)
+ {
+ total_regions++;
+ }
+
+ if (total_regions == 0)
+ {
+ OSK_PRINT_ERROR(OSK_BASE_MEM, "No memory regions specified");
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ region_performance = osk_malloc(sizeof(kbasep_memory_region_performance) * total_regions);
+
+ if (region_performance == NULL)
+ {
+ goto out;
+ }
+
+ allocs = osk_malloc(sizeof(osk_phy_allocator) * total_regions);
+ if (allocs == NULL)
+ {
+ goto out_perf;
+ }
+
+ current_attribute = attributes;
+ allocators_initialized = 0;
+ while (current_attribute != NULL)
+ {
+ current_attribute = kbasep_get_next_attribute(current_attribute, KBASE_CONFIG_ATTR_MEMORY_RESOURCE);
+
+ if (current_attribute != NULL)
+ {
+ resource = (kbase_memory_resource *)current_attribute->data;
+ if (OSK_ERR_NONE != osk_phy_allocator_init(&allocs[allocators_initialized], resource->base,
+ (u32)(resource->size >> OSK_PAGE_SHIFT), resource->name))
+ {
+ goto out_allocator_term;
+ }
+
+			kbasep_get_memory_performance(resource, &region_performance[allocators_initialized].cpu_performance,
+					&region_performance[allocators_initialized].gpu_performance);
+ current_attribute++;
+ allocators_initialized++;
+ }
+ }
+
+ /* register shared memory region */
+ if (max_shared_memory > 0)
+ {
+ if (OSK_ERR_NONE != osk_phy_allocator_init(&allocs[allocators_initialized], 0,
+ max_shared_memory >> OSK_PAGE_SHIFT, NULL))
+ {
+ goto out_allocator_term;
+ }
+
+ region_performance[allocators_initialized].cpu_performance = KBASE_MEM_PERF_NORMAL;
+ region_performance[allocators_initialized].gpu_performance = shared_memory_performance;
+ allocators_initialized++;
+ }
+
+ if (MALI_ERROR_NONE != kbase_mem_usage_init(&memdev->usage, max_shared_memory >> OSK_PAGE_SHIFT))
+ {
+ goto out_allocator_term;
+ }
+
+ if (MALI_ERROR_NONE != kbasep_allocator_order_list_create(allocs, region_performance, total_regions, memdev->allocators.sorted_allocs,
+ ALLOCATOR_ORDER_COUNT))
+ {
+ goto out_memctx_term;
+ }
+
+ memdev->allocators.allocs = allocs;
+ memdev->allocators.count = total_regions;
+
+ osk_free(region_performance);
+
+ return MALI_ERROR_NONE;
+
+out_memctx_term:
+ kbase_mem_usage_term(&memdev->usage);
+out_allocator_term:
+ while (allocators_initialized-- > 0)
+ {
+ osk_phy_allocator_term(&allocs[allocators_initialized]);
+ }
+ osk_free(allocs);
+out_perf:
+ osk_free(region_performance);
+out:
+ return MALI_ERROR_OUT_OF_MEMORY;
+}
+KBASE_EXPORT_TEST_API(kbase_register_memory_regions)
+
+static mali_error kbasep_allocator_order_list_create( osk_phy_allocator * allocators,
+ kbasep_memory_region_performance *region_performance, int memory_region_count,
+ osk_phy_allocator ***sorted_allocs, int allocator_order_count)
+{
+ int performance;
+ int regions_sorted;
+ int i;
+ void *sorted_alloc_mem_block;
+
+ sorted_alloc_mem_block = osk_malloc(sizeof(osk_phy_allocator **) * memory_region_count * allocator_order_count);
+ if (sorted_alloc_mem_block == NULL)
+ {
+ goto out;
+ }
+
+	/* each allocator list points into the memory block allocated above */
+	for (i = 0; i < allocator_order_count; i++)
+ {
+ sorted_allocs[i] = (osk_phy_allocator **)sorted_alloc_mem_block + memory_region_count*i;
+ }
+
+ /* use the same order as in config file */
+ for (i = 0; i < memory_region_count; i++)
+ {
+ sorted_allocs[ALLOCATOR_ORDER_CONFIG][i] = &allocators[i];
+ }
+
+ /* Sort allocators by GPU performance */
+ performance = KBASE_MEM_PERF_FAST;
+ regions_sorted = 0;
+ while (performance >= KBASE_MEM_PERF_SLOW)
+ {
+ for (i = 0; i < memory_region_count; i++)
+ {
+ if (region_performance[i].gpu_performance == (kbase_memory_performance)performance)
+ {
+ sorted_allocs[ALLOCATOR_ORDER_GPU_PERFORMANCE][regions_sorted] = &allocators[i];
+ regions_sorted++;
+ }
+ }
+ performance--;
+ }
+
+ /* Sort allocators by CPU performance */
+ performance = KBASE_MEM_PERF_FAST;
+ regions_sorted = 0;
+ while (performance >= KBASE_MEM_PERF_SLOW)
+ {
+ for (i = 0; i < memory_region_count; i++)
+ {
+ if ((int)region_performance[i].cpu_performance == performance)
+ {
+ sorted_allocs[ALLOCATOR_ORDER_CPU_PERFORMANCE][regions_sorted] = &allocators[i];
+ regions_sorted++;
+ }
+ }
+ performance--;
+ }
+
+ /* Sort allocators by CPU and GPU performance (equally important) */
+ performance = 2 * KBASE_MEM_PERF_FAST;
+ regions_sorted = 0;
+ while (performance >= 2*KBASE_MEM_PERF_SLOW)
+ {
+ for (i = 0; i < memory_region_count; i++)
+ {
+ if ((int)(region_performance[i].cpu_performance + region_performance[i].gpu_performance) == performance)
+ {
+ sorted_allocs[ALLOCATOR_ORDER_CPU_GPU_PERFORMANCE][regions_sorted] = &allocators[i];
+ regions_sorted++;
+ }
+ }
+ performance--;
+ }
+ return MALI_ERROR_NONE;
+out:
+ return MALI_ERROR_OUT_OF_MEMORY;
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_mem.h
+ * Base kernel memory APIs
+ */
+
+#ifndef _KBASE_MEM_H_
+#define _KBASE_MEM_H_
+
+#ifndef _KBASE_H_
+#error "Don't include this file directly, use mali_kbase.h instead"
+#endif
+
+#include <malisw/mali_malisw.h>
+#include <osk/mali_osk.h>
+#if MALI_USE_UMP
+#ifndef __KERNEL__
+#include <ump/src/library/common/ump_user.h>
+#endif
+#include <ump/ump_kernel_interface.h>
+#endif /* MALI_USE_UMP */
+#include <kbase/mali_base_kernel.h>
+#include <kbase/src/common/mali_kbase_hw.h>
+#include "mali_kbase_pm.h"
+#include "mali_kbase_defs.h"
+
+/* Part of the workaround for uTLB invalid pages is to ensure we grow/shrink tmem by 4 pages at a time */
+#define KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_LOG2_HW_ISSUE_8316 (2) /* round to 4 pages */
+
+/* Part of the workaround for PRLAM-9630 requires us to grow/shrink memory by 8 pages.
+The MMU reads in 8 page table entries from memory at a time. If more than one page fault occurs within the same 8 pages
+and the page tables are updated accordingly, the MMU does not re-read the page table entries from memory for the
+subsequent updates and generates duplicate page faults, as the page table information it is using is no longer valid. */
+#define KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_LOG2_HW_ISSUE_9630 (3) /* round to 8 pages */
+
+#define KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_LOG2 (0) /* round to 1 page */
+
+/* This must always be a power of 2 */
+#define KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES (1u << KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_LOG2)
+#define KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_HW_ISSUE_8316 (1u << KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_LOG2_HW_ISSUE_8316)
+#define KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_HW_ISSUE_9630 (1u << KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_LOG2_HW_ISSUE_9630)
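+
+/* Worked example: with the PRLAM-9630 workaround active the block size is
+ * 8 pages, so a request to grow by 13 pages is rounded up to
+ * (13 + 8 - 1) & ~(8 - 1) == 20 & ~7 == 16 pages. */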
+/**
+ * A CPU mapping
+ */
+typedef struct kbase_cpu_mapping
+{
+ osk_dlist_item link;
+ osk_virt_addr uaddr;
+ u32 nr_pages;
+ mali_size64 page_off;
+ void *private; /* Use for VMA */
+} kbase_cpu_mapping;
+
+/**
+ * A physical memory (sub-)commit
+ */
+typedef struct kbase_mem_commit
+{
+ osk_phy_allocator * allocator;
+ u32 nr_pages;
+ struct kbase_mem_commit * prev;
+ /*
+	 * The offset of the commit is implicit in
+	 * the prev_commit link position of this node
+ */
+} kbase_mem_commit;
+
+/**
+ * A GPU memory region, and attributes for CPU mappings.
+ */
+typedef struct kbase_va_region
+{
+ osk_dlist_item link;
+
+ struct kbase_context *kctx; /* Backlink to base context */
+
+ u64 start_pfn; /* The PFN in GPU space */
+ u32 nr_pages; /* VA size */
+
+#define KBASE_REG_FREE (1ul << 0) /* Free region */
+#define KBASE_REG_CPU_WR (1ul << 1) /* CPU write access */
+#define KBASE_REG_GPU_WR (1ul << 2) /* GPU write access */
+#define KBASE_REG_GPU_NX            (1ul << 3)  /* No eXecute flag */
+#define KBASE_REG_CPU_CACHED (1ul << 4) /* Is CPU cached? */
+#define KBASE_REG_GPU_CACHED (1ul << 5) /* Is GPU cached? */
+
+#define KBASE_REG_GROWABLE (1ul << 6) /* Is growable? */
+#define KBASE_REG_PF_GROW (1ul << 7) /* Can grow on pf? */
+
+#define KBASE_REG_IS_RB (1ul << 8) /* Is ringbuffer? */
+#define KBASE_REG_IS_MMU_DUMP (1ul << 9) /* Is an MMU dump */
+#define KBASE_REG_IS_TB (1ul << 10) /* Is register trace buffer? */
+
+#define KBASE_REG_SHARE_IN (1ul << 11) /* inner shareable coherency */
+#define KBASE_REG_SHARE_BOTH (1ul << 12) /* inner & outer shareable coherency */
+
+#define KBASE_REG_NO_CPU_MAP        (1ul << 13) /* buffer cannot be mapped on the CPU, only available to the GPU */
+
+#define KBASE_REG_ZONE_MASK (3ul << 14) /* Space for 4 different zones */
+#define KBASE_REG_ZONE(x) (((x) & 3) << 14)
+
+#define KBASE_REG_GPU_RD            (1ul<<16)   /* GPU read access */
+#define KBASE_REG_CPU_RD (1ul<<17) /* CPU read access */
+
+#define KBASE_REG_FLAGS_NR_BITS 18 /* Number of bits used by kbase_va_region flags */
+
+#define KBASE_REG_ZONE_PMEM KBASE_REG_ZONE(0)
+
+#ifndef KBASE_REG_ZONE_TMEM /* To become 0 on a 64bit platform */
+/*
+ * On a 32bit platform, TMEM should be wired from 4GB to the VA limit
+ * of the GPU, which is currently hardcoded at 48 bits. Unfortunately,
+ * the Linux mmap() interface limits us to 2^32 pages (2^44 bytes, see
+ * mmap64 man page for reference).
+ */
+#define KBASE_REG_ZONE_EXEC KBASE_REG_ZONE(1) /* Dedicated 4GB region for shader code */
+#define KBASE_REG_ZONE_EXEC_BASE ((1ULL << 32) >> OSK_PAGE_SHIFT)
+#define KBASE_REG_ZONE_EXEC_SIZE (((1ULL << 33) >> OSK_PAGE_SHIFT) - \
+ KBASE_REG_ZONE_EXEC_BASE)
+
+#define KBASE_REG_ZONE_TMEM KBASE_REG_ZONE(2)
+#define KBASE_REG_ZONE_TMEM_BASE ((1ULL << 33) >> OSK_PAGE_SHIFT) /* Starting after KBASE_REG_ZONE_EXEC */
+#define KBASE_REG_ZONE_TMEM_SIZE (((1ULL << 44) >> OSK_PAGE_SHIFT) - \
+ KBASE_REG_ZONE_TMEM_BASE)
+#endif
+
+#define KBASE_REG_COOKIE_MASK (~((1ul << KBASE_REG_FLAGS_NR_BITS)-1))
+#define KBASE_REG_COOKIE(x) ((x << KBASE_REG_FLAGS_NR_BITS) & KBASE_REG_COOKIE_MASK)
+
+/* Bit mask of cookies that are not used for PMEM but are reserved for other uses */
+#define KBASE_REG_RESERVED_COOKIES 7ULL
+/* The reserved cookie values */
+#define KBASE_REG_COOKIE_RB 0
+#define KBASE_REG_COOKIE_MMU_DUMP 1
+#define KBASE_REG_COOKIE_TB 2
+
+ u32 flags;
+
+ u32 nr_alloc_pages; /* nr of pages allocated */
+ u32 extent; /* nr of pages alloc'd on PF */
+
+ /* two variables to track our physical commits: */
+
+	/* We always have a root commit.
+	 * Most allocations will only have this one.
+	 */
+ kbase_mem_commit root_commit;
+
+ /* This one is initialized to point to the root_commit,
+ * but if a new and separate commit is needed it will point
+ * to the last (still valid) commit we've done */
+ kbase_mem_commit * last_commit;
+
+ osk_phy_addr *phy_pages;
+
+ osk_dlist map_list;
+
+ /* non-NULL if this memory object is a kds_resource */
+ struct kds_resource * kds_res;
+
+ base_tmem_import_type imported_type;
+
+ /* member in union valid based on imported_type */
+ union
+ {
+#if MALI_USE_UMP == 1
+ ump_dd_handle ump_handle;
+#endif /*MALI_USE_UMP == 1*/
+#ifdef CONFIG_DMA_SHARED_BUFFER
+ struct
+ {
+ struct dma_buf * dma_buf;
+ struct dma_buf_attachment * dma_attachment;
+ unsigned int current_mapping_usage_count;
+ struct sg_table * st;
+ } umm;
+#endif /* CONFIG_DMA_SHARED_BUFFER */
+ } imported_metadata;
+
+} kbase_va_region;
+
+/* Common functions */
+static INLINE osk_phy_addr *kbase_get_phy_pages(struct kbase_va_region *reg)
+{
+ OSK_ASSERT(reg);
+
+ return reg->phy_pages;
+}
+
+static INLINE void kbase_set_phy_pages(struct kbase_va_region *reg, osk_phy_addr *phy_pages)
+{
+ OSK_ASSERT(reg);
+
+ reg->phy_pages = phy_pages;
+}
+
+/**
+ * @brief Allocate physical memory and track shared OS memory usage.
+ *
+ * This function is kbase wrapper of osk_phy_pages_alloc. Apart from allocating memory it also tracks shared OS memory
+ * usage and fails whenever shared memory limits would be exceeded.
+ *
+ * @param[in] kbdev pointer to kbase_device structure for which memory is allocated
+ * @param[in] allocator initialized physical allocator
+ * @param[in] nr_pages number of physical pages to allocate
+ * @param[out] pages array of \a nr_pages elements storing the physical
+ * address of an allocated page
+ * @return The number of pages successfully allocated,
+ *         which may be lower than requested (possibly zero).
+ *
+ * @see ::osk_phy_pages_alloc
+ */
+u32 kbase_phy_pages_alloc(struct kbase_device *kbdev, osk_phy_allocator *allocator, u32 nr_pages, osk_phy_addr *pages);
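+
+/* A minimal usage sketch (illustrative only; 'kbdev' and an initialized
+ * 'allocator' are assumed, error handling elided): allocate 16 pages and
+ * roll back a partial allocation:
+ *
+ *   osk_phy_addr pages[16];
+ *   u32 got = kbase_phy_pages_alloc(kbdev, allocator, 16, pages);
+ *   if (got < 16)
+ *       kbase_phy_pages_free(kbdev, allocator, got, pages);
+ */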
+
+/**
+ * @brief Free physical memory and track shared memory usage
+ *
+ * This function, like osk_phy_pages_free, frees physical memory but also tracks shared OS memory usage.
+ *
+ * @param[in] kbdev     pointer to kbase_device for which memory was allocated
+ * @param[in] allocator initialized physical allocator
+ * @param[in] nr_pages  number of physical pages to free
+ * @param[in] pages     array of \a nr_pages elements storing the physical
+ *                      address of each page to free
+ *
+ * @see ::osk_phy_pages_free
+ */
+void kbase_phy_pages_free(struct kbase_device *kbdev, osk_phy_allocator *allocator, u32 nr_pages, osk_phy_addr *pages);
+
+/**
+ * @brief Register shared and dedicated memory regions
+ *
+ * Function registers shared and dedicated memory regions (registering a physical allocator for each region)
+ * using the given configuration attributes. Additionally, several ordered lists of physical allocators are created with
+ * different sort orders (based on CPU, GPU, CPU+GPU performance and order in the config). If several memory regions
+ * have the same performance, the order in which they appear in the config matters. Shared OS memory is treated as if
+ * it were defined after the dedicated memory regions, so unless it matches a region's performance flags better, it is chosen last.
+ *
+ * @param[in] kbdev pointer to kbase_device for which regions are registered
+ * @param[in] attributes array of configuration attributes. It must be terminated with KBASE_CONFIG_ATTR_END attribute
+ *
+ * @return MALI_ERROR_NONE if no error occurred. Error code otherwise
+ *
+ * @see ::kbase_alloc_phy_pages_helper
+ */
+mali_error kbase_register_memory_regions(kbase_device * kbdev, const kbase_attribute *attributes);
+
+/**
+ * @brief Frees memory regions registered for the given device.
+ *
+ * @param[in] kbdev pointer to kbase device for which memory regions are to be freed
+ */
+void kbase_free_memory_regions(kbase_device * kbdev);
+
+mali_error kbase_mem_init(kbase_device * kbdev);
+void kbase_mem_halt(kbase_device * kbdev);
+void kbase_mem_term(kbase_device * kbdev);
+
+
+/**
+ * @brief Initializes memory context which tracks memory usage.
+ *
+ * Function initializes memory context with given max_pages value.
+ *
+ * @param[in] usage usage tracker
+ * @param[in] max_pages maximum pages allowed to be allocated within this memory context
+ *
+ * @return MALI_ERROR_NONE on success. Error code otherwise.
+ */
+mali_error kbase_mem_usage_init(kbasep_mem_usage * usage, u32 max_pages);
+
+/*
+ * @brief Terminates given memory context
+ *
+ * @param[in] usage usage tracker
+ */
+void kbase_mem_usage_term(kbasep_mem_usage *usage);
+
+/*
+ * @brief Requests a number of pages from the given context.
+ *
+ * Function requests a number of pages from the given context. The context is updated only if it contains enough
+ * free pages; otherwise an error is returned and no pages are claimed.
+ *
+ * @param[in] usage usage tracker
+ * @param[in] nr_pages number of pages requested
+ *
+ * @return MALI_ERROR_NONE when context page request succeeded. Error code otherwise.
+ */
+mali_error kbase_mem_usage_request_pages(kbasep_mem_usage *usage, u32 nr_pages);
+
+/*
+ * @brief Release a number of pages from the given context.
+ *
+ * @param[in] usage usage tracker
+ * @param[in] nr_pages number of pages to be released
+ */
+void kbase_mem_usage_release_pages(kbasep_mem_usage *usage, u32 nr_pages);
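+
+/* Typical pairing (a sketch; 'usage' is an initialized tracker): claim
+ * quota before allocating backing pages and release it again on failure:
+ *
+ *   if (MALI_ERROR_NONE != kbase_mem_usage_request_pages(usage, nr_pages))
+ *       return MALI_ERROR_OUT_OF_MEMORY;
+ *   ...allocate backing pages...
+ *   if (the backing allocation failed)
+ *       kbase_mem_usage_release_pages(usage, nr_pages);
+ */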
+
+struct kbase_va_region *kbase_alloc_free_region(struct kbase_context *kctx, u64 start_pfn, u32 nr_pages, u32 zone);
+void kbase_free_alloced_region(struct kbase_va_region *reg);
+mali_error kbase_add_va_region(struct kbase_context *kctx,
+ struct kbase_va_region *reg,
+ mali_addr64 addr, u32 nr_pages,
+ u32 align);
+kbase_va_region *kbase_region_lookup(kbase_context *kctx, mali_addr64 gpu_addr);
+
+mali_error kbase_gpu_mmap(struct kbase_context *kctx,
+ struct kbase_va_region *reg,
+ mali_addr64 addr, u32 nr_pages,
+ u32 align);
+mali_bool kbase_check_alloc_flags(u32 flags);
+void kbase_update_region_flags(struct kbase_va_region *reg, u32 flags, mali_bool is_growable);
+
+void kbase_gpu_vm_lock(struct kbase_context *kctx);
+void kbase_gpu_vm_unlock(struct kbase_context *kctx);
+
+void kbase_free_phy_pages(struct kbase_va_region *reg);
+int kbase_alloc_phy_pages(struct kbase_va_region *reg, u32 vsize, u32 size);
+
+mali_error kbase_cpu_free_mapping(struct kbase_va_region *reg, const void *ptr);
+
+mali_error kbase_mmu_init(struct kbase_context *kctx);
+void kbase_mmu_term(struct kbase_context *kctx);
+
+osk_phy_addr kbase_mmu_alloc_pgd(kbase_context *kctx);
+void kbase_mmu_free_pgd(struct kbase_context *kctx);
+mali_error kbase_mmu_insert_pages(struct kbase_context *kctx, u64 vpfn,
+ osk_phy_addr *phys, u32 nr, u32 flags);
+mali_error kbase_mmu_teardown_pages(struct kbase_context *kctx, u64 vpfn, u32 nr);
+
+/**
+ * @brief Check that a pointer is actually a valid region.
+ *
+ * Must be called with context lock held.
+ */
+struct kbase_va_region *kbase_validate_region(struct kbase_context *kctx, mali_addr64 gpu_addr);
+
+/**
+ * @brief Register region and map it on the GPU.
+ *
+ * Call kbase_add_va_region() and map the region on the GPU.
+ */
+mali_error kbase_gpu_mmap(struct kbase_context *kctx,
+ struct kbase_va_region *reg,
+ mali_addr64 addr, u32 nr_pages,
+ u32 align);
+
+/**
+ * @brief Remove the region from the GPU and unregister it.
+ *
+ * Must be called with context lock held.
+ */
+mali_error kbase_gpu_munmap(struct kbase_context *kctx, struct kbase_va_region *reg);
+
+/**
+ * The caller has the following locking conditions:
+ * - It must hold kbase_as::transaction_mutex on kctx's address space
+ * - It must hold the kbasep_js_device_data::runpool_irq::lock
+ */
+void kbase_mmu_update(struct kbase_context *kctx);
+
+/**
+ * The caller has the following locking conditions:
+ * - It must hold kbase_as::transaction_mutex on kctx's address space
+ * - It must hold the kbasep_js_device_data::runpool_irq::lock
+ */
+void kbase_mmu_disable (kbase_context *kctx);
+
+void kbase_mmu_interrupt(kbase_device * kbdev, u32 irq_stat);
+
+/**
+ * @brief Allocates physical pages using registered physical allocators.
+ *
+ * Function allocates physical pages using the registered physical allocators. The allocator list is iterated until all
+ * pages have been successfully allocated. The most appropriate iteration order is chosen based on the
+ * KBASE_REG_CPU_CACHED and KBASE_REG_GPU_CACHED flags of the region.
+ *
+ * @param[in] reg memory region in which physical pages are supposed to be allocated
+ * @param[in] nr_pages number of physical pages to allocate
+ *
+ * @return MALI_ERROR_NONE if all pages have been successfully allocated. Error code otherwise
+ *
+ * @see kbase_register_memory_regions
+ */
+mali_error kbase_alloc_phy_pages_helper(kbase_va_region *reg, u32 nr_pages);
+
+/** Dump the MMU tables to a buffer
+ *
+ * This function allocates a buffer (of @c nr_pages pages) to hold a dump of the MMU tables and fills it. If the
+ * buffer is too small then the return value will be NULL.
+ *
+ * The GPU vm lock must be held when calling this function.
+ *
+ * The buffer returned should be freed with @ref osk_vfree when it is no longer required.
+ *
+ * @param[in] kctx The kbase context to dump
+ * @param[in] nr_pages The number of pages to allocate for the buffer.
+ *
+ * @return The address of the buffer containing the MMU dump or NULL on error (including if the @c nr_pages is too
+ * small)
+ */
+void *kbase_mmu_dump(struct kbase_context *kctx, int nr_pages);
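+
+/* Illustrative use (a sketch): dump the MMU tables into a 4-page buffer
+ * while holding the vm lock, then free it when done:
+ *
+ *   kbase_gpu_vm_lock(kctx);
+ *   buf = kbase_mmu_dump(kctx, 4);
+ *   kbase_gpu_vm_unlock(kctx);
+ *   if (buf)
+ *   {
+ *       ...inspect the dump...
+ *       osk_vfree(buf);
+ *   }
+ */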
+
+mali_error kbase_sync_now(kbase_context *kctx, base_syncset *syncset);
+void kbase_pre_job_sync(kbase_context *kctx, base_syncset *syncsets, u32 nr);
+void kbase_post_job_sync(kbase_context *kctx, base_syncset *syncsets, u32 nr);
+
+struct kbase_va_region *kbase_tmem_alloc(struct kbase_context *kctx,
+ u32 vsize, u32 psize,
+ u32 extent, u32 flags, mali_bool is_growable);
+
+/** Resize a tmem region
+ *
+ * This function changes the number of physical pages committed to a tmem region.
+ *
+ * @param[in] kctx The kbase context which the tmem belongs to
+ * @param[in] gpu_addr The base address of the tmem region
+ * @param[in] delta The number of pages to grow or shrink by
+ * @param[out] size The number of pages of memory committed after growing/shrinking
+ * @param[out] failure_reason Error code describing reason of failure.
+ *
+ * @return MALI_ERROR_NONE on success
+ */
+mali_error kbase_tmem_resize(struct kbase_context *kctx, mali_addr64 gpu_addr, s32 delta, u32 *size, base_backing_threshold_status * failure_reason);
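+
+/* Illustrative call (a sketch): grow a growable tmem region by 16 pages
+ * and read back the number of committed pages:
+ *
+ *   u32 committed;
+ *   base_backing_threshold_status reason;
+ *   if (MALI_ERROR_NONE != kbase_tmem_resize(kctx, gpu_addr, 16,
+ *                                            &committed, &reason))
+ *       inspect 'reason' for the cause of the failure;
+ */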
+
+/**
+ * Import external memory.
+ *
+ * This function supports importing external memory.
+ * If imported a kbase_va_region is created of the tmem type.
+ * The region might not be mappable on the CPU depending on the imported type.
+ * If not mappable the KBASE_REG_NO_CPU_MAP bit will be set.
+ *
+ * Import will fail if (but not limited to):
+ * @li Unsupported import type
+ * @li Handle not valid for the type
+ * @li Access to a handle was not valid
+ * @li The underlying memory can't be accessed by the GPU
+ * @li No VA space found to map the memory
+ * @li Resources to track the region was not available
+ *
+ * @param[in] kctx The kbase context which the tmem will be created in
+ * @param type The type of memory to import
+ * @param handle Handle to the memory to import
+ * @param[out] pages Where to store the number of pages imported
+ * @return A region pointer on success, NULL on failure
+ */
+struct kbase_va_region *kbase_tmem_import(struct kbase_context *kctx, base_tmem_import_type type, int handle, u64 * const pages);
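+
+/* Illustrative import of a DMABUF file descriptor (a sketch; 'fd' is
+ * assumed to be a dma_buf fd exported by another driver):
+ *
+ *   u64 nr_pages;
+ *   struct kbase_va_region *reg =
+ *       kbase_tmem_import(kctx, BASE_TMEM_IMPORT_TYPE_UMM, fd, &nr_pages);
+ *   if (!reg)
+ *       the import failed (bad handle, no VA space, ...);
+ */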
+
+
+/* OS specific functions */
+struct kbase_va_region * kbase_lookup_cookie(struct kbase_context * kctx, mali_addr64 cookie);
+void kbase_unlink_cookie(struct kbase_context * kctx, mali_addr64 cookie, struct kbase_va_region * reg);
+mali_error kbase_mem_free(struct kbase_context *kctx, mali_addr64 gpu_addr);
+mali_error kbase_mem_free_region(struct kbase_context *kctx,
+ struct kbase_va_region *reg);
+void kbase_os_mem_map_lock(struct kbase_context * kctx);
+void kbase_os_mem_map_unlock(struct kbase_context * kctx);
+
+/**
+ * @brief Find a CPU mapping of a memory allocation containing a given address range
+ *
+ * Searches for a CPU mapping of any part of the region starting at @p gpu_addr that
+ * fully encloses the CPU virtual address range specified by @p uaddr and @p size.
+ * Returns a failure indication if only part of the address range lies within a
+ * CPU mapping, or the address range lies within a CPU mapping of a different region.
+ *
+ * @param[in,out] kctx The kernel base context used for the allocation.
+ * @param[in] gpu_addr GPU address of the start of the allocated region
+ * within which to search.
+ * @param[in] uaddr Start of the CPU virtual address range.
+ * @param[in] size Size of the CPU virtual address range (in bytes).
+ *
+ * @return A pointer to a descriptor of the CPU mapping that fully encloses
+ * the specified address range, or NULL if none was found.
+ */
+struct kbase_cpu_mapping *kbasep_find_enclosing_cpu_mapping(
+ struct kbase_context *kctx,
+ mali_addr64 gpu_addr,
+ osk_virt_addr uaddr,
+ size_t size );
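+
+/* Illustrative check (a sketch): verify that a user-supplied range lies
+ * entirely within a CPU mapping of the region before touching it:
+ *
+ *   struct kbase_cpu_mapping *map =
+ *       kbasep_find_enclosing_cpu_mapping(kctx, gpu_addr, uaddr, nbytes);
+ *   if (!map)
+ *       reject the request: the range is not fully mapped;
+ */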
+
+/**
+ * @brief Round TMem Growable no. pages to allow for HW workarounds/block allocators
+ *
+ * For success, the caller should check that the unsigned return value is
+ * not lower than the \a nr_pages parameter.
+ *
+ * @param[in] nr_pages Size value (in pages) to round
+ * @return the rounded-up number of pages (which may have wrapped around to zero)
+ */
+static INLINE u32 kbasep_tmem_growable_round_size( kbase_device *kbdev, u32 nr_pages )
+{
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_9630))
+ {
+ return (nr_pages + KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_HW_ISSUE_9630 - 1) & ~(KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_HW_ISSUE_9630 - 1);
+ }
+ else if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8316))
+ {
+ return (nr_pages + KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_HW_ISSUE_8316 - 1) & ~(KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES_HW_ISSUE_8316 - 1);
+ }
+ else
+ {
+ return (nr_pages + KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES - 1) & ~(KBASEP_TMEM_GROWABLE_BLOCKSIZE_PAGES-1);
+ }
+}
+
+void kbasep_as_poke_timer_callback(void* arg);
+void kbase_as_poking_timer_retain(kbase_as * as);
+void kbase_as_poking_timer_release(kbase_as * as);
+
+
+#endif /* _KBASE_MEM_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_mmu.c
+ * Base kernel MMU management.
+ */
+
+/* #define DEBUG 1 */
+#include <osk/mali_osk.h>
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_midg_regmap.h>
+#include <kbase/src/common/mali_kbase_gator.h>
+
+#define beenthere(f, a...) OSK_PRINT_INFO(OSK_BASE_MMU, "%s:" f, __func__, ##a)
+
+#include <kbase/src/common/mali_kbase_defs.h>
+#include <kbase/src/common/mali_kbase_hw.h>
+
+#define KBASE_MMU_PAGE_ENTRIES 512
+
+/*
+ * Definitions:
+ * - PGD: Page Directory.
+ * - PTE: Page Table Entry. A 64bit value pointing to the next
+ * level of translation
+ * - ATE: Address Translation Entry. A 64bit value pointing to
+ * a 4kB physical page.
+ */
+
+static void kbase_mmu_report_fault_and_kill(kbase_context *kctx, kbase_as * as, mali_addr64 fault_addr);
+static u64 lock_region(kbase_device * kbdev, u64 pfn, u32 num_pages);
+
+static void ksync_kern_vrange_gpu(osk_phy_addr paddr, osk_virt_addr vaddr, size_t size)
+{
+ osk_sync_to_memory(paddr, vaddr, size);
+}
+
+static u32 make_multiple(u32 minimum, u32 multiple)
+{
+ u32 remainder = minimum % multiple;
+ if (remainder == 0)
+ {
+ return minimum;
+ }
+ else
+ {
+ return minimum + multiple - remainder;
+ }
+}
+
+static void mmu_mask_reenable(kbase_device * kbdev, kbase_context *kctx, kbase_as * as)
+{
+ u32 mask;
+ osk_spinlock_irq_lock(&kbdev->mmu_mask_change);
+ mask = kbase_reg_read(kbdev, MMU_REG(MMU_IRQ_MASK), kctx);
+ mask |= ((1UL << as->number) | (1UL << (MMU_REGS_BUS_ERROR_FLAG(as->number))));
+ kbase_reg_write(kbdev, MMU_REG(MMU_IRQ_MASK), mask, kctx);
+ osk_spinlock_irq_unlock(&kbdev->mmu_mask_change);
+}
+
+static void page_fault_worker(osk_workq_work *data)
+{
+ u64 fault_pfn;
+ u32 new_pages;
+ u32 fault_rel_pfn;
+ kbase_as * faulting_as;
+ int as_no;
+ kbase_context * kctx;
+ kbase_device * kbdev;
+ kbase_va_region *region;
+ mali_error err;
+
+ u32 fault_status;
+
+ faulting_as = CONTAINER_OF(data, kbase_as, work_pagefault);
+ fault_pfn = faulting_as->fault_addr >> OSK_PAGE_SHIFT;
+ as_no = faulting_as->number;
+
+ kbdev = CONTAINER_OF( faulting_as, kbase_device, as[as_no] );
+
+ /* Grab the context that was already refcounted in kbase_mmu_interrupt().
+ * Therefore, it cannot be scheduled out of this AS until we explicitly release it
+ *
+ * NOTE: NULL can be returned here if we're gracefully handling a spurious interrupt */
+ kctx = kbasep_js_runpool_lookup_ctx_noretain( kbdev, as_no );
+
+ if ( kctx == NULL )
+ {
+ /* Address space has no context, terminate the work */
+ u32 reg;
+ /* AS transaction begin */
+ osk_mutex_lock(&faulting_as->transaction_mutex);
+ reg = kbase_reg_read(kbdev, MMU_AS_REG(as_no, ASn_TRANSTAB_LO), NULL);
+ reg = (reg & (~(u32)MMU_TRANSTAB_ADRMODE_MASK)) | ASn_TRANSTAB_ADRMODE_UNMAPPED;
+ kbase_reg_write(kbdev, MMU_AS_REG(as_no, ASn_TRANSTAB_LO), reg, NULL);
+ kbase_reg_write(kbdev, MMU_AS_REG(as_no, ASn_COMMAND), ASn_COMMAND_UPDATE, NULL);
+ kbase_reg_write(kbdev, MMU_REG(MMU_IRQ_CLEAR), (1UL << as_no), NULL);
+ osk_mutex_unlock(&faulting_as->transaction_mutex);
+ /* AS transaction end */
+
+ mmu_mask_reenable(kbdev, NULL, faulting_as);
+ return;
+ }
+
+ fault_status = kbase_reg_read(kbdev, MMU_AS_REG(as_no, ASn_FAULTSTATUS), NULL);
+
+ OSK_ASSERT( kctx->kbdev == kbdev );
+
+ kbase_gpu_vm_lock(kctx);
+
+ /* find the region object for this VA */
+ region = kbase_region_lookup(kctx, faulting_as->fault_addr);
+ if (NULL == region || (GROWABLE_FLAGS_REQUIRED != (region->flags & GROWABLE_FLAGS_MASK)))
+ {
+ kbase_gpu_vm_unlock(kctx);
+ /* failed to find the region or mismatch of the flags */
+ kbase_mmu_report_fault_and_kill(kctx, faulting_as, faulting_as->fault_addr);
+ goto fault_done;
+ }
+
+ if ((((fault_status & ASn_FAULTSTATUS_ACCESS_TYPE_MASK) == ASn_FAULTSTATUS_ACCESS_TYPE_READ) &&
+ !(region->flags & KBASE_REG_GPU_RD)) ||
+ (((fault_status & ASn_FAULTSTATUS_ACCESS_TYPE_MASK) == ASn_FAULTSTATUS_ACCESS_TYPE_WRITE) &&
+ !(region->flags & KBASE_REG_GPU_WR)) ||
+ (((fault_status & ASn_FAULTSTATUS_ACCESS_TYPE_MASK) == ASn_FAULTSTATUS_ACCESS_TYPE_EX) &&
+ (region->flags & KBASE_REG_GPU_NX)))
+ {
+ OSK_PRINT_WARN(OSK_BASE_MMU, "Access permissions don't match: region->flags=0x%x", region->flags);
+ kbase_gpu_vm_unlock(kctx);
+ kbase_mmu_report_fault_and_kill(kctx, faulting_as, faulting_as->fault_addr);
+ goto fault_done;
+ }
+
+ /* find the size we need to grow it by */
+ /* we know the result fits in a u32 due to kbase_region_lookup
+ * validating the fault_address to be within a u32 from the start_pfn */
+ fault_rel_pfn = fault_pfn - region->start_pfn;
+
+ if (fault_rel_pfn < region->nr_alloc_pages)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MMU, "Fault in allocated region of growable TMEM: Ignoring");
+ kbase_reg_write(kbdev, MMU_REG(MMU_IRQ_CLEAR), (1UL << as_no), NULL);
+ mmu_mask_reenable(kbdev, kctx, faulting_as);
+ kbase_gpu_vm_unlock(kctx);
+ goto fault_done;
+ }
+
+ new_pages = make_multiple(fault_rel_pfn - region->nr_alloc_pages + 1, region->extent);
+ if (new_pages + region->nr_alloc_pages > region->nr_pages)
+ {
+ /* cap to max vsize */
+ new_pages = region->nr_pages - region->nr_alloc_pages;
+ }
+
+ if (0 == new_pages)
+ {
+ /* Duplicate of a fault we've already handled, nothing to do */
+ kbase_reg_write(kbdev, MMU_REG(MMU_IRQ_CLEAR), (1UL << as_no), NULL);
+ mmu_mask_reenable(kbdev, kctx, faulting_as);
+ kbase_gpu_vm_unlock(kctx);
+ goto fault_done;
+ }
+
+ if (MALI_ERROR_NONE == kbase_alloc_phy_pages_helper(region, new_pages))
+ {
+ /* alloc success */
+ mali_addr64 lock_addr;
+ OSK_ASSERT(region->nr_alloc_pages <= region->nr_pages);
+
+ /* AS transaction begin */
+ osk_mutex_lock(&faulting_as->transaction_mutex);
+
+ /* Lock the VA region we're about to update */
+ lock_addr = lock_region(kbdev, faulting_as->fault_addr >> OSK_PAGE_SHIFT, new_pages);
+ kbase_reg_write(kbdev, MMU_AS_REG(as_no, ASn_LOCKADDR_LO), lock_addr & 0xFFFFFFFFUL, kctx);
+ kbase_reg_write(kbdev, MMU_AS_REG(as_no, ASn_LOCKADDR_HI), lock_addr >> 32, kctx);
+ kbase_reg_write(kbdev, MMU_AS_REG(as_no, ASn_COMMAND), ASn_COMMAND_LOCK, kctx);
+
+ /* set up the new pages */
+ err = kbase_mmu_insert_pages(kctx, region->start_pfn + region->nr_alloc_pages - new_pages,
+ &region->phy_pages[region->nr_alloc_pages - new_pages],
+ new_pages, region->flags);
+ if(MALI_ERROR_NONE != err)
+ {
+ /* failed to insert pages, handle as a normal PF */
+ osk_mutex_unlock(&faulting_as->transaction_mutex);
+ kbase_gpu_vm_unlock(kctx);
+ /* The locked VA region will be unlocked and the cache invalidated in here */
+ kbase_mmu_report_fault_and_kill(kctx, faulting_as, faulting_as->fault_addr);
+ goto fault_done;
+ }
+
+#if MALI_GATOR_SUPPORT
+ kbase_trace_mali_page_fault_insert_pages(as_no, new_pages);
+#endif
+ /* clear the irq */
+ /* MUST BE BEFORE THE FLUSH/UNLOCK */
+ kbase_reg_write(kbdev, MMU_REG(MMU_IRQ_CLEAR), (1UL << as_no), NULL);
+
+ /* flush L2 and unlock the VA (resumes the MMU) */
+ kbase_reg_write(kbdev, MMU_AS_REG(as_no, ASn_COMMAND), ASn_COMMAND_FLUSH, kctx);
+
+ /* wait for the flush to complete */
+ while (kbase_reg_read(kbdev, MMU_AS_REG(as_no, ASn_STATUS), kctx) & 1);
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_9630))
+ {
+ /* Issue an UNLOCK command to ensure that valid page tables are re-read by the GPU after an update.
+ The FLUSH command should perform all the actions necessary, but the bus logs show that if
+ multiple page faults occur within an 8 page region the MMU does not always re-read the updated
+ page table entries for later faults, or reads them only partially, and then raises the page
+ fault IRQ again for the same addresses. The UNLOCK ensures that the MMU cache is flushed, so
+ the updates can be re-read. As the region is now unlocked we need to issue 2 UNLOCK commands
+ in order to flush the MMU/uTLB, see PRLAM-8812.
+ */
+ kbase_reg_write(kctx->kbdev, MMU_AS_REG(kctx->as_nr, ASn_COMMAND), ASn_COMMAND_UNLOCK, kctx);
+ kbase_reg_write(kctx->kbdev, MMU_AS_REG(kctx->as_nr, ASn_COMMAND), ASn_COMMAND_UNLOCK, kctx);
+ }
+
+ osk_mutex_unlock(&faulting_as->transaction_mutex);
+ /* AS transaction end */
+
+ /* reenable this in the mask */
+ mmu_mask_reenable(kbdev, kctx, faulting_as);
+ kbase_gpu_vm_unlock(kctx);
+ }
+ else
+ {
+ /* failed to extend, handle as a normal PF */
+ kbase_gpu_vm_unlock(kctx);
+ kbase_mmu_report_fault_and_kill(kctx, faulting_as, faulting_as->fault_addr);
+ }
+
+fault_done:
+ /* By this point, the fault was handled in some way, so release the ctx refcount */
+ kbasep_js_runpool_release_ctx( kbdev, kctx );
+}
+
+osk_phy_addr kbase_mmu_alloc_pgd(kbase_context *kctx)
+{
+ osk_phy_addr pgd;
+ u64 *page;
+ int i;
+ u32 count;
+ OSK_ASSERT( NULL != kctx);
+ if (MALI_ERROR_NONE != kbase_mem_usage_request_pages(&kctx->usage, 1))
+ {
+ return 0;
+ }
+
+ count = kbase_phy_pages_alloc(kctx->kbdev, &kctx->pgd_allocator, 1, &pgd);
+ if (count != 1)
+ {
+ kbase_mem_usage_release_pages(&kctx->usage, 1);
+ return 0;
+ }
+
+ page = osk_kmap(pgd);
+ if(NULL == page)
+ {
+ kbase_phy_pages_free(kctx->kbdev, &kctx->pgd_allocator, 1, &pgd);
+ kbase_mem_usage_release_pages(&kctx->usage, 1);
+ return 0;
+ }
+
+ for (i = 0; i < KBASE_MMU_PAGE_ENTRIES; i++)
+ page[i] = ENTRY_IS_INVAL;
+
+ /* Clean the full page */
+ ksync_kern_vrange_gpu(pgd, page, KBASE_MMU_PAGE_ENTRIES * sizeof(u64));
+ osk_kunmap(pgd, page);
+ return pgd;
+}
+KBASE_EXPORT_TEST_API(kbase_mmu_alloc_pgd)
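+
+/*
+ * Note on the return convention used throughout this file (a summary, not
+ * new behaviour): 0 is used as the failure value for physical addresses,
+ * which is why callers such as mmu_get_next_pgd() test 'if (!target_pgd)'.
+ */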
+
+static osk_phy_addr mmu_pte_to_phy_addr(u64 entry)
+{
+ if (!(entry & 1))
+ return 0;
+
+ return entry & ~0xFFF;
+}
+
+static u64 mmu_phyaddr_to_pte(osk_phy_addr phy)
+{
+ return (phy & ~0xFFF) | ENTRY_IS_PTE;
+}
+
+static u64 mmu_phyaddr_to_ate(osk_phy_addr phy, u64 flags)
+{
+ return (phy & ~0xFFF) | (flags & ENTRY_FLAGS_MASK) | ENTRY_IS_ATE;
+}
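+
+/*
+ * A minimal sketch of the entry encoding implied by the helpers above (the
+ * exact ENTRY_* constants live elsewhere, so treat this as an assumption):
+ * bits [63:12] hold a 4kB-aligned physical address, the low 12 bits hold
+ * type and permission flags, and the low bits discriminate the entry type,
+ * with bit 0 set for valid entries as mmu_pte_to_phy_addr() requires:
+ *
+ *   u64 ate = mmu_phyaddr_to_ate(phys, ENTRY_RD_BIT | ENTRY_WR_BIT);
+ *   osk_phy_addr back = mmu_pte_to_phy_addr(ate); /* == (phys & ~0xFFF) */
+ */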
+
+/* Given PGD PFN for level N, return PGD PFN for level N+1 */
+static osk_phy_addr mmu_get_next_pgd(struct kbase_context *kctx,
+ osk_phy_addr pgd, u64 vpfn, int level)
+{
+ u64 *page;
+ osk_phy_addr target_pgd;
+
+ OSK_ASSERT(pgd);
+
+ /*
+ * Architecture spec defines level-0 as being the top-most.
+ * This is a bit unfortunate here, but we keep the same convention.
+ */
+ vpfn >>= (3 - level) * 9;
+ vpfn &= 0x1FF;
+
+ page = osk_kmap(pgd);
+ if(NULL == page)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MMU, "mmu_get_next_pgd: kmap failure\n");
+ return 0;
+ }
+
+ target_pgd = mmu_pte_to_phy_addr(page[vpfn]);
+
+ if (!target_pgd) {
+ target_pgd = kbase_mmu_alloc_pgd(kctx);
+ if(!target_pgd)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MMU, "mmu_get_next_pgd: kbase_mmu_alloc_pgd failure\n");
+ osk_kunmap(pgd, page);
+ return 0;
+ }
+
+ page[vpfn] = mmu_phyaddr_to_pte(target_pgd);
+ ksync_kern_vrange_gpu(pgd + (vpfn * sizeof(u64)), page + vpfn, sizeof(u64));
+ /* Rely on the caller to update the address space flags. */
+ }
+
+ osk_kunmap(pgd, page);
+ return target_pgd;
+}
+
+static osk_phy_addr mmu_get_bottom_pgd(struct kbase_context *kctx, u64 vpfn)
+{
+ osk_phy_addr pgd;
+ int l;
+
+ pgd = kctx->pgd;
+
+ for (l = MIDGARD_MMU_TOPLEVEL; l < 3; l++) {
+ pgd = mmu_get_next_pgd(kctx, pgd, vpfn, l);
+ /* Handle failure condition */
+ if(!pgd)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MMU, "mmu_get_bottom_pgd: mmu_get_next_pgd failure\n");
+ return 0;
+ }
+ }
+
+ return pgd;
+}
+
+static osk_phy_addr mmu_insert_pages_recover_get_next_pgd(struct kbase_context *kctx,
+ osk_phy_addr pgd, u64 vpfn, int level)
+{
+ u64 *page;
+ osk_phy_addr target_pgd;
+
+ OSK_ASSERT(pgd);
+ CSTD_UNUSED(kctx);
+
+ /*
+ * Architecture spec defines level-0 as being the top-most.
+ * This is a bit unfortunate here, but we keep the same convention.
+ */
+ vpfn >>= (3 - level) * 9;
+ vpfn &= 0x1FF;
+
+ page = osk_kmap_atomic(pgd);
+ /* osk_kmap_atomic should NEVER fail */
+ OSK_ASSERT(NULL != page);
+
+ target_pgd = mmu_pte_to_phy_addr(page[vpfn]);
+ /* As we are recovering from what has already been set up, we should have a target_pgd */
+ OSK_ASSERT(0 != target_pgd);
+
+ osk_kunmap_atomic(pgd, page);
+ return target_pgd;
+}
+
+static osk_phy_addr mmu_insert_pages_recover_get_bottom_pgd(struct kbase_context *kctx, u64 vpfn)
+{
+ osk_phy_addr pgd;
+ int l;
+
+ pgd = kctx->pgd;
+
+ for (l = MIDGARD_MMU_TOPLEVEL; l < 3; l++) {
+ pgd = mmu_insert_pages_recover_get_next_pgd(kctx, pgd, vpfn, l);
+ /* Should never fail */
+ OSK_ASSERT(0 != pgd);
+ }
+
+ return pgd;
+}
+
+static void mmu_insert_pages_failure_recovery(struct kbase_context *kctx, u64 vpfn,
+ osk_phy_addr *phys, u32 nr)
+{
+ osk_phy_addr pgd;
+ u64 *pgd_page;
+
+ OSK_ASSERT( NULL != kctx );
+ OSK_ASSERT( 0 != vpfn );
+ OSK_ASSERT( vpfn <= (UINT64_MAX / OSK_PAGE_SIZE) ); /* 64-bit address range is the max */
+
+ while (nr) {
+ u32 i;
+ u32 index = vpfn & 0x1FF;
+ u32 count = KBASE_MMU_PAGE_ENTRIES - index;
+
+ if (count > nr)
+ {
+ count = nr;
+ }
+
+ pgd = mmu_insert_pages_recover_get_bottom_pgd(kctx, vpfn);
+ OSK_ASSERT(0 != pgd);
+
+ pgd_page = osk_kmap_atomic(pgd);
+ OSK_ASSERT(NULL != pgd_page);
+
+ /* Invalidate the entries we added */
+ for (i = 0; i < count; i++) {
+ pgd_page[index + i] = ENTRY_IS_INVAL;
+ }
+
+ phys += count;
+ vpfn += count;
+ nr -= count;
+
+ ksync_kern_vrange_gpu(pgd + (index * sizeof(u64)), pgd_page + index, count * sizeof(u64));
+
+ osk_kunmap_atomic(pgd, pgd_page);
+ }
+}
+
+/*
+ * Map 'nr' pages pointed to by 'phys' at GPU PFN 'vpfn'
+ */
+mali_error kbase_mmu_insert_pages(struct kbase_context *kctx, u64 vpfn,
+ osk_phy_addr *phys, u32 nr, u32 flags)
+{
+ osk_phy_addr pgd;
+ u64 *pgd_page;
+ u64 mmu_flags = 0;
+ /* In case the insert_pages only partially completes we need to be able to recover */
+ mali_bool recover_required = MALI_FALSE;
+ u64 recover_vpfn = vpfn;
+ osk_phy_addr *recover_phys = phys;
+ u32 recover_count = 0;
+
+ OSK_ASSERT( NULL != kctx );
+ OSK_ASSERT( 0 != vpfn );
+ OSK_ASSERT( (flags & ~((1 << KBASE_REG_FLAGS_NR_BITS) - 1)) == 0 );
+ OSK_ASSERT( vpfn <= (UINT64_MAX / OSK_PAGE_SIZE) ); /* 64-bit address range is the max */
+
+ mmu_flags |= (flags & KBASE_REG_GPU_WR) ? ENTRY_WR_BIT : 0; /* write perm if requested */
+ mmu_flags |= (flags & KBASE_REG_GPU_RD) ? ENTRY_RD_BIT : 0; /* read perm if requested */
+ mmu_flags |= (flags & KBASE_REG_GPU_NX) ? ENTRY_NX_BIT : 0; /* nx if requested */
+
+ if (flags & KBASE_REG_SHARE_BOTH)
+ {
+ /* inner and outer shareable */
+ mmu_flags |= SHARE_BOTH_BITS;
+ }
+ else if (flags & KBASE_REG_SHARE_IN)
+ {
+ /* inner shareable coherency */
+ mmu_flags |= SHARE_INNER_BITS;
+ }
+
+ while (nr) {
+ u32 i;
+ u32 index = vpfn & 0x1FF;
+ u32 count = KBASE_MMU_PAGE_ENTRIES - index;
+
+ if (count > nr)
+ count = nr;
+
+ /*
+ * Repeatedly calling mmu_get_bottom_pgd() is clearly
+ * suboptimal. We don't have to re-parse the whole tree
+ * each time (just cache the l0-l2 sequence).
+ * On the other hand, it's only a gain when we map more than
+ * 256 pages at once (on average). Do we really care?
+ */
+ pgd = mmu_get_bottom_pgd(kctx, vpfn);
+ if(!pgd)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MMU, "kbase_mmu_insert_pages: mmu_get_bottom_pgd failure\n");
+ if(recover_required)
+ {
+ /* Invalidate the pages we have partially completed */
+ mmu_insert_pages_failure_recovery(kctx, recover_vpfn, recover_phys, recover_count);
+ }
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ pgd_page = osk_kmap(pgd);
+ if(!pgd_page)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MMU, "kbase_mmu_insert_pages: kmap failure\n");
+ if(recover_required)
+ {
+ /* Invalidate the pages we have partially completed */
+ mmu_insert_pages_failure_recovery(kctx, recover_vpfn, recover_phys, recover_count);
+ }
+ return MALI_ERROR_OUT_OF_MEMORY;
+ }
+
+ for (i = 0; i < count; i++) {
+ OSK_ASSERT(0 == (pgd_page[index + i] & 1UL));
+ pgd_page[index + i] = mmu_phyaddr_to_ate(phys[i], mmu_flags);
+ }
+
+ phys += count;
+ vpfn += count;
+ nr -= count;
+
+ ksync_kern_vrange_gpu(pgd + (index * sizeof(u64)), pgd_page + index, count * sizeof(u64));
+
+ osk_kunmap(pgd, pgd_page);
+ /* We have started modifying the page table. If further pages need inserting and fail we need to
+ * undo what has already taken place */
+ recover_required = MALI_TRUE;
+ recover_count += count;
+ }
+ return MALI_ERROR_NONE;
+}
+KBASE_EXPORT_TEST_API(kbase_mmu_insert_pages)
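+
+/*
+ * Usage sketch (a hypothetical caller, mirroring page_fault_worker() above):
+ * on failure any partially inserted entries have already been invalidated
+ * by mmu_insert_pages_failure_recovery(), so the caller only needs to
+ * report the error:
+ *
+ *   if (MALI_ERROR_NONE != kbase_mmu_insert_pages(kctx, vpfn, phys,
+ *                                                 nr_pages, region->flags))
+ *       treat it as a fault; no page table cleanup is required here
+ */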
+
+/*
+ * We actually only discard the ATE, and not the page table
+ * pages. There is a potential DoS here, as we'll leak memory by
+ * having PTEs that are potentially unused. Will require physical
+ * page accounting, so MMU pages are part of the process allocation.
+ *
+ * IMPORTANT: This uses kbasep_js_runpool_release_ctx() when the context is
+ * currently scheduled into the runpool, and so potentially uses a lot of locks.
+ * These locks must be taken in the correct order with respect to others
+ * already held by the caller. Refer to kbasep_js_runpool_release_ctx() for more
+ * information.
+ */
+mali_error kbase_mmu_teardown_pages(struct kbase_context *kctx, u64 vpfn, u32 nr)
+{
+ osk_phy_addr pgd;
+ u64 *pgd_page;
+ kbase_device *kbdev;
+ mali_bool ctx_is_in_runpool;
+ u32 requested_nr = nr;
+
+ beenthere("kctx %p vpfn %lx nr %d", (void *)kctx, (unsigned long)vpfn, nr);
+
+ OSK_ASSERT(NULL != kctx);
+
+ if (0 == nr)
+ {
+ /* early out if nothing to do */
+ return MALI_ERROR_NONE;
+ }
+
+ kbdev = kctx->kbdev;
+
+ while (nr)
+ {
+ u32 i;
+ u32 index = vpfn & 0x1FF;
+ u32 count = KBASE_MMU_PAGE_ENTRIES - index;
+ if (count > nr)
+ count = nr;
+
+ pgd = mmu_get_bottom_pgd(kctx, vpfn);
+ if(!pgd)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MMU, "kbase_mmu_teardown_pages: mmu_get_bottom_pgd failure\n");
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ pgd_page = osk_kmap(pgd);
+ if(!pgd_page)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MMU, "kbase_mmu_teardown_pages: kmap failure\n");
+ return MALI_ERROR_OUT_OF_MEMORY;
+ }
+
+ for (i = 0; i < count; i++) {
+ /*
+ * Possible micro-optimisation: only write to the
+ * low 32bits. That's enough to invalidate the mapping.
+ */
+ pgd_page[index + i] = ENTRY_IS_INVAL;
+ }
+
+ vpfn += count;
+ nr -= count;
+
+ ksync_kern_vrange_gpu(pgd + (index * sizeof(u64)), pgd_page + index, count * sizeof(u64));
+
+ osk_kunmap(pgd, pgd_page);
+ }
+
+ /* We must flush if we're currently running jobs. At the very least, we need to retain the
+ * context to ensure it doesn't schedule out whilst we're trying to flush it */
+ ctx_is_in_runpool = kbasep_js_runpool_retain_ctx( kbdev, kctx );
+
+ if ( ctx_is_in_runpool )
+ {
+ OSK_ASSERT( kctx->as_nr != KBASEP_AS_NR_INVALID );
+
+ /* Second level check is to try to only do this when jobs are running. The refcount is
+ * a heuristic for this. */
+ if ( kbdev->js_data.runpool_irq.per_as_data[kctx->as_nr].as_busy_refcount >= 2 )
+ {
+ /* Lock the VA region we're about to update */
+ u64 lock_addr = lock_region(kbdev, vpfn, requested_nr);
+
+ /* AS transaction begin */
+ osk_mutex_lock(&kbdev->as[kctx->as_nr].transaction_mutex);
+ kbase_reg_write(kctx->kbdev, MMU_AS_REG(kctx->as_nr, ASn_LOCKADDR_LO), lock_addr & 0xFFFFFFFFUL, kctx);
+ kbase_reg_write(kctx->kbdev, MMU_AS_REG(kctx->as_nr, ASn_LOCKADDR_HI), lock_addr >> 32, kctx);
+ kbase_reg_write(kctx->kbdev, MMU_AS_REG(kctx->as_nr, ASn_COMMAND), ASn_COMMAND_LOCK, kctx);
+
+ /* flush L2 and unlock the VA */
+ kbase_reg_write(kctx->kbdev, MMU_AS_REG(kctx->as_nr, ASn_COMMAND), ASn_COMMAND_FLUSH, kctx);
+
+ /* wait for the flush to complete */
+ while (kbase_reg_read(kctx->kbdev, MMU_AS_REG(kctx->as_nr, ASn_STATUS), kctx) & ASn_STATUS_FLUSH_ACTIVE);
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_9630))
+ {
+ /* Issue an UNLOCK command to ensure that valid page tables are re-read by the GPU after an update.
+ The FLUSH command should perform all the actions necessary, but the bus logs show that if
+ multiple page faults occur within an 8 page region the MMU does not always re-read the updated
+ page table entries for later faults, or reads them only partially, and then raises the page
+ fault IRQ again for the same addresses. The UNLOCK ensures that the MMU cache is flushed, so
+ the updates can be re-read. As the region is now unlocked we need to issue 2 UNLOCK commands
+ in order to flush the MMU/uTLB, see PRLAM-8812.
+ */
+ kbase_reg_write(kctx->kbdev, MMU_AS_REG(kctx->as_nr, ASn_COMMAND), ASn_COMMAND_UNLOCK, kctx);
+ kbase_reg_write(kctx->kbdev, MMU_AS_REG(kctx->as_nr, ASn_COMMAND), ASn_COMMAND_UNLOCK, kctx);
+ }
+
+ osk_mutex_unlock(&kbdev->as[kctx->as_nr].transaction_mutex);
+ /* AS transaction end */
+ }
+ kbasep_js_runpool_release_ctx( kbdev, kctx );
+ }
+ return MALI_ERROR_NONE;
+}
+KBASE_EXPORT_TEST_API(kbase_mmu_teardown_pages)
+
+static int mmu_pte_is_valid(u64 pte)
+{
+ return ((pte & 3) == ENTRY_IS_ATE);
+}
+
+/* This is a debug feature only */
+static void mmu_check_unused(kbase_context *kctx, osk_phy_addr pgd)
+{
+ u64 *page;
+ int i;
+ CSTD_UNUSED(kctx);
+
+ page = osk_kmap_atomic(pgd);
+ /* kmap_atomic should NEVER fail. */
+ OSK_ASSERT(NULL != page);
+
+ for (i = 0; i < KBASE_MMU_PAGE_ENTRIES; i++)
+ {
+ if (mmu_pte_is_valid(page[i]))
+ {
+ beenthere("live pte %016lx", (unsigned long)page[i]);
+ }
+ }
+ osk_kunmap_atomic(pgd, page);
+}
+
+static void mmu_teardown_level(kbase_context *kctx, osk_phy_addr pgd, int level, int zap, u64 *pgd_page_buffer)
+{
+ osk_phy_addr target_pgd;
+ u64 *pgd_page;
+ int i;
+
+ pgd_page = osk_kmap_atomic(pgd);
+ /* kmap_atomic should NEVER fail. */
+ OSK_ASSERT(NULL != pgd_page);
+ /* Copy the page to our preallocated buffer so that we can minimize kmap_atomic usage */
+ memcpy(pgd_page_buffer, pgd_page, OSK_PAGE_SIZE);
+ osk_kunmap_atomic(pgd, pgd_page);
+ pgd_page = pgd_page_buffer;
+
+ for (i = 0; i < KBASE_MMU_PAGE_ENTRIES; i++) {
+ target_pgd = mmu_pte_to_phy_addr(pgd_page[i]);
+
+ if (target_pgd) {
+ if (level < 2)
+ {
+ mmu_teardown_level(kctx, target_pgd, level + 1, zap, pgd_page_buffer+(OSK_PAGE_SIZE/sizeof(u64)));
+ }
+ else {
+ /*
+ * So target_pgd is a level-3 page.
+ * As a leaf, it is safe to free it.
+ * Unless we have live pages attached to it!
+ */
+ mmu_check_unused(kctx, target_pgd);
+ }
+
+ beenthere("pte %lx level %d", (unsigned long)target_pgd, level + 1);
+ if (zap)
+ {
+ kbase_phy_pages_free(kctx->kbdev, &kctx->pgd_allocator, 1, &target_pgd);
+ kbase_mem_usage_release_pages(&kctx->usage, 1);
+ }
+ }
+ }
+}
+
+mali_error kbase_mmu_init(struct kbase_context *kctx)
+{
+ OSK_ASSERT(NULL != kctx);
+ OSK_ASSERT(NULL == kctx->mmu_teardown_pages);
+
+ /* Preallocate MMU depth of four pages for mmu_teardown_level to use */
+ kctx->mmu_teardown_pages = osk_malloc(OSK_PAGE_SIZE*4);
+ if(NULL == kctx->mmu_teardown_pages)
+ {
+ return MALI_ERROR_OUT_OF_MEMORY;
+ }
+ return MALI_ERROR_NONE;
+}
+
+void kbase_mmu_term(struct kbase_context *kctx)
+{
+ OSK_ASSERT(NULL != kctx);
+ OSK_ASSERT(NULL != kctx->mmu_teardown_pages);
+
+ osk_free(kctx->mmu_teardown_pages);
+ kctx->mmu_teardown_pages = NULL;
+}
+
+void kbase_mmu_free_pgd(struct kbase_context *kctx)
+{
+ OSK_ASSERT(NULL != kctx);
+ OSK_ASSERT(NULL != kctx->mmu_teardown_pages);
+
+ mmu_teardown_level(kctx, kctx->pgd, MIDGARD_MMU_TOPLEVEL, 1, kctx->mmu_teardown_pages);
+
+ beenthere("pgd %lx", (unsigned long)kctx->pgd);
+ kbase_phy_pages_free(kctx->kbdev, &kctx->pgd_allocator, 1, &kctx->pgd);
+ kbase_mem_usage_release_pages(&kctx->usage, 1);
+}
+KBASE_EXPORT_TEST_API(kbase_mmu_free_pgd)
+
+static size_t kbasep_mmu_dump_level(kbase_context *kctx, osk_phy_addr pgd, int level, char **buffer, size_t *size_left)
+{
+ osk_phy_addr target_pgd;
+ u64 *pgd_page;
+ int i;
+ size_t size = KBASE_MMU_PAGE_ENTRIES*sizeof(u64)+sizeof(u64);
+ size_t dump_size;
+
+ pgd_page = osk_kmap(pgd);
+ if(!pgd_page)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MMU, "kbasep_mmu_dump_level: kmap failure\n");
+ return 0;
+ }
+
+ if (*size_left >= size)
+ {
+ /* A modified physical address that contains the page table level */
+ u64 m_pgd = pgd | level;
+
+ /* Put the modified physical address in the output buffer */
+ memcpy(*buffer, &m_pgd, sizeof(m_pgd));
+ *buffer += sizeof(m_pgd);
+
+ /* Followed by the page table itself */
+ memcpy(*buffer, pgd_page, sizeof(u64)*KBASE_MMU_PAGE_ENTRIES);
+ *buffer += sizeof(u64)*KBASE_MMU_PAGE_ENTRIES;
+
+ *size_left -= size;
+ }
+
+ for (i = 0; i < KBASE_MMU_PAGE_ENTRIES; i++) {
+ if ((pgd_page[i] & ENTRY_IS_PTE) == ENTRY_IS_PTE) {
+ target_pgd = mmu_pte_to_phy_addr(pgd_page[i]);
+
+ dump_size = kbasep_mmu_dump_level(kctx, target_pgd, level + 1, buffer, size_left);
+ if(!dump_size)
+ {
+ osk_kunmap(pgd, pgd_page);
+ return 0;
+ }
+ size += dump_size;
+ }
+ }
+
+ osk_kunmap(pgd, pgd_page);
+
+ return size;
+}
+
+void *kbase_mmu_dump(struct kbase_context *kctx, int nr_pages)
+{
+ void *kaddr;
+ size_t size_left;
+
+ OSK_ASSERT(kctx);
+
+ if (0 == nr_pages)
+ {
+ /* can't dump into a 0 sized buffer, early out */
+ return NULL;
+ }
+
+ size_left = nr_pages * OSK_PAGE_SIZE;
+
+ kaddr = osk_vmalloc(size_left);
+
+ if (kaddr)
+ {
+ u64 end_marker = 0xFFULL;
+ char *buffer = (char*)kaddr;
+
+ size_t size = kbasep_mmu_dump_level(kctx, kctx->pgd, MIDGARD_MMU_TOPLEVEL, &buffer, &size_left);
+ if(!size)
+ {
+ osk_vfree(kaddr);
+ return NULL;
+ }
+
+ /* Add on the size for the end marker */
+ size += sizeof(u64);
+
+ if (size > nr_pages * OSK_PAGE_SIZE || size_left < sizeof(u64)) {
+ /* The buffer isn't big enough - free the memory and return failure */
+ osk_vfree(kaddr);
+ return NULL;
+ }
+
+ /* Add the end marker */
+ memcpy(buffer, &end_marker, sizeof(u64));
+ }
+
+ return kaddr;
+}
+KBASE_EXPORT_TEST_API(kbase_mmu_dump)
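+
+/*
+ * Sketch of a consumer of the dump format produced above (hypothetical
+ * code, assuming a non-NULL buffer returned by kbase_mmu_dump()): each
+ * record is one tagged u64 (the PGD physical address with the level in
+ * its low bits, as written by kbasep_mmu_dump_level()) followed by the
+ * 512 table entries, and the stream ends with the 0xFF marker:
+ *
+ *   u64 *p = (u64 *)dump;
+ *   while (*p != 0xFFULL) {
+ *       int level = (int)(*p & 3);          /* low bits carry the level */
+ *       osk_phy_addr pgd = *p & ~0xFFFULL;  /* page-aligned PGD address */
+ *       p += 1 + KBASE_MMU_PAGE_ENTRIES;    /* skip tag plus entries */
+ *   }
+ */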
+
+static u64 lock_region(kbase_device *kbdev, u64 pfn, u32 num_pages)
+{
+ u64 region;
+
+ /* can't lock a zero sized range */
+ OSK_ASSERT(num_pages);
+
+ region = pfn << OSK_PAGE_SHIFT;
+ /*
+ * osk_clz returns (given the ASSERT above):
+ * 32-bit: 0 .. 31
+ * 64-bit: 0 .. 63
+ *
+ * so OSK_BITS_PER_LONG + 10 - osk_clz(num_pages) results in the
+ * range (11 .. 42) on both 32-bit and 64-bit builds: 11 encodes a
+ * single 4kB page and each increment doubles the region size.
+ */
+
+ /* defensively handle num_pages being zero, in case the ASSERT
+ * above is compiled out */
+ if (0 == num_pages)
+ {
+ region |= 11;
+ }
+ else
+ {
+ u8 region_width;
+ region_width = ( OSK_BITS_PER_LONG + 10 - osk_clz(num_pages) );
+ if (num_pages != (1ul << (region_width - 11)))
+ {
+ /* not pow2, so must go up to the next pow2 */
+ region_width += 1;
+ }
+ OSK_ASSERT(region_width <= KBASE_LOCK_REGION_MAX_SIZE);
+ OSK_ASSERT(region_width >= KBASE_LOCK_REGION_MIN_SIZE);
+ region |= region_width;
+ }
+
+ return region;
+}
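+
+/*
+ * Worked example for the encoding above: lock_region(kbdev, pfn, 3) rounds
+ * 3 pages up to the next power of two. On a 32-bit build
+ * 32 + 10 - osk_clz(3) gives 12, and 3 != (1ul << (12 - 11)), so
+ * region_width becomes 13 and the returned value is
+ * (pfn << OSK_PAGE_SHIFT) | 13, covering 1 << (13 - 11) = 4 pages.
+ */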
+
+static void bus_fault_worker(osk_workq_work *data)
+{
+ const int num_as = 16;
+ kbase_as * faulting_as;
+ int as_no;
+ kbase_context * kctx;
+ kbase_device * kbdev;
+ u32 reg;
+ mali_bool reset_status = MALI_FALSE;
+
+ faulting_as = CONTAINER_OF(data, kbase_as, work_busfault);
+ as_no = faulting_as->number;
+
+ kbdev = CONTAINER_OF( faulting_as, kbase_device, as[as_no] );
+
+ /* Grab the context that was already refcounted in kbase_mmu_interrupt().
+ * Therefore, it cannot be scheduled out of this AS until we explicitly release it
+ *
+ * NOTE: NULL can be returned here if we're gracefully handling a spurious interrupt */
+ kctx = kbasep_js_runpool_lookup_ctx_noretain( kbdev, as_no );
+
+ /* switch to UNMAPPED mode, will abort all jobs and stop any hw counter dumping */
+ /* AS transaction begin */
+ osk_mutex_lock(&kbdev->as[as_no].transaction_mutex);
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8245))
+ {
+ /* Due to H/W issue 8245 we need to reset the GPU after using UNMAPPED mode.
+ * We start the reset before switching to UNMAPPED to ensure that unrelated jobs
+ * are evicted from the GPU before the switch.
+ */
+ reset_status = kbase_prepare_to_reset_gpu(kbdev);
+ }
+
+ reg = kbase_reg_read(kbdev, MMU_AS_REG(as_no, ASn_TRANSTAB_LO), kctx);
+ reg &= ~3;
+ kbase_reg_write(kbdev, MMU_AS_REG(as_no, ASn_TRANSTAB_LO), reg, kctx);
+ kbase_reg_write(kbdev, MMU_AS_REG(as_no, ASn_COMMAND), ASn_COMMAND_UPDATE, kctx);
+
+ kbase_reg_write(kbdev, MMU_REG(MMU_IRQ_CLEAR), (1UL << as_no) | (1UL << (as_no + num_as)) , NULL);
+ osk_mutex_unlock(&kbdev->as[as_no].transaction_mutex);
+ /* AS transaction end */
+
+ mmu_mask_reenable( kbdev, kctx, faulting_as );
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8245) && reset_status)
+ {
+ kbase_reset_gpu(kbdev);
+ }
+
+ /* By this point, the fault was handled in some way, so release the ctx refcount */
+ if ( kctx != NULL )
+ {
+ kbasep_js_runpool_release_ctx( kbdev, kctx );
+ }
+}
+
+void kbase_mmu_interrupt(kbase_device * kbdev, u32 irq_stat)
+{
+ const int num_as = 16;
+ kbasep_js_device_data *js_devdata;
+ const int busfault_shift = 16;
+ const int pf_shift = 0;
+ const unsigned long mask = (1UL << num_as) - 1;
+
+ u64 fault_addr;
+ u32 new_mask;
+ u32 tmp;
+
+ u32 bf_bits = (irq_stat >> busfault_shift) & mask; /* bus faults */
+ /* Ignore ASes with both pf and bf */
+ u32 pf_bits = ((irq_stat >> pf_shift) & mask) & ~bf_bits; /* page faults */
+
+ OSK_ASSERT( NULL != kbdev);
+
+ js_devdata = &kbdev->js_data;
+
+ /* remember current mask */
+ osk_spinlock_irq_lock(&kbdev->mmu_mask_change);
+ new_mask = kbase_reg_read(kbdev, MMU_REG(MMU_IRQ_MASK), NULL);
+ /* mask interrupts for now */
+ kbase_reg_write(kbdev, MMU_REG(MMU_IRQ_MASK), 0, NULL);
+ osk_spinlock_irq_unlock(&kbdev->mmu_mask_change);
+
+ while (bf_bits)
+ {
+ kbase_as * as;
+ int as_no;
+ kbase_context * kctx;
+
+ /* the while logic ensures we have a bit set, no need to check for not-found here */
+ as_no = osk_find_first_set_bit(bf_bits);
+
+ /* Refcount the kctx ASAP - it shouldn't disappear anyway, since Bus/Page faults
+ * _should_ only occur whilst jobs are running, and a job causing the Bus/Page fault
+ * shouldn't complete until the MMU is updated */
+ kctx = kbasep_js_runpool_lookup_ctx( kbdev, as_no );
+
+ /* mark as handled */
+ bf_bits &= ~(1UL << as_no);
+
+ /* find faulting address */
+ fault_addr = kbase_reg_read(kbdev, MMU_AS_REG(as_no, ASn_FAULTADDRESS_HI), kctx);
+ fault_addr <<= 32;
+ fault_addr |= kbase_reg_read(kbdev, MMU_AS_REG(as_no, ASn_FAULTADDRESS_LO), kctx);
+
+ if (kctx)
+ {
+ /* hw counters dumping in progress, signal the other thread that it failed */
+ if ((kbdev->hwcnt.kctx == kctx) && (kbdev->hwcnt.state == KBASE_INSTR_STATE_DUMPING))
+ {
+ kbdev->hwcnt.state = KBASE_INSTR_STATE_FAULT;
+ }
+
+ /* Stop the kctx from submitting more jobs and cause it to be scheduled
+ * out/rescheduled when all references to it are released */
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+ kbasep_js_clear_submit_allowed( js_devdata, kctx );
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+
+ OSK_PRINT_WARN(OSK_BASE_MMU, "Bus error in AS%d at 0x%016llx\n", as_no, fault_addr);
+ }
+ else
+ {
+ OSK_PRINT_WARN(OSK_BASE_MMU,
+ "Bus error in AS%d at 0x%016llx with no context present! "
+ "Suprious IRQ or SW Design Error?\n",
+ as_no, fault_addr);
+ }
+
+ as = &kbdev->as[as_no];
+
+ /* remove the queued BFs from the mask */
+ new_mask &= ~(1UL << (as_no + num_as));
+
+ /* We need to switch to UNMAPPED mode - but we do this in a worker so that we can sleep */
+ osk_workq_work_init(&as->work_busfault, bus_fault_worker);
+ osk_workq_submit(&as->pf_wq, &as->work_busfault);
+ }
+
+ /*
+ * pf_bits is non-zero if we have at least one AS with a page fault and no bus fault.
+ * Handle the PFs in our worker thread.
+ */
+ while (pf_bits)
+ {
+ kbase_as * as;
+ /* the while logic ensures we have a bit set, no need to check for not-found here */
+ int as_no = osk_find_first_set_bit(pf_bits);
+ kbase_context * kctx;
+
+ /* Refcount the kctx ASAP - it shouldn't disappear anyway, since Bus/Page faults
+ * _should_ only occur whilst jobs are running, and a job causing the Bus/Page fault
+ * shouldn't complete until the MMU is updated */
+ kctx = kbasep_js_runpool_lookup_ctx( kbdev, as_no );
+
+ /* mark as handled */
+ pf_bits &= ~(1UL << as_no);
+
+ /* find faulting address */
+ fault_addr = kbase_reg_read(kbdev, MMU_AS_REG(as_no, ASn_FAULTADDRESS_HI), kctx);
+ fault_addr <<= 32;
+ fault_addr |= kbase_reg_read(kbdev, MMU_AS_REG(as_no, ASn_FAULTADDRESS_LO), kctx);
+
+ if ( kctx == NULL )
+ {
+ OSK_PRINT_WARN(OSK_BASE_MMU,
+ "Page fault in AS%d at 0x%016llx with no context present! "
+ "Suprious IRQ or SW Design Error?\n",
+ as_no, fault_addr);
+ }
+
+ as = &kbdev->as[as_no];
+
+ /* remove the queued PFs from the mask */
+ new_mask &= ~((1UL << as_no) | (1UL << (as_no + num_as)));
+
+ /* queue work pending for this AS */
+ as->fault_addr = fault_addr;
+
+ osk_workq_work_init(&as->work_pagefault, page_fault_worker);
+ osk_workq_submit(&as->pf_wq, &as->work_pagefault);
+ }
+
+ /* reenable interrupts */
+ osk_spinlock_irq_lock(&kbdev->mmu_mask_change);
+ tmp = kbase_reg_read(kbdev, MMU_REG(MMU_IRQ_MASK), NULL);
+ new_mask |= tmp;
+ kbase_reg_write(kbdev, MMU_REG(MMU_IRQ_MASK), new_mask, NULL);
+ osk_spinlock_irq_unlock(&kbdev->mmu_mask_change);
+}
+KBASE_EXPORT_TEST_API(kbase_mmu_interrupt)
+
+const char *kbase_exception_name(u32 exception_code)
+{
+ const char *e;
+
+ switch(exception_code)
+ {
+ /* Non-Fault Status code */
+ case 0x00: e = "NOT_STARTED/IDLE/OK"; break;
+ case 0x01: e = "DONE"; break;
+ case 0x02: e = "INTERRUPTED"; break;
+ case 0x03: e = "STOPPED"; break;
+ case 0x04: e = "TERMINATED"; break;
+ case 0x08: e = "ACTIVE"; break;
+ /* Job exceptions */
+ case 0x40: e = "JOB_CONFIG_FAULT"; break;
+ case 0x41: e = "JOB_POWER_FAULT"; break;
+ case 0x42: e = "JOB_READ_FAULT"; break;
+ case 0x43: e = "JOB_WRITE_FAULT"; break;
+ case 0x44: e = "JOB_AFFINITY_FAULT"; break;
+ case 0x48: e = "JOB_BUS_FAULT"; break;
+ case 0x50: e = "INSTR_INVALID_PC"; break;
+ case 0x51: e = "INSTR_INVALID_ENC"; break;
+ case 0x52: e = "INSTR_TYPE_MISMATCH"; break;
+ case 0x53: e = "INSTR_OPERAND_FAULT"; break;
+ case 0x54: e = "INSTR_TLS_FAULT"; break;
+ case 0x55: e = "INSTR_BARRIER_FAULT"; break;
+ case 0x56: e = "INSTR_ALIGN_FAULT"; break;
+ case 0x58: e = "DATA_INVALID_FAULT"; break;
+ case 0x59: e = "TILE_RANGE_FAULT"; break;
+ case 0x5A: e = "ADDR_RANGE_FAULT"; break;
+ case 0x60: e = "OUT_OF_MEMORY"; break;
+ /* GPU exceptions */
+ case 0x80: e = "DELAYED_BUS_FAULT"; break;
+ case 0x81: e = "SHAREABILITY_FAULT"; break;
+ /* MMU exceptions */
+ case 0xC0: case 0xC1: case 0xC2: case 0xC3:
+ case 0xC4: case 0xC5: case 0xC6: case 0xC7:
+ e = "TRANSLATION_FAULT"; break;
+ case 0xC8: e = "PERMISSION_FAULT"; break;
+ case 0xD0: case 0xD1: case 0xD2: case 0xD3:
+ case 0xD4: case 0xD5: case 0xD6: case 0xD7:
+ e = "TRANSTAB_BUS_FAULT"; break;
+ case 0xD8: e = "ACCESS_FLAG"; break;
+ default:
+ e = "UNKNOWN"; break;
+ }
+
+ return e;
+}
+
+/**
+ * The caller must ensure it has retained the ctx to prevent it from being scheduled out whilst it's being worked on.
+ */
+static void kbase_mmu_report_fault_and_kill(kbase_context *kctx, kbase_as * as, mali_addr64 fault_addr)
+{
+ u32 fault_status;
+ u32 reg;
+ int exception_type;
+ int access_type;
+ int source_id;
+ int as_no;
+ kbase_device * kbdev;
+ kbasep_js_device_data *js_devdata;
+ mali_bool reset_status = MALI_FALSE;
+#if MALI_DEBUG
+ static const char *access_type_names[] = { "RESERVED", "EXECUTE", "READ", "WRITE" };
+#endif
+
+ OSK_ASSERT(as);
+ OSK_ASSERT(kctx);
+ CSTD_UNUSED(fault_addr);
+
+ as_no = as->number;
+ kbdev = kctx->kbdev;
+ js_devdata = &kbdev->js_data;
+
+ /* ASSERT that the context won't leave the runpool */
+ OSK_ASSERT( kbasep_js_debug_check_ctx_refcount( kbdev, kctx ) > 0 );
+
+ fault_status = kbase_reg_read(kbdev, MMU_AS_REG(as_no, ASn_FAULTSTATUS), kctx);
+
+ /* decode the fault status */
+ exception_type = fault_status & 0xFF;
+ access_type = (fault_status >> 8) & 0x3;
+ source_id = (fault_status >> 16);
+
+ /* terminal fault, print info about the fault */
+ OSK_PRINT_WARN(OSK_BASE_MMU, "Fault in AS%d at VA 0x%016llX", as_no, fault_addr);
+ OSK_PRINT_WARN(OSK_BASE_MMU, "raw fault status 0x%X", fault_status);
+ OSK_PRINT_WARN(OSK_BASE_MMU, "decoded fault status (%s):", (fault_status & (1 << 10) ? "DECODER FAULT" : "SLAVE FAULT"));
+ OSK_PRINT_WARN(OSK_BASE_MMU, "exception type 0x%X: %s", exception_type, kbase_exception_name(exception_type));
+ OSK_PRINT_WARN(OSK_BASE_MMU, "access type 0x%X: %s", access_type, access_type_names[access_type]);
+ OSK_PRINT_WARN(OSK_BASE_MMU, "source id 0x%X", source_id);
+
+ /* hardware counters dump fault handling */
+ if ((kbdev->hwcnt.kctx) &&
+ (kbdev->hwcnt.kctx->as_nr == as_no) &&
+ (kbdev->hwcnt.state == KBASE_INSTR_STATE_DUMPING))
+ {
+ u32 num_core_groups = kbdev->gpu_props.num_core_groups;
+ if ((fault_addr >= kbdev->hwcnt.addr) && (fault_addr < (kbdev->hwcnt.addr + (num_core_groups * 2048))))
+ {
+ kbdev->hwcnt.state = KBASE_INSTR_STATE_FAULT;
+ }
+ }
+
+ /* Stop the kctx from submitting more jobs and cause it to be scheduled
+ * out/rescheduled - this will occur on releasing the context's refcount */
+ osk_spinlock_irq_lock( &js_devdata->runpool_irq.lock );
+ kbasep_js_clear_submit_allowed( js_devdata, kctx );
+ osk_spinlock_irq_unlock( &js_devdata->runpool_irq.lock );
+
+ /* Kill any running jobs from the context. Submit is disallowed, so no more jobs from this
+ * context can appear in the job slots from this point on */
+ kbase_job_kill_jobs_from_context(kctx);
+ /* AS transaction begin */
+ osk_mutex_lock(&as->transaction_mutex);
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8245))
+ {
+ /* Due to H/W issue 8245 we need to reset the GPU after using UNMAPPED mode.
+ * We start the reset before switching to UNMAPPED to ensure that unrelated jobs
+ * are evicted from the GPU before the switch.
+ */
+ reset_status = kbase_prepare_to_reset_gpu(kbdev);
+ }
+
+ /* switch to UNMAPPED mode, will abort all jobs and stop any hw counter dumping */
+ reg = kbase_reg_read(kbdev, MMU_AS_REG(as_no, ASn_TRANSTAB_LO), kctx);
+ reg &= ~3;
+ kbase_reg_write(kbdev, MMU_AS_REG(as_no, ASn_TRANSTAB_LO), reg, kctx);
+ kbase_reg_write(kbdev, MMU_AS_REG(as_no, ASn_COMMAND), ASn_COMMAND_UPDATE, kctx);
+
+ kbase_reg_write(kbdev, MMU_REG(MMU_IRQ_CLEAR), (1UL << as_no), NULL);
+
+ osk_mutex_unlock(&as->transaction_mutex);
+ /* AS transaction end */
+ mmu_mask_reenable(kbdev, kctx, as);
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8245) && reset_status)
+ {
+ kbase_reset_gpu(kbdev);
+ }
+}
+
+void kbasep_as_do_poke(osk_workq_work * work)
+{
+ kbase_as * as;
+ kbase_device * kbdev;
+
+ OSK_ASSERT(work);
+ as = CONTAINER_OF(work, kbase_as, poke_work);
+ kbdev = CONTAINER_OF(as, kbase_device, as[as->number]);
+
+ kbase_pm_context_active(kbdev);
+
+ /* AS transaction begin */
+ osk_mutex_lock(&as->transaction_mutex);
+ /* Force a uTLB invalidate */
+ kbase_reg_write(kbdev, MMU_AS_REG(as->number, ASn_COMMAND), ASn_COMMAND_UNLOCK, NULL);
+ osk_mutex_unlock(&as->transaction_mutex);
+ /* AS transaction end */
+
+ kbase_pm_context_idle(kbdev);
+
+ if (osk_atomic_get(&as->poke_refcount))
+ {
+ osk_error err;
+ /* still someone depending on the UNLOCK, schedule a run */
+ err = osk_timer_modify(&as->poke_timer, 5/*ms*/);
+ if (err != OSK_ERR_NONE)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MMU, "Failed to enable the BASE_HW_ISSUE_8316 workaround");
+ }
+ }
+}
+
+void kbasep_as_poke_timer_callback(void* arg)
+{
+ kbase_as * as;
+ as = (kbase_as*)arg;
+ osk_workq_submit(&as->poke_wq, &as->poke_work);
+}
+
+void kbase_as_poking_timer_retain(kbase_as * as)
+{
+ OSK_ASSERT(as);
+
+ if (1 == osk_atomic_inc(&as->poke_refcount))
+ {
+ /* need to start poking */
+ osk_workq_submit(&as->poke_wq, &as->poke_work);
+ }
+}
+
+void kbase_as_poking_timer_release(kbase_as * as)
+{
+ OSK_ASSERT(as);
+ osk_atomic_dec(&as->poke_refcount);
+}
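+
+/*
+ * Usage note (the intended pairing, inferred from the helpers above):
+ * kbase_as_poking_timer_retain() starts the 5ms UNLOCK poke loop on first
+ * use and kbase_as_poking_timer_release() drops the reference; once
+ * poke_refcount reaches zero, kbasep_as_do_poke() stops re-arming the timer.
+ *
+ *   kbase_as_poking_timer_retain(as);
+ *   ... work needing the BASE_HW_ISSUE_8316 workaround ...
+ *   kbase_as_poking_timer_release(as);
+ */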
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_pm.c
+ * Base kernel power management APIs
+ */
+
+#include <osk/mali_osk.h>
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_midg_regmap.h>
+
+#include <kbase/src/common/mali_kbase_pm.h>
+
+/* Policy operation structures */
+extern const kbase_pm_policy kbase_pm_always_on_policy_ops;
+extern const kbase_pm_policy kbase_pm_demand_policy_ops;
+
+/** A list of the power policies available in the system */
+static const kbase_pm_policy * const policy_list[] =
+{
+/* It is not possible to modify power management run-time in a model build. Therefore an
+ * instrumented model build must use always on power management policy. */
+#if MALI_NO_MALI || ( MALI_KBASEP_MODEL && MALI_INSTRUMENTATION_LEVEL )
+ &kbase_pm_always_on_policy_ops,
+ &kbase_pm_demand_policy_ops
+#else
+ &kbase_pm_demand_policy_ops,
+ &kbase_pm_always_on_policy_ops
+#endif
+};
+
+/** The number of policies available in the system.
+ * This is derived from the number of policies listed in policy_list.
+ */
+#define POLICY_COUNT (sizeof(policy_list)/sizeof(*policy_list))
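+
+/*
+ * For example, with the two policies listed above POLICY_COUNT evaluates to
+ * 2, and kbase_pm_list_policies(NULL) returns that count without touching
+ * the list pointer.
+ */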
+
+void kbase_pm_register_access_enable(kbase_device *kbdev)
+{
+ kbase_pm_callback_conf *callbacks;
+
+ callbacks = (kbase_pm_callback_conf*) kbasep_get_config_value(kbdev, kbdev->config_attributes,
+ KBASE_CONFIG_ATTR_POWER_MANAGEMENT_CALLBACKS);
+
+ if (callbacks)
+ {
+ callbacks->power_on_callback(kbdev);
+ }
+}
+
+void kbase_pm_register_access_disable(kbase_device *kbdev)
+{
+ kbase_pm_callback_conf *callbacks;
+
+ callbacks = (kbase_pm_callback_conf*) kbasep_get_config_value(kbdev, kbdev->config_attributes,
+ KBASE_CONFIG_ATTR_POWER_MANAGEMENT_CALLBACKS);
+
+ if (callbacks)
+ {
+ callbacks->power_off_callback(kbdev);
+ }
+}
+
+mali_error kbase_pm_init(kbase_device *kbdev)
+{
+ mali_error ret = MALI_ERROR_NONE;
+ osk_error osk_err;
+ kbase_pm_callback_conf *callbacks;
+
+ OSK_ASSERT(kbdev != NULL);
+
+ kbdev->pm.gpu_powered = MALI_FALSE;
+
+ callbacks = (kbase_pm_callback_conf*) kbasep_get_config_value(kbdev, kbdev->config_attributes,
+ KBASE_CONFIG_ATTR_POWER_MANAGEMENT_CALLBACKS);
+ if (callbacks)
+ {
+ kbdev->pm.callback_power_on = callbacks->power_on_callback;
+ kbdev->pm.callback_power_off = callbacks->power_off_callback;
+ }
+ else
+ {
+ kbdev->pm.callback_power_on = NULL;
+ kbdev->pm.callback_power_off = NULL;
+ }
+
+ /* Initialise the metrics subsystem */
+ ret = kbasep_pm_metrics_init(kbdev);
+ if (MALI_ERROR_NONE != ret)
+ {
+ return ret;
+ }
+
+ osk_err = osk_waitq_init(&kbdev->pm.power_up_waitqueue);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto power_up_waitq_fail;
+ }
+
+ osk_err = osk_waitq_init(&kbdev->pm.power_down_waitqueue);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto power_down_waitq_fail;
+ }
+
+ osk_err = osk_waitq_init(&kbdev->pm.policy_outstanding_event);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto policy_outstanding_event_waitq_fail;
+ }
+ osk_waitq_set(&kbdev->pm.policy_outstanding_event);
+
+ osk_err = osk_workq_init(&kbdev->pm.workqueue, "kbase_pm", OSK_WORKQ_NON_REENTRANT);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto workq_fail;
+ }
+
+ osk_err = osk_spinlock_irq_init(&kbdev->pm.power_change_lock, OSK_LOCK_ORDER_POWER_MGMT);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto power_change_lock_fail;
+ }
+
+ osk_err = osk_spinlock_irq_init(&kbdev->pm.active_count_lock, OSK_LOCK_ORDER_POWER_MGMT_ACTIVE);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto active_count_lock_fail;
+ }
+
+ osk_err = osk_spinlock_irq_init(&kbdev->pm.gpu_cycle_counter_requests_lock, OSK_LOCK_ORDER_POWER_MGMT_GPU_CYCLE_COUNTER);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto gpu_cycle_counter_requests_lock_fail;
+ }
+
+ osk_err = osk_spinlock_irq_init(&kbdev->pm.gpu_powered_lock, OSK_LOCK_ORDER_POWER_MGMT);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ goto gpu_powered_lock_fail;
+ }
+
+ return MALI_ERROR_NONE;
+
+gpu_powered_lock_fail:
+ osk_spinlock_irq_term(&kbdev->pm.gpu_cycle_counter_requests_lock);
+gpu_cycle_counter_requests_lock_fail:
+ osk_spinlock_irq_term(&kbdev->pm.active_count_lock);
+active_count_lock_fail:
+ osk_spinlock_irq_term(&kbdev->pm.power_change_lock);
+power_change_lock_fail:
+ osk_workq_term(&kbdev->pm.workqueue);
+workq_fail:
+ osk_waitq_term(&kbdev->pm.policy_outstanding_event);
+policy_outstanding_event_waitq_fail:
+ osk_waitq_term(&kbdev->pm.power_down_waitqueue);
+power_down_waitq_fail:
+ osk_waitq_term(&kbdev->pm.power_up_waitqueue);
+power_up_waitq_fail:
+ kbasep_pm_metrics_term(kbdev);
+ return MALI_ERROR_FUNCTION_FAILED;
+}
+KBASE_EXPORT_TEST_API(kbase_pm_init)
+
+mali_error kbase_pm_powerup(kbase_device *kbdev)
+{
+ mali_error ret;
+
+ OSK_ASSERT(kbdev != NULL);
+
+ ret = kbase_pm_init_hw(kbdev);
+ if (ret != MALI_ERROR_NONE)
+ {
+ return ret;
+ }
+
+ kbase_pm_power_transitioning(kbdev);
+
+ kbasep_pm_read_present_cores(kbdev);
+
+ /* Pretend the GPU is active to prevent a power policy turning the GPU cores off */
+ osk_spinlock_irq_lock(&kbdev->pm.active_count_lock);
+ kbdev->pm.active_count = 1;
+ osk_spinlock_irq_unlock(&kbdev->pm.active_count_lock);
+
+ osk_spinlock_irq_lock(&kbdev->pm.gpu_cycle_counter_requests_lock);
+ /* Ensure cycle counter is off */
+ kbdev->pm.gpu_cycle_counter_requests = 0;
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_COMMAND), GPU_COMMAND_CYCLE_COUNT_STOP, NULL);
+ osk_spinlock_irq_unlock(&kbdev->pm.gpu_cycle_counter_requests_lock);
+
+ osk_atomic_set(&kbdev->pm.pending_events, 0);
+
+ osk_atomic_set(&kbdev->pm.work_active, (u32)KBASE_PM_WORK_ACTIVE_STATE_INACTIVE);
+
+ kbdev->pm.new_policy = NULL;
+ kbdev->pm.current_policy = policy_list[0];
+ kbdev->pm.current_policy->init(kbdev);
+
+ kbase_pm_send_event(kbdev, KBASE_PM_EVENT_POLICY_INIT);
+
+ /* Idle the GPU */
+ kbase_pm_context_idle(kbdev);
+
+ return MALI_ERROR_NONE;
+}
+KBASE_EXPORT_TEST_API(kbase_pm_powerup)
+
+void kbase_pm_power_transitioning(kbase_device *kbdev)
+{
+ OSK_ASSERT(kbdev != NULL);
+
+ /* Clear the wait queues that are used to detect successful power up or down */
+ osk_waitq_clear(&kbdev->pm.power_up_waitqueue);
+ osk_waitq_clear(&kbdev->pm.power_down_waitqueue);
+}
+
+KBASE_EXPORT_TEST_API(kbase_pm_power_transitioning)
+
+void kbase_pm_power_up_done(kbase_device *kbdev)
+{
+ OSK_ASSERT(kbdev != NULL);
+
+ osk_waitq_set(&kbdev->pm.power_up_waitqueue);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_power_up_done)
+
+void kbase_pm_reset_done(kbase_device *kbdev)
+{
+ OSK_ASSERT(kbdev != NULL);
+
+ osk_waitq_set(&kbdev->pm.power_up_waitqueue);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_reset_done)
+
+void kbase_pm_power_down_done(kbase_device *kbdev)
+{
+ OSK_ASSERT(kbdev != NULL);
+
+ osk_waitq_set(&kbdev->pm.power_down_waitqueue);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_power_down_done)
+
+static void kbase_pm_wait_for_no_outstanding_events(kbase_device *kbdev)
+{
+ osk_waitq_wait(&kbdev->pm.policy_outstanding_event);
+}
+
+void kbase_pm_context_active(kbase_device *kbdev)
+{
+ int c;
+
+ OSK_ASSERT(kbdev != NULL);
+
+ osk_spinlock_irq_lock(&kbdev->pm.active_count_lock);
+ c = ++kbdev->pm.active_count;
+ osk_spinlock_irq_unlock(&kbdev->pm.active_count_lock);
+
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, PM_CONTEXT_ACTIVE, NULL, NULL, 0u, c );
+
+ if (c == 1)
+ {
+ /* First context active */
+ kbase_pm_send_event(kbdev, KBASE_PM_EVENT_GPU_ACTIVE);
+
+ kbasep_pm_record_gpu_active(kbdev);
+ }
+ /* Synchronise with the power policy to ensure that the event has been noticed */
+ kbase_pm_wait_for_no_outstanding_events(kbdev);
+
+ kbase_pm_wait_for_power_up(kbdev);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_context_active)
+
+void kbase_pm_context_idle(kbase_device *kbdev)
+{
+ int c;
+
+ OSK_ASSERT(kbdev != NULL);
+
+ osk_spinlock_irq_lock(&kbdev->pm.active_count_lock);
+
+ c = --kbdev->pm.active_count;
+
+ KBASE_TRACE_ADD_REFCOUNT( kbdev, PM_CONTEXT_IDLE, NULL, NULL, 0u, c );
+
+ OSK_ASSERT(c >= 0);
+
+ if (c == 0)
+ {
+ /* Last context has gone idle */
+ kbase_pm_send_event(kbdev, KBASE_PM_EVENT_GPU_IDLE);
+
+ kbasep_pm_record_gpu_idle(kbdev);
+ }
+
+ /* We must wait for the above functions to finish (in the case c==0) before releasing the lock otherwise there is
+ * a race with another thread calling kbase_pm_context_active - in this case the IDLE message could be sent
+ * *after* the ACTIVE message causing the policy and metrics systems to become confused
+ */
+ osk_spinlock_irq_unlock(&kbdev->pm.active_count_lock);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_context_idle)
+
+void kbase_pm_halt(kbase_device *kbdev)
+{
+ OSK_ASSERT(kbdev != NULL);
+
+ if (kbdev->pm.current_policy != NULL)
+ {
+ /* Turn the GPU off */
+ kbase_pm_send_event(kbdev, KBASE_PM_EVENT_SYSTEM_SUSPEND);
+ /* Wait for the policy to acknowledge */
+ kbase_pm_wait_for_power_down(kbdev);
+ }
+}
+KBASE_EXPORT_TEST_API(kbase_pm_halt)
+
+void kbase_pm_term(kbase_device *kbdev)
+{
+ OSK_ASSERT(kbdev != NULL);
+ OSK_ASSERT(kbdev->pm.active_count == 0);
+ OSK_ASSERT(kbdev->pm.gpu_cycle_counter_requests == 0);
+ /* Destroy the workqueue - this ensures that all messages have been processed */
+ osk_workq_term(&kbdev->pm.workqueue);
+
+ if (kbdev->pm.current_policy != NULL)
+ {
+ /* Free any resources the policy allocated */
+ kbdev->pm.current_policy->term(kbdev);
+ }
+
+ /* Free the wait queues */
+ osk_waitq_term(&kbdev->pm.power_up_waitqueue);
+ osk_waitq_term(&kbdev->pm.power_down_waitqueue);
+ osk_waitq_term(&kbdev->pm.policy_outstanding_event);
+
+ /* Synchronise with other threads */
+ osk_spinlock_irq_lock(&kbdev->pm.power_change_lock);
+ osk_spinlock_irq_unlock(&kbdev->pm.power_change_lock);
+
+ /* Free the spinlocks */
+ osk_spinlock_irq_term(&kbdev->pm.power_change_lock);
+ osk_spinlock_irq_term(&kbdev->pm.active_count_lock);
+ osk_spinlock_irq_term(&kbdev->pm.gpu_cycle_counter_requests_lock);
+ osk_spinlock_irq_term(&kbdev->pm.gpu_powered_lock);
+
+ /* Shut down the metrics subsystem */
+ kbasep_pm_metrics_term(kbdev);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_term)
+
+void kbase_pm_wait_for_power_up(kbase_device *kbdev)
+{
+ OSK_ASSERT(kbdev != NULL);
+
+ osk_waitq_wait(&kbdev->pm.power_up_waitqueue);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_wait_for_power_up)
+
+void kbase_pm_wait_for_power_down(kbase_device *kbdev)
+{
+ OSK_ASSERT(kbdev != NULL);
+
+ osk_waitq_wait(&kbdev->pm.power_down_waitqueue);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_wait_for_power_down)
+
+int kbase_pm_list_policies(const kbase_pm_policy * const **list)
+{
+ if (!list)
+ return POLICY_COUNT;
+
+ *list = policy_list;
+
+ return POLICY_COUNT;
+}
+KBASE_EXPORT_TEST_API(kbase_pm_list_policies)
+
+const kbase_pm_policy *kbase_pm_get_policy(kbase_device *kbdev)
+{
+ OSK_ASSERT(kbdev != NULL);
+
+ return kbdev->pm.current_policy;
+}
+KBASE_EXPORT_TEST_API(kbase_pm_get_policy)
+
+void kbase_pm_set_policy(kbase_device *kbdev, const kbase_pm_policy *new_policy)
+{
+ OSK_ASSERT(kbdev != NULL);
+ OSK_ASSERT(new_policy != NULL);
+
+ if (kbdev->pm.new_policy) {
+ /* A policy change is already outstanding */
+ return;
+ }
+ /* During a policy change we pretend the GPU is active */
+ kbase_pm_context_active(kbdev);
+
+ kbdev->pm.new_policy = new_policy;
+ kbase_pm_send_event(kbdev, KBASE_PM_EVENT_POLICY_CHANGE);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_set_policy)
+
+void kbase_pm_change_policy(kbase_device *kbdev)
+{
+ OSK_ASSERT(kbdev != NULL);
+
+ kbdev->pm.current_policy->term(kbdev);
+ kbdev->pm.current_policy = kbdev->pm.new_policy;
+ kbdev->pm.current_policy->init(kbdev);
+ kbase_pm_send_event(kbdev, KBASE_PM_EVENT_POLICY_INIT);
+
+ /* Now the policy change is finished, we release our fake context active reference */
+ kbase_pm_context_idle(kbdev);
+
+ kbdev->pm.new_policy = NULL;
+}
+KBASE_EXPORT_TEST_API(kbase_pm_change_policy)
+
+/** Callback for the power management work queue.
+ *
+ * This function is called on the power management work queue and is responsible for delivering events to the active
+ * power policy. It manipulates the @ref kbase_pm_device_data.work_active field of @ref kbase_pm_device_data to track
+ * whether all events have been consumed.
+ *
+ * @param data A pointer to the @c pm.work field of the @ref kbase_device struct
+ */
+
+STATIC void kbase_pm_worker(osk_workq_work *data)
+{
+ kbase_device *kbdev = CONTAINER_OF(data, kbase_device, pm.work);
+ int pending_events;
+ int old_value;
+ int i;
+
+ do
+ {
+ osk_atomic_set(&kbdev->pm.work_active, (u32)KBASE_PM_WORK_ACTIVE_STATE_PROCESSING);
+
+ /* Atomically read and clear the bit mask */
+ pending_events = osk_atomic_get(&kbdev->pm.pending_events);
+
+ do
+ {
+ old_value = pending_events;
+ pending_events = osk_atomic_compare_and_swap(&kbdev->pm.pending_events, old_value, 0);
+ } while (old_value != pending_events);
+
+ for(i = 0; pending_events; i++)
+ {
+ if (pending_events & (1 << i))
+ {
+ kbdev->pm.current_policy->event(kbdev, (kbase_pm_event)i);
+
+ pending_events &= ~(1 << i);
+ }
+ }
+ i = osk_atomic_compare_and_swap(&kbdev->pm.work_active,
+ (u32)KBASE_PM_WORK_ACTIVE_STATE_PROCESSING,
+ (u32)KBASE_PM_WORK_ACTIVE_STATE_INACTIVE);
+ } while (i == (u32)KBASE_PM_WORK_ACTIVE_STATE_PENDING_EVT);
+ osk_waitq_set(&kbdev->pm.policy_outstanding_event);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_worker)
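+
+/*
+ * Sketch of the work_active state machine driven by kbase_pm_worker() and
+ * kbase_pm_send_event() (a summary of the code in this file, not a
+ * normative specification):
+ *
+ *   INACTIVE   --send_event-->     ENQUEUED    (work submitted)
+ *   ENQUEUED   --worker starts-->  PROCESSING
+ *   PROCESSING --send_event-->     PENDING_EVT (worker loops once more)
+ *   PROCESSING --no new events-->  INACTIVE    (outstanding-event waitq set)
+ */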
+
+/** Merge an event into the list of events to deliver.
+ *
+ * This ensures that if, for example, a GPU_IDLE is immediately followed by a GPU_ACTIVE then instead of delivering
+ * both messages to the policy the GPU_IDLE is simply discarded.
+ *
+ * In particular in the sequence GPU_IDLE, GPU_ACTIVE, GPU_IDLE the resultant message is GPU_IDLE and not (GPU_IDLE
+ * and GPU_ACTIVE).
+ *
+ * @param old_events The bit mask of events that were previously pending
+ * @param new_event The event that should be merged into old_events
+ *
+ * @return The combination of old_events and the new event
+ */
+STATIC int kbasep_pm_merge_event(int old_events, kbase_pm_event new_event)
+{
+ switch(new_event) {
+ case KBASE_PM_EVENT_POLICY_INIT:
+ /* On policy initialisation, ignore any pending old_events. */
+ return ( 1 << KBASE_PM_EVENT_POLICY_INIT);
+
+ case KBASE_PM_EVENT_GPU_STATE_CHANGED:
+ case KBASE_PM_EVENT_POLICY_CHANGE:
+ case KBASE_PM_EVENT_CHANGE_GPU_STATE:
+ /* Just merge these events into the list */
+ return old_events | (1 << new_event);
+ case KBASE_PM_EVENT_SYSTEM_SUSPEND:
+ if (old_events & (1 << KBASE_PM_EVENT_SYSTEM_RESUME))
+ {
+ return old_events & ~(1 << KBASE_PM_EVENT_SYSTEM_RESUME);
+ }
+ return old_events | (1 << new_event);
+ case KBASE_PM_EVENT_SYSTEM_RESUME:
+ if (old_events & (1 << KBASE_PM_EVENT_SYSTEM_SUSPEND))
+ {
+ return old_events & ~(1 << KBASE_PM_EVENT_SYSTEM_SUSPEND);
+ }
+ return old_events | (1 << new_event);
+ case KBASE_PM_EVENT_GPU_ACTIVE:
+ if (old_events & (1 << KBASE_PM_EVENT_GPU_IDLE))
+ {
+ return old_events & ~(1 << KBASE_PM_EVENT_GPU_IDLE);
+ }
+ return old_events | (1 << new_event);
+ case KBASE_PM_EVENT_GPU_IDLE:
+ if (old_events & (1 << KBASE_PM_EVENT_GPU_ACTIVE))
+ {
+ return old_events & ~(1 << KBASE_PM_EVENT_GPU_ACTIVE);
+ }
+ return old_events | (1 << new_event);
+ default:
+ /* Unrecognised event - this should never happen */
+ OSK_ASSERT(0);
+ return old_events | (1 << new_event);
+ }
+}
+KBASE_EXPORT_TEST_API(kbasep_pm_merge_event)
+
+void kbase_pm_send_event(kbase_device *kbdev, kbase_pm_event event)
+{
+ int pending_events;
+ int work_active;
+ int old_value, new_value;
+
+ OSK_ASSERT(kbdev != NULL);
+
+ pending_events = osk_atomic_get(&kbdev->pm.pending_events);
+
+ /* Atomically OR the new event into the pending_events bit mask */
+ do
+ {
+ old_value = pending_events;
+ new_value = kbasep_pm_merge_event(pending_events, event);
+ if (old_value == new_value)
+ {
+ /* Event already pending */
+ return;
+ }
+ pending_events = osk_atomic_compare_and_swap(&kbdev->pm.pending_events, old_value, new_value);
+ } while (old_value != pending_events);
+
+ work_active = osk_atomic_get(&kbdev->pm.work_active);
+ do
+ {
+ old_value = work_active;
+ switch(old_value)
+ {
+ case KBASE_PM_WORK_ACTIVE_STATE_INACTIVE:
+ /* Need to enqueue an event */
+ new_value = KBASE_PM_WORK_ACTIVE_STATE_ENQUEUED;
+ break;
+ case KBASE_PM_WORK_ACTIVE_STATE_ENQUEUED:
+ /* Event already queued */
+ return;
+ case KBASE_PM_WORK_ACTIVE_STATE_PROCESSING:
+ /* Event being processed, we need to ensure it checks for another event */
+ new_value = KBASE_PM_WORK_ACTIVE_STATE_PENDING_EVT;
+ break;
+ case KBASE_PM_WORK_ACTIVE_STATE_PENDING_EVT:
+ /* Event being processed, but another check for events is going to happen */
+ return;
+ default:
+ OSK_ASSERT(0);
+ }
+ work_active = osk_atomic_compare_and_swap(&kbdev->pm.work_active, old_value, new_value);
+ } while (old_value != work_active);
+
+ if (old_value == KBASE_PM_WORK_ACTIVE_STATE_INACTIVE)
+ {
+ osk_waitq_clear(&kbdev->pm.policy_outstanding_event);
+ osk_workq_work_init(&kbdev->pm.work, kbase_pm_worker);
+ osk_workq_submit(&kbdev->pm.workqueue, &kbdev->pm.work);
+ }
+}
+
+KBASE_EXPORT_TEST_API(kbase_pm_send_event)
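+
+/*
+ * Typical caller sketch (mirroring kbase_pm_context_active() above): send
+ * the event, then synchronise before relying on its effect:
+ *
+ *   kbase_pm_send_event(kbdev, KBASE_PM_EVENT_GPU_ACTIVE);
+ *   kbase_pm_wait_for_power_up(kbdev);
+ */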
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_pm.h
+ * Power management API definitions
+ */
+
+#ifndef _KBASE_PM_H_
+#define _KBASE_PM_H_
+
+#include <kbase/src/common/mali_midg_regmap.h>
+
+#include "mali_kbase_pm_always_on.h"
+#include "mali_kbase_pm_demand.h"
+
+/* Forward definition - see mali_kbase.h */
+struct kbase_device;
+
+/** The types of core in a GPU.
+ *
+ * These enumerated values are used in calls to @ref kbase_pm_invoke_power_up, @ref kbase_pm_invoke_power_down, @ref
+ * kbase_pm_get_present_cores, @ref kbase_pm_get_active_cores, @ref kbase_pm_get_trans_cores, @ref
+ * kbase_pm_get_ready_cores. They specify which type of core should be acted on.
+ * These values are set in a manner that allows the @ref core_type_to_reg function to be simpler and more efficient.
+ */
+typedef enum kbase_pm_core_type
+{
+ KBASE_PM_CORE_L3 = L3_PRESENT_LO, /**< The L3 cache */
+ KBASE_PM_CORE_L2 = L2_PRESENT_LO, /**< The L2 cache */
+ KBASE_PM_CORE_SHADER = SHADER_PRESENT_LO, /**< Shader cores */
+ KBASE_PM_CORE_TILER = TILER_PRESENT_LO /**< Tiler cores */
+} kbase_pm_core_type;
+
+/** Initialize the power management framework.
+ *
+ * Must be called before any other power management function
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ *
+ * @return MALI_ERROR_NONE if the power management framework was successfully initialized.
+ */
+mali_error kbase_pm_init(struct kbase_device *kbdev);
+
+/** Power up GPU after all modules have been initialized and interrupt handlers installed.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ *
+ * @return MALI_ERROR_NONE if powerup was successful.
+ */
+mali_error kbase_pm_powerup(struct kbase_device *kbdev);
+
+/**
+ * Halt the power management framework.
+ * Should ensure that no new interrupts are generated,
+ * but allow any currently running interrupt handlers to complete successfully.
+ * No event can make the pm system turn on the GPU after this function returns.
+ * The active policy is sent @ref KBASE_PM_EVENT_SYSTEM_SUSPEND.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_halt(struct kbase_device *kbdev);
+
+/** Terminate the power management framework.
+ *
+ * No power management functions may be called after this
+ * (except @ref kbase_pm_init)
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_term(struct kbase_device *kbdev);
+
+/** Events that can be sent to a power policy.
+ *
+ * Power policies are expected to handle all these events, although they may choose to take no action.
+ */
+typedef enum kbase_pm_event
+{
+ /* helper for tests */
+ KBASEP_PM_EVENT_FIRST,
+
+ /** Initialize the power policy.
+ *
+ * This event is sent immediately after the @ref kbase_pm_policy.init function of the policy returns.
+ *
+ * The policy may decide to transition the cores to its 'normal' state (e.g. an always on policy would turn all
+ * the cores on). The policy should assume that the GPU is in active use (i.e. as if the @ref
+ * KBASE_PM_EVENT_GPU_ACTIVE event had been received), if this is not the case then @ref KBASE_PM_EVENT_GPU_IDLE
+ * will be called after this event has been handled.
+ */
+ KBASE_PM_EVENT_POLICY_INIT = KBASEP_PM_EVENT_FIRST,
+ /** The power state of the device has changed.
+ *
+ * This event is sent when the GPU raises an interrupt to announce that a power transition has finished. Because
+ * there may be multiple power transitions the power policy must interrogate the state of the GPU to check whether
+ * all expected transitions have finished. If the GPU has just turned on or off then the policy must call @ref
+ * kbase_pm_power_up_done or @ref kbase_pm_power_down_done as appropriate.
+ */
+ KBASE_PM_EVENT_GPU_STATE_CHANGED,
+ /** The GPU is becoming active.
+ *
+ * This event is sent when the first context is about to use the GPU.
+ *
+ * If the core is turned off then this event must cause the core to turn on. This is done asynchronously and the
+	 * policy must call @ref kbase_pm_power_up_done to signal that the core is turned on sufficiently to allow
+ * register access.
+ */
+ KBASE_PM_EVENT_GPU_ACTIVE,
+ /** The GPU is becoming idle.
+ *
+ * This event is sent when the last context has finished using the GPU.
+ *
+ * The power policy may turn the GPU off entirely (e.g. turn the clocks or power off).
+ */
+ KBASE_PM_EVENT_GPU_IDLE,
+ /** The system has requested a change of power policy.
+ *
+ * The current policy receives this message when a request to change policy occurs. It must ensure that all active
+ * power transitions are completed and then call the @ref kbase_pm_change_policy function.
+ *
+ * This event is only delivered when the policy has been informed that the GPU is 'active' (the power management
+ * code internally increments the context active counter during a policy change).
+ */
+ KBASE_PM_EVENT_POLICY_CHANGE,
+ /** The system is requesting to suspend the GPU.
+ *
+ * The power policy should ensure that the GPU is shut down sufficiently for the system to suspend the device.
+ * Once the GPU is ready the policy should call @ref kbase_pm_power_down_done.
+ */
+ KBASE_PM_EVENT_SYSTEM_SUSPEND,
+ /** The system is requesting to resume the GPU.
+ *
+ * The power policy should restore the GPU to the state it was before the previous
+ * @ref KBASE_PM_EVENT_SYSTEM_SUSPEND event. If the GPU is being powered up then it should call
+ * @ref kbase_pm_power_transitioning before changing the state and @ref kbase_pm_power_up_done when
+ * the transition is complete.
+ */
+ KBASE_PM_EVENT_SYSTEM_RESUME,
+ /** The job scheduler is requesting to power up/down cores.
+ *
+ * This event is sent when:
+ * - powered down cores are needed to complete a job
+ * - powered up cores are not needed anymore
+ */
+ KBASE_PM_EVENT_CHANGE_GPU_STATE,
+
+ /* helpers for tests */
+ KBASEP_PM_EVENT_LAST = KBASE_PM_EVENT_CHANGE_GPU_STATE,
+ KBASEP_PM_EVENT_INVALID
+} kbase_pm_event;
+
+typedef union kbase_pm_policy_data
+{
+ kbasep_pm_policy_always_on always_on;
+ kbasep_pm_policy_demand demand;
+} kbase_pm_policy_data;
+
+/** Power policy structure.
+ *
+ * Each power management policy exposes a (static) instance of this structure which contains function pointers to the
+ * policy's methods.
+ */
+typedef struct kbase_pm_policy
+{
+ /** The name of this policy */
+ char *name;
+
+ /** Function called when the policy is selected
+ *
+ * This should initialize the kbdev->pm.policy_data pointer to the policy's data structure. It should not attempt
+ * to make any changes to hardware state.
+ *
+ * It is undefined what state the cores are in when the function is called, however no power transitions should be
+ * occurring.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+ void (*init)(struct kbase_device *kbdev);
+ /** Function called when the policy is unselected.
+ *
+ * This should free any data allocated with \c init
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+ void (*term)(struct kbase_device *kbdev);
+ /** Function called when there is an event to process
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ * @param event The event to process
+ */
+ void (*event)(struct kbase_device *kbdev, kbase_pm_event event);
+} kbase_pm_policy;
+
+/** Metrics data collected for use by the power management framework.
+ *
+ */
+typedef struct kbasep_pm_metrics_data
+{
+ int vsync_hit;
+ int utilisation;
+
+ osk_ticks time_period_start;
+ u32 time_busy;
+ u32 time_idle;
+ mali_bool gpu_active;
+
+ osk_spinlock_irq lock;
+
+ osk_timer timer;
+ mali_bool timer_active;
+
+ void * platform_data;
+} kbasep_pm_metrics_data;
+
+/** Actions for DVFS.
+ *
+ * kbase_pm_get_dvfs_action will return one of these enumerated values to
+ * describe the action that the DVFS system should take.
+ */
+typedef enum kbase_pm_dvfs_action
+{
+ KBASE_PM_DVFS_NOP, /**< No change in clock frequency is requested */
+ KBASE_PM_DVFS_CLOCK_UP, /**< The clock frequency should be increased if possible */
+ KBASE_PM_DVFS_CLOCK_DOWN /**< The clock frequency should be decreased if possible */
+} kbase_pm_dvfs_action;
+
+/** The possible values of the atomic @ref work_active field,
+ * which tracks whether the work unit has been enqueued.
+ */
+typedef enum kbase_pm_work_active_state
+{
+ KBASE_PM_WORK_ACTIVE_STATE_INACTIVE = 0x00u, /**< There are no work units enqueued and @ref kbase_pm_worker is not running. */
+ KBASE_PM_WORK_ACTIVE_STATE_ENQUEUED = 0x01u, /**< There is a work unit enqueued, but @ref kbase_pm_worker is not running. */
+ KBASE_PM_WORK_ACTIVE_STATE_PROCESSING = 0x02u, /**< @ref kbase_pm_worker is running. */
+ KBASE_PM_WORK_ACTIVE_STATE_PENDING_EVT = 0x03u /**< Processing and there's an event outstanding.
+ @ref kbase_pm_worker is running, but @ref pending_events
+ has been updated since it started so
+ it should recheck the list of pending events before exiting. */
+} kbase_pm_work_active_state;
+
+/** Data stored per device for power management.
+ *
+ * This structure contains data for the power management framework. There is one instance of this structure per device
+ * in the system.
+ */
+typedef struct kbase_pm_device_data
+{
+ /** The policy that is currently actively controlling the power state. */
+ const kbase_pm_policy *current_policy;
+ /** The policy that the system is transitioning to. */
+ const kbase_pm_policy *new_policy;
+ /** The data needed for the current policy. This is considered private to the policy. */
+ kbase_pm_policy_data policy_data;
+ /** The workqueue that the policy callbacks are executed on. */
+ osk_workq workqueue;
+ /** A bit mask of events that are waiting to be delivered to the active policy. */
+ osk_atomic pending_events;
+ /** The work unit that is enqueued onto the workqueue. */
+ osk_workq_work work;
+ /** An atomic which tracks whether the work unit has been enqueued.
+	 * For a list of possible values please refer to @ref kbase_pm_work_active_state.
+ */
+ osk_atomic work_active;
+ /** The wait queue for power up events. */
+ osk_waitq power_up_waitqueue;
+ /** The wait queue for power down events. */
+ osk_waitq power_down_waitqueue;
+ /** Wait queue for whether there is an outstanding event for the policy */
+ osk_waitq policy_outstanding_event;
+
+ /** The reference count of active contexts on this device. */
+ int active_count;
+ /** Lock to protect active_count */
+ osk_spinlock_irq active_count_lock;
+ /** The reference count of active gpu cycle counter users */
+ int gpu_cycle_counter_requests;
+ /** Lock to protect gpu_cycle_counter_requests */
+ osk_spinlock_irq gpu_cycle_counter_requests_lock;
+ /** A bit mask identifying the shader cores that the power policy would like to be on.
+ * The current state of the cores may be different, but there should be transitions in progress that will
+	 * eventually achieve this state (assuming that the policy doesn't change its mind in the meantime).
+ */
+ u64 desired_shader_state;
+ /** A bit mask identifying the tiler cores that the power policy would like to be on.
+	 * @see kbase_pm_device_data::desired_shader_state */
+ u64 desired_tiler_state;
+
+ /** Lock protecting the power state of the device.
+ *
+ * This lock must be held when accessing the shader_available_bitmap, tiler_available_bitmap, shader_inuse_bitmap
+ * and tiler_inuse_bitmap fields of kbase_device. It is also held when the hardware power registers are being
+ * written to, to ensure that two threads do not conflict over the power transitions that the hardware should
+ * make.
+ */
+ osk_spinlock_irq power_change_lock;
+
+ /** Set to true when the GPU is powered and register accesses are possible, false otherwise */
+ mali_bool gpu_powered;
+ /** Spinlock that must be held when writing gpu_powered */
+ osk_spinlock_irq gpu_powered_lock;
+
+ /** Structure to hold metrics for the GPU */
+ kbasep_pm_metrics_data metrics;
+
+ /** Callback when the GPU needs to be turned on. See @ref kbase_pm_callback_conf
+ *
+ * @param kbdev The kbase device
+ *
+ * @return 1 if GPU state was lost, 0 otherwise
+ */
+ int (*callback_power_on)(struct kbase_device *kbdev);
+
+ /** Callback when the GPU may be turned off. See @ref kbase_pm_callback_conf
+ *
+ * @param kbdev The kbase device
+ */
+ void (*callback_power_off)(struct kbase_device *kbdev);
+} kbase_pm_device_data;
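+
+/* Hedged sketch of how an integration might supply the two power callbacks
+ * above. In the real driver these come from the platform's
+ * @ref kbase_pm_callback_conf configuration; the platform_* functions here
+ * are hypothetical illustrations only:
+ *
+ * @code
+ * static int platform_power_on(struct kbase_device *kbdev)
+ * {
+ *	// enable regulators and clocks; return 1 if GPU state was lost
+ *	return 1;
+ * }
+ *
+ * static void platform_power_off(struct kbase_device *kbdev)
+ * {
+ *	// disable regulators and clocks
+ * }
+ *
+ * kbdev->pm.callback_power_on = platform_power_on;
+ * kbdev->pm.callback_power_off = platform_power_off;
+ * @endcode
+ */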
+
+/** Get the current policy.
+ * Returns the policy that is currently active.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ *
+ * @return The current policy
+ */
+const kbase_pm_policy *kbase_pm_get_policy(struct kbase_device *kbdev);
+
+/** Change the policy to the one specified.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ * @param policy The policy to change to (valid pointer returned from @ref kbase_pm_list_policies)
+ */
+void kbase_pm_set_policy(struct kbase_device *kbdev, const kbase_pm_policy *policy);
+
+/** Retrieve a static list of the available policies.
+ * @param[out] policies Receives a pointer to the static list of policies. This may be NULL.
+ * The contents of this array must not be modified.
+ *
+ * @return The number of policies
+ */
+int kbase_pm_list_policies(const kbase_pm_policy * const **policies);
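+
+/* Illustrative policy selection using the two calls above (a sketch only;
+ * it assumes a policy named "demand" is compiled in, and uses strcmp for
+ * brevity):
+ *
+ * @code
+ * const kbase_pm_policy * const *policies;
+ * int i, n;
+ *
+ * n = kbase_pm_list_policies(&policies);
+ * for (i = 0; i < n; i++)
+ * {
+ *	if (strcmp(policies[i]->name, "demand") == 0)
+ *		kbase_pm_set_policy(kbdev, policies[i]);
+ * }
+ * @endcode
+ */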
+
+/** The current policy is ready to change to the new policy
+ *
+ * The current policy must ensure that all cores have finished transitioning before calling this function.
+ * The new policy is sent an @ref KBASE_PM_EVENT_POLICY_INIT event.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_change_policy(struct kbase_device *kbdev);
+
+/** The GPU is idle.
+ *
+ * The OS may choose to turn off idle devices.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_dev_idle(struct kbase_device *kbdev);
+
+/** The GPU is active.
+ *
+ * The OS should avoid opportunistically turning off the GPU while it is active.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_dev_activate(struct kbase_device *kbdev);
+
+/** Send an event to the active power policy.
+ *
+ * The event is queued for sending to the active power policy. The event is merged with the current queue by the @ref
+ * kbasep_pm_merge_event function which may decide to drop events.
+ *
+ * Note that this function may be called in an atomic context on Linux which implies that it must not sleep.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ * @param event The event that should be queued
+ */
+void kbase_pm_send_event(struct kbase_device *kbdev, kbase_pm_event event);
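+
+/* For example (illustrative), code that notices the last context has become
+ * idle could queue the idle event for the active policy; the function is
+ * designed to be callable even from an atomic context:
+ *
+ * @code
+ * kbase_pm_send_event(kbdev, KBASE_PM_EVENT_GPU_IDLE);
+ * @endcode
+ */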
+
+/** Turn one or more cores on.
+ *
+ * This function is called by the active power policy to turn one or more cores on.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ * @param type The type of core (see the @ref kbase_pm_core_type enumeration)
+ * @param cores A bitmask of cores to turn on
+ */
+void kbase_pm_invoke_power_up(struct kbase_device *kbdev, kbase_pm_core_type type, u64 cores);
+
+/** Turn one or more cores off.
+ *
+ * This function is called by the active power policy to turn one or more cores off.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ * @param type The type of core (see the @ref kbase_pm_core_type enumeration)
+ * @param cores A bitmask of cores to turn off
+ */
+void kbase_pm_invoke_power_down(struct kbase_device *kbdev, kbase_pm_core_type type, u64 cores);
+
+/** Get details of the cores that are present in the device.
+ *
+ * This function can be called by the active power policy to return a bitmask of the cores (of a specified type)
+ * present in the GPU device. The number of cores can be derived by counting the set bits in the returned mask.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ * @param type The type of core (see the @ref kbase_pm_core_type enumeration)
+ *
+ * @return The bit mask of cores present
+ */
+u64 kbase_pm_get_present_cores(struct kbase_device *kbdev, kbase_pm_core_type type);
+
+/** Get details of the cores that are currently active in the device.
+ *
+ * This function can be called by the active power policy to return a bitmask of the cores (of a specified type) that
+ * are actively processing work (i.e. turned on *and* busy).
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ * @param type The type of core (see the @ref kbase_pm_core_type enumeration)
+ *
+ * @return The bit mask of active cores
+ */
+u64 kbase_pm_get_active_cores(struct kbase_device *kbdev, kbase_pm_core_type type);
+
+/** Get details of the cores that are currently transitioning between power states.
+ *
+ * This function can be called by the active power policy to return a bitmask of the cores (of a specified type) that
+ * are currently transitioning between power states.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ * @param type The type of core (see the @ref kbase_pm_core_type enumeration)
+ *
+ * @return The bit mask of transitioning cores
+ */
+u64 kbase_pm_get_trans_cores(struct kbase_device *kbdev, kbase_pm_core_type type);
+
+/** Get details of the cores that are currently powered and ready for jobs.
+ *
+ * This function can be called by the active power policy to return a bitmask of the cores (of a specified type) that
+ * are powered and ready for jobs (they may or may not be currently executing jobs).
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ * @param type The type of core (see the @ref kbase_pm_core_type enumeration)
+ *
+ * @return The bit mask of ready cores
+ */
+u64 kbase_pm_get_ready_cores(struct kbase_device *kbdev, kbase_pm_core_type type);
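+
+/* A policy could combine the queries above, e.g. to find shader cores that
+ * are powered but not currently executing work (sketch only):
+ *
+ * @code
+ * u64 ready  = kbase_pm_get_ready_cores(kbdev, KBASE_PM_CORE_SHADER);
+ * u64 active = kbase_pm_get_active_cores(kbdev, KBASE_PM_CORE_SHADER);
+ * u64 powered_idle = ready & ~active;
+ * @endcode
+ */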
+
+/** Return whether the power manager is active
+ *
+ * This function will return true when there are cores (of any type) that are currently transitioning between power
+ * states.
+ *
+ * It can be used on receipt of the @ref KBASE_PM_EVENT_GPU_STATE_CHANGED message to determine whether the requested power
+ * transitions have completely finished or not.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ *
+ * @return true when there are cores transitioning between power states, false otherwise
+ */
+mali_bool kbase_pm_get_pwr_active(struct kbase_device *kbdev);
+
+/** Turn the clock for the device on.
+ *
+ * This function can be used by a power policy to turn the clock for the GPU on. It should be modified during
+ * integration to perform the necessary actions to ensure that the GPU is fully powered and clocked.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_clock_on(struct kbase_device *kbdev);
+
+/** Turn the clock for the device off.
+ *
+ * This function can be used by a power policy to turn the clock for the GPU off. It should be modified during
+ * integration to perform the necessary actions to turn the clock off (if this is possible in the integration).
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_clock_off(struct kbase_device *kbdev);
+
+/** Enable interrupts on the device.
+ *
+ * This function should be called by the active power policy immediately after calling @ref kbase_pm_clock_on to
+ * ensure that interrupts are enabled on the device.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_enable_interrupts(struct kbase_device *kbdev);
+
+/** Disable interrupts on the device.
+ *
+ * This function should be called by the active power policy after shutting down the device (i.e. in the @ref
+ * KBASE_PM_EVENT_GPU_STATE_CHANGED handler after confirming that all cores have powered off). It prevents interrupt
+ * delivery to the CPU so no further @ref KBASE_PM_EVENT_GPU_STATE_CHANGED messages will be received until @ref
+ * kbase_pm_enable_interrupts is called.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_disable_interrupts(struct kbase_device *kbdev);
+
+/** Initialize the hardware
+ *
+ * This function checks the GPU ID register to ensure that the GPU is supported by the driver and performs a reset on
+ * the device so that it is in a known state before the device is used.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ *
+ * @return MALI_ERROR_NONE if the device is supported and successfully reset.
+ */
+mali_error kbase_pm_init_hw(struct kbase_device *kbdev);
+
+/** Inform the power management system that the power state of the device is transitioning.
+ *
+ * This function must be called by the active power policy before transitioning the core between an 'off state' and an
+ * 'on state'. It resets the wait queues that are waited on by @ref kbase_pm_wait_for_power_up and @ref
+ * kbase_pm_wait_for_power_down.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_power_transitioning(struct kbase_device *kbdev);
+
+/** The GPU has been powered up successfully.
+ *
+ * This function must be called by the active power policy when the GPU has been powered up successfully. It signals
+ * to the rest of the system that jobs can start being submitted to the device.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_power_up_done(struct kbase_device *kbdev);
+
+/** The GPU has been reset successfully.
+ *
+ * This function must be called by the GPU interrupt handler when the RESET_COMPLETED bit is set. It signals to the
+ * power management initialization code that the GPU has been successfully reset.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_reset_done(struct kbase_device *kbdev);
+
+/** The GPU has been powered down successfully.
+ *
+ * This function must be called by the active power policy when the GPU has been powered down successfully. It signals
+ * to the rest of the system that a system suspend can now take place.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_power_down_done(struct kbase_device *kbdev);
+
+/** Wait for the power policy to signal power up.
+ *
+ * This function waits for the power policy to signal power up by calling @ref kbase_pm_power_up_done. Once the power
+ * policy has signalled this, subsequent calls to this function return immediately, until the power policy next calls
+ * @ref kbase_pm_power_transitioning.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_wait_for_power_up(struct kbase_device *kbdev);
+
+/** Wait for the power policy to signal power down.
+ *
+ * This function waits for the power policy to signal power down by calling @ref kbase_pm_power_down_done. Once the
+ * power policy has signalled this, subsequent calls to this function return immediately, until the power policy next
+ * calls @ref kbase_pm_power_transitioning.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_wait_for_power_down(struct kbase_device *kbdev);
+
+/** Increment the count of active contexts.
+ *
+ * This function should be called when a context is about to submit a job. It informs the active power policy that the
+ * GPU is going to be in use shortly and the policy is expected to start turning on the GPU.
+ *
+ * This function will block until the GPU is available.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_context_active(struct kbase_device *kbdev);
+
+/** Decrement the reference count of active contexts.
+ *
+ * This function should be called when a context becomes idle. After this call the GPU may be turned off by the power
+ * policy so the calling code should ensure that it does not access the GPU's registers.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_context_idle(struct kbase_device *kbdev);
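+
+/* Sketch of the expected pairing of the two calls above around GPU use
+ * (submit_job_to_hw() is a hypothetical stand-in for the caller's code):
+ *
+ * @code
+ * kbase_pm_context_active(kbdev);  // blocks until the GPU is available
+ * submit_job_to_hw(kbdev);         // register access is safe here
+ * // ... later, when this context has no more work ...
+ * kbase_pm_context_idle(kbdev);    // the policy may now power the GPU down
+ * @endcode
+ */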
+
+/** Check if there are any power transitions to make, and if so start them.
+ *
+ * This function will check the desired_xx_state members of kbase_pm_device_data and the actual status of the
+ * hardware to see if any power transitions can be made at this time to make the hardware state closer to the state
+ * desired by the power policy.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_check_transitions(struct kbase_device *kbdev);
+
+/** Read the bitmasks of present cores.
+ *
+ * This information is cached to avoid having to perform register reads whenever the information is required.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbasep_pm_read_present_cores(struct kbase_device *kbdev);
+
+/** Mark one or more cores as being required for jobs to be submitted.
+ *
+ * This function is called by the job scheduler to mark shader and/or tiler cores
+ * as being required to submit jobs that are ready to run.
+ *
+ * The cores requested are reference counted and a subsequent call to @ref kbase_pm_register_inuse_cores or
+ * @ref kbase_pm_unrequest_cores should be made to release the reference on the cores marked as 'needed'.
+ *
+ * The current running policy is sent a @ref KBASE_PM_EVENT_CHANGE_GPU_STATE event if power up of the requested
+ * cores is required.
+ *
+ * The policy is expected to make these cores available at some point in the future,
+ * but may take an arbitrary length of time to reach this state.
+ *
+ * @param kbdev The kbase device structure for the device
+ * @param shader_cores A bitmask of shader cores which are necessary for the job
+ * @param tiler_cores A bitmask of tiler cores which are necessary for the job
+ *
+ * @return MALI_ERROR_NONE if the cores were successfully requested.
+ */
+mali_error kbase_pm_request_cores(struct kbase_device *kbdev, u64 shader_cores, u64 tiler_cores);
+
+/** Unmark one or more cores as being required for jobs to be submitted.
+ *
+ * This function undoes the effect of @ref kbase_pm_request_cores. It should be used when a job is not
+ * going to be submitted to the hardware (e.g. the job is cancelled before it is enqueued).
+ *
+ * The current running policy is sent a @ref KBASE_PM_EVENT_CHANGE_GPU_STATE event if power down of the requested
+ * cores is required.
+ *
+ * The policy may use this as an indication that it can power down cores.
+ *
+ * @param kbdev The kbase device structure for the device
+ * @param shader_cores A bitmask of shader cores (as given to @ref kbase_pm_request_cores)
+ * @param tiler_cores A bitmask of tiler cores (as given to @ref kbase_pm_request_cores)
+ */
+void kbase_pm_unrequest_cores(struct kbase_device *kbdev, u64 shader_cores, u64 tiler_cores);
+
+/** Register a set of cores as in use by a job.
+ *
+ * This function should be called after @ref kbase_pm_request_cores when the job is about to be submitted to
+ * the hardware. It will check that the necessary cores are available and if so update the 'needed' and 'inuse'
+ * bitmasks to reflect that the job is now committed to being run.
+ *
+ * If the necessary cores are not currently available then the function will return MALI_FALSE and have no effect.
+ *
+ * @param kbdev The kbase device structure for the device
+ * @param shader_cores A bitmask of shader cores (as given to @ref kbase_pm_request_cores)
+ * @param tiler_cores A bitmask of tiler cores (as given to @ref kbase_pm_request_cores)
+ *
+ * @return MALI_TRUE if the job can be submitted to the hardware or MALI_FALSE if the job is not ready to run.
+ */
+mali_bool kbase_pm_register_inuse_cores(struct kbase_device *kbdev, u64 shader_cores, u64 tiler_cores);
+
+/** Release cores after a job has run.
+ *
+ * This function should be called when a job has finished running on the hardware. A call to @ref
+ * kbase_pm_register_inuse_cores must have previously occurred. The reference counts of the specified cores will be
+ * decremented which may cause the bitmask of 'inuse' cores to be reduced. The power policy may then turn off any
+ * cores which are no longer 'inuse'.
+ *
+ * @param kbdev The kbase device structure for the device
+ * @param shader_cores A bitmask of shader cores (as given to @ref kbase_pm_register_inuse_cores)
+ * @param tiler_cores A bitmask of tiler cores (as given to @ref kbase_pm_register_inuse_cores)
+ */
+void kbase_pm_release_cores(struct kbase_device *kbdev, u64 shader_cores, u64 tiler_cores);
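+
+/* The core reference-counting lifecycle described above, as a sketch
+ * (shader_mask and tiler_mask are hypothetical bitmasks computed by the
+ * job scheduler):
+ *
+ * @code
+ * if (MALI_ERROR_NONE != kbase_pm_request_cores(kbdev, shader_mask, tiler_mask))
+ *	return;  // the request failed
+ *
+ * if (kbase_pm_register_inuse_cores(kbdev, shader_mask, tiler_mask))
+ * {
+ *	// submit the job; when it completes:
+ *	kbase_pm_release_cores(kbdev, shader_mask, tiler_mask);
+ * }
+ * else
+ * {
+ *	// cores not ready; a scheduler would typically retry later.
+ *	// To abandon the job instead, drop the request:
+ *	kbase_pm_unrequest_cores(kbdev, shader_mask, tiler_mask);
+ * }
+ * @endcode
+ */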
+
+/** Initialize the metrics gathering framework.
+ *
+ * This must be called before other metric gathering APIs are called.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ *
+ * @return MALI_ERROR_NONE on success, MALI_ERROR_FUNCTION_FAILED on error
+ */
+mali_error kbasep_pm_metrics_init(struct kbase_device *kbdev);
+
+/** Terminate the metrics gathering framework.
+ *
+ * This must be called when metric gathering is no longer required. It is an error to call any metrics gathering
+ * function (other than kbasep_pm_metrics_init) after calling this function.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbasep_pm_metrics_term(struct kbase_device *kbdev);
+
+/** Record that the GPU is active.
+ *
+ * This records that the GPU is now active. The previous GPU state must have been idle; in a debug build the
+ * function will assert if this is not the case.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbasep_pm_record_gpu_active(struct kbase_device *kbdev);
+
+/** Record that the GPU is idle.
+ *
+ * This records that the GPU is now idle. The previous GPU state must have been active; in a debug build the
+ * function will assert if this is not the case.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbasep_pm_record_gpu_idle(struct kbase_device *kbdev);
+
+/** Function to be called by the frame buffer driver to update the vsync metric.
+ *
+ * This function should be called by the frame buffer driver to update whether the system is hitting the vsync target
+ * or not. buffer_updated should be true if the vsync corresponded with a new frame being displayed, otherwise it
+ * should be false. This function does not need to be called every vsync, but only when the value of buffer_updated
+ * differs from that of the previous call.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ * @param buffer_updated True if the buffer has been updated on this VSync, false otherwise
+ */
+void kbase_pm_report_vsync(struct kbase_device *kbdev, int buffer_updated);
+
+/** Configure the frame buffer device to set the vsync callback.
+ *
+ * This function should do whatever is necessary for this integration to ensure that kbase_pm_report_vsync is
+ * called appropriately.
+ *
+ * This function will need porting as part of the integration for a device.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_register_vsync_callback(struct kbase_device *kbdev);
+
+/** Free any resources that kbase_pm_register_vsync_callback allocated.
+ *
+ * This function should perform any cleanup required from the call to kbase_pm_register_vsync_callback.
+ * No call backs should occur after this function has returned.
+ *
+ * This function will need porting as part of the integration for a device.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_unregister_vsync_callback(struct kbase_device *kbdev);
+
+/** Determine whether the DVFS system should change the clock speed of the GPU.
+ *
+ * This function should be called regularly by the DVFS system to check whether the clock speed of the GPU needs
+ * updating. It will return one of three enumerated values of kbase_pm_dvfs_action:
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ * @retval KBASE_PM_DVFS_NOP The clock does not need changing.
+ * @retval KBASE_PM_DVFS_CLOCK_UP The clock frequency should be increased if possible.
+ * @retval KBASE_PM_DVFS_CLOCK_DOWN The clock frequency should be decreased if possible.
+ */
+kbase_pm_dvfs_action kbase_pm_get_dvfs_action(struct kbase_device *kbdev);
+
+/** Mark that the GPU cycle counter is needed. If the caller is the first caller
+ * then the GPU cycle counters will be enabled.
+ *
+ * The GPU must be powered when calling this function (i.e. @ref kbase_pm_context_active must have been called).
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_request_gpu_cycle_counter(struct kbase_device *kbdev);
+
+/** Mark that the GPU cycle counter is no longer in use. If the caller is the last
+ * caller then the GPU cycle counters will be disabled. A request must have been made
+ * before a call to this function.
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_release_gpu_cycle_counter(struct kbase_device *kbdev);
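+
+/* Illustrative cycle-counter usage (a sketch; the GPU must stay powered for
+ * the whole interval, hence the surrounding active/idle pair):
+ *
+ * @code
+ * kbase_pm_context_active(kbdev);
+ * kbase_pm_request_gpu_cycle_counter(kbdev);
+ * // ... profile: read the GPU cycle count registers ...
+ * kbase_pm_release_gpu_cycle_counter(kbdev);
+ * kbase_pm_context_idle(kbdev);
+ * @endcode
+ */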
+
+/** Enables access to the GPU registers before power management has powered up the GPU
+ * with kbase_pm_powerup().
+ *
+ * Access to registers should be done using kbase_os_reg_read/write() at this stage,
+ * not kbase_reg_read/write().
+ *
+ * This results in the power management callbacks provided in the driver configuration
+ * being called to turn on power and/or clocks to the GPU.
+ * See @ref kbase_pm_callback_conf.
+ *
+ * This should only be used before power management is powered up with kbase_pm_powerup()
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_register_access_enable(struct kbase_device *kbdev);
+
+/** Disables access to the GPU registers enabled earlier by a call to
+ * kbase_pm_register_access_enable().
+ *
+ * This results in the power management callbacks provided in the driver configuration
+ * being called to turn off power and/or clocks to the GPU.
+ * See @ref kbase_pm_callback_conf.
+ *
+ * This should only be used before power management is powered up with kbase_pm_powerup()
+ *
+ * @param kbdev The kbase device structure for the device (must be a valid pointer)
+ */
+void kbase_pm_register_access_disable(struct kbase_device *kbdev);
+
+#endif /* _KBASE_PM_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_pm_always_on.c
+ * "Always on" power management policy
+ */
+
+#include <osk/mali_osk.h>
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_kbase_pm.h>
+
+
+/** Function to handle a GPU state change for the always_on power policy.
+ *
+ * This function is called whenever the GPU has transitioned to another state. It first checks that the transition is
+ * complete and then moves the state machine to the next state.
+ *
+ * @param kbdev The kbase device structure for the device
+ */
+static void always_on_state_changed(kbase_device *kbdev)
+{
+ kbasep_pm_policy_always_on *data = &kbdev->pm.policy_data.always_on;
+
+ switch(data->state)
+ {
+ case KBASEP_PM_ALWAYS_ON_STATE_POWERING_UP:
+ if (kbase_pm_get_pwr_active(kbdev))
+ {
+ /* Cores still transitioning */
+ return;
+ }
+ /* All cores have transitioned, inform the OS */
+ kbase_pm_power_up_done(kbdev);
+ data->state = KBASEP_PM_ALWAYS_ON_STATE_POWERED_UP;
+
+ break;
+ case KBASEP_PM_ALWAYS_ON_STATE_POWERING_DOWN:
+ if (kbase_pm_get_pwr_active(kbdev))
+ {
+ /* Cores still transitioning */
+ return;
+ }
+ /* All cores have transitioned, turn the clock and interrupts off */
+ kbase_pm_disable_interrupts(kbdev);
+ kbase_pm_clock_off(kbdev);
+
+ /* Inform the OS */
+ kbase_pm_power_down_done(kbdev);
+
+ data->state = KBASEP_PM_ALWAYS_ON_STATE_POWERED_DOWN;
+
+ break;
+ case KBASEP_PM_ALWAYS_ON_STATE_CHANGING_POLICY:
+ if (kbase_pm_get_pwr_active(kbdev))
+ {
+ /* Cores still transitioning */
+ return;
+ }
+		/* All cores have transitioned, inform the system we can change policy */
+ kbase_pm_change_policy(kbdev);
+
+ break;
+ default:
+ break;
+ }
+}
+
+/** Function to handle the @ref KBASE_PM_EVENT_SYSTEM_SUSPEND message for the always_on power policy.
+ *
+ * This function is called when a @ref KBASE_PM_EVENT_SYSTEM_SUSPEND message is received. It instructs the GPU to turn off
+ * all cores.
+ *
+ * @param kbdev The kbase device structure for the device
+ */
+static void always_on_suspend(kbase_device *kbdev)
+{
+ u64 cores;
+
+ /* Inform the system that the transition has started */
+ kbase_pm_power_transitioning(kbdev);
+
+ /* Turn the cores off */
+ cores = kbase_pm_get_present_cores(kbdev, KBASE_PM_CORE_SHADER);
+ kbase_pm_invoke_power_down(kbdev, KBASE_PM_CORE_SHADER, cores);
+
+ cores = kbase_pm_get_present_cores(kbdev, KBASE_PM_CORE_TILER);
+ kbase_pm_invoke_power_down(kbdev, KBASE_PM_CORE_TILER, cores);
+
+ kbase_pm_check_transitions(kbdev);
+
+ kbdev->pm.policy_data.always_on.state = KBASEP_PM_ALWAYS_ON_STATE_POWERING_DOWN;
+
+ /* Ensure that the OS is informed even if we didn't do anything */
+ always_on_state_changed(kbdev);
+}
+
+/** Function to handle the @ref KBASE_PM_EVENT_SYSTEM_RESUME message for the always_on power policy.
+ *
+ * This function is called when a @ref KBASE_PM_EVENT_SYSTEM_RESUME message is received. It instructs the GPU to turn on all
+ * the cores.
+ *
+ * @param kbdev The kbase device structure for the device
+ */
+static void always_on_resume(kbase_device *kbdev)
+{
+ u64 cores;
+
+ /* Inform the system that the transition has started */
+ kbase_pm_power_transitioning(kbdev);
+
+ /* Turn the clock on */
+ kbase_pm_clock_on(kbdev);
+ /* Enable interrupts */
+ kbase_pm_enable_interrupts(kbdev);
+
+ /* Turn the cores on */
+ cores = kbase_pm_get_present_cores(kbdev, KBASE_PM_CORE_SHADER);
+ kbase_pm_invoke_power_up(kbdev, KBASE_PM_CORE_SHADER, cores);
+
+ cores = kbase_pm_get_present_cores(kbdev, KBASE_PM_CORE_TILER);
+ kbase_pm_invoke_power_up(kbdev, KBASE_PM_CORE_TILER, cores);
+
+ kbase_pm_check_transitions(kbdev);
+
+ kbdev->pm.policy_data.always_on.state = KBASEP_PM_ALWAYS_ON_STATE_POWERING_UP;
+
+ /* Ensure that the OS is informed even if we didn't do anything */
+ always_on_state_changed(kbdev);
+}
+
+/** The event callback function for the always_on power policy.
+ *
+ * This function is called to handle the events for the power policy. It calls the relevant handler function depending
+ * on the type of the event.
+ *
+ * @param kbdev The kbase device structure for the device
+ * @param event The event that should be processed
+ */
+static void always_on_event(kbase_device *kbdev, kbase_pm_event event)
+{
+ kbasep_pm_policy_always_on *data = &kbdev->pm.policy_data.always_on;
+
+ switch(event)
+ {
+ case KBASE_PM_EVENT_SYSTEM_SUSPEND:
+ always_on_suspend(kbdev);
+ break;
+ case KBASE_PM_EVENT_POLICY_INIT: /* Init is the same as resume for this policy */
+ case KBASE_PM_EVENT_SYSTEM_RESUME:
+ always_on_resume(kbdev);
+ break;
+ case KBASE_PM_EVENT_GPU_STATE_CHANGED:
+ always_on_state_changed(kbdev);
+ break;
+ case KBASE_PM_EVENT_POLICY_CHANGE:
+ if (data->state == KBASEP_PM_ALWAYS_ON_STATE_POWERED_UP ||
+ data->state == KBASEP_PM_ALWAYS_ON_STATE_POWERED_DOWN)
+ {
+ kbase_pm_change_policy(kbdev);
+ }
+ else
+ {
+ data->state = KBASEP_PM_ALWAYS_ON_STATE_CHANGING_POLICY;
+ }
+ break;
+ case KBASE_PM_EVENT_GPU_ACTIVE:
+ case KBASE_PM_EVENT_GPU_IDLE:
+ case KBASE_PM_EVENT_CHANGE_GPU_STATE:
+ /* Not used - the GPU is always kept on */
+ break;
+ default:
+ /* Unrecognised event - this should never happen */
+ OSK_ASSERT(0);
+ }
+}
+
+/** Initialize the always_on power policy
+ *
+ * This sets up the private @ref kbase_pm_device_data.policy_data field of the device for use with the always_on power
+ * policy.
+ *
+ * @param kbdev The kbase device structure for the device
+ */
+static void always_on_init(kbase_device *kbdev)
+{
+ kbasep_pm_policy_always_on *data = &kbdev->pm.policy_data.always_on;
+
+ data->state = KBASEP_PM_ALWAYS_ON_STATE_POWERING_UP;
+}
+
+/** Terminate the always_on power policy
+ *
+ * This frees the resources that were allocated by @ref always_on_init.
+ *
+ * @param kbdev The kbase device structure for the device
+ */
+static void always_on_term(kbase_device *kbdev)
+{
+ CSTD_UNUSED(kbdev);
+}
+
+/** The @ref kbase_pm_policy structure for the always_on power policy
+ *
+ * This is the extern structure that defines the always_on power policy's callbacks and name.
+ */
+const kbase_pm_policy kbase_pm_always_on_policy_ops =
+{
+ "always_on", /* name */
+ always_on_init, /* init */
+ always_on_term, /* term */
+ always_on_event, /* event */
+};
+
+KBASE_EXPORT_TEST_API(kbase_pm_always_on_policy_ops)
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_pm_always_on.h
+ * "Always on" power management policy
+ */
+
+#ifndef MALI_KBASE_PM_ALWAYS_ON_H
+#define MALI_KBASE_PM_ALWAYS_ON_H
+
+/** The states that the always_on policy can enter.
+ *
+ * The diagram below shows the states that the always_on policy can enter and the transitions that can occur between
+ * the states:
+ *
+ * @dot
+ * digraph always_on_states {
+ * node [fontsize=10];
+ * edge [fontsize=10];
+ *
+ * POWERING_UP [label="STATE_POWERING_UP"
+ * URL="\ref kbasep_pm_always_on_state.KBASEP_PM_ALWAYS_ON_STATE_POWERING_UP"];
+ * POWERING_DOWN [label="STATE_POWERING_DOWN"
+ * URL="\ref kbasep_pm_always_on_state.KBASEP_PM_ALWAYS_ON_STATE_POWERING_DOWN"];
+ * POWERED_UP [label="STATE_POWERED_UP"
+ * URL="\ref kbasep_pm_always_on_state.KBASEP_PM_ALWAYS_ON_STATE_POWERED_UP"];
+ * POWERED_DOWN [label="STATE_POWERED_DOWN"
+ * URL="\ref kbasep_pm_always_on_state.KBASEP_PM_ALWAYS_ON_STATE_POWERED_DOWN"];
+ * CHANGING_POLICY [label="STATE_CHANGING_POLICY"
+ * URL="\ref kbasep_pm_always_on_state.KBASEP_PM_ALWAYS_ON_STATE_CHANGING_POLICY"];
+ *
+ *  init [label="init" URL="\ref KBASE_PM_EVENT_POLICY_INIT"];
+ *  change_policy [label="change_policy" URL="\ref kbase_pm_change_policy"];
+ *
+ *  init -> POWERING_UP [ label = "Policy init" ];
+ *
+ *  POWERING_UP -> POWERED_UP [label = "Power state change" URL="\ref KBASE_PM_EVENT_GPU_STATE_CHANGED"];
+ *  POWERING_DOWN -> POWERED_DOWN [label = "Power state change" URL="\ref KBASE_PM_EVENT_GPU_STATE_CHANGED"];
+ *  CHANGING_POLICY -> change_policy [label = "Power state change" URL="\ref KBASE_PM_EVENT_GPU_STATE_CHANGED"];
+ *
+ *  POWERED_UP -> POWERING_DOWN [label = "Suspend" URL="\ref KBASE_PM_EVENT_SYSTEM_SUSPEND"];
+ *
+ *  POWERED_DOWN -> POWERING_UP [label = "Resume" URL="\ref KBASE_PM_EVENT_SYSTEM_RESUME"];
+ *
+ *  POWERING_UP -> CHANGING_POLICY [label = "Change policy" URL="\ref KBASE_PM_EVENT_POLICY_CHANGE"];
+ *  POWERING_DOWN -> CHANGING_POLICY [label = "Change policy" URL="\ref KBASE_PM_EVENT_POLICY_CHANGE"];
+ *  POWERED_UP -> change_policy [label = "Change policy" URL="\ref KBASE_PM_EVENT_POLICY_CHANGE"];
+ *  POWERED_DOWN -> change_policy [label = "Change policy" URL="\ref KBASE_PM_EVENT_POLICY_CHANGE"];
+ * }
+ * @enddot
+ */
+typedef enum kbasep_pm_always_on_state
+{
+ KBASEP_PM_ALWAYS_ON_STATE_POWERING_UP, /**< The GPU is powering up */
+ KBASEP_PM_ALWAYS_ON_STATE_POWERING_DOWN, /**< The GPU is powering down */
+ KBASEP_PM_ALWAYS_ON_STATE_POWERED_UP, /**< The GPU is powered up and jobs can execute */
+ KBASEP_PM_ALWAYS_ON_STATE_POWERED_DOWN, /**< The GPU is powered down and the system can suspend */
+ KBASEP_PM_ALWAYS_ON_STATE_CHANGING_POLICY /**< The power policy is about to change */
+} kbasep_pm_always_on_state;
+
+/** Private structure for policy instance data.
+ *
+ * This contains data that is private to the particular power policy that is active.
+ */
+typedef struct kbasep_pm_policy_always_on
+{
+ kbasep_pm_always_on_state state; /**< The current state of the policy */
+} kbasep_pm_policy_always_on;
+
+#endif /* MALI_KBASE_PM_ALWAYS_ON_H */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_pm_demand.c
+ * A simple demand based power management policy
+ */
+
+#include <osk/mali_osk.h>
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_kbase_pm.h>
+
+/* Forward declaration for state change function, as it is required by
+ * the power up and down functions */
+static void demand_state_changed(kbase_device *kbdev);
+
+/** Turn the cores on.
+ *
+ * This function turns all the cores of the GPU on.
+ *
+ * @param kbdev The kbase device structure for the device
+ */
+static void demand_power_up(kbase_device *kbdev)
+{
+ /* Inform the system that the transition has started */
+ kbase_pm_power_transitioning(kbdev);
+
+ /* Turn clocks and interrupts on */
+ kbase_pm_clock_on(kbdev);
+ kbase_pm_enable_interrupts(kbdev);
+
+ kbase_pm_check_transitions(kbdev);
+
+ kbdev->pm.policy_data.demand.state = KBASEP_PM_DEMAND_STATE_POWERING_UP;
+}
+
+/** Turn the cores off.
+ *
+ * This function turns all the cores of the GPU off.
+ *
+ * @param kbdev The kbase device structure for the device
+ */
+static void demand_power_down(kbase_device *kbdev)
+{
+ u64 cores;
+
+ /* Inform the system that the transition has started */
+ kbase_pm_power_transitioning(kbdev);
+
+ /* Turn the cores off */
+ cores = kbase_pm_get_present_cores(kbdev, KBASE_PM_CORE_SHADER);
+ kbase_pm_invoke_power_down(kbdev, KBASE_PM_CORE_SHADER, cores);
+
+ cores = kbase_pm_get_present_cores(kbdev, KBASE_PM_CORE_TILER);
+ kbase_pm_invoke_power_down(kbdev, KBASE_PM_CORE_TILER, cores);
+
+ kbdev->pm.policy_data.demand.state = KBASEP_PM_DEMAND_STATE_POWERING_DOWN;
+
+ kbase_pm_check_transitions(kbdev);
+}
+
+/** Turn some cores on/off.
+ *
+ * This function turns on/off the cores needed by the scheduler.
+ *
+ * @param kbdev The kbase device structure for the device
+ */
+static void demand_change_gpu_state(kbase_device *kbdev)
+{
+ /* Update the bitmap of the cores we need */
+ u64 new_shader_desired = kbdev->shader_needed_bitmap | kbdev->shader_inuse_bitmap;
+ u64 new_tiler_desired = kbdev->tiler_needed_bitmap | kbdev->tiler_inuse_bitmap;
+
+ if ( kbdev->pm.desired_shader_state != new_shader_desired )
+ {
+ KBASE_TRACE_ADD( kbdev, PM_CORES_CHANGE_DESIRED, NULL, NULL, 0u, (u32)new_shader_desired );
+ }
+
+ kbdev->pm.desired_shader_state = new_shader_desired;
+ kbdev->pm.desired_tiler_state = new_tiler_desired;
+
+ kbase_pm_check_transitions(kbdev);
+}
+
+/** Function to handle a GPU state change for the demand power policy.
+ *
+ * This function is called whenever the GPU has transitioned to another state. It first checks that the transition is
+ * complete and then moves the state machine to the next state.
+ *
+ * @param kbdev The kbase device structure for the device
+ */
+static void demand_state_changed(kbase_device *kbdev)
+{
+ kbasep_pm_policy_demand *data = &kbdev->pm.policy_data.demand;
+
+	switch(data->state)
+	{
+	case KBASEP_PM_DEMAND_STATE_CHANGING_POLICY:
+	case KBASEP_PM_DEMAND_STATE_POWERING_UP:
+	case KBASEP_PM_DEMAND_STATE_POWERING_DOWN:
+		if (kbase_pm_get_pwr_active(kbdev))
+		{
+			/* Cores are still transitioning - ignore the event */
+			return;
+		}
+		break;
+	default:
+		/* Must not call kbase_pm_get_pwr_active here as the clock may be turned off */
+		break;
+	}
+
+ switch(data->state)
+ {
+ case KBASEP_PM_DEMAND_STATE_CHANGING_POLICY:
+ /* Signal power events before switching the policy */
+ kbase_pm_power_up_done(kbdev);
+ kbase_pm_power_down_done(kbdev);
+ kbase_pm_change_policy(kbdev);
+ break;
+ case KBASEP_PM_DEMAND_STATE_POWERING_UP:
+ data->state = KBASEP_PM_DEMAND_STATE_POWERED_UP;
+ kbase_pm_power_up_done(kbdev);
+ /* State changed, try to run jobs */
+ KBASE_TRACE_ADD( kbdev, PM_JOB_SUBMIT_AFTER_POWERING_UP, NULL, NULL, 0u, 0 );
+ kbase_js_try_run_jobs(kbdev);
+ break;
+ case KBASEP_PM_DEMAND_STATE_POWERING_DOWN:
+ data->state = KBASEP_PM_DEMAND_STATE_POWERED_DOWN;
+ /* Disable interrupts and turn the clock off */
+ kbase_pm_disable_interrupts(kbdev);
+ kbase_pm_clock_off(kbdev);
+ kbase_pm_power_down_done(kbdev);
+ break;
+ case KBASEP_PM_DEMAND_STATE_POWERED_UP:
+ /* Core states may have been changed, try to run jobs */
+ KBASE_TRACE_ADD( kbdev, PM_JOB_SUBMIT_AFTER_POWERED_UP, NULL, NULL, 0u, 0 );
+ kbase_js_try_run_jobs(kbdev);
+ break;
+ default:
+ break;
+ }
+}
+
+/** The event callback function for the demand power policy.
+ *
+ * This function is called to handle the events for the power policy. It calls the relevant handler function depending
+ * on the type of the event.
+ *
+ * @param kbdev The kbase device structure for the device
+ * @param event The event that should be processed
+ */
+static void demand_event(kbase_device *kbdev, kbase_pm_event event)
+{
+ kbasep_pm_policy_demand *data = &kbdev->pm.policy_data.demand;
+
+ switch(event)
+ {
+ case KBASE_PM_EVENT_POLICY_INIT:
+ demand_power_up(kbdev);
+ break;
+ case KBASE_PM_EVENT_POLICY_CHANGE:
+ if (data->state == KBASEP_PM_DEMAND_STATE_POWERED_UP ||
+ data->state == KBASEP_PM_DEMAND_STATE_POWERED_DOWN)
+ {
+ kbase_pm_change_policy(kbdev);
+ }
+ else
+ {
+ data->state = KBASEP_PM_DEMAND_STATE_CHANGING_POLICY;
+ }
+ break;
+ case KBASE_PM_EVENT_SYSTEM_RESUME:
+ case KBASE_PM_EVENT_GPU_ACTIVE:
+ switch (data->state)
+ {
+ case KBASEP_PM_DEMAND_STATE_POWERING_UP:
+ break;
+ case KBASEP_PM_DEMAND_STATE_POWERED_UP:
+ kbase_pm_power_up_done(kbdev);
+ break;
+ default:
+ demand_power_up(kbdev);
+ }
+ break;
+ case KBASE_PM_EVENT_SYSTEM_SUSPEND:
+ case KBASE_PM_EVENT_GPU_IDLE:
+ switch (data->state)
+ {
+ case KBASEP_PM_DEMAND_STATE_POWERING_DOWN:
+ break;
+ case KBASEP_PM_DEMAND_STATE_POWERED_DOWN:
+ kbase_pm_power_down_done(kbdev);
+ break;
+ default:
+ demand_power_down(kbdev);
+ }
+ break;
+ case KBASE_PM_EVENT_CHANGE_GPU_STATE:
+ if (data->state != KBASEP_PM_DEMAND_STATE_POWERED_DOWN &&
+ data->state != KBASEP_PM_DEMAND_STATE_POWERING_DOWN)
+ {
+ demand_change_gpu_state(kbdev);
+ }
+ break;
+ case KBASE_PM_EVENT_GPU_STATE_CHANGED:
+ demand_state_changed(kbdev);
+ break;
+ default:
+ /* unrecognized event, should never happen */
+ OSK_ASSERT(0);
+ }
+}
+
+/** Initialize the demand power policy.
+ *
+ * This sets up the private @ref kbase_pm_device_data.policy_data field of the device for use with the demand power
+ * policy.
+ *
+ * @param kbdev The kbase device structure for the device
+ */
+static void demand_init(kbase_device *kbdev)
+{
+ kbdev->pm.policy_data.demand.state = KBASEP_PM_DEMAND_STATE_POWERED_UP;
+}
+
+/** Terminate the demand power policy.
+ *
+ * This frees the resources that were allocated by @ref demand_init.
+ *
+ * @param kbdev The kbase device structure for the device
+ */
+static void demand_term(kbase_device *kbdev)
+{
+ CSTD_UNUSED(kbdev);
+}
+
+/** The @ref kbase_pm_policy structure for the demand power policy.
+ *
+ * This is the static structure that defines the demand power policy's callbacks and name.
+ */
+const kbase_pm_policy kbase_pm_demand_policy_ops =
+{
+ "demand", /* name */
+ demand_init, /* init */
+ demand_term, /* term */
+ demand_event, /* event */
+};
+
+KBASE_EXPORT_TEST_API(kbase_pm_demand_policy_ops)
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_pm_demand.h
+ * A simple demand based power management policy
+ */
+
+#ifndef MALI_KBASE_PM_DEMAND_H
+#define MALI_KBASE_PM_DEMAND_H
+
+/** The states that the demand policy can enter.
+ *
+ * The diagram below shows the states that the demand policy can enter and the transitions that can occur between the
+ * states:
+ *
+ * @dot
+ * digraph demand_states {
+ * node [fontsize=10];
+ * edge [fontsize=10];
+ *
+ * POWERING_UP [label="STATE_POWERING_UP"
+ * URL="\ref kbasep_pm_demand_state.KBASEP_PM_DEMAND_STATE_POWERING_UP"];
+ * POWERING_DOWN [label="STATE_POWERING_DOWN"
+ * URL="\ref kbasep_pm_demand_state.KBASEP_PM_DEMAND_STATE_POWERING_DOWN"];
+ * POWERED_UP [label="STATE_POWERED_UP"
+ * URL="\ref kbasep_pm_demand_state.KBASEP_PM_DEMAND_STATE_POWERED_UP"];
+ * POWERED_DOWN [label="STATE_POWERED_DOWN"
+ * URL="\ref kbasep_pm_demand_state.KBASEP_PM_DEMAND_STATE_POWERED_DOWN"];
+ * CHANGING_POLICY [label="STATE_CHANGING_POLICY"
+ * URL="\ref kbasep_pm_demand_state.KBASEP_PM_DEMAND_STATE_CHANGING_POLICY"];
+ *
+ *  init [label="init" URL="\ref KBASE_PM_EVENT_POLICY_INIT"];
+ *  change_policy [label="change_policy" URL="\ref kbase_pm_change_policy"];
+ *
+ *  init -> POWERING_UP [ label = "Policy init" ];
+ *
+ *  POWERING_UP -> POWERED_UP [label = "Power state change" URL="\ref KBASE_PM_EVENT_GPU_STATE_CHANGED"];
+ *  POWERING_DOWN -> POWERED_DOWN [label = "Power state change" URL="\ref KBASE_PM_EVENT_GPU_STATE_CHANGED"];
+ *  CHANGING_POLICY -> change_policy [label = "Power state change" URL="\ref KBASE_PM_EVENT_GPU_STATE_CHANGED"];
+ *
+ *  POWERED_UP -> POWERING_DOWN [label = "GPU Idle" URL="\ref KBASE_PM_EVENT_GPU_IDLE"];
+ *  POWERING_UP -> POWERING_DOWN [label = "GPU Idle" URL="\ref KBASE_PM_EVENT_GPU_IDLE"];
+ *
+ *  POWERED_DOWN -> POWERING_UP [label = "GPU Active" URL="\ref KBASE_PM_EVENT_GPU_ACTIVE"];
+ *  POWERING_DOWN -> POWERING_UP [label = "GPU Active" URL="\ref KBASE_PM_EVENT_GPU_ACTIVE"];
+ *
+ *  POWERING_UP -> CHANGING_POLICY [label = "Change policy" URL="\ref KBASE_PM_EVENT_POLICY_CHANGE"];
+ *  POWERING_DOWN -> CHANGING_POLICY [label = "Change policy" URL="\ref KBASE_PM_EVENT_POLICY_CHANGE"];
+ *  POWERED_UP -> change_policy [label = "Change policy" URL="\ref KBASE_PM_EVENT_POLICY_CHANGE"];
+ *  POWERED_DOWN -> change_policy [label = "Change policy" URL="\ref KBASE_PM_EVENT_POLICY_CHANGE"];
+ * }
+ * @enddot
+ */
+typedef enum kbasep_pm_demand_state
+{
+ KBASEP_PM_DEMAND_STATE_POWERING_UP, /**< The GPU is powering up */
+ KBASEP_PM_DEMAND_STATE_POWERED_UP, /**< The GPU is powered up and jobs can execute */
+ KBASEP_PM_DEMAND_STATE_POWERING_DOWN, /**< The GPU is powering down */
+ KBASEP_PM_DEMAND_STATE_POWERED_DOWN, /**< The GPU is powered down */
+ KBASEP_PM_DEMAND_STATE_CHANGING_POLICY /**< The power policy is about to change */
+} kbasep_pm_demand_state;
+
+/** Private structure for policy instance data.
+ *
+ * This contains data that is private to the particular power policy that is active.
+ */
+typedef struct kbasep_pm_policy_demand
+{
+ kbasep_pm_demand_state state; /**< The current state of the policy */
+} kbasep_pm_policy_demand;
+
+#endif /* MALI_KBASE_PM_DEMAND_H */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_pm_driver.c
+ * Base kernel Power Management hardware control
+ */
+
+#include <osk/mali_osk.h>
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_midg_regmap.h>
+#include <kbase/src/common/mali_kbase_gator.h>
+#include <kbase/src/common/mali_kbase_pm.h>
+
+#if MALI_MOCK_TEST
+#define MOCKABLE(function) function##_original
+#else
+#define MOCKABLE(function) function
+#endif /* MALI_MOCK_TEST */
+
+/** Number of milliseconds before we time out on a reset */
+#define RESET_TIMEOUT 500
+
+/** Actions that can be performed on a core.
+ *
+ * This enumeration is private to the file. Its values are set to allow the @ref core_type_to_reg function,
+ * which decodes this enumeration, to be simpler and more efficient.
+ */
+typedef enum kbasep_pm_action
+{
+ ACTION_PRESENT = 0,
+ ACTION_READY = (SHADER_READY_LO - SHADER_PRESENT_LO),
+ ACTION_PWRON = (SHADER_PWRON_LO - SHADER_PRESENT_LO),
+ ACTION_PWROFF = (SHADER_PWROFF_LO - SHADER_PRESENT_LO),
+ ACTION_PWRTRANS = (SHADER_PWRTRANS_LO - SHADER_PRESENT_LO),
+ ACTION_PWRACTIVE = (SHADER_PWRACTIVE_LO - SHADER_PRESENT_LO)
+} kbasep_pm_action;
+
+/** Decode a core type and action to a register.
+ *
+ * Given a core type (defined by @ref kbase_pm_core_type) and an action (defined by @ref kbasep_pm_action) this
+ * function will return the register offset that will perform the action on the core type. The register returned is
+ * the \c _LO register and an offset must be applied to use the \c _HI register.
+ *
+ * @param core_type The type of core
+ * @param action The type of action
+ *
+ * @return The register offset of the \c _LO register that performs an action of type \c action on a core of type \c
+ * core_type.
+ */
+static u32 core_type_to_reg(kbase_pm_core_type core_type, kbasep_pm_action action)
+{
+ return core_type + action;
+}
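+
+/* For example (illustrative): core_type_to_reg(KBASE_PM_CORE_SHADER, ACTION_PWRON)
+ * evaluates to SHADER_PRESENT_LO + (SHADER_PWRON_LO - SHADER_PRESENT_LO), i.e.
+ * SHADER_PWRON_LO, so no lookup table or branching is needed.
+ */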
+
+/** Invokes an action on a core set
+ *
+ * This function performs the action given by \c action on a set of cores of a type given by \c core_type. It is a
+ * static function used by @ref kbase_pm_invoke_power_up and @ref kbase_pm_invoke_power_down.
+ *
+ * @param kbdev The kbase device structure of the device
+ * @param core_type The type of core that the action should be performed on
+ * @param cores A bit mask of cores to perform the action on
+ * @param action The action to perform on the cores
+ */
+STATIC void kbase_pm_invoke(kbase_device *kbdev, kbase_pm_core_type core_type, u64 cores, kbasep_pm_action action)
+{
+ u32 reg;
+ u32 lo = cores & 0xFFFFFFFF;
+ u32 hi = (cores >> 32) & 0xFFFFFFFF;
+
+ reg = core_type_to_reg(core_type, action);
+
+ OSK_ASSERT(reg);
+#if MALI_GATOR_SUPPORT
+ if (cores)
+ {
+ if (action == ACTION_PWRON )
+ {
+ kbase_trace_mali_pm_power_on(core_type, cores);
+ }
+ else if ( action == ACTION_PWROFF )
+ {
+ kbase_trace_mali_pm_power_off(core_type, cores);
+ }
+ }
+#endif
+ /* Tracing */
+ if ( cores != 0 && core_type == KBASE_PM_CORE_SHADER )
+ {
+ if (action == ACTION_PWRON )
+ {
+ KBASE_TRACE_ADD( kbdev, PM_PWRON, NULL, NULL, 0u, lo );
+ }
+ else if ( action == ACTION_PWROFF )
+ {
+ KBASE_TRACE_ADD( kbdev, PM_PWROFF, NULL, NULL, 0u, lo );
+ }
+ }
+
+ if (lo != 0)
+ {
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(reg), lo, NULL);
+ }
+ if (hi != 0)
+ {
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(reg+4), hi, NULL);
+ }
+}
+
+void kbase_pm_invoke_power_up(kbase_device *kbdev, kbase_pm_core_type type, u64 cores)
+{
+ OSK_ASSERT( kbdev != NULL );
+
+ switch(type)
+ {
+ case KBASE_PM_CORE_SHADER:
+ {
+ u64 prev_desired_shader = kbdev->pm.desired_shader_state;
+ kbdev->pm.desired_shader_state |= cores;
+ if ( prev_desired_shader != kbdev->pm.desired_shader_state )
+ {
+ KBASE_TRACE_ADD( kbdev, PM_CORES_CHANGE_DESIRED_ON_POWERUP, NULL, NULL, 0u, (u32)kbdev->pm.desired_shader_state );
+ }
+ }
+ break;
+ case KBASE_PM_CORE_TILER:
+ kbdev->pm.desired_tiler_state |= cores;
+ break;
+ default:
+ OSK_ASSERT(0);
+ }
+}
+KBASE_EXPORT_TEST_API(kbase_pm_invoke_power_up)
+
+void kbase_pm_invoke_power_down(kbase_device *kbdev, kbase_pm_core_type type, u64 cores)
+{
+ OSK_ASSERT( kbdev != NULL );
+
+ switch(type)
+ {
+ case KBASE_PM_CORE_SHADER:
+ {
+ u64 prev_desired_shader = kbdev->pm.desired_shader_state;
+ kbdev->pm.desired_shader_state &= ~cores;
+ if ( prev_desired_shader != kbdev->pm.desired_shader_state )
+ {
+ KBASE_TRACE_ADD( kbdev, PM_CORES_CHANGE_DESIRED_ON_POWERDOWN, NULL, NULL, 0u, (u32)kbdev->pm.desired_shader_state );
+ }
+ }
+ break;
+ case KBASE_PM_CORE_TILER:
+ kbdev->pm.desired_tiler_state &= ~cores;
+ break;
+ default:
+ OSK_ASSERT(0);
+ }
+}
+KBASE_EXPORT_TEST_API(kbase_pm_invoke_power_down)
+
+/** Get information about a core set
+ *
+ * This function gets information (chosen by \c action) about a set of cores of a type given by \c core_type. It is a
+ * static function used by @ref kbase_pm_get_present_cores, @ref kbase_pm_get_active_cores, @ref
+ * kbase_pm_get_trans_cores and @ref kbase_pm_get_ready_cores.
+ *
+ * @param kbdev The kbase device structure of the device
+ * @param core_type The type of core to be queried
+ * @param action The property of the cores to query
+ *
+ * @return A bit mask specifying the state of the cores
+ */
+static u64 kbase_pm_get_state(kbase_device *kbdev, kbase_pm_core_type core_type, kbasep_pm_action action)
+{
+ u32 reg;
+ u32 lo, hi;
+
+ reg = core_type_to_reg(core_type, action);
+
+ OSK_ASSERT(reg);
+
+ lo = kbase_reg_read(kbdev, GPU_CONTROL_REG(reg), NULL);
+ hi = kbase_reg_read(kbdev, GPU_CONTROL_REG(reg+4), NULL);
+
+ return (((u64)hi) << 32) | ((u64)lo);
+}
+
+void kbasep_pm_read_present_cores(kbase_device *kbdev)
+{
+ kbdev->shader_present_bitmap = kbase_pm_get_state(kbdev, KBASE_PM_CORE_SHADER, ACTION_PRESENT);
+ kbdev->tiler_present_bitmap = kbase_pm_get_state(kbdev, KBASE_PM_CORE_TILER, ACTION_PRESENT);
+ kbdev->l2_present_bitmap = kbase_pm_get_state(kbdev, KBASE_PM_CORE_L2, ACTION_PRESENT);
+ kbdev->l3_present_bitmap = kbase_pm_get_state(kbdev, KBASE_PM_CORE_L3, ACTION_PRESENT);
+
+ kbdev->shader_inuse_bitmap = 0;
+ kbdev->tiler_inuse_bitmap = 0;
+ kbdev->shader_needed_bitmap = 0;
+ kbdev->tiler_needed_bitmap = 0;
+ kbdev->shader_available_bitmap = 0;
+ kbdev->tiler_available_bitmap = 0;
+
+ OSK_MEMSET(kbdev->shader_needed_cnt, 0, sizeof(kbdev->shader_needed_cnt));
+ OSK_MEMSET(kbdev->tiler_needed_cnt, 0, sizeof(kbdev->tiler_needed_cnt));
+}
+KBASE_EXPORT_TEST_API(kbasep_pm_read_present_cores)
+
+/** Get the cores that are present
+ */
+u64 kbase_pm_get_present_cores(kbase_device *kbdev, kbase_pm_core_type type)
+{
+ OSK_ASSERT( kbdev != NULL );
+
+ switch(type)
+ {
+ case KBASE_PM_CORE_L3:
+ return kbdev->l3_present_bitmap;
+ case KBASE_PM_CORE_L2:
+ return kbdev->l2_present_bitmap;
+ case KBASE_PM_CORE_SHADER:
+ return kbdev->shader_present_bitmap;
+ case KBASE_PM_CORE_TILER:
+ return kbdev->tiler_present_bitmap;
+ }
+ OSK_ASSERT(0);
+ return 0;
+}
+KBASE_EXPORT_TEST_API(kbase_pm_get_present_cores)
+
+/** Get the cores that are "active" (busy processing work)
+ */
+u64 kbase_pm_get_active_cores(kbase_device *kbdev, kbase_pm_core_type type)
+{
+ return kbase_pm_get_state(kbdev, type, ACTION_PWRACTIVE);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_get_active_cores)
+
+/** Get the cores that are transitioning between power states
+ */
+u64 MOCKABLE(kbase_pm_get_trans_cores)(kbase_device *kbdev, kbase_pm_core_type type)
+{
+ return kbase_pm_get_state(kbdev, type, ACTION_PWRTRANS);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_get_trans_cores)
+
+/** Get the cores that are powered on
+ */
+u64 kbase_pm_get_ready_cores(kbase_device *kbdev, kbase_pm_core_type type)
+{
+ u64 result;
+ result = kbase_pm_get_state(kbdev, type, ACTION_READY);
+
+ if ( type == KBASE_PM_CORE_SHADER )
+ {
+ KBASE_TRACE_ADD( kbdev, PM_CORES_POWERED, NULL, NULL, 0u, (u32)result);
+ }
+ return result;
+}
+KBASE_EXPORT_TEST_API(kbase_pm_get_ready_cores)
+
+/** Is there an active power transition?
+ *
+ * Returns true if there is a power transition in progress, otherwise false.
+ */
+mali_bool MOCKABLE(kbase_pm_get_pwr_active)(kbase_device *kbdev)
+{
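+ /* Bit 1 of GPU_STATUS is set while a power transition is in progress */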
+ return ((kbase_reg_read(kbdev, GPU_CONTROL_REG(GPU_STATUS), NULL) & (1<<1)) != 0);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_get_pwr_active)
+
+/** Perform power transitions for a particular core type.
+ *
+ * This function will perform any available power transitions to make the actual hardware state closer to the desired
+ * state. If a core is currently transitioning then changes to the power state of that core cannot be made until the
+ * transition has finished. Cores which are not present in the hardware are ignored if they are specified in the
+ * desired_state bitmask, however the return value will always be MALI_FALSE in this case.
+ *
+ * @param kbdev The kbase device
+ * @param type The core type to perform transitions for
+ * @param desired_state A bit mask of the desired state of the cores
+ * @param in_use A bit mask of the cores that are currently running jobs.
+ * These cores have to be kept powered up because there are jobs
+ * running (or about to run) on them.
+ * @param[out] available Receives a bit mask of the cores that the job scheduler can use to submit jobs to.
+ * May be NULL if this is not needed.
+ *
+ * @return MALI_TRUE if the desired state has been reached, MALI_FALSE otherwise
+ */
+
+STATIC mali_bool kbase_pm_transition_core_type(kbase_device *kbdev, kbase_pm_core_type type, u64 desired_state,
+ u64 in_use, u64 *available)
+{
+ u64 present;
+ u64 ready;
+ u64 trans;
+ u64 powerup;
+ u64 powerdown;
+
+ /* Get current state */
+ present = kbase_pm_get_present_cores(kbdev, type);
+ trans = kbase_pm_get_trans_cores(kbdev, type);
+ ready = kbase_pm_get_ready_cores(kbdev, type);
+
+ if (available != NULL)
+ {
+ *available = ready & desired_state;
+ }
+
+ /* Update desired state to include the in-use cores. These have to be kept powered up because there are jobs
+ * running or about to run on these cores
+ */
+ desired_state |= in_use;
+
+ /* Workaround for MIDBASE-1258 (L2 usage should be refcounted).
+ * Keep the L2 from being turned off.
+ */
+ if (type == KBASE_PM_CORE_L2)
+ {
+ desired_state = present;
+ }
+
+ if (desired_state == ready && trans == 0)
+ {
+ return MALI_TRUE;
+ }
+
+ /* Restrict the cores to those that are actually present */
+ powerup = desired_state & present;
+ powerdown = (~desired_state) & present;
+
+ /* Restrict to cores that are not already in the desired state */
+ powerup &= ~ready;
+ powerdown &= ready;
+
+ /* Don't transition any cores that are already transitioning */
+ powerup &= ~trans;
+ powerdown &= ~trans;
+
+ /* Perform transitions if any */
+ kbase_pm_invoke(kbdev, type, powerup, ACTION_PWRON);
+ kbase_pm_invoke(kbdev, type, powerdown, ACTION_PWROFF);
+
+ return MALI_FALSE;
+}
+KBASE_EXPORT_TEST_API(kbase_pm_transition_core_type)
+
+/** Determine which caches should be on for a particular core state.
+ *
+ * This function takes a bit mask of the present caches and a bit mask of the cores (or lower-level caches) attached
+ * to those caches that are to be powered. It then computes which caches must be turned on to allow the requested
+ * cores to be powered up.
+ *
+ * @param present The bit mask of present caches
+ * @param cores_powered A bit mask of cores (or L2 caches) that are desired to be powered
+ *
+ * @return A bit mask of the caches that should be turned on
+ */
+STATIC u64 get_desired_cache_status(u64 present, u64 cores_powered)
+{
+ u64 desired = 0;
+
+ while (present)
+ {
+ /* Find out which is the highest set bit */
+ u64 bit = 63-osk_clz_64(present);
+ u64 bit_mask = 1ull << bit;
+ /* Create a mask which has all bits from 'bit' upwards set */
+ u64 mask = ~(bit_mask-1);
+
+ /* If there are any cores powered at this bit or above (that haven't previously been processed) then we need
+ * this cache on */
+ if (cores_powered & mask)
+ {
+ desired |= bit_mask;
+ }
+
+ /* Remove bits from cores_powered and present */
+ cores_powered &= ~mask;
+ present &= ~bit_mask;
+ }
+
+ return desired;
+}
+KBASE_EXPORT_TEST_API(get_desired_cache_status)
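+
+/*
+ * Worked example (illustrative): with caches present at bits 0 and 2
+ * (present == 0x5) and a core attached above bit 2 powered
+ * (cores_powered == 0x8), the first iteration picks bit 2, finds a
+ * powered core at or above it, and sets desired |= 0x4; the bits at or
+ * above 2 are then removed from cores_powered, so the iteration for
+ * bit 0 finds nothing left to power. The result is desired == 0x4.
+ */
+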
+static mali_bool kbasep_pm_unrequest_cores_nolock(kbase_device *kbdev, u64 shader_cores, u64 tiler_cores)
+{
+ mali_bool change_gpu_state = MALI_FALSE;
+
+ OSK_ASSERT( kbdev != NULL );
+
+ while (shader_cores)
+ {
+ int bitnum = 63 - osk_clz_64(shader_cores);
+ u64 bit = 1ULL << bitnum;
+ int cnt;
+
+ OSK_ASSERT(kbdev->shader_needed_cnt[bitnum] > 0);
+
+ cnt = --kbdev->shader_needed_cnt[bitnum];
+
+ if (0 == cnt)
+ {
+ kbdev->shader_needed_bitmap &= ~bit;
+ change_gpu_state = MALI_TRUE;
+ }
+
+ shader_cores &= ~bit;
+ }
+
+ if ( change_gpu_state )
+ {
+ KBASE_TRACE_ADD( kbdev, PM_UNREQUEST_CHANGE_SHADER_NEEDED, NULL, NULL, 0u, (u32)kbdev->shader_needed_bitmap );
+ }
+
+ while (tiler_cores)
+ {
+ int bitnum = 63 - osk_clz_64(tiler_cores);
+ u64 bit = 1ULL << bitnum;
+ int cnt;
+
+ OSK_ASSERT(kbdev->tiler_needed_cnt[bitnum] > 0);
+
+ cnt = --kbdev->tiler_needed_cnt[bitnum];
+
+ if (0 == cnt)
+ {
+ kbdev->tiler_needed_bitmap &= ~bit;
+ change_gpu_state = MALI_TRUE;
+ }
+
+ tiler_cores &= ~bit;
+ }
+
+ return change_gpu_state;
+}
+
+mali_error kbase_pm_request_cores(kbase_device *kbdev, u64 shader_cores, u64 tiler_cores)
+{
+ u64 cores;
+
+ mali_bool change_gpu_state = MALI_FALSE;
+
+ OSK_ASSERT( kbdev != NULL );
+
+ osk_spinlock_irq_lock(&kbdev->pm.power_change_lock);
+
+ cores = shader_cores;
+ while (cores)
+ {
+ int bitnum = 63 - osk_clz_64(cores);
+ u64 bit = 1ULL << bitnum;
+
+ int cnt = ++kbdev->shader_needed_cnt[bitnum];
+
+ if (0 == cnt)
+ {
+ /* Wrapped, undo everything we've done so far */
+
+ kbdev->shader_needed_cnt[bitnum]--;
+ kbasep_pm_unrequest_cores_nolock(kbdev, cores ^ shader_cores, 0);
+
+ osk_spinlock_irq_unlock(&kbdev->pm.power_change_lock);
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ if (1 == cnt)
+ {
+ kbdev->shader_needed_bitmap |= bit;
+ change_gpu_state = MALI_TRUE;
+ }
+
+ cores &= ~bit;
+ }
+
+ if ( change_gpu_state )
+ {
+ KBASE_TRACE_ADD( kbdev, PM_REQUEST_CHANGE_SHADER_NEEDED, NULL, NULL, 0u, (u32)kbdev->shader_needed_bitmap );
+ }
+
+ cores = tiler_cores;
+ while (cores)
+ {
+ int bitnum = 63 - osk_clz_64(cores);
+ u64 bit = 1ULL << bitnum;
+
+ int cnt = ++kbdev->tiler_needed_cnt[bitnum];
+
+ if (0 == cnt)
+ {
+ /* Wrapped, undo everything we've done so far */
+
+ kbdev->tiler_needed_cnt[bitnum]--;
+ kbasep_pm_unrequest_cores_nolock(kbdev, shader_cores, cores ^ tiler_cores);
+
+ osk_spinlock_irq_unlock(&kbdev->pm.power_change_lock);
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ if (1 == cnt)
+ {
+ kbdev->tiler_needed_bitmap |= bit;
+ change_gpu_state = MALI_TRUE;
+ }
+
+ cores &= ~bit;
+ }
+
+ if (change_gpu_state)
+ {
+ kbase_pm_send_event(kbdev, KBASE_PM_EVENT_CHANGE_GPU_STATE);
+ }
+
+ osk_spinlock_irq_unlock(&kbdev->pm.power_change_lock);
+
+ return MALI_ERROR_NONE;
+}
+KBASE_EXPORT_TEST_API(kbase_pm_request_cores)
+
+void kbase_pm_unrequest_cores(kbase_device *kbdev, u64 shader_cores, u64 tiler_cores)
+{
+ mali_bool change_gpu_state = MALI_FALSE;
+
+ OSK_ASSERT( kbdev != NULL );
+
+ osk_spinlock_irq_lock(&kbdev->pm.power_change_lock);
+
+ change_gpu_state = kbasep_pm_unrequest_cores_nolock(kbdev, shader_cores, tiler_cores);
+
+ if (change_gpu_state)
+ {
+ kbase_pm_send_event(kbdev, KBASE_PM_EVENT_CHANGE_GPU_STATE);
+ }
+
+ osk_spinlock_irq_unlock(&kbdev->pm.power_change_lock);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_unrequest_cores)
+
+mali_bool kbase_pm_register_inuse_cores(kbase_device *kbdev, u64 shader_cores, u64 tiler_cores)
+{
+ u64 prev_shader_needed; /* Just for tracing */
+ u64 prev_shader_inuse; /* Just for tracing */
+
+ osk_spinlock_irq_lock(&kbdev->pm.power_change_lock);
+
+ prev_shader_needed = kbdev->shader_needed_bitmap;
+ prev_shader_inuse = kbdev->shader_inuse_bitmap;
+
+ if ((kbdev->shader_available_bitmap & shader_cores) != shader_cores ||
+ (kbdev->tiler_available_bitmap & tiler_cores) != tiler_cores)
+ {
+ osk_spinlock_irq_unlock(&kbdev->pm.power_change_lock);
+ return MALI_FALSE;
+ }
+
+ while (shader_cores)
+ {
+ int bitnum = 63 - osk_clz_64(shader_cores);
+ u64 bit = 1ULL << bitnum;
+ int cnt;
+
+ OSK_ASSERT(kbdev->shader_needed_cnt[bitnum] > 0);
+
+ cnt = --kbdev->shader_needed_cnt[bitnum];
+
+ if (0 == cnt)
+ {
+ kbdev->shader_needed_bitmap &= ~bit;
+ }
+
+ /* shader_inuse_cnt should not overflow because there can only be a
+ * very limited number of jobs on the h/w at one time */
+
+ kbdev->shader_inuse_cnt[bitnum]++;
+ kbdev->shader_inuse_bitmap |= bit;
+
+ shader_cores &= ~bit;
+ }
+
+ if ( prev_shader_needed != kbdev->shader_needed_bitmap )
+ {
+ KBASE_TRACE_ADD( kbdev, PM_REGISTER_CHANGE_SHADER_NEEDED, NULL, NULL, 0u, (u32)kbdev->shader_needed_bitmap );
+ }
+ if ( prev_shader_inuse != kbdev->shader_inuse_bitmap )
+ {
+ KBASE_TRACE_ADD( kbdev, PM_REGISTER_CHANGE_SHADER_INUSE, NULL, NULL, 0u, (u32)kbdev->shader_inuse_bitmap );
+ }
+
+ while (tiler_cores)
+ {
+ int bitnum = 63 - osk_clz_64(tiler_cores);
+ u64 bit = 1ULL << bitnum;
+ int cnt;
+
+ OSK_ASSERT(kbdev->tiler_needed_cnt[bitnum] > 0);
+
+ cnt = --kbdev->tiler_needed_cnt[bitnum];
+
+ if (0 == cnt)
+ {
+ kbdev->tiler_needed_bitmap &= ~bit;
+ }
+
+ /* tiler_inuse_cnt should not overflow because there can only be a
+ * very limited number of jobs on the h/w at one time */
+
+ kbdev->tiler_inuse_cnt[bitnum]++;
+ kbdev->tiler_inuse_bitmap |= bit;
+
+ tiler_cores &= ~bit;
+ }
+
+ osk_spinlock_irq_unlock(&kbdev->pm.power_change_lock);
+
+ return MALI_TRUE;
+}
+KBASE_EXPORT_TEST_API(kbase_pm_register_inuse_cores)
+
+void kbase_pm_release_cores(kbase_device *kbdev, u64 shader_cores, u64 tiler_cores)
+{
+ mali_bool change_gpu_state = MALI_FALSE;
+
+ OSK_ASSERT( kbdev != NULL );
+
+ osk_spinlock_irq_lock(&kbdev->pm.power_change_lock);
+
+ while (shader_cores)
+ {
+ int bitnum = 63 - osk_clz_64(shader_cores);
+ u64 bit = 1ULL << bitnum;
+ int cnt;
+
+ OSK_ASSERT(kbdev->shader_inuse_cnt[bitnum] > 0);
+
+ cnt = --kbdev->shader_inuse_cnt[bitnum];
+
+ if (0 == cnt)
+ {
+ kbdev->shader_inuse_bitmap &= ~bit;
+ change_gpu_state = MALI_TRUE;
+ }
+
+ shader_cores &= ~bit;
+ }
+
+ if ( change_gpu_state )
+ {
+ KBASE_TRACE_ADD( kbdev, PM_RELEASE_CHANGE_SHADER_INUSE, NULL, NULL, 0u, (u32)kbdev->shader_inuse_bitmap );
+ }
+
+ while (tiler_cores)
+ {
+ int bitnum = 63 - osk_clz_64(tiler_cores);
+ u64 bit = 1ULL << bitnum;
+ int cnt;
+
+ OSK_ASSERT(kbdev->tiler_inuse_cnt[bitnum] > 0);
+
+ cnt = --kbdev->tiler_inuse_cnt[bitnum];
+
+ if (0 == cnt)
+ {
+ kbdev->tiler_inuse_bitmap &= ~bit;
+ change_gpu_state = MALI_TRUE;
+ }
+
+ tiler_cores &= ~bit;
+ }
+
+ if (change_gpu_state)
+ {
+ kbase_pm_send_event(kbdev, KBASE_PM_EVENT_CHANGE_GPU_STATE);
+ }
+
+ osk_spinlock_irq_unlock(&kbdev->pm.power_change_lock);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_release_cores)
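+
+/*
+ * Illustrative lifecycle of the core reference-counting API above (a
+ * sketch, not taken from a real caller):
+ *
+ *   kbase_pm_request_cores(kbdev, shader_mask, tiler_mask);
+ *       (wait until the cores appear in the available bitmaps)
+ *   if (kbase_pm_register_inuse_cores(kbdev, shader_mask, tiler_mask))
+ *       (submit the job, then on completion:)
+ *       kbase_pm_release_cores(kbdev, shader_mask, tiler_mask);
+ *   else
+ *       (cores not powered yet, retry later)
+ *
+ * kbase_pm_unrequest_cores() is the undo path for a request whose cores
+ * were never registered as in-use.
+ */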
+
+void MOCKABLE(kbase_pm_check_transitions)(kbase_device *kbdev)
+{
+ mali_bool in_desired_state = MALI_TRUE;
+ u64 desired_l2_state;
+ u64 desired_l3_state;
+ u64 cores_powered;
+ u64 tiler_available_bitmap;
+ u64 shader_available_bitmap;
+
+ OSK_ASSERT( NULL != kbdev );
+
+ osk_spinlock_irq_lock(&kbdev->pm.power_change_lock);
+
+ cores_powered = (kbdev->pm.desired_shader_state | kbdev->pm.desired_tiler_state);
+
+ /* We need to keep the inuse cores powered */
+ cores_powered |= kbdev->shader_inuse_bitmap | kbdev->tiler_inuse_bitmap;
+
+ desired_l2_state = get_desired_cache_status(kbdev->l2_present_bitmap, cores_powered);
+ desired_l3_state = get_desired_cache_status(kbdev->l3_present_bitmap, desired_l2_state);
+
+ in_desired_state &= kbase_pm_transition_core_type(kbdev, KBASE_PM_CORE_L3, desired_l3_state, 0, NULL);
+ in_desired_state &= kbase_pm_transition_core_type(kbdev, KBASE_PM_CORE_L2, desired_l2_state, 0, NULL);
+
+ if (in_desired_state)
+ {
+ in_desired_state &= kbase_pm_transition_core_type(kbdev, KBASE_PM_CORE_TILER,
+ kbdev->pm.desired_tiler_state, kbdev->tiler_inuse_bitmap,
+ &tiler_available_bitmap);
+ in_desired_state &= kbase_pm_transition_core_type(kbdev, KBASE_PM_CORE_SHADER,
+ kbdev->pm.desired_shader_state, kbdev->shader_inuse_bitmap,
+ &shader_available_bitmap);
+
+ /* If we reached the desired state, or we powered off a core, then update the available core bitmaps.
+ * This is because:
+ * - powering down happens immediately, so we must make the cores unavailable immediately
+ * - powering up may not bring all cores up together at once, so we must wait until we
+ * reach the desired state before making the cores available */
+ if ( in_desired_state )
+ {
+ if ( kbdev->shader_available_bitmap != shader_available_bitmap )
+ {
+ KBASE_TRACE_ADD( kbdev, PM_CORES_CHANGE_AVAILABLE, NULL, NULL, 0u, (u32)shader_available_bitmap );
+ }
+ kbdev->shader_available_bitmap = shader_available_bitmap;
+ kbdev->tiler_available_bitmap = tiler_available_bitmap;
+ }
+ else
+ {
+ /* Calculate the cores that were previously available and are still available now (i.e.
+ * take account of cores that powered down, but ignore those that powered up) */
+ u64 remaining_shader_available = kbdev->shader_available_bitmap & shader_available_bitmap;
+ u64 remaining_tiler_available = kbdev->tiler_available_bitmap & tiler_available_bitmap;
+ if ( kbdev->shader_available_bitmap != remaining_shader_available )
+ {
+ KBASE_TRACE_ADD( kbdev, PM_CORES_CHANGE_AVAILABLE, NULL, NULL, 0u, (u32)remaining_shader_available );
+ }
+ kbdev->shader_available_bitmap = remaining_shader_available;
+ kbdev->tiler_available_bitmap = remaining_tiler_available;
+ }
+ }
+
+ if (in_desired_state)
+ {
+#if MALI_GATOR_SUPPORT
+ kbase_trace_mali_pm_status(KBASE_PM_CORE_L3, kbase_pm_get_ready_cores(kbdev, KBASE_PM_CORE_L3));
+ kbase_trace_mali_pm_status(KBASE_PM_CORE_L2, kbase_pm_get_ready_cores(kbdev, KBASE_PM_CORE_L2));
+ kbase_trace_mali_pm_status(KBASE_PM_CORE_SHADER, kbase_pm_get_ready_cores(kbdev, KBASE_PM_CORE_SHADER));
+ kbase_trace_mali_pm_status(KBASE_PM_CORE_TILER, kbase_pm_get_ready_cores(kbdev, KBASE_PM_CORE_TILER));
+#endif
+ kbase_pm_send_event(kbdev, KBASE_PM_EVENT_GPU_STATE_CHANGED);
+ }
+
+ osk_spinlock_irq_unlock(&kbdev->pm.power_change_lock);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_check_transitions)
+
+void MOCKABLE(kbase_pm_enable_interrupts)(kbase_device *kbdev)
+{
+ OSK_ASSERT( NULL != kbdev );
+
+ /*
+ * Clear all interrupts,
+ * and unmask them all.
+ */
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_IRQ_CLEAR), GPU_IRQ_REG_ALL, NULL);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_IRQ_MASK), GPU_IRQ_REG_ALL, NULL);
+
+ kbase_reg_write(kbdev, JOB_CONTROL_REG(JOB_IRQ_CLEAR), 0xFFFFFFFF, NULL);
+ kbase_reg_write(kbdev, JOB_CONTROL_REG(JOB_IRQ_MASK), 0xFFFFFFFF, NULL);
+
+ kbase_reg_write(kbdev, MMU_REG(MMU_IRQ_CLEAR), 0xFFFFFFFF, NULL);
+ kbase_reg_write(kbdev, MMU_REG(MMU_IRQ_MASK), 0xFFFFFFFF, NULL);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_enable_interrupts)
+
+void MOCKABLE(kbase_pm_disable_interrupts)(kbase_device *kbdev)
+{
+ OSK_ASSERT( NULL != kbdev );
+
+ /*
+ * Mask all interrupts,
+ * and clear them all.
+ */
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_IRQ_MASK), 0, NULL);
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_IRQ_CLEAR), GPU_IRQ_REG_ALL, NULL);
+
+ kbase_reg_write(kbdev, JOB_CONTROL_REG(JOB_IRQ_MASK), 0, NULL);
+ kbase_reg_write(kbdev, JOB_CONTROL_REG(JOB_IRQ_CLEAR), 0xFFFFFFFF, NULL);
+
+ kbase_reg_write(kbdev, MMU_REG(MMU_IRQ_MASK), 0, NULL);
+ kbase_reg_write(kbdev, MMU_REG(MMU_IRQ_CLEAR), 0xFFFFFFFF, NULL);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_disable_interrupts)
+
+/*
+ * pmu layout:
+ * 0x0000: PMU TAG (RO) (0xCAFECAFE)
+ * 0x0004: PMU VERSION ID (RO) (0x00000000)
+ * 0x0008: CLOCK ENABLE (RW) (31:1 SBZ, 0 CLOCK STATE)
+ */
+void MOCKABLE(kbase_pm_clock_on)(kbase_device *kbdev)
+{
+ OSK_ASSERT( NULL != kbdev );
+
+ if (kbdev->pm.gpu_powered)
+ {
+ /* Already turned on */
+ return;
+ }
+
+ KBASE_TRACE_ADD( kbdev, PM_GPU_ON, NULL, NULL, 0u, 0u );
+
+ /* The GPU is going to transition, so unset the wait queues until the policy
+ * informs us that the transition is complete */
+ osk_waitq_clear(&kbdev->pm.power_up_waitqueue);
+ osk_waitq_clear(&kbdev->pm.power_down_waitqueue);
+
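+ /* Turn the clock on in the model PMU via its CLOCK ENABLE register (offset 0x0008 in the pmu layout above,
+ * assuming the model PMU block is mapped at 0x4000) */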
+ if (kbase_device_has_feature(kbdev, KBASE_FEATURE_HAS_MODEL_PMU))
+ kbase_os_reg_write(kbdev, 0x4008, 1);
+
+ if (kbdev->pm.callback_power_on && kbdev->pm.callback_power_on(kbdev))
+ {
+ /* GPU state was lost, reset GPU to ensure it is in a consistent state */
+ kbase_pm_init_hw(kbdev);
+ }
+
+ osk_spinlock_irq_lock(&kbdev->pm.gpu_powered_lock);
+ kbdev->pm.gpu_powered = MALI_TRUE;
+ osk_spinlock_irq_unlock(&kbdev->pm.gpu_powered_lock);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_clock_on)
+
+void MOCKABLE(kbase_pm_clock_off)(kbase_device *kbdev)
+{
+ OSK_ASSERT( NULL != kbdev );
+
+ if (!kbdev->pm.gpu_powered)
+ {
+ /* Already turned off */
+ return;
+ }
+
+ KBASE_TRACE_ADD( kbdev, PM_GPU_OFF, NULL, NULL, 0u, 0u );
+
+ /* Ensure that any IRQ handlers have finished */
+ kbase_synchronize_irqs(kbdev);
+
+ /* The GPU power may be turned off from this point */
+ osk_spinlock_irq_lock(&kbdev->pm.gpu_powered_lock);
+ kbdev->pm.gpu_powered = MALI_FALSE;
+ osk_spinlock_irq_unlock(&kbdev->pm.gpu_powered_lock);
+
+ if (kbdev->pm.callback_power_off)
+ {
+ kbdev->pm.callback_power_off(kbdev);
+ }
+
+ if (kbase_device_has_feature(kbdev, KBASE_FEATURE_HAS_MODEL_PMU))
+ kbase_os_reg_write(kbdev, 0x4008, 0);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_clock_off)
+
+struct kbasep_reset_timeout_data
+{
+ mali_bool timed_out;
+ kbase_device *kbdev;
+};
+
+static void kbasep_reset_timeout(void *data)
+{
+ struct kbasep_reset_timeout_data *rtdata = (struct kbasep_reset_timeout_data*)data;
+
+ rtdata->timed_out = MALI_TRUE;
+
+ /* Set the wait queue to wake up kbase_pm_init_hw even though the reset hasn't completed */
+ kbase_pm_reset_done(rtdata->kbdev);
+}
+
+static void kbase_pm_hw_issues(kbase_device *kbdev)
+{
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8443))
+ {
+ /* Needed due to MIDBASE-1494: LS_PAUSEBUFFER_DISABLE. See PRLAM-8443. */
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(SHADER_CONFIG), 0x00010000, NULL);
+ }
+}
+
+mali_error kbase_pm_init_hw(kbase_device *kbdev)
+{
+ osk_timer timer;
+ struct kbasep_reset_timeout_data rtdata;
+ osk_error osk_err;
+
+ OSK_ASSERT( NULL != kbdev );
+
+ /* Ensure the clock is on before attempting to access the hardware */
+ if (!kbdev->pm.gpu_powered)
+ {
+ /* The GPU is going to transition, so unset the wait queues until the policy
+ * informs us that the transition is complete */
+ osk_waitq_clear(&kbdev->pm.power_up_waitqueue);
+ osk_waitq_clear(&kbdev->pm.power_down_waitqueue);
+
+ if (kbase_device_has_feature(kbdev, KBASE_FEATURE_HAS_MODEL_PMU))
+ kbase_os_reg_write(kbdev, 0x4008, 1);
+
+ if (kbdev->pm.callback_power_on)
+ kbdev->pm.callback_power_on(kbdev);
+
+ osk_spinlock_irq_lock(&kbdev->pm.gpu_powered_lock);
+ kbdev->pm.gpu_powered = MALI_TRUE;
+ osk_spinlock_irq_unlock(&kbdev->pm.gpu_powered_lock);
+ }
+
+ /* Ensure interrupts are off to begin with, this also clears any outstanding interrupts */
+ kbase_pm_disable_interrupts(kbdev);
+
+ /* Soft reset the GPU */
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_COMMAND), 1, NULL);
+
+ /* Unmask the reset complete interrupt only */
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_IRQ_MASK), (1<<8), NULL);
+
+ /* If the GPU never asserts the reset interrupt we just assume that the reset has completed */
+ if (kbase_device_has_feature(kbdev, KBASE_FEATURE_LACKS_RESET_INT))
+ {
+ goto out;
+ }
+
+ /* Initialize a structure for tracking the status of the reset */
+ rtdata.kbdev = kbdev;
+ rtdata.timed_out = MALI_FALSE;
+
+ /* Create a timer to use as a timeout on the reset */
+ osk_err = osk_timer_on_stack_init(&timer);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ osk_timer_callback_set(&timer, kbasep_reset_timeout, &rtdata);
+ osk_err = osk_timer_start(&timer, RESET_TIMEOUT);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ osk_timer_on_stack_term(&timer);
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+ /* Wait for the RESET_COMPLETED interrupt to be raised,
+ * we use the "power up" waitqueue since it isn't in use yet */
+ osk_waitq_wait(&kbdev->pm.power_up_waitqueue);
+
+ if (rtdata.timed_out == MALI_FALSE)
+ {
+ /* GPU has been reset */
+ osk_timer_stop(&timer);
+ osk_timer_on_stack_term(&timer);
+
+ goto out;
+ }
+
+ /* No interrupt has been received - check if the RAWSTAT register says the reset has completed */
+ if (kbase_reg_read(kbdev, GPU_CONTROL_REG(GPU_IRQ_RAWSTAT), NULL) & (1<<8))
+ {
+ /* The interrupt is set in the RAWSTAT; this suggests that the interrupts are not getting to the CPU */
+ OSK_PRINT_WARN(OSK_BASE_PM, "Reset interrupt didn't reach CPU. Check interrupt assignments.\n");
+ /* If interrupts aren't working we can't continue. */
+ osk_timer_on_stack_term(&timer);
+ goto out;
+ }
+
+ /* The GPU doesn't seem to be responding to the reset so try a hard reset */
+ OSK_PRINT_WARN(OSK_BASE_PM, "Failed to soft reset GPU, attempting a hard reset\n");
+ kbase_reg_write(kbdev, GPU_CONTROL_REG(GPU_COMMAND), 2, NULL);
+
+ /* Restart the timer to wait for the hard reset to complete */
+ rtdata.timed_out = MALI_FALSE;
+ osk_err = osk_timer_start(&timer, RESET_TIMEOUT);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ osk_timer_on_stack_term(&timer);
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ /* Wait for the RESET_COMPLETED interrupt to be raised,
+ * we use the "power up" waitqueue since it isn't in use yet */
+ osk_waitq_wait(&kbdev->pm.power_up_waitqueue);
+
+ if (rtdata.timed_out == MALI_FALSE)
+ {
+ /* GPU has been reset */
+ osk_timer_stop(&timer);
+ osk_timer_on_stack_term(&timer);
+
+ goto out;
+ }
+
+ osk_timer_on_stack_term(&timer);
+
+ OSK_PRINT_ERROR(OSK_BASE_PM, "Failed to reset the GPU\n");
+
+ /* The GPU still hasn't reset, give up */
+ return MALI_ERROR_FUNCTION_FAILED;
+
+out:
+ /* If the cycle counter was in use, re-enable it */
+ osk_spinlock_irq_lock( &kbdev->pm.gpu_cycle_counter_requests_lock );
+
+ if ( kbdev->pm.gpu_cycle_counter_requests )
+ {
+ kbase_reg_write( kbdev, GPU_CONTROL_REG(GPU_COMMAND), GPU_COMMAND_CYCLE_COUNT_START, NULL );
+ }
+
+ osk_spinlock_irq_unlock( &kbdev->pm.gpu_cycle_counter_requests_lock );
+
+ kbase_pm_hw_issues(kbdev);
+
+ return MALI_ERROR_NONE;
+}
+KBASE_EXPORT_TEST_API(kbase_pm_init_hw)
+
+void kbase_pm_request_gpu_cycle_counter( kbase_device *kbdev )
+{
+ OSK_ASSERT( kbdev != NULL );
+
+ OSK_ASSERT( kbdev->pm.gpu_powered );
+
+ osk_spinlock_irq_lock( &kbdev->pm.gpu_cycle_counter_requests_lock );
+
+ OSK_ASSERT( kbdev->pm.gpu_cycle_counter_requests < INT_MAX );
+
+ ++kbdev->pm.gpu_cycle_counter_requests;
+
+ if ( 1 == kbdev->pm.gpu_cycle_counter_requests )
+ {
+ kbase_reg_write( kbdev, GPU_CONTROL_REG(GPU_COMMAND), GPU_COMMAND_CYCLE_COUNT_START, NULL );
+ }
+ osk_spinlock_irq_unlock( &kbdev->pm.gpu_cycle_counter_requests_lock );
+}
+KBASE_EXPORT_TEST_API(kbase_pm_request_gpu_cycle_counter)
+
+void kbase_pm_release_gpu_cycle_counter( kbase_device *kbdev )
+{
+ OSK_ASSERT( kbdev != NULL );
+
+ osk_spinlock_irq_lock( &kbdev->pm.gpu_cycle_counter_requests_lock );
+
+ OSK_ASSERT( kbdev->pm.gpu_cycle_counter_requests > 0 );
+
+ --kbdev->pm.gpu_cycle_counter_requests;
+
+ if ( 0 == kbdev->pm.gpu_cycle_counter_requests )
+ {
+ kbase_reg_write( kbdev, GPU_CONTROL_REG(GPU_COMMAND), GPU_COMMAND_CYCLE_COUNT_STOP, NULL );
+ }
+ osk_spinlock_irq_unlock( &kbdev->pm.gpu_cycle_counter_requests_lock );
+}
+KBASE_EXPORT_TEST_API(kbase_pm_release_gpu_cycle_counter)
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_pm_metrics.c
+ * Metrics for power management
+ */
+
+#include <osk/mali_osk.h>
+
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_kbase_pm.h>
+
+/* When VSync is being hit aim for utilisation between 70-90% */
+#define KBASE_PM_VSYNC_MIN_UTILISATION 70
+#define KBASE_PM_VSYNC_MAX_UTILISATION 90
+/* Otherwise aim for 10-40% */
+#define KBASE_PM_NO_VSYNC_MIN_UTILISATION 10
+#define KBASE_PM_NO_VSYNC_MAX_UTILISATION 40
+
+/* How often (the timer period) DVFS clock frequency decisions should be made */
+#define KBASE_PM_DVFS_FREQUENCY 500
+
+static void dvfs_callback(void *data)
+{
+ kbase_device *kbdev;
+ kbase_pm_dvfs_action action;
+ osk_error ret;
+
+ OSK_ASSERT(data != NULL);
+
+ kbdev = (kbase_device*)data;
+ action = kbase_pm_get_dvfs_action(kbdev);
+
+ switch(action) {
+ case KBASE_PM_DVFS_NOP:
+ break;
+ case KBASE_PM_DVFS_CLOCK_UP:
+ /* Do whatever is required to increase the clock frequency */
+ break;
+ case KBASE_PM_DVFS_CLOCK_DOWN:
+ /* Do whatever is required to decrease the clock frequency */
+ break;
+ }
+
+ osk_spinlock_irq_lock(&kbdev->pm.metrics.lock);
+ if (kbdev->pm.metrics.timer_active)
+ {
+ ret = osk_timer_start(&kbdev->pm.metrics.timer, KBASE_PM_DVFS_FREQUENCY);
+ if (ret != OSK_ERR_NONE)
+ {
+ /* Handle the situation where the timer cannot be restarted */
+ }
+ }
+ osk_spinlock_irq_unlock(&kbdev->pm.metrics.lock);
+}
+
+mali_error kbasep_pm_metrics_init(kbase_device *kbdev)
+{
+ osk_error osk_err;
+ mali_error ret;
+
+ OSK_ASSERT(kbdev != NULL);
+
+ kbdev->pm.metrics.vsync_hit = 0;
+ kbdev->pm.metrics.utilisation = 0;
+
+ kbdev->pm.metrics.time_period_start = osk_time_now();
+ kbdev->pm.metrics.time_busy = 0;
+ kbdev->pm.metrics.time_idle = 0;
+ kbdev->pm.metrics.gpu_active = MALI_TRUE;
+ kbdev->pm.metrics.timer_active = MALI_TRUE;
+
+ osk_err = osk_spinlock_irq_init(&kbdev->pm.metrics.lock, OSK_LOCK_ORDER_PM_METRICS);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ ret = MALI_ERROR_FUNCTION_FAILED;
+ goto out;
+ }
+
+ osk_err = osk_timer_init(&kbdev->pm.metrics.timer);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ ret = MALI_ERROR_FUNCTION_FAILED;
+ goto spinlock_free;
+ }
+ osk_timer_callback_set(&kbdev->pm.metrics.timer, dvfs_callback, kbdev);
+ osk_err = osk_timer_start(&kbdev->pm.metrics.timer, KBASE_PM_DVFS_FREQUENCY);
+ if (OSK_ERR_NONE != osk_err)
+ {
+ ret = MALI_ERROR_FUNCTION_FAILED;
+ goto timer_free;
+ }
+
+ kbase_pm_register_vsync_callback(kbdev);
+ ret = MALI_ERROR_NONE;
+ goto out;
+
+timer_free:
+ osk_timer_stop(&kbdev->pm.metrics.timer);
+ osk_timer_term(&kbdev->pm.metrics.timer);
+spinlock_free:
+ osk_spinlock_irq_term(&kbdev->pm.metrics.lock);
+out:
+ return ret;
+}
+KBASE_EXPORT_TEST_API(kbasep_pm_metrics_init)
+
+void kbasep_pm_metrics_term(kbase_device *kbdev)
+{
+ OSK_ASSERT(kbdev != NULL);
+
+ osk_spinlock_irq_lock(&kbdev->pm.metrics.lock);
+ kbdev->pm.metrics.timer_active = MALI_FALSE;
+ osk_spinlock_irq_unlock(&kbdev->pm.metrics.lock);
+
+ osk_timer_stop(&kbdev->pm.metrics.timer);
+ osk_timer_term(&kbdev->pm.metrics.timer);
+
+ kbase_pm_unregister_vsync_callback(kbdev);
+
+ osk_spinlock_irq_term(&kbdev->pm.metrics.lock);
+}
+KBASE_EXPORT_TEST_API(kbasep_pm_metrics_term)
+
+void kbasep_pm_record_gpu_idle(kbase_device *kbdev)
+{
+ osk_ticks now = osk_time_now();
+
+ OSK_ASSERT(kbdev != NULL);
+
+ osk_spinlock_irq_lock(&kbdev->pm.metrics.lock);
+
+ OSK_ASSERT(kbdev->pm.metrics.gpu_active == MALI_TRUE);
+
+ kbdev->pm.metrics.gpu_active = MALI_FALSE;
+
+ kbdev->pm.metrics.time_busy += osk_time_elapsed(kbdev->pm.metrics.time_period_start, now);
+ kbdev->pm.metrics.time_period_start = now;
+
+ osk_spinlock_irq_unlock(&kbdev->pm.metrics.lock);
+}
+KBASE_EXPORT_TEST_API(kbasep_pm_record_gpu_idle)
+
+void kbasep_pm_record_gpu_active(kbase_device *kbdev)
+{
+ osk_ticks now = osk_time_now();
+
+ OSK_ASSERT(kbdev != NULL);
+
+ osk_spinlock_irq_lock(&kbdev->pm.metrics.lock);
+
+ OSK_ASSERT(kbdev->pm.metrics.gpu_active == MALI_FALSE);
+
+ kbdev->pm.metrics.gpu_active = MALI_TRUE;
+
+ kbdev->pm.metrics.time_idle += osk_time_elapsed(kbdev->pm.metrics.time_period_start, now);
+ kbdev->pm.metrics.time_period_start = now;
+
+ osk_spinlock_irq_unlock(&kbdev->pm.metrics.lock);
+}
+KBASE_EXPORT_TEST_API(kbasep_pm_record_gpu_active)
+
+void kbase_pm_report_vsync(kbase_device *kbdev, int buffer_updated)
+{
+ OSK_ASSERT(kbdev != NULL);
+
+ osk_spinlock_irq_lock(&kbdev->pm.metrics.lock);
+ kbdev->pm.metrics.vsync_hit = buffer_updated;
+ osk_spinlock_irq_unlock(&kbdev->pm.metrics.lock);
+}
+KBASE_EXPORT_TEST_API(kbase_pm_report_vsync)
+
+kbase_pm_dvfs_action kbase_pm_get_dvfs_action(kbase_device *kbdev)
+{
+ int utilisation;
+ kbase_pm_dvfs_action action;
+ osk_ticks now = osk_time_now();
+
+ OSK_ASSERT(kbdev != NULL);
+
+ osk_spinlock_irq_lock(&kbdev->pm.metrics.lock);
+
+ if (kbdev->pm.metrics.gpu_active)
+ {
+ kbdev->pm.metrics.time_busy += osk_time_elapsed(kbdev->pm.metrics.time_period_start, now);
+ kbdev->pm.metrics.time_period_start = now;
+ }
+ else
+ {
+ kbdev->pm.metrics.time_idle += osk_time_elapsed(kbdev->pm.metrics.time_period_start, now);
+ kbdev->pm.metrics.time_period_start = now;
+ }
+
+ if (kbdev->pm.metrics.time_idle + kbdev->pm.metrics.time_busy == 0)
+ {
+ /* No data - so we return NOP */
+ action = KBASE_PM_DVFS_NOP;
+ goto out;
+ }
+
+ utilisation = (100*kbdev->pm.metrics.time_busy) / (kbdev->pm.metrics.time_idle + kbdev->pm.metrics.time_busy);
+
+ if (kbdev->pm.metrics.vsync_hit)
+ {
+ /* VSync is being met */
+ if (utilisation < KBASE_PM_VSYNC_MIN_UTILISATION)
+ {
+ action = KBASE_PM_DVFS_CLOCK_DOWN;
+ }
+ else if (utilisation > KBASE_PM_VSYNC_MAX_UTILISATION)
+ {
+ action = KBASE_PM_DVFS_CLOCK_UP;
+ }
+ else
+ {
+ action = KBASE_PM_DVFS_NOP;
+ }
+ }
+ else
+ {
+ /* VSync is being missed */
+ if (utilisation < KBASE_PM_NO_VSYNC_MIN_UTILISATION)
+ {
+ action = KBASE_PM_DVFS_CLOCK_DOWN;
+ }
+ else if (utilisation > KBASE_PM_NO_VSYNC_MAX_UTILISATION)
+ {
+ action = KBASE_PM_DVFS_CLOCK_UP;
+ }
+ else
+ {
+ action = KBASE_PM_DVFS_NOP;
+ }
+ }
+
+out:
+
+ kbdev->pm.metrics.time_idle = 0;
+ kbdev->pm.metrics.time_busy = 0;
+
+ osk_spinlock_irq_unlock(&kbdev->pm.metrics.lock);
+
+ return action;
+}
+KBASE_EXPORT_TEST_API(kbase_pm_get_dvfs_action)
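+
+/*
+ * Worked example (illustrative): with time_busy == 400 and
+ * time_idle == 100, utilisation == (100*400)/(400+100) == 80. If
+ * vsync_hit is set, 80 lies inside the 70-90% window and the action is
+ * KBASE_PM_DVFS_NOP; if vsync is being missed, 80 exceeds the 40%
+ * ceiling and the action is KBASE_PM_DVFS_CLOCK_UP.
+ */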
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_pm_metrics_dummy.c
+ * Dummy Metrics for power management.
+ */
+
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_kbase_pm.h>
+
+void kbase_pm_register_vsync_callback(kbase_device *kbdev)
+{
+ OSK_ASSERT(kbdev != NULL);
+
+ /* no VSync metrics will be available */
+ kbdev->pm.metrics.platform_data = NULL;
+}
+
+void kbase_pm_unregister_vsync_callback(kbase_device *kbdev)
+{
+ OSK_ASSERT(kbdev != NULL);
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_security.c
+ * Base kernel security capability API
+ */
+
+#include <osk/mali_osk.h>
+#include <kbase/src/common/mali_kbase.h>
+
+/**
+ * kbase_security_has_capability - see mali_kbase_caps.h for description.
+ */
+
+mali_bool kbase_security_has_capability(kbase_context *kctx, kbase_security_capability cap, u32 flags)
+{
+ /* Assume failure */
+ mali_bool access_allowed = MALI_FALSE;
+ mali_bool audit = (KBASE_SEC_FLAG_AUDIT & flags)? MALI_TRUE : MALI_FALSE;
+
+ OSK_ASSERT(NULL != kctx);
+ CSTD_UNUSED(kctx);
+
+ /* Detect unsupported flags */
+ OSK_ASSERT(((~KBASE_SEC_FLAG_MASK) & flags) == 0);
+
+ /* Determine if access is allowed for the given cap */
+ switch(cap)
+ {
+ case KBASE_SEC_MODIFY_PRIORITY:
+#if KBASE_HWCNT_DUMP_BYPASS_ROOT
+ access_allowed = MALI_TRUE;
+#else
+ if (osk_is_privileged() == MALI_TRUE)
+ {
+ access_allowed = MALI_TRUE;
+ }
+#endif
+ break;
+ case KBASE_SEC_INSTR_HW_COUNTERS_COLLECT:
+ /* Access is granted only if the caller is privileged */
+#if KBASE_HWCNT_DUMP_BYPASS_ROOT
+ access_allowed = MALI_TRUE;
+#else
+ if (osk_is_privileged() == MALI_TRUE)
+ {
+ access_allowed = MALI_TRUE;
+ }
+#endif
+ break;
+ }
+
+ /* Report problem if requested */
+ if(MALI_FALSE == access_allowed)
+ {
+ if(MALI_FALSE != audit)
+ {
+ OSK_PRINT_WARN(OSK_BASE_CORE, "Security capability failure: %d, %p", cap, (void *)kctx);
+ }
+ }
+
+ return access_allowed;
+}
+KBASE_EXPORT_TEST_API(kbase_security_has_capability)
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_security.h
+ * Base kernel security capability APIs
+ */
+
+#ifndef _KBASE_SECURITY_H_
+#define _KBASE_SECURITY_H_
+
+/* Security flags */
+#define KBASE_SEC_FLAG_NOAUDIT (0u << 0) /* Silently handle privilege failure */
+#define KBASE_SEC_FLAG_AUDIT (1u << 0) /* Write audit message on privilege failure */
+#define KBASE_SEC_FLAG_MASK (KBASE_SEC_FLAG_AUDIT) /* Mask of all valid flag bits */
+
+/* List of unique capabilities that have security access privileges */
+typedef enum {
+ /* Instrumentation Counters access privilege */
+ KBASE_SEC_INSTR_HW_COUNTERS_COLLECT = 1,
+ KBASE_SEC_MODIFY_PRIORITY
+ /* Add additional access privileges here */
+} kbase_security_capability;
+
+
+/**
+ * kbase_security_has_capability - determine whether a task has a particular effective capability
+ * @param[in] kctx The task context.
+ * @param[in] cap The capability to check for.
+ * @param[in] flags Additional configuration information
+ * Such as whether to write an audit message or not.
+ * @return MALI_TRUE if success (capability is allowed), MALI_FALSE otherwise.
+ */
+
+mali_bool kbase_security_has_capability(kbase_context *kctx, kbase_security_capability cap, u32 flags);
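+
+/*
+ * Example (illustrative, not from a real caller): check for permission
+ * to collect hardware counters, writing an audit message on failure:
+ *
+ *   if (kbase_security_has_capability(kctx,
+ *           KBASE_SEC_INSTR_HW_COUNTERS_COLLECT,
+ *           KBASE_SEC_FLAG_AUDIT) != MALI_TRUE)
+ *   {
+ *       return MALI_ERROR_FUNCTION_FAILED;
+ *   }
+ */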
+
+#endif /* _KBASE_SECURITY_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#include <kbase/src/common/mali_kbase.h>
+
+/**
+ * @file mali_kbase_softjobs.c
+ *
+ * This file implements the logic behind software only jobs that are
+ * executed within the driver rather than being handed over to the GPU.
+ */
+
+static base_jd_event_code kbase_dump_cpu_gpu_time(kbase_context *kctx, kbase_jd_atom *katom)
+{
+ kbase_va_region *reg;
+ osk_phy_addr addr;
+ u64 pfn;
+ u32 offset;
+ char *page;
+ osk_timeval tv;
+ base_dump_cpu_gpu_counters data;
+ u64 system_time;
+ u64 cycle_counter;
+ mali_addr64 jc = katom->jc;
+
+ u32 hi1, hi2;
+
+ OSK_MEMSET(&data, 0, sizeof(data));
+
+ /* GPU needs to be powered to read the cycle counters, the jctx->lock protects this check */
+ if (!katom->bag->has_pm_ctx_reference)
+ {
+ kbase_pm_context_active(kctx->kbdev);
+ katom->bag->has_pm_ctx_reference = MALI_TRUE;
+ }
+
+ /* Read hi, lo, hi to ensure that overflow from lo to hi is handled correctly */
+ do {
+ hi1 = kbase_reg_read(kctx->kbdev, GPU_CONTROL_REG(CYCLE_COUNT_HI), NULL);
+ cycle_counter = kbase_reg_read(kctx->kbdev, GPU_CONTROL_REG(CYCLE_COUNT_LO), NULL);
+ hi2 = kbase_reg_read(kctx->kbdev, GPU_CONTROL_REG(CYCLE_COUNT_HI), NULL);
+ cycle_counter |= (((u64)hi1) << 32);
+ } while (hi1 != hi2);
+
+ /* Read hi, lo, hi to ensure that overflow from lo to hi is handled correctly */
+ do {
+ hi1 = kbase_reg_read(kctx->kbdev, GPU_CONTROL_REG(TIMESTAMP_HI), NULL);
+ system_time = kbase_reg_read(kctx->kbdev, GPU_CONTROL_REG(TIMESTAMP_LO), NULL);
+ hi2 = kbase_reg_read(kctx->kbdev, GPU_CONTROL_REG(TIMESTAMP_HI), NULL);
+ system_time |= (((u64)hi1) << 32);
+ } while (hi1 != hi2);
+
+ /* Record the CPU's idea of current time */
+ osk_gettimeofday(&tv);
+
+ data.sec = tv.tv_sec;
+ data.usec = tv.tv_usec;
+ data.system_time = system_time;
+ data.cycle_counter = cycle_counter;
+
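+ /* Split the 64-bit GPU address into a 4 kB page frame number and an offset within that page */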
+ pfn = jc >> 12;
+ offset = jc & 0xFFF;
+
+ if (offset > 0x1000-sizeof(data))
+ {
+ /* Wouldn't fit in the page */
+ return BASE_JD_EVENT_JOB_CANCELLED;
+ }
+
+ reg = kbase_region_lookup(kctx, jc);
+ if (!reg)
+ {
+ return BASE_JD_EVENT_JOB_CANCELLED;
+ }
+
+ if (! (reg->flags & KBASE_REG_GPU_WR) )
+ {
+ /* Region is not writable by GPU so we won't write to it either */
+ return BASE_JD_EVENT_JOB_CANCELLED;
+ }
+
+ if (!reg->phy_pages)
+ {
+ return BASE_JD_EVENT_JOB_CANCELLED;
+ }
+
+ addr = reg->phy_pages[pfn - reg->start_pfn];
+ if (!addr)
+ {
+ return BASE_JD_EVENT_JOB_CANCELLED;
+ }
+
+ page = osk_kmap(addr);
+ if (!page)
+ {
+ return BASE_JD_EVENT_JOB_CANCELLED;
+ }
+ memcpy(page+offset, &data, sizeof(data));
+ osk_kunmap(addr, page);
+
+ return BASE_JD_EVENT_DONE;
+}
+
+void kbase_process_soft_job( kbase_context *kctx, kbase_jd_atom *katom )
+{
+ switch(katom->core_req) {
+ case BASE_JD_REQ_SOFT_DUMP_CPU_GPU_TIME:
+ katom->event.event_code = kbase_dump_cpu_gpu_time( kctx, katom);
+ break;
+ }
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+
+/* ***** IMPORTANT: THIS IS NOT A NORMAL HEADER FILE *****
+ * ***** DO NOT INCLUDE DIRECTLY *****
+ * ***** THE LACK OF HEADER GUARDS IS INTENTIONAL ***** */
+
+/*
+ * The purpose of this header file is just to contain a list of trace code identifiers
+ *
+ * Each identifier is wrapped in a macro, so that its string form and enum form can be created
+ *
+ * Each macro is separated with a comma, to allow insertion into an array initializer or enum definition block.
+ *
+ * This allows automatic creation of an enum and a corresponding array of strings
+ *
+ * Before #including, the includer MUST #define KBASE_TRACE_CODE_MAKE_CODE.
+ * After #including, the includer MUST #undef KBASE_TRACE_CODE_MAKE_CODE.
+ *
+ * e.g.:
+ * #define KBASE_TRACE_CODE( X ) KBASE_TRACE_CODE_ ## X
+ * typedef enum
+ * {
+ * #define KBASE_TRACE_CODE_MAKE_CODE( X ) KBASE_TRACE_CODE( X )
+ * #include "mali_kbase_trace_defs.h"
+ * #undef KBASE_TRACE_CODE_MAKE_CODE
+ * } kbase_trace_code;
+ *
+ * IMPORTANT: THIS FILE MUST NOT BE USED FOR ANY PURPOSE OTHER THAN THE ABOVE
+ *
+ *
+ * The use of the macro here is:
+ * - KBASE_TRACE_CODE_MAKE_CODE( X )
+ *
+ * Which produces:
+ * - For an enum, KBASE_TRACE_CODE_X
+ * - For a string, "X"
+ *
+ *
+ * For example:
+ * - KBASE_TRACE_CODE_MAKE_CODE( JM_JOB_COMPLETE ) expands to:
+ * - KBASE_TRACE_CODE_JM_JOB_COMPLETE for the enum
+ * - "JM_JOB_COMPLETE" for the string
+ * - To use it to trace an event, do:
+ * - KBASE_TRACE_ADD( kbdev, JM_JOB_COMPLETE, subcode, kctx, uatom, val );
+ */
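+
+/*
+ * For reference, a matching string table can be generated the same way
+ * (a sketch; kbase_trace_code_string is a hypothetical name, and it
+ * relies on every code in this file being followed by a comma):
+ *
+ * static const char * const kbase_trace_code_string[] =
+ * {
+ * #define KBASE_TRACE_CODE_MAKE_CODE( X ) #X
+ * #include "mali_kbase_trace_defs.h"
+ * #undef KBASE_TRACE_CODE_MAKE_CODE
+ * };
+ */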
+
+/*
+ * Core events
+ */
+KBASE_TRACE_CODE_MAKE_CODE( CORE_CTX_DESTROY ), /* no info_val, no gpu_addr, no atom */
+KBASE_TRACE_CODE_MAKE_CODE( CORE_CTX_HWINSTR_TERM ), /* no info_val, no gpu_addr, no atom */
+
+/*
+ * Job Slot management events
+ */
+
+KBASE_TRACE_CODE_MAKE_CODE( JM_IRQ ), /* info_val==irq rawstat at start */
+KBASE_TRACE_CODE_MAKE_CODE( JM_IRQ_END ), /* info_val==jobs processed */
+/* In the following:
+ *
+ * - ctx is set if a corresponding job found (NULL otherwise, e.g. some soft-stop cases)
+ * - uatom==kernel-side mapped uatom address (for correlation with user-side)
+ */
+KBASE_TRACE_CODE_MAKE_CODE( JM_JOB_DONE ), /* info_val==exit code; gpu_addr==chain gpuaddr */
+KBASE_TRACE_CODE_MAKE_CODE( JM_SUBMIT ), /* gpu_addr==JSn_HEAD_NEXT written, info_val==lower 32 bits of affinity */
+
+/* gpu_addr is as follows:
+ * - If JSn_STATUS active after soft-stop, val==gpu addr written to JSn_HEAD on submit
+ * - otherwise gpu_addr==0 */
+KBASE_TRACE_CODE_MAKE_CODE( JM_SOFTSTOP ),
+KBASE_TRACE_CODE_MAKE_CODE( JM_HARDSTOP ), /* gpu_addr==JSn_HEAD read */
+
+KBASE_TRACE_CODE_MAKE_CODE( JM_UPDATE_HEAD ), /* gpu_addr==JSn_TAIL read */
+/* gpu_addr is as follows:
+ * - If JSn_STATUS active before soft-stop, val==JSn_HEAD
+ * - otherwise gpu_addr==0 */
+KBASE_TRACE_CODE_MAKE_CODE( JM_CHECK_HEAD ), /* gpu_addr==JSn_HEAD read */
+
+KBASE_TRACE_CODE_MAKE_CODE( JM_FLUSH_WORKQS ),
+KBASE_TRACE_CODE_MAKE_CODE( JM_FLUSH_WORKQS_DONE ),
+
+KBASE_TRACE_CODE_MAKE_CODE( JM_ZAP_NON_SCHEDULED ), /* info_val == is_scheduled */
+KBASE_TRACE_CODE_MAKE_CODE( JM_ZAP_SCHEDULED ), /* info_val == is_scheduled */
+KBASE_TRACE_CODE_MAKE_CODE( JM_ZAP_DONE ),
+
+KBASE_TRACE_CODE_MAKE_CODE( JM_SLOT_SOFT_OR_HARD_STOP ), /* info_val == nr jobs submitted */
+KBASE_TRACE_CODE_MAKE_CODE( JM_SLOT_EVICT ), /* gpu_addr==JSn_HEAD_NEXT last written */
+KBASE_TRACE_CODE_MAKE_CODE( JM_SUBMIT_AFTER_RESET ),
+
+
+/*
+ * Job dispatch events
+ */
+KBASE_TRACE_CODE_MAKE_CODE( JD_DONE ), /* gpu_addr==value to write into JSn_HEAD */
+KBASE_TRACE_CODE_MAKE_CODE( JD_DONE_WORKER ), /* gpu_addr==value to write into JSn_HEAD */
+KBASE_TRACE_CODE_MAKE_CODE( JD_DONE_WORKER_END ), /* gpu_addr==value to write into JSn_HEAD */
+KBASE_TRACE_CODE_MAKE_CODE( JD_DONE_TRY_RUN_NEXT_JOB ), /* gpu_addr==value to write into JSn_HEAD */
+
+KBASE_TRACE_CODE_MAKE_CODE( JD_ZAP_CONTEXT ), /* gpu_addr==0, info_val==0, uatom==0 */
+KBASE_TRACE_CODE_MAKE_CODE( JD_CANCEL ), /* gpu_addr==value to write into JSn_HEAD */
+KBASE_TRACE_CODE_MAKE_CODE( JD_CANCEL_WORKER ), /* gpu_addr==value to write into JSn_HEAD */
+
+/*
+ * Scheduler Core events
+ */
+
+KBASE_TRACE_CODE_MAKE_CODE( JS_RETAIN_CTX_NOLOCK ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_ADD_JOB ), /* gpu_addr==value to write into JSn_HEAD */
+KBASE_TRACE_CODE_MAKE_CODE( JS_REMOVE_JOB ), /* gpu_addr==last value written/would be written to JSn_HEAD */
+KBASE_TRACE_CODE_MAKE_CODE( JS_RETAIN_CTX ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_RELEASE_CTX ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_TRY_SCHEDULE_HEAD_CTX ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_JOB_DONE_TRY_RUN_NEXT_JOB ), /* gpu_addr==value to write into JSn_HEAD */
+KBASE_TRACE_CODE_MAKE_CODE( JS_JOB_DONE_RETRY_NEEDED ), /* gpu_addr==value to write into JSn_HEAD */
+KBASE_TRACE_CODE_MAKE_CODE( JS_FAST_START_EVICTS_CTX ), /* kctx is the one being evicted, info_val == kctx to put in */
+KBASE_TRACE_CODE_MAKE_CODE( JS_AFFINITY_SUBMIT_TO_BLOCKED ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_AFFINITY_CURRENT ), /* info_val == lower 32 bits of affinity */
+KBASE_TRACE_CODE_MAKE_CODE( JS_CORE_REF_REQUEST_CORES_FAILED ), /* info_val == lower 32 bits of affinity */
+KBASE_TRACE_CODE_MAKE_CODE( JS_CORE_REF_REGISTER_INUSE_FAILED ), /* info_val == lower 32 bits of affinity */
+KBASE_TRACE_CODE_MAKE_CODE( JS_CORE_REF_REQUEST_ON_RECHECK_FAILED ), /* info_val == lower 32 bits of rechecked affinity */
+KBASE_TRACE_CODE_MAKE_CODE( JS_CORE_REF_REGISTER_ON_RECHECK_FAILED ), /* info_val == lower 32 bits of rechecked affinity */
+KBASE_TRACE_CODE_MAKE_CODE( JS_CORE_REF_AFFINITY_WOULD_VIOLATE ), /* info_val == lower 32 bits of affinity */
+KBASE_TRACE_CODE_MAKE_CODE( JS_CTX_ATTR_NOW_ON_CTX ), /* info_val == the ctx attribute now on ctx */
+KBASE_TRACE_CODE_MAKE_CODE( JS_CTX_ATTR_NOW_ON_RUNPOOL ), /* info_val == the ctx attribute now on runpool */
+KBASE_TRACE_CODE_MAKE_CODE( JS_CTX_ATTR_NOW_OFF_CTX ), /* info_val == the ctx attribute now off ctx */
+KBASE_TRACE_CODE_MAKE_CODE( JS_CTX_ATTR_NOW_OFF_RUNPOOL ), /* info_val == the ctx attribute now off runpool */
+
+
+/*
+ * Scheduler Policy events
+ */
+
+KBASE_TRACE_CODE_MAKE_CODE( JS_POLICY_INIT_CTX ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_POLICY_TERM_CTX ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_POLICY_TRY_EVICT_CTX ), /* info_val == whether it was evicted */
+KBASE_TRACE_CODE_MAKE_CODE( JS_POLICY_KILL_ALL_CTX_JOBS ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_POLICY_ENQUEUE_CTX ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_POLICY_DEQUEUE_HEAD_CTX ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_POLICY_RUNPOOL_ADD_CTX ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_POLICY_RUNPOOL_REMOVE_CTX ),
+
+KBASE_TRACE_CODE_MAKE_CODE( JS_POLICY_DEQUEUE_JOB ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_POLICY_DEQUEUE_JOB_IRQ ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_POLICY_ENQUEUE_JOB ), /* gpu_addr==JSn_HEAD to write if the job were run */
+KBASE_TRACE_CODE_MAKE_CODE( JS_POLICY_TIMER_START ),
+KBASE_TRACE_CODE_MAKE_CODE( JS_POLICY_TIMER_END ),
+
+
+/*
+ * Power Management Events
+ */
+KBASE_TRACE_CODE_MAKE_CODE( PM_JOB_SUBMIT_AFTER_POWERING_UP ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_JOB_SUBMIT_AFTER_POWERED_UP ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_PWRON ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_PWROFF ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_CORES_POWERED ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_CORES_CHANGE_DESIRED_ON_POWERUP ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_CORES_CHANGE_DESIRED_ON_POWERDOWN ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_CORES_CHANGE_DESIRED ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_CORES_CHANGE_AVAILABLE ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_REGISTER_CHANGE_SHADER_INUSE ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_REGISTER_CHANGE_SHADER_NEEDED ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_RELEASE_CHANGE_SHADER_INUSE ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_UNREQUEST_CHANGE_SHADER_NEEDED ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_REQUEST_CHANGE_SHADER_NEEDED ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_CONTEXT_ACTIVE ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_CONTEXT_IDLE ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_GPU_ON ),
+KBASE_TRACE_CODE_MAKE_CODE( PM_GPU_OFF ),
+
+
+
+/* Unused code just to make it easier to not have a comma at the end.
+ * All other codes MUST come before this */
+KBASE_TRACE_CODE_MAKE_CODE( DUMMY )
+
+/* ***** THE LACK OF HEADER GUARDS IS INTENTIONAL ***** */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _KBASE_UKU_H_
+#define _KBASE_UKU_H_
+
+#include <uk/mali_uk.h>
+#if MALI_USE_UMP == 1
+#include <ump/ump_common.h>
+#endif /*MALI_USE_UMP == 1*/
+#include <malisw/mali_malisw.h>
+#include <kbase/mali_base_kernel.h>
+#if (MALI_ERROR_INJECT_ON || MALI_NO_MALI)
+#include <kbase/src/common/mali_kbase_model_dummy.h>
+#endif
+
+#include "mali_kbase_gpuprops_types.h"
+
+#define BASE_UK_VERSION_MAJOR 1
+#define BASE_UK_VERSION_MINOR 0
+
+/** 32/64-bit neutral way to represent pointers */
+typedef union kbase_pointer
+{
+ void * value; /**< client should store their pointers here */
+ u32 compat_value; /**< 64-bit kernels should fetch value here when handling 32-bit clients */
+ u64 sizer; /**< Force 64-bit storage for all clients regardless */
+} kbase_pointer;
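+
+/*
+ * Illustrative use of kbase_pointer (a sketch; is_compat_client is a
+ * hypothetical flag for this example): a 64-bit kernel handling a
+ * 32-bit client must read the low word only, as the client never wrote
+ * the full 64 bits:
+ *
+ *   void *p = is_compat_client ? (void *)(uintptr_t)ptr.compat_value
+ *                              : ptr.value;
+ */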
+
+typedef struct kbase_uk_tmem_alloc
+{
+ uk_header header;
+ /* IN */
+ u32 vsize;
+ u32 psize;
+ u32 extent;
+ u32 flags;
+ mali_bool is_growable;
+ /* OUT */
+ mali_addr64 gpu_addr;
+} kbase_uk_tmem_alloc;
+
+typedef struct kbase_uk_tmem_import
+{
+ uk_header header;
+ /* IN */
+ kbase_pointer phandle;
+ base_tmem_import_type type;
+ u32 padding;
+ /* OUT */
+ mali_addr64 gpu_addr;
+ u64 pages;
+} kbase_uk_tmem_import;
+
+typedef struct kbase_uk_pmem_alloc
+{
+ uk_header header;
+ /* IN */
+ u32 vsize;
+ u32 flags;
+ /* OUT */
+ u16 cookie;
+} kbase_uk_pmem_alloc;
+
+typedef struct kbase_uk_mem_free
+{
+ uk_header header;
+ /* IN */
+ mali_addr64 gpu_addr;
+ /* OUT */
+} kbase_uk_mem_free;
+
+typedef struct kbase_uk_job_submit
+{
+ uk_header header;
+ /* IN */
+ u64 bag_uaddr;
+ u64 core_restriction;
+ u32 offset;
+ u32 size;
+ u32 nr_atoms;
+ /* OUT */
+} kbase_uk_job_submit;
+
+typedef struct kbase_uk_post_term
+{
+ uk_header header;
+} kbase_uk_post_term;
+
+typedef struct kbase_uk_sync_now
+{
+ uk_header header;
+
+ /* IN */
+ base_syncset sset;
+
+ /* OUT */
+} kbase_uk_sync_now;
+
+typedef struct kbase_uk_hwcnt_setup
+{
+ uk_header header;
+
+ /* IN */
+ mali_addr64 dump_buffer;
+ u32 jm_bm;
+ u32 shader_bm;
+ u32 tiler_bm;
+ u32 l3_cache_bm;
+ u32 mmu_l2_bm;
+ /* OUT */
+} kbase_uk_hwcnt_setup;
+
+typedef struct kbase_uk_hwcnt_dump
+{
+ uk_header header;
+} kbase_uk_hwcnt_dump;
+
+typedef struct kbase_uk_hwcnt_clear
+{
+ uk_header header;
+} kbase_uk_hwcnt_clear;
+
+typedef struct kbase_uk_cpuprops
+{
+ uk_header header;
+
+ /* IN */
+ struct base_cpu_props props;
+ /* OUT */
+} kbase_uk_cpuprops;
+
+typedef struct kbase_uk_gpuprops
+{
+ uk_header header;
+
+ /* IN */
+ struct mali_base_gpu_props props;
+ /* OUT */
+} kbase_uk_gpuprops;
+
+typedef struct kbase_uk_tmem_resize
+{
+ uk_header header;
+ /* IN */
+ mali_addr64 gpu_addr;
+ s32 delta;
+ /* OUT */
+ u32 size;
+ base_backing_threshold_status result_subcode;
+} kbase_uk_tmem_resize;
+
+typedef struct kbase_uk_find_cpu_mapping
+{
+ uk_header header;
+ /* IN */
+ mali_addr64 gpu_addr;
+ u64 cpu_addr;
+ u64 size;
+ /* OUT */
+ u64 uaddr;
+ u32 nr_pages;
+ mali_size64 page_off;
+} kbase_uk_find_cpu_mapping;
+
+#define KBASE_GET_VERSION_BUFFER_SIZE 64
+typedef struct kbase_uk_get_ddk_version
+{
+ uk_header header;
+ /* OUT */
+ char version_buffer[KBASE_GET_VERSION_BUFFER_SIZE];
+ u32 version_string_size;
+} kbase_uk_get_ddk_version;
+
+typedef struct kbase_uk_set_flags
+{
+ uk_header header;
+ /* IN */
+ u32 create_flags;
+} kbase_uk_set_flags;
+
+#if MALI_UNIT_TEST
+#define TEST_ADDR_COUNT 4
+#define KBASE_TEST_BUFFER_SIZE 128
+typedef struct kbase_exported_test_data
+{
+ mali_addr64 test_addr[TEST_ADDR_COUNT]; /* memory address */
+ u32 test_addr_pages[TEST_ADDR_COUNT]; /* memory size in pages */
+ struct kbase_context *kctx; /* base context created by process */
+ void *mm; /* pointer to process address space */
+ u8 buffer1[KBASE_TEST_BUFFER_SIZE]; /* unit test defined parameter */
+ u8 buffer2[KBASE_TEST_BUFFER_SIZE]; /* unit test defined parameter */
+} kbase_exported_test_data;
+
+typedef struct kbase_uk_set_test_data
+{
+ uk_header header;
+ /* IN */
+ kbase_exported_test_data test_data;
+} kbase_uk_set_test_data;
+
+#endif /* MALI_UNIT_TEST */
+#if MALI_ERROR_INJECT_ON
+typedef struct kbase_uk_error_params
+{
+ uk_header header;
+ /* IN */
+ kbase_error_params params;
+} kbase_uk_error_params;
+#endif
+
+#if MALI_NO_MALI
+typedef struct kbase_uk_model_control_params
+{
+ uk_header header;
+ /* IN */
+ kbase_model_control_params params;
+} kbase_uk_model_control_params;
+#endif /* MALI_NO_MALI */
+
+
+typedef struct kbase_uk_ext_buff_kds_data
+{
+ uk_header header;
+ kbase_pointer external_resource;
+ int num_res;
+ kbase_pointer file_descriptor;
+} kbase_uk_ext_buff_kds_data;
+
+
+typedef enum kbase_uk_function_id
+{
+ KBASE_FUNC_TMEM_ALLOC = (UK_FUNC_ID + 0),
+ KBASE_FUNC_TMEM_IMPORT,
+ KBASE_FUNC_PMEM_ALLOC,
+ KBASE_FUNC_MEM_FREE,
+
+ KBASE_FUNC_JOB_SUBMIT,
+
+ KBASE_FUNC_SYNC,
+
+ KBASE_FUNC_POST_TERM,
+
+ KBASE_FUNC_HWCNT_SETUP,
+ KBASE_FUNC_HWCNT_DUMP,
+ KBASE_FUNC_HWCNT_CLEAR,
+
+ KBASE_FUNC_CPU_PROPS_REG_DUMP,
+ KBASE_FUNC_GPU_PROPS_REG_DUMP,
+
+ KBASE_FUNC_TMEM_RESIZE,
+
+ KBASE_FUNC_FIND_CPU_MAPPING,
+
+ KBASE_FUNC_GET_VERSION,
+ KBASE_FUNC_EXT_BUFFER_LOCK,
+ KBASE_FUNC_SET_FLAGS
+
+#if MALI_UNIT_TEST
+ , KBASE_FUNC_SET_TEST_DATA
+#endif /* MALI_UNIT_TEST */
+#if MALI_ERROR_INJECT_ON
+ , KBASE_FUNC_INJECT_ERROR
+#endif
+#if MALI_NO_MALI
+ , KBASE_FUNC_MODEL_CONTROL
+#endif /* MALI_NO_MALI */
+
+} kbase_uk_function_id;
+
+#endif /* _KBASE_UKU_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _MIDGARD_REGMAP_H_
+#define _MIDGARD_REGMAP_H_
+
+/*
+ * Begin Register Offsets
+ */
+
+#define GPU_CONTROL_BASE 0x0000
+#define GPU_CONTROL_REG(r) (GPU_CONTROL_BASE + (r))
+#define GPU_ID 0x000 /* (RO) GPU and revision identifier */
+#define L2_FEATURES 0x004 /* (RO) Level 2 cache features */
+#define L3_FEATURES 0x008 /* (RO) Level 3 cache features */
+#define TILER_FEATURES 0x00C /* (RO) Tiler Features */
+#define MEM_FEATURES 0x010 /* (RO) Memory system features */
+#define MMU_FEATURES 0x014 /* (RO) MMU features */
+#define AS_PRESENT 0x018 /* (RO) Address space slots present */
+#define JS_PRESENT 0x01C /* (RO) Job slots present */
+#define GPU_IRQ_RAWSTAT 0x020 /* (RW) */
+#define GPU_IRQ_CLEAR 0x024 /* (WO) */
+#define GPU_IRQ_MASK 0x028 /* (RW) */
+#define GPU_IRQ_STATUS 0x02C /* (RO) */
+
+/* IRQ flags */
+#define GPU_FAULT (1 << 0) /* A GPU Fault has occurred */
+#define MULTIPLE_GPU_FAULTS (1 << 7) /* More than one GPU Fault occurred. */
+#define RESET_COMPLETED (1 << 8) /* Set when a reset has completed. Intended for use with SOFT_RESET
+ commands, which may take time. */
+#define POWER_CHANGED_SINGLE (1 << 9) /* Set when a single core has finished powering up or down. */
+#define POWER_CHANGED_ALL (1 << 10) /* Set when all cores have finished powering up or down
+ and the power manager is idle. */
+
+#define PRFCNT_SAMPLE_COMPLETED (1 << 16) /* Set when a performance count sample has completed. */
+#define CLEAN_CACHES_COMPLETED (1 << 17) /* Set when a cache clean operation has completed. */
+
+#define GPU_IRQ_REG_ALL (GPU_FAULT | MULTIPLE_GPU_FAULTS | RESET_COMPLETED \
+ | POWER_CHANGED_ALL | PRFCNT_SAMPLE_COMPLETED \
+ | CLEAN_CACHES_COMPLETED)
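+
+/* Illustrative use (a sketch; the register-write helper is an assumption, not
+ * part of this patch): unmasking every GPU interrupt source in one store
+ * amounts to writing GPU_IRQ_REG_ALL to GPU_CONTROL_REG(GPU_IRQ_MASK). */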
+
+#define GPU_COMMAND 0x030 /* (WO) */
+#define GPU_STATUS 0x034 /* (RO) */
+
+#define GROUPS_L2_COHERENT (1 << 0) /* Core groups are L2 coherent */
+#define GROUPS_L3_COHERENT (1 << 1) /* Core groups are L3 coherent */
+
+#define GPU_FAULTSTATUS 0x03C /* (RO) GPU exception type and fault status */
+#define GPU_FAULTADDRESS_LO 0x040 /* (RO) GPU exception fault address, low word */
+#define GPU_FAULTADDRESS_HI 0x044 /* (RO) GPU exception fault address, high word */
+
+#define PWR_KEY 0x050 /* (WO) Power manager key register */
+#define PWR_OVERRIDE0 0x054 /* (RW) Power manager override settings */
+#define PWR_OVERRIDE1 0x058 /* (RW) Power manager override settings */
+
+#define PRFCNT_BASE_LO 0x060 /* (RW) Performance counter memory region base address, low word */
+#define PRFCNT_BASE_HI 0x064 /* (RW) Performance counter memory region base address, high word */
+#define PRFCNT_CONFIG 0x068 /* (RW) Performance counter configuration */
+#define PRFCNT_JM_EN 0x06C /* (RW) Performance counter enable flags for Job Manager */
+#define PRFCNT_SHADER_EN 0x070 /* (RW) Performance counter enable flags for shader cores */
+#define PRFCNT_TILER_EN 0x074 /* (RW) Performance counter enable flags for tiler */
+#define PRFCNT_L3_CACHE_EN 0x078 /* (RW) Performance counter enable flags for L3 cache */
+#define PRFCNT_MMU_L2_EN 0x07C /* (RW) Performance counter enable flags for MMU/L2 cache */
+
+#define CYCLE_COUNT_LO 0x090 /* (RO) Cycle counter, low word */
+#define CYCLE_COUNT_HI 0x094 /* (RO) Cycle counter, high word */
+#define TIMESTAMP_LO 0x098 /* (RO) Global time stamp counter, low word */
+#define TIMESTAMP_HI 0x09C /* (RO) Global time stamp counter, high word */
+
+#define TEXTURE_FEATURES_0 0x0B0 /* (RO) Support flags for indexed texture formats 0..31 */
+#define TEXTURE_FEATURES_1 0x0B4 /* (RO) Support flags for indexed texture formats 32..63 */
+#define TEXTURE_FEATURES_2 0x0B8 /* (RO) Support flags for indexed texture formats 64..95 */
+
+#define TEXTURE_FEATURES_REG(n) GPU_CONTROL_REG(TEXTURE_FEATURES_0 + ((n) << 2))
+
+#define JS0_FEATURES 0x0C0 /* (RO) Features of job slot 0 */
+#define JS1_FEATURES 0x0C4 /* (RO) Features of job slot 1 */
+#define JS2_FEATURES 0x0C8 /* (RO) Features of job slot 2 */
+#define JS3_FEATURES 0x0CC /* (RO) Features of job slot 3 */
+#define JS4_FEATURES 0x0D0 /* (RO) Features of job slot 4 */
+#define JS5_FEATURES 0x0D4 /* (RO) Features of job slot 5 */
+#define JS6_FEATURES 0x0D8 /* (RO) Features of job slot 6 */
+#define JS7_FEATURES 0x0DC /* (RO) Features of job slot 7 */
+#define JS8_FEATURES 0x0E0 /* (RO) Features of job slot 8 */
+#define JS9_FEATURES 0x0E4 /* (RO) Features of job slot 9 */
+#define JS10_FEATURES 0x0E8 /* (RO) Features of job slot 10 */
+#define JS11_FEATURES 0x0EC /* (RO) Features of job slot 11 */
+#define JS12_FEATURES 0x0F0 /* (RO) Features of job slot 12 */
+#define JS13_FEATURES 0x0F4 /* (RO) Features of job slot 13 */
+#define JS14_FEATURES 0x0F8 /* (RO) Features of job slot 14 */
+#define JS15_FEATURES 0x0FC /* (RO) Features of job slot 15 */
+
+#define JS_FEATURES_REG(n) GPU_CONTROL_REG(JS0_FEATURES + ((n) << 2))
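+
+/* Worked example: the per-slot stride is 4 bytes, so JS_FEATURES_REG(2)
+ * expands to GPU_CONTROL_REG(0x0C0 + (2 << 2)) == 0x0C8, i.e. JS2_FEATURES
+ * above. */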
+
+#define SHADER_PRESENT_LO 0x100 /* (RO) Shader core present bitmap, low word */
+#define SHADER_PRESENT_HI 0x104 /* (RO) Shader core present bitmap, high word */
+
+#define TILER_PRESENT_LO 0x110 /* (RO) Tiler core present bitmap, low word */
+#define TILER_PRESENT_HI 0x114 /* (RO) Tiler core present bitmap, high word */
+
+#define L2_PRESENT_LO 0x120 /* (RO) Level 2 cache present bitmap, low word */
+#define L2_PRESENT_HI 0x124 /* (RO) Level 2 cache present bitmap, high word */
+
+#define L3_PRESENT_LO 0x130 /* (RO) Level 3 cache present bitmap, low word */
+#define L3_PRESENT_HI 0x134 /* (RO) Level 3 cache present bitmap, high word */
+
+#define SHADER_READY_LO 0x140 /* (RO) Shader core ready bitmap, low word */
+#define SHADER_READY_HI 0x144 /* (RO) Shader core ready bitmap, high word */
+
+#define TILER_READY_LO 0x150 /* (RO) Tiler core ready bitmap, low word */
+#define TILER_READY_HI 0x154 /* (RO) Tiler core ready bitmap, high word */
+
+#define L2_READY_LO 0x160 /* (RO) Level 2 cache ready bitmap, low word */
+#define L2_READY_HI 0x164 /* (RO) Level 2 cache ready bitmap, high word */
+
+#define L3_READY_LO 0x170 /* (RO) Level 3 cache ready bitmap, low word */
+#define L3_READY_HI 0x174 /* (RO) Level 3 cache ready bitmap, high word */
+
+#define SHADER_PWRON_LO 0x180 /* (WO) Shader core power on bitmap, low word */
+#define SHADER_PWRON_HI 0x184 /* (WO) Shader core power on bitmap, high word */
+
+#define TILER_PWRON_LO 0x190 /* (WO) Tiler core power on bitmap, low word */
+#define TILER_PWRON_HI 0x194 /* (WO) Tiler core power on bitmap, high word */
+
+#define L2_PWRON_LO 0x1A0 /* (WO) Level 2 cache power on bitmap, low word */
+#define L2_PWRON_HI 0x1A4 /* (WO) Level 2 cache power on bitmap, high word */
+
+#define L3_PWRON_LO 0x1B0 /* (WO) Level 3 cache power on bitmap, low word */
+#define L3_PWRON_HI 0x1B4 /* (WO) Level 3 cache power on bitmap, high word */
+
+#define SHADER_PWROFF_LO 0x1C0 /* (WO) Shader core power off bitmap, low word */
+#define SHADER_PWROFF_HI 0x1C4 /* (WO) Shader core power off bitmap, high word */
+
+#define TILER_PWROFF_LO 0x1D0 /* (WO) Tiler core power off bitmap, low word */
+#define TILER_PWROFF_HI 0x1D4 /* (WO) Tiler core power off bitmap, high word */
+
+#define L2_PWROFF_LO 0x1E0 /* (WO) Level 2 cache power off bitmap, low word */
+#define L2_PWROFF_HI 0x1E4 /* (WO) Level 2 cache power off bitmap, high word */
+
+#define L3_PWROFF_LO 0x1F0 /* (WO) Level 3 cache power off bitmap, low word */
+#define L3_PWROFF_HI 0x1F4 /* (WO) Level 3 cache power off bitmap, high word */
+
+#define SHADER_PWRTRANS_LO 0x200 /* (RO) Shader core power transition bitmap, low word */
+#define SHADER_PWRTRANS_HI 0x204 /* (RO) Shader core power transition bitmap, high word */
+
+#define TILER_PWRTRANS_LO 0x210 /* (RO) Tiler core power transition bitmap, low word */
+#define TILER_PWRTRANS_HI 0x214 /* (RO) Tiler core power transition bitmap, high word */
+
+#define L2_PWRTRANS_LO 0x220 /* (RO) Level 2 cache power transition bitmap, low word */
+#define L2_PWRTRANS_HI 0x224 /* (RO) Level 2 cache power transition bitmap, high word */
+
+#define L3_PWRTRANS_LO 0x230 /* (RO) Level 3 cache power transition bitmap, low word */
+#define L3_PWRTRANS_HI 0x234 /* (RO) Level 3 cache power transition bitmap, high word */
+
+#define SHADER_PWRACTIVE_LO 0x240 /* (RO) Shader core active bitmap, low word */
+#define SHADER_PWRACTIVE_HI 0x244 /* (RO) Shader core active bitmap, high word */
+
+#define TILER_PWRACTIVE_LO 0x250 /* (RO) Tiler core active bitmap, low word */
+#define TILER_PWRACTIVE_HI 0x254 /* (RO) Tiler core active bitmap, high word */
+
+#define L2_PWRACTIVE_LO 0x260 /* (RO) Level 2 cache active bitmap, low word */
+#define L2_PWRACTIVE_HI 0x264 /* (RO) Level 2 cache active bitmap, high word */
+
+#define L3_PWRACTIVE_LO 0x270 /* (RO) Level 3 cache active bitmap, low word */
+#define L3_PWRACTIVE_HI 0x274 /* (RO) Level 3 cache active bitmap, high word */
+
+
+#define SHADER_CONFIG 0xF04 /* (RW) Shader core configuration settings (Mali-T60x additional register) */
+#define L2_MMU_CONFIG 0xF0C /* (RW) Configuration of the L2 cache and MMU (Mali-T60x additional register) */
+
+
+#define JOB_CONTROL_BASE 0x1000
+
+#define JOB_CONTROL_REG(r) (JOB_CONTROL_BASE + (r))
+
+#define JOB_IRQ_RAWSTAT 0x000 /* Raw interrupt status register */
+#define JOB_IRQ_CLEAR 0x004 /* Interrupt clear register */
+#define JOB_IRQ_MASK 0x008 /* Interrupt mask register */
+#define JOB_IRQ_STATUS 0x00C /* Interrupt status register */
+#define JOB_IRQ_JS_STATE 0x010 /* status==active and _next == busy snapshot from last JOB_IRQ_CLEAR */
+#define JOB_IRQ_THROTTLE 0x014 /* cycles to delay delivering an interrupt externally. The JOB_IRQ_STATUS is NOT affected by this, just the delivery of the interrupt. */
+
+#define JOB_SLOT0 0x800 /* Configuration registers for job slot 0 */
+#define JOB_SLOT1 0x880 /* Configuration registers for job slot 1 */
+#define JOB_SLOT2 0x900 /* Configuration registers for job slot 2 */
+#define JOB_SLOT3 0x980 /* Configuration registers for job slot 3 */
+#define JOB_SLOT4 0xA00 /* Configuration registers for job slot 4 */
+#define JOB_SLOT5 0xA80 /* Configuration registers for job slot 5 */
+#define JOB_SLOT6 0xB00 /* Configuration registers for job slot 6 */
+#define JOB_SLOT7 0xB80 /* Configuration registers for job slot 7 */
+#define JOB_SLOT8 0xC00 /* Configuration registers for job slot 8 */
+#define JOB_SLOT9 0xC80 /* Configuration registers for job slot 9 */
+#define JOB_SLOT10 0xD00 /* Configuration registers for job slot 10 */
+#define JOB_SLOT11 0xD80 /* Configuration registers for job slot 11 */
+#define JOB_SLOT12 0xE00 /* Configuration registers for job slot 12 */
+#define JOB_SLOT13 0xE80 /* Configuration registers for job slot 13 */
+#define JOB_SLOT14 0xF00 /* Configuration registers for job slot 14 */
+#define JOB_SLOT15 0xF80 /* Configuration registers for job slot 15 */
+
+#define JOB_SLOT_REG(n,r) (JOB_CONTROL_REG(JOB_SLOT0 + ((n) << 7)) + (r))
+
+#define JSn_HEAD_LO 0x00 /* (RO) Job queue head pointer for job slot n, low word */
+#define JSn_HEAD_HI 0x04 /* (RO) Job queue head pointer for job slot n, high word */
+#define JSn_TAIL_LO 0x08 /* (RO) Job queue tail pointer for job slot n, low word */
+#define JSn_TAIL_HI 0x0C /* (RO) Job queue tail pointer for job slot n, high word */
+#define JSn_AFFINITY_LO 0x10 /* (RO) Core affinity mask for job slot n, low word */
+#define JSn_AFFINITY_HI 0x14 /* (RO) Core affinity mask for job slot n, high word */
+#define JSn_CONFIG 0x18 /* (RO) Configuration settings for job slot n */
+
+#define JSn_COMMAND 0x20 /* (WO) Command register for job slot n */
+#define JSn_STATUS 0x24 /* (RO) Status register for job slot n */
+
+#define JSn_HEAD_NEXT_LO 0x40 /* (RW) Next job queue head pointer for job slot n, low word */
+#define JSn_HEAD_NEXT_HI 0x44 /* (RW) Next job queue head pointer for job slot n, high word */
+
+#define JSn_AFFINITY_NEXT_LO 0x50 /* (RW) Next core affinity mask for job slot n, low word */
+#define JSn_AFFINITY_NEXT_HI 0x54 /* (RW) Next core affinity mask for job slot n, high word */
+#define JSn_CONFIG_NEXT 0x58 /* (RW) Next configuration settings for job slot n */
+
+#define JSn_COMMAND_NEXT 0x60 /* (RW) Next command register for job slot n */
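+
+/* Worked example: each job slot occupies a 0x80-byte window ((n) << 7), so
+ * JOB_SLOT_REG(1, JSn_COMMAND) == 0x1000 + 0x880 + 0x20 == 0x18A0. */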
+
+
+#define MEMORY_MANAGEMENT_BASE 0x2000
+#define MMU_REG(r) (MEMORY_MANAGEMENT_BASE + (r))
+
+#define MMU_IRQ_RAWSTAT 0x000 /* (RW) Raw interrupt status register */
+#define MMU_IRQ_CLEAR 0x004 /* (WO) Interrupt clear register */
+#define MMU_IRQ_MASK 0x008 /* (RW) Interrupt mask register */
+#define MMU_IRQ_STATUS 0x00C /* (RO) Interrupt status register */
+
+#define MMU_AS0 0x400 /* Configuration registers for address space 0 */
+#define MMU_AS1 0x440 /* Configuration registers for address space 1 */
+#define MMU_AS2 0x480 /* Configuration registers for address space 2 */
+#define MMU_AS3 0x4C0 /* Configuration registers for address space 3 */
+#define MMU_AS4 0x500 /* Configuration registers for address space 4 */
+#define MMU_AS5 0x540 /* Configuration registers for address space 5 */
+#define MMU_AS6 0x580 /* Configuration registers for address space 6 */
+#define MMU_AS7 0x5C0 /* Configuration registers for address space 7 */
+#define MMU_AS8 0x600 /* Configuration registers for address space 8 */
+#define MMU_AS9 0x640 /* Configuration registers for address space 9 */
+#define MMU_AS10 0x680 /* Configuration registers for address space 10 */
+#define MMU_AS11 0x6C0 /* Configuration registers for address space 11 */
+#define MMU_AS12 0x700 /* Configuration registers for address space 12 */
+#define MMU_AS13 0x740 /* Configuration registers for address space 13 */
+#define MMU_AS14 0x780 /* Configuration registers for address space 14 */
+#define MMU_AS15 0x7C0 /* Configuration registers for address space 15 */
+
+#define MMU_AS_REG(n,r) (MMU_REG(MMU_AS0 + ((n) << 6)) + (r))
+
+#define ASn_TRANSTAB_LO 0x00 /* (RW) Translation Table Base Address for address space n, low word */
+#define ASn_TRANSTAB_HI 0x04 /* (RW) Translation Table Base Address for address space n, high word */
+#define ASn_MEMATTR_LO 0x08 /* (RW) Memory attributes for address space n, low word. */
+#define ASn_MEMATTR_HI 0x0C /* (RW) Memory attributes for address space n, high word. */
+#define ASn_LOCKADDR_LO 0x10 /* (RW) Lock region address for address space n, low word */
+#define ASn_LOCKADDR_HI 0x14 /* (RW) Lock region address for address space n, high word */
+#define ASn_COMMAND 0x18 /* (WO) MMU command register for address space n */
+#define ASn_FAULTSTATUS 0x1C /* (RO) MMU fault status register for address space n */
+#define ASn_FAULTADDRESS_LO 0x20 /* (RO) Fault Address for address space n, low word */
+#define ASn_FAULTADDRESS_HI 0x24 /* (RO) Fault Address for address space n, high word */
+#define ASn_STATUS 0x28 /* (RO) Status flags for address space n */
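+
+/* Worked example: each address space occupies a 0x40-byte window ((n) << 6),
+ * so MMU_AS_REG(3, ASn_COMMAND) == 0x2000 + 0x4C0 + 0x18 == 0x24D8, the
+ * command register of address space 3 (MMU_AS3 above). */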
+
+/* End Register Offsets */
+
+/*
+ * MMU_IRQ_RAWSTAT register values. These values also apply to the
+ * MMU_IRQ_CLEAR, MMU_IRQ_MASK and MMU_IRQ_STATUS registers.
+ */
+
+#define MMU_REGS_PAGE_FAULT_FLAGS 16
+
+/* These macros return the bit number used to retrieve the page fault or bus error flag from the MMU registers */
+#define MMU_REGS_PAGE_FAULT_FLAG(n) (n)
+#define MMU_REGS_BUS_ERROR_FLAG(n) (n + MMU_REGS_PAGE_FAULT_FLAGS)
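+
+/* Worked example: for address space 3, MMU_REGS_PAGE_FAULT_FLAG(3) is bit 3
+ * and MMU_REGS_BUS_ERROR_FLAG(3) is bit 19 (3 + 16) of MMU_IRQ_RAWSTAT. */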
+
+/*
+ * Begin MMU TRANSTAB register values
+ */
+#define ASn_TRANSTAB_ADDR_SPACE_MASK 0xfffff000
+#define ASn_TRANSTAB_ADRMODE_UNMAPPED (0u << 0)
+#define ASn_TRANSTAB_ADRMODE_IDENTITY (1u << 1)
+#define ASn_TRANSTAB_ADRMODE_TABLE (3u << 0)
+#define ASn_TRANSTAB_READ_INNER (1u << 2)
+#define ASn_TRANSTAB_SHARE_OUTER (1u << 4)
+
+#define MMU_TRANSTAB_ADRMODE_MASK 0x00000003
+
+/*
+ * Begin MMU STATUS register values
+ */
+#define ASn_STATUS_FLUSH_ACTIVE 0x01
+
+#define ASn_FAULTSTATUS_ACCESS_TYPE_MASK (0x3<<8)
+#define ASn_FAULTSTATUS_ACCESS_TYPE_EX (0x1<<8)
+#define ASn_FAULTSTATUS_ACCESS_TYPE_READ (0x2<<8)
+#define ASn_FAULTSTATUS_ACCESS_TYPE_WRITE (0x3<<8)
+
+/*
+ * Begin Command Values
+ */
+
+/* JSn_COMMAND register commands */
+#define JSn_COMMAND_NOP 0x00 /* NOP Operation. Writing this value is ignored */
+#define JSn_COMMAND_START 0x01 /* Start processing a job chain. Writing this value is ignored */
+#define JSn_COMMAND_SOFT_STOP 0x02 /* Gently stop processing a job chain */
+#define JSn_COMMAND_HARD_STOP 0x03 /* Rudely stop processing a job chain */
+
+/* ASn_COMMAND register commands */
+#define ASn_COMMAND_NOP 0x00 /* NOP Operation */
+#define ASn_COMMAND_UPDATE 0x01 /* Broadcasts the values in ASn_TRANSTAB and ASn_MEMATTR to all MMUs */
+#define ASn_COMMAND_LOCK 0x02 /* Issue a lock region command to all MMUs */
+#define ASn_COMMAND_UNLOCK 0x03 /* Issue a flush region command to all MMUs */
+#define ASn_COMMAND_FLUSH 0x04 /* Flush all L2 caches then issue a flush region command to all MMUs */
+
+/* Possible values of JSn_CONFIG and JSn_CONFIG_NEXT registers */
+#define JSn_CONFIG_START_FLUSH_NO_ACTION (0u << 0)
+#define JSn_CONFIG_START_FLUSH_CLEAN (1u << 8)
+#define JSn_CONFIG_START_FLUSH_CLEAN_INVALIDATE (3u << 8)
+#define JSn_CONFIG_START_MMU (1u << 10)
+#define JSn_CONFIG_END_FLUSH_NO_ACTION JSn_CONFIG_START_FLUSH_NO_ACTION
+#define JSn_CONFIG_END_FLUSH_CLEAN (1u << 12)
+#define JSn_CONFIG_END_FLUSH_CLEAN_INVALIDATE (3u << 12)
+#define JSn_CONFIG_THREAD_PRI(n) ((n) << 16)
+
+/* JSn_STATUS register values */
+
+/* NOTE: Please keep these values in sync with enum base_jd_event_code in mali_base_kernel.h.
+ * The values are duplicated here to avoid a dependency between userspace and kernel code.
+ */
+
+/* Group of values representing a job status rather than a particular fault */
+#define JSn_STATUS_NO_EXCEPTION_BASE 0x00
+#define JSn_STATUS_INTERRUPTED (JSn_STATUS_NO_EXCEPTION_BASE + 0x02) /* 0x02 means INTERRUPTED */
+#define JSn_STATUS_STOPPED (JSn_STATUS_NO_EXCEPTION_BASE + 0x03) /* 0x03 means STOPPED */
+#define JSn_STATUS_TERMINATED (JSn_STATUS_NO_EXCEPTION_BASE + 0x04) /* 0x04 means TERMINATED */
+
+/* General fault values */
+#define JSn_STATUS_FAULT_BASE 0x40
+#define JSn_STATUS_CONFIG_FAULT (JSn_STATUS_FAULT_BASE) /* 0x40 means CONFIG FAULT */
+#define JSn_STATUS_POWER_FAULT (JSn_STATUS_FAULT_BASE + 0x01) /* 0x41 means POWER FAULT */
+#define JSn_STATUS_READ_FAULT (JSn_STATUS_FAULT_BASE + 0x02) /* 0x42 means READ FAULT */
+#define JSn_STATUS_WRITE_FAULT (JSn_STATUS_FAULT_BASE + 0x03) /* 0x43 means WRITE FAULT */
+#define JSn_STATUS_AFFINITY_FAULT (JSn_STATUS_FAULT_BASE + 0x04) /* 0x44 means AFFINITY FAULT */
+#define JSn_STATUS_BUS_FAULT (JSn_STATUS_FAULT_BASE + 0x08) /* 0x48 means BUS FAULT */
+
+/* Instruction or data faults */
+#define JSn_STATUS_INSTRUCTION_FAULT_BASE 0x50
+#define JSn_STATUS_INSTR_INVALID_PC (JSn_STATUS_INSTRUCTION_FAULT_BASE) /* 0x50 means INSTR INVALID PC */
+#define JSn_STATUS_INSTR_INVALID_ENC (JSn_STATUS_INSTRUCTION_FAULT_BASE + 0x01) /* 0x51 means INSTR INVALID ENC */
+#define JSn_STATUS_INSTR_TYPE_MISMATCH (JSn_STATUS_INSTRUCTION_FAULT_BASE + 0x02) /* 0x52 means INSTR TYPE MISMATCH */
+#define JSn_STATUS_INSTR_OPERAND_FAULT (JSn_STATUS_INSTRUCTION_FAULT_BASE + 0x03) /* 0x53 means INSTR OPERAND FAULT */
+#define JSn_STATUS_INSTR_TLS_FAULT (JSn_STATUS_INSTRUCTION_FAULT_BASE + 0x04) /* 0x54 means INSTR TLS FAULT */
+#define JSn_STATUS_INSTR_BARRIER_FAULT (JSn_STATUS_INSTRUCTION_FAULT_BASE + 0x05) /* 0x55 means INSTR BARRIER FAULT */
+#define JSn_STATUS_INSTR_ALIGN_FAULT (JSn_STATUS_INSTRUCTION_FAULT_BASE + 0x06) /* 0x56 means INSTR ALIGN FAULT */
+/* NOTE: No fault with 0x57 code defined in spec. */
+#define JSn_STATUS_DATA_INVALID_FAULT (JSn_STATUS_INSTRUCTION_FAULT_BASE + 0x08) /* 0x58 means DATA INVALID FAULT */
+#define JSn_STATUS_TILE_RANGE_FAULT (JSn_STATUS_INSTRUCTION_FAULT_BASE + 0x09) /* 0x59 means TILE RANGE FAULT */
+#define JSn_STATUS_ADDRESS_RANGE_FAULT (JSn_STATUS_INSTRUCTION_FAULT_BASE + 0x0A) /* 0x5A means ADDRESS RANGE FAULT */
+
+/* Other faults */
+#define JSn_STATUS_MEMORY_FAULT_BASE 0x60
+#define JSn_STATUS_OUT_OF_MEMORY (JSn_STATUS_MEMORY_FAULT_BASE) /* 0x60 means OUT OF MEMORY */
+#define JSn_STATUS_UNKNOWN 0x7F /* 0x7F means UNKNOWN */
+
+
+/* GPU_COMMAND values */
+#define GPU_COMMAND_NOP 0x00 /* No operation, nothing happens */
+#define GPU_COMMAND_SOFT_RESET 0x01 /* Stop all external bus interfaces, and then reset the entire GPU. */
+#define GPU_COMMAND_HARD_RESET 0x02 /* Immediately reset the entire GPU. */
+#define GPU_COMMAND_PRFCNT_CLEAR 0x03 /* Clear all performance counters, setting them all to zero. */
+#define GPU_COMMAND_PRFCNT_SAMPLE 0x04 /* Sample all performance counters, writing them out to memory */
+#define GPU_COMMAND_CYCLE_COUNT_START 0x05 /* Starts the cycle counter, and system timestamp propagation */
+#define GPU_COMMAND_CYCLE_COUNT_STOP 0x06 /* Stops the cycle counter, and system timestamp propagation */
+#define GPU_COMMAND_CLEAN_CACHES 0x07 /* Clean all caches */
+#define GPU_COMMAND_CLEAN_INV_CACHES 0x08 /* Clean and invalidate all caches */
+
+/* End Command Values */
+
+/* GPU_STATUS values */
+#define GPU_STATUS_PRFCNT_ACTIVE (1 << 2) /* Set if the performance counters are active. */
+
+/* PRFCNT_CONFIG register values */
+#define PRFCNT_CONFIG_AS_SHIFT 4 /* address space bitmap starts from bit 4 of the register */
+#define PRFCNT_CONFIG_MODE_OFF 0 /* The performance counters are disabled. */
+#define PRFCNT_CONFIG_MODE_MANUAL 1 /* The performance counters are enabled, but are only written out when a PRFCNT_SAMPLE command is issued using the GPU_COMMAND register. */
+#define PRFCNT_CONFIG_MODE_TILE 2 /* The performance counters are enabled, and are written out each time a tile finishes rendering. */
+
+/* AS<n>_MEMATTR values */
+#define ASn_MEMATTR_IMPL_DEF_CACHE_POLICY 0x48484848 /* Use GPU implementation-defined caching policy. */
+#define ASn_MEMATTR_FORCE_TO_CACHE_ALL 0x4F4F4F4F /* The attribute set to force all resources to be cached. */
+
+/* GPU_ID register */
+#define GPU_ID_VERSION_STATUS_SHIFT 0
+#define GPU_ID_VERSION_MINOR_SHIFT 4
+#define GPU_ID_VERSION_MAJOR_SHIFT 12
+#define GPU_ID_VERSION_PRODUCT_ID_SHIFT 16
+#define GPU_ID_VERSION_STATUS (0xF << GPU_ID_VERSION_STATUS_SHIFT)
+#define GPU_ID_VERSION_MINOR (0xFF << GPU_ID_VERSION_MINOR_SHIFT)
+#define GPU_ID_VERSION_MAJOR (0xF << GPU_ID_VERSION_MAJOR_SHIFT)
+#define GPU_ID_VERSION_PRODUCT_ID (0xFFFF << GPU_ID_VERSION_PRODUCT_ID_SHIFT)
+
+/* Values for GPU_ID_VERSION_PRODUCT_ID bitfield */
+#define GPU_ID_PI_T60X 0x6956
+#define GPU_ID_PI_T65X 0x3456
+#define GPU_ID_PI_T62X 0x0620
+#define GPU_ID_PI_T67X 0x0670
+
+/* Values for GPU_ID_VERSION_STATUS field for PRODUCT_ID GPU_ID_PI_T60X and GPU_ID_PI_T65X */
+#define GPU_ID_S_15DEV0 0x1
+#define GPU_ID_S_EAC 0x2
+
+/* Helper macro to create a GPU_ID assuming valid values for id, major, minor, status */
+#define GPU_ID_MAKE(id, major, minor, status) \
+ (((id) << GPU_ID_VERSION_PRODUCT_ID_SHIFT) | \
+ ((major) << GPU_ID_VERSION_MAJOR_SHIFT) | \
+ ((minor) << GPU_ID_VERSION_MINOR_SHIFT) | \
+ ((status) << GPU_ID_VERSION_STATUS_SHIFT))
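+
+/* Worked example: GPU_ID_MAKE(GPU_ID_PI_T60X, 0, 0, GPU_ID_S_15DEV0)
+ * == (0x6956 << 16) | (0x1 << 0) == 0x69560001. */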
+
+/* End GPU_ID register */
+
+/* JS<n>_FEATURES register */
+
+#define JSn_FEATURE_NULL_JOB (1u << 1)
+#define JSn_FEATURE_SET_VALUE_JOB (1u << 2)
+#define JSn_FEATURE_CACHE_FLUSH_JOB (1u << 3)
+#define JSn_FEATURE_COMPUTE_JOB (1u << 4)
+#define JSn_FEATURE_VERTEX_JOB (1u << 5)
+#define JSn_FEATURE_GEOMETRY_JOB (1u << 6)
+#define JSn_FEATURE_TILER_JOB (1u << 7)
+#define JSn_FEATURE_FUSED_JOB (1u << 8)
+#define JSn_FEATURE_FRAGMENT_JOB (1u << 9)
+
+/* End JS<n>_FEATURES register */
+
+#endif /* _MIDGARD_REGMAP_H_ */
--- /dev/null
+ccflags-$(CONFIG_VITHAR) += -DMALI_DEBUG=0 -DMALI_HW_TYPE=2 \
+-DMALI_USE_UMP=0 -DMALI_HW_VERSION=r0p0 -DMALI_BASE_TRACK_MEMLEAK=0 \
+-DMALI_ANDROID=1 -DMALI_ERROR_INJECT_ON=0 -DMALI_NO_MALI=0 -DMALI_BACKEND_KERNEL=1 \
+-DMALI_FAKE_PLATFORM_DEVICE=1 -DMALI_MOCK_TEST=0 -DMALI_KERNEL_TEST_API=0 \
+-DMALI_INFINITE_CACHE=0 -DMALI_LICENSE_IS_GPL=1 -DMALI_PLATFORM_CONFIG=exynos5 \
+-DMALI_UNIT_TEST=0 -DMALI_GATOR_SUPPORT=0 \
+-DUMP_SVN_REV_STRING="\"dummy\"" -DMALI_RELEASE_NAME="\"dummy\""
+
+ROOTDIR = $(src)/../../..
+
+ccflags-$(CONFIG_VITHAR) += -I$(ROOTDIR)/kbase
+ccflags-$(CONFIG_VITHAR) += -I$(ROOTDIR) -I$(src)/..
+
+ccflags-y += -I$(ROOTDIR) -I$(ROOTDIR)/include -I$(ROOTDIR)/osk/src/linux/include -I$(ROOTDIR)/uk/platform_dummy
+ccflags-y += -I$(ROOTDIR)/kbase/midg_gpus/r0p0
+
+
+obj-y += mali_kbase_mem_linux.o
+obj-y += mali_kbase_core_linux.o
+obj-y += mali_kbase_config_linux.o
+
+obj-y += config/
--- /dev/null
+ccflags-$(CONFIG_VITHAR) += -DMALI_DEBUG=0 -DMALI_HW_TYPE=2 \
+-DMALI_USE_UMP=0 -DMALI_HW_VERSION=r0p0 -DMALI_BASE_TRACK_MEMLEAK=0 \
+-DMALI_ANDROID=1 -DMALI_ERROR_INJECT_ON=0 -DMALI_NO_MALI=0 -DMALI_BACKEND_KERNEL=1 \
+-DMALI_FAKE_PLATFORM_DEVICE=1 -DMALI_MOCK_TEST=0 -DMALI_KERNEL_TEST_API=0 \
+-DMALI_INFINITE_CACHE=0 -DMALI_LICENSE_IS_GPL=1 -DMALI_PLATFORM_CONFIG=exynos5 \
+-DMALI_UNIT_TEST=0 -DMALI_GATOR_SUPPORT=0 \
+-DUMP_SVN_REV_STRING="\"dummy\"" -DMALI_RELEASE_NAME="\"dummy\"" \
+-DMALI_CUSTOMER_RELEASE=1 -DMALI_UNCACHED=1
+
+ROOTDIR = $(src)/../../../..
+
+ccflags-$(CONFIG_VITHAR) += -I$(ROOTDIR)/kbase
+ccflags-$(CONFIG_VITHAR) += -I$(ROOTDIR) -I$(src)/..
+
+ccflags-y += -I$(ROOTDIR) -I$(ROOTDIR)/include -I$(ROOTDIR)/osk/src/linux/include -I$(ROOTDIR)/uk/platform_dummy -I$(ROOTDIR)/kbase/midg_gpus/r0p0
+
+obj-y += mali_kbase_config_exynos5.o
--- /dev/null
+/* LICENSE NOT CONFIRMED. Do not distribute without permission. Assume confidential and proprietary until classification is authorised. */
+/*
+ *
+ * (C) COPYRIGHT 2012 ARM Limited. All rights reserved.
+ *
+ * This program is free software and is provided to you under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation, and any use by you of this program is subject to the terms of such GNU licence.
+ *
+ * A copy of the licence is included with the program, and can also be obtained from Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/poll.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/platform_device.h>
+#include <linux/pci.h>
+#include <linux/miscdevice.h>
+#include <linux/list.h>
+#include <linux/semaphore.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/ioport.h>
+#include <linux/spinlock.h>
+
+#include <mach/map.h>
+#include <linux/fb.h>
+#include <linux/clk.h>
+#include <mach/regs-clock.h>
+#include <mach/pmu.h>
+#include <mach/regs-pmu.h>
+#include <asm/delay.h>
+#include <mach/map.h>
+#include <generated/autoconf.h>
+
+#include <linux/timer.h>
+#include <linux/pm_runtime.h>
+#include <linux/workqueue.h>
+#include <linux/regulator/consumer.h>
+#include <linux/regulator/driver.h>
+
+#include <osk/mali_osk.h>
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_kbase_pm.h>
+#include <kbase/src/common/mali_kbase_uku.h>
+#include <kbase/src/common/mali_kbase_mem.h>
+#include <kbase/src/common/mali_midg_regmap.h>
+#include <kbase/src/linux/mali_kbase_mem_linux.h>
+#include <uk/mali_ukk.h>
+#include <kbase/src/common/mali_kbase_defs.h>
+#include <kbase/src/linux/mali_kbase_config_linux.h>
+#if MALI_USE_UMP == 1
+#include <ump/ump_common.h>
+#endif /*MALI_USE_UMP == 1*/
+
+#if MALI_UNCACHED == 0
+#error MALI_UNCACHED should equal 1 for Exynos5 support, your scons commandline should contain 'no_syncsets=1'
+#endif
+
+#define HZ_IN_MHZ (1000000)
+#define MALI_RTPM_DEBUG 0
+#define VITHAR_DEFAULT_CLOCK 533000000
+#define RUNTIME_PM_DELAY_TIME 10
+
+struct regulator *kbase_platform_get_regulator(void);
+int kbase_platform_regulator_init(void);
+int kbase_platform_regulator_disable(void);
+int kbase_platform_regulator_enable(struct device *dev);
+int kbase_platform_get_default_voltage(struct device *dev, int *vol);
+int kbase_platform_get_voltage(struct device *dev, int *vol);
+int kbase_platform_set_voltage(struct device *dev, int vol);
+void kbase_platform_dvfs_set_clock(kbase_device *kbdev, int freq);
+
+#ifdef CONFIG_VITHAR_DVFS
+int kbase_platform_dvfs_init(kbase_device *kbdev);
+void kbase_platform_dvfs_term(void);
+int kbase_platform_dvfs_event(kbase_device *kbdev, u32 utilisation);
+int kbase_platform_dvfs_get_control_status(void);
+int kbase_pm_get_dvfs_utilisation(kbase_device *kbdev);
+#ifdef CONFIG_VITHAR_FREQ_LOCK
+int mali_get_dvfs_upper_locked_freq(void);
+int mali_get_dvfs_under_locked_freq(void);
+int mali_dvfs_freq_lock(int level);
+void mali_dvfs_freq_unlock(void);
+int mali_dvfs_freq_under_lock(int level);
+void mali_dvfs_freq_under_unlock(void);
+#endif /* CONFIG_VITHAR_FREQ_LOCK */
+#endif /* CONFIG_VITHAR_DVFS */
+
+#ifdef CONFIG_PM_RUNTIME
+static void kbase_platform_runtime_term(struct kbase_device *kbdev);
+static mali_error kbase_platform_runtime_init(struct kbase_device *kbdev);
+#endif /* CONFIG_PM_RUNTIME */
+
+int kbase_platform_cmu_pmu_control(struct kbase_device *kbdev, int control);
+void kbase_platform_remove_sysfs_file(struct device *dev);
+mali_error kbase_platform_init(struct kbase_device *kbdev);
+static int kbase_platform_is_power_on(void);
+void kbase_platform_term(struct kbase_device *kbdev);
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static int kbase_platform_create_sysfs_file(struct device *dev);
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+struct exynos_context
+{
+ /** Indicates whether the system clock to the Mali-T604 is active */
+ int cmu_pmu_status;
+ /** cmd & pmu lock */
+ spinlock_t cmu_pmu_lock;
+ struct clk *sclk_g3d;
+};
+
+static kbase_io_resources io_resources =
+{
+ .job_irq_number = EXYNOS5_JOB_IRQ_NUMBER,
+ .mmu_irq_number = EXYNOS5_MMU_IRQ_NUMBER,
+ .gpu_irq_number = EXYNOS5_GPU_IRQ_NUMBER,
+ .io_memory_region =
+ {
+ .start = EXYNOS5_PA_G3D,
+ .end = EXYNOS5_PA_G3D + (4096 * 5) - 1
+ }
+};
+
+/**
+ * Read the CPU clock speed
+ */
+int get_cpu_clock_speed(u32* cpu_clock)
+{
+ struct clk * cpu_clk;
+ u32 freq=0;
+ cpu_clk = clk_get(NULL, "armclk");
+ if (IS_ERR(cpu_clk))
+ return 1;
+ freq = clk_get_rate(cpu_clk);
+ *cpu_clock = (freq/HZ_IN_MHZ);
+ return 0;
+}
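+
+/* Note: this function is exposed to the core driver through the
+ * KBASE_CONFIG_ATTR_CPU_SPEED_FUNC attribute in config_attributes below;
+ * it reports the current "armclk" rate in MHz and returns non-zero when the
+ * clock cannot be looked up. */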
+
+/**
+ * Power Management callback - power ON
+ */
+static int pm_callback_power_on(kbase_device *kbdev)
+{
+#ifdef CONFIG_PM_RUNTIME
+ pm_runtime_resume(kbdev->osdev.dev);
+#endif /* CONFIG_PM_RUNTIME */
+ return 0;
+}
+
+/**
+ * Power Management callback - power OFF
+ */
+static void pm_callback_power_off(kbase_device *kbdev)
+{
+#ifdef CONFIG_PM_RUNTIME
+ pm_schedule_suspend(kbdev->osdev.dev, RUNTIME_PM_DELAY_TIME);
+#endif /* CONFIG_PM_RUNTIME */
+}
+
+/**
+ * Power Management callback - runtime power ON
+ */
+#ifdef CONFIG_PM_RUNTIME
+static int pm_callback_runtime_power_on(kbase_device *kbdev)
+{
+#if MALI_RTPM_DEBUG
+ printk("kbase_device_runtime_resume\n");
+#endif /* MALI_RTPM_DEBUG */
+ return kbase_platform_cmu_pmu_control(kbdev, 1);
+}
+#endif /* CONFIG_PM_RUNTIME */
+
+/**
+ * Power Management callback - runtime power OFF
+ */
+#ifdef CONFIG_PM_RUNTIME
+static void pm_callback_runtime_power_off(kbase_device *kbdev)
+{
+#if MALI_RTPM_DEBUG
+ printk("kbase_device_runtime_suspend\n");
+#endif /* MALI_RTPM_DEBUG */
+ kbase_platform_cmu_pmu_control(kbdev, 0);
+}
+#endif /* CONFIG_PM_RUNTIME */
+
+static kbase_pm_callback_conf pm_callbacks =
+{
+ .power_on_callback = pm_callback_power_on,
+ .power_off_callback = pm_callback_power_off,
+};
+
+/**
+ * Exynos5 hardware specific initialization
+ */
+mali_bool kbase_platform_exynos5_init(kbase_device *kbdev)
+{
+ if(MALI_ERROR_NONE == kbase_platform_init(kbdev))
+ {
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+ if(kbase_platform_create_sysfs_file(kbdev->osdev.dev))
+ {
+ return MALI_TRUE;
+ }
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+ return MALI_TRUE;
+ }
+
+ return MALI_FALSE;
+}
+
+/**
+ * Exynos5 hardware specific termination
+ */
+void kbase_platform_exynos5_term(kbase_device *kbdev)
+{
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+ kbase_platform_remove_sysfs_file(kbdev->osdev.dev);
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+ kbase_platform_term(kbdev);
+}
+
+kbase_platform_funcs_conf platform_funcs =
+{
+ .platform_init_func = &kbase_platform_exynos5_init,
+ .platform_term_func = &kbase_platform_exynos5_term,
+};
+
+static kbase_attribute config_attributes[] = {
+#if MALI_USE_UMP == 1
+ {
+ KBASE_CONFIG_ATTR_UMP_DEVICE,
+ UMP_DEVICE_Z_SHIFT
+ },
+#endif /* MALI_USE_UMP == 1 */
+ {
+ KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_MAX,
+ 2048 * 1024 * 1024UL /* 2048MB */
+ },
+
+ {
+ KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_PERF_GPU,
+ KBASE_MEM_PERF_FAST
+ },
+ {
+ KBASE_CONFIG_ATTR_POWER_MANAGEMENT_CALLBACKS,
+ (uintptr_t)&pm_callbacks
+ },
+ {
+ KBASE_CONFIG_ATTR_PLATFORM_FUNCS,
+ (uintptr_t)&platform_funcs
+ },
+
+ {
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MAX,
+ 533000
+ },
+ {
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MIN,
+ 100000
+ },
+ {
+ KBASE_CONFIG_ATTR_JS_RESET_TIMEOUT_MS,
+ 500 /* 500ms before cancelling stuck jobs */
+ },
+ {
+ KBASE_CONFIG_ATTR_CPU_SPEED_FUNC,
+ (uintptr_t)&get_cpu_clock_speed
+ },
+ {
+ KBASE_CONFIG_ATTR_END,
+ 0
+ }
+};
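+
+/* Illustrative lookup (a sketch only; the helper and the field names id/data
+ * are assumptions, not part of this patch) showing how a
+ * KBASE_CONFIG_ATTR_END-terminated array like the one above can be consumed:
+ *
+ *   static uintptr_t example_get_attr(const kbase_attribute *attrs, int id)
+ *   {
+ *           for (; attrs->id != KBASE_CONFIG_ATTR_END; attrs++)
+ *                   if (attrs->id == id)
+ *                           return attrs->data;
+ *           return 0;
+ *   }
+ */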
+
+kbase_platform_config platform_config =
+{
+ .attributes = config_attributes,
+ .io_resources = &io_resources,
+ .midgard_type = KBASE_MALI_T604
+};
+
+static struct clk *clk_g3d = NULL;
+
+/**
+ * Initialize GPU clocks
+ */
+static int kbase_platform_power_clock_init(kbase_device *kbdev)
+{
+ struct device *dev = kbdev->osdev.dev;
+ int timeout;
+ struct exynos_context *platform;
+
+ platform = (struct exynos_context *) kbdev->platform_context;
+ if(NULL == platform)
+ {
+ panic("oops");
+ }
+
+ /* Turn on G3D power */
+ __raw_writel(0x7, EXYNOS5_G3D_CONFIGURATION);
+
+ /* Wait for G3D power stability for 1ms */
+ timeout = 10;
+ while((__raw_readl(EXYNOS5_G3D_STATUS) & 0x7) != 0x7) {
+ if(timeout == 0) {
+ /* need to call panic */
+ panic("failed to turn on g3d power\n");
+ goto out;
+ }
+ timeout--;
+ udelay(100);
+ }
+
+ /* Turn on G3D clock */
+ clk_g3d = clk_get(dev, "g3d");
+ if(IS_ERR(clk_g3d)) {
+ OSK_PRINT_ERROR(OSK_BASE_PM, "failed to clk_get [clk_g3d]\n");
+ /* chrome linux does not have this clock */
+ }
+ else
+ {
+ /* android_v4 support */
+ clk_enable(clk_g3d);
+ printk("v4 support\n");
+ }
+
+#ifdef CONFIG_VITHAR_HWVER_R0P0
+ platform->sclk_g3d = clk_get(dev, "aclk_400");
+ if(IS_ERR(platform->sclk_g3d)) {
+ OSK_PRINT_ERROR(OSK_BASE_PM, "failed to clk_get [sclk_g3d]\n");
+ goto out;
+ }
+#else /* CONFIG_VITHAR_HWVER_R0P0 */
+ {
+ struct clk *mpll = NULL;
+ mpll = clk_get(dev, "mout_mpll_user");
+ if(IS_ERR(mpll)) {
+ OSK_PRINT_ERROR(OSK_BASE_PM, "failed to clk_get [mout_mpll_user]\n");
+ goto out;
+ }
+
+ platform->sclk_g3d = clk_get(dev, "sclk_g3d");
+ if(IS_ERR(platform->sclk_g3d)) {
+ OSK_PRINT_ERROR(OSK_BASE_PM, "failed to clk_get [sclk_g3d]\n");
+ goto out;
+ }
+
+ clk_set_parent(platform->sclk_g3d, mpll);
+ if(IS_ERR(platform->sclk_g3d)) {
+ OSK_PRINT_ERROR(OSK_BASE_PM, "failed to clk_set_parent\n");
+ goto out;
+ }
+
+ clk_set_rate(platform->sclk_g3d, VITHAR_DEFAULT_CLOCK);
+ if(IS_ERR(platform->sclk_g3d)) {
+ OSK_PRINT_ERROR(OSK_BASE_PM, "failed to clk_set_rate [sclk_g3d] = %d\n", VITHAR_DEFAULT_CLOCK);
+ goto out;
+ }
+ }
+#endif /* CONFIG_VITHAR_HWVER_R0P0 */
+ (void) clk_enable(platform->sclk_g3d);
+ return 0;
+out:
+ return -EPERM;
+}
+
+/**
+ * Enable GPU clocks
+ */
+static int kbase_platform_clock_on(struct kbase_device *kbdev)
+{
+ struct exynos_context *platform;
+ if (!kbdev)
+ return -ENODEV;
+
+ platform = (struct exynos_context *) kbdev->platform_context;
+ if (!platform)
+ return -ENODEV;
+
+ if(clk_g3d)
+ {
+ /* android_v4 support */
+ (void) clk_enable(clk_g3d);
+ }
+ else
+ {
+ /* chrome support */
+ (void) clk_enable(platform->sclk_g3d);
+ }
+
+ return 0;
+}
+
+/**
+ * Disable GPU clocks
+ */
+static int kbase_platform_clock_off(struct kbase_device *kbdev)
+{
+ struct exynos_context *platform;
+ if (!kbdev)
+ return -ENODEV;
+
+ platform = (struct exynos_context *) kbdev->platform_context;
+ if (!platform)
+ return -ENODEV;
+
+ if(clk_g3d)
+ {
+ /* android_v4 support */
+ (void)clk_disable(clk_g3d);
+ }
+ else
+ {
+ /* chrome support */
+ (void)clk_disable(platform->sclk_g3d);
+ }
+ return 0;
+}
+
+/**
+ * Report GPU power status
+ */
+static inline int kbase_platform_is_power_on(void)
+{
+ return ((__raw_readl(EXYNOS5_G3D_STATUS) & 0x7) == 0x7) ? 1 : 0;
+}
+
+/**
+ * Enable GPU power
+ */
+static int kbase_platform_power_on(void)
+{
+ int timeout;
+
+ /* Turn on G3D */
+ __raw_writel(0x7, EXYNOS5_G3D_CONFIGURATION);
+
+ /* Wait for G3D power stability */
+ timeout = 1000;
+
+ while((__raw_readl(EXYNOS5_G3D_STATUS) & 0x7) != 0x7) {
+ if(timeout == 0) {
+ /* need to call panic */
+ panic("failed to turn on g3d via g3d_configuration\n");
+ return -ETIMEDOUT;
+ }
+ timeout--;
+ udelay(10);
+ }
+
+ return 0;
+}
+
+/**
+ * Disable GPU power
+ */
+static int kbase_platform_power_off(void)
+{
+ int timeout;
+
+ /* Turn off G3D */
+ __raw_writel(0x0, EXYNOS5_G3D_CONFIGURATION);
+
+ /* Wait for G3D power stability */
+ timeout = 1000;
+
+ while(__raw_readl(EXYNOS5_G3D_STATUS) & 0x7) {
+ if(timeout == 0) {
+ /* need to call panic */
+ panic( "failed to turn off g3d via g3d_configuration\n");
+ return -ETIMEDOUT;
+ }
+ timeout--;
+ udelay(10);
+ }
+
+ return 0;
+}
+
+/**
+ * Power Management unit control. Enable/disable power and clocks to GPU
+ */
+int kbase_platform_cmu_pmu_control(struct kbase_device *kbdev, int control)
+{
+ unsigned long flags;
+ struct exynos_context *platform;
+ if (!kbdev)
+ {
+ return -ENODEV;
+ }
+
+ platform = (struct exynos_context *) kbdev->platform_context;
+ if (!platform)
+ {
+ return -ENODEV;
+ }
+
+ spin_lock_irqsave(&platform->cmu_pmu_lock, flags);
+
+ /* off */
+ if(control == 0)
+ {
+ if(platform->cmu_pmu_status == 0)
+ {
+ spin_unlock_irqrestore(&platform->cmu_pmu_lock, flags);
+ return 0;
+ }
+
+ if(kbase_platform_power_off())
+ panic("failed to turn off g3d power\n");
+ if(kbase_platform_clock_off(kbdev))
+ panic("failed to turn off sclk_g3d\n");
+
+ platform->cmu_pmu_status = 0;
+#if MALI_RTPM_DEBUG
+ printk( KERN_ERR "3D cmu_pmu_control - off\n" );
+#endif /* MALI_RTPM_DEBUG */
+ }
+ else
+ {
+ /* on */
+ if(platform->cmu_pmu_status == 1)
+ {
+ spin_unlock_irqrestore(&platform->cmu_pmu_lock, flags);
+ return 0;
+ }
+
+ if(kbase_platform_clock_on(kbdev))
+ panic("failed to turn on sclk_g3d\n");
+ if(kbase_platform_power_on())
+ panic("failed to turn on g3d power\n");
+
+ platform->cmu_pmu_status = 1;
+#if MALI_RTPM_DEBUG
+ printk( KERN_ERR "3D cmu_pmu_control - on\n");
+#endif /* MALI_RTPM_DEBUG */
+ }
+
+ spin_unlock_irqrestore(&platform->cmu_pmu_lock, flags);
+
+ return 0;
+}
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static ssize_t show_clock(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct kbase_device *kbdev;
+ struct exynos_context *platform;
+ ssize_t ret = 0;
+ unsigned int clkrate;
+
+ kbdev = dev_get_drvdata(dev);
+
+ if (!kbdev)
+ return -ENODEV;
+
+ platform = (struct exynos_context *) kbdev->platform_context;
+ if(!platform)
+ return -ENODEV;
+
+ if(!platform->sclk_g3d)
+ return -ENODEV;
+
+ clkrate = clk_get_rate(platform->sclk_g3d);
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "Current sclk_g3d[G3D_BLK] = %dMhz", clkrate/1000000);
+
+ /* To be revised */
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "\nPossible settings : 533, 450, 400, 266, 160, 100Mhz");
+ if (ret < PAGE_SIZE - 1)
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "\n");
+ else
+ {
+ buf[PAGE_SIZE-2] = '\n';
+ buf[PAGE_SIZE-1] = '\0';
+ ret = PAGE_SIZE-1;
+ }
+
+ return ret;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static ssize_t set_clock(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct kbase_device *kbdev;
+ struct exynos_context *platform;
+ unsigned int tmp = 0;
+ unsigned int cmd = 0;
+ kbdev = dev_get_drvdata(dev);
+
+ if (!kbdev)
+ return -ENODEV;
+
+ platform = (struct exynos_context *) kbdev->platform_context;
+ if(!platform)
+ return -ENODEV;
+
+ if(!platform->sclk_g3d)
+ return -ENODEV;
+
+ if (sysfs_streq("533", buf)) {
+ cmd = 1;
+ kbase_platform_set_voltage( dev, 1250000 );
+ kbase_platform_dvfs_set_clock(kbdev, 533);
+ } else if (sysfs_streq("450", buf)) {
+ cmd = 1;
+ kbase_platform_set_voltage( dev, 1150000 );
+ kbase_platform_dvfs_set_clock(kbdev, 450);
+ } else if (sysfs_streq("400", buf)) {
+ cmd = 1;
+ kbase_platform_set_voltage( dev, 1100000 );
+ kbase_platform_dvfs_set_clock(kbdev, 400);
+ } else if (sysfs_streq("266", buf)) {
+ cmd = 1;
+ kbase_platform_set_voltage( dev, 937500);
+ kbase_platform_dvfs_set_clock(kbdev, 266);
+ } else if (sysfs_streq("160", buf)) {
+ cmd = 1;
+ kbase_platform_set_voltage( dev, 937500 );
+ kbase_platform_dvfs_set_clock(kbdev, 160);
+ } else if (sysfs_streq("100", buf)) {
+ cmd = 1;
+ kbase_platform_set_voltage( dev, 937500 );
+ kbase_platform_dvfs_set_clock(kbdev, 100);
+ } else {
+ dev_err(dev, "set_clock: invalid value\n");
+ return -ENOENT;
+ }
+
+ if(cmd == 1) {
+ /* Wait for the clock to stabilize */
+ do {
+ tmp = __raw_readl(/*EXYNOS5_CLKDIV_STAT_TOP0*/EXYNOS_CLKREG(0x10610));
+ } while (tmp & 0x1000000);
+ }
+ else if(cmd == 2) {
+ /* Currently unreachable: no path above sets cmd to 2 */
+ }
+
+ return count;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static ssize_t show_fbdev(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct kbase_device *kbdev;
+ ssize_t ret = 0;
+ int i;
+
+ kbdev = dev_get_drvdata(dev);
+
+ if (!kbdev)
+ return -ENODEV;
+
+ for(i = 0 ; i < num_registered_fb ; i++) {
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "fb[%d] xres=%d, yres=%d, addr=0x%lx\n", i, registered_fb[i]->var.xres, registered_fb[i]->var.yres, registered_fb[i]->fix.smem_start);
+ }
+
+ if (ret < PAGE_SIZE - 1)
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "\n");
+ else
+ {
+ buf[PAGE_SIZE-2] = '\n';
+ buf[PAGE_SIZE-1] = '\0';
+ ret = PAGE_SIZE-1;
+ }
+
+ return ret;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+typedef enum {
+ L1_I_tag_RAM = 0x00,
+ L1_I_data_RAM = 0x01,
+ L1_I_BTB_RAM = 0x02,
+ L1_I_GHB_RAM = 0x03,
+ L1_I_TLB_RAM = 0x04,
+ L1_I_indirect_predictor_RAM = 0x05,
+ L1_D_tag_RAM = 0x08,
+ L1_D_data_RAM = 0x09,
+ L1_D_load_TLB_array = 0x0A,
+ L1_D_store_TLB_array = 0x0B,
+ L2_tag_RAM = 0x10,
+ L2_data_RAM = 0x11,
+ L2_snoop_tag_RAM = 0x12,
+ L2_data_ECC_RAM = 0x13,
+ L2_dirty_RAM = 0x14,
+ L2_TLB_RAM = 0x18
+} RAMID_type;
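+
+/* RAMINDEX encoding, as exercised by get_tlb_array() below: the RAMID sits in
+ * bits [31:24] of the value written via asm_ramindex_mcr(), the entry index
+ * occupies the low bits, and for the L2 TLB the way is placed at bits
+ * [19:18] (ways << 18). */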
+
+static inline void asm_ramindex_mrc(u32 *DL1Data0, u32 *DL1Data1, u32 *DL1Data2, u32 *DL1Data3)
+{
+ u32 val;
+
+ if(DL1Data0)
+ {
+ asm volatile("mrc p15, 0, %0, c15, c1, 0" : "=r" (val));
+ *DL1Data0 = val;
+ }
+ if(DL1Data1)
+ {
+ asm volatile("mrc p15, 0, %0, c15, c1, 1" : "=r" (val));
+ *DL1Data1 = val;
+ }
+ if(DL1Data2)
+ {
+ asm volatile("mrc p15, 0, %0, c15, c1, 2" : "=r" (val));
+ *DL1Data2 = val;
+ }
+ if(DL1Data3)
+ {
+ asm volatile("mrc p15, 0, %0, c15, c1, 3" : "=r" (val));
+ *DL1Data3 = val;
+ }
+}
+
+static inline void asm_ramindex_mcr(u32 val)
+{
+ asm volatile("mcr p15, 0, %0, c15, c4, 0" : : "r" (val));
+ asm volatile("dsb");
+ asm volatile("isb");
+}
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static void get_tlb_array(u32 val, u32 *DL1Data0, u32 *DL1Data1, u32 *DL1Data2, u32 *DL1Data3)
+{
+ asm_ramindex_mcr(val);
+ asm_ramindex_mrc(DL1Data0, DL1Data1, DL1Data2, DL1Data3);
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static RAMID_type ramindex = L1_D_load_TLB_array;
+static ssize_t show_dtlb(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct kbase_device *kbdev;
+ ssize_t ret = 0;
+ int entries, ways;
+ u32 DL1Data0 = 0, DL1Data1 = 0, DL1Data2 = 0, DL1Data3 = 0;
+
+ kbdev = dev_get_drvdata(dev);
+
+ if (!kbdev)
+ return -ENODEV;
+
+ /* L1-I tag RAM */
+ if(ramindex == L1_I_tag_RAM)
+ {
+ printk("Not implemented yet\n");
+ }
+ /* L1-I data RAM */
+ else if(ramindex == L1_I_data_RAM)
+ {
+ printk("Not implemented yet\n");
+ }
+ /* L1-I BTB RAM */
+ else if(ramindex == L1_I_BTB_RAM)
+ {
+ printk("Not implemented yet\n");
+ }
+ /* L1-I GHB RAM */
+ else if(ramindex == L1_I_GHB_RAM)
+ {
+ printk("Not implemented yet\n");
+ }
+ /* L1-I TLB RAM */
+ else if(ramindex == L1_I_TLB_RAM)
+ {
+ printk("L1-I TLB RAM\n");
+ for(entries = 0 ; entries < 32 ; entries++)
+ {
+ get_tlb_array((((u8)ramindex) << 24) + entries, &DL1Data0, &DL1Data1, &DL1Data2, NULL);
+ printk("entries[%d], DL1Data0=%08x, DL1Data1=%08x DL1Data2=%08x\n", entries, DL1Data0, DL1Data1 & 0xffff, 0x0);
+ }
+ }
+ /* L1-I indirect predictor RAM */
+ else if(ramindex == L1_I_indirect_predictor_RAM)
+ {
+ printk("Not implemented yet\n");
+ }
+ /* L1-D tag RAM */
+ else if(ramindex == L1_D_tag_RAM)
+ {
+ printk("Not implemented yet\n");
+ }
+ /* L1-D data RAM */
+ else if(ramindex == L1_D_data_RAM)
+ {
+ printk("Not implemented yet\n");
+ }
+ /* L1-D load TLB array */
+ else if(ramindex == L1_D_load_TLB_array)
+ {
+ printk("L1-D load TLB array\n");
+ for(entries = 0 ; entries < 32 ; entries++)
+ {
+ get_tlb_array((((u8)ramindex) << 24) + entries, &DL1Data0, &DL1Data1, &DL1Data2, &DL1Data3);
+ printk("entries[%d], DL1Data0=%08x, DL1Data1=%08x, DL1Data2=%08x, DL1Data3=%08x\n", entries, DL1Data0, DL1Data1, DL1Data2, DL1Data3 & 0x3f);
+ }
+ }
+ /* L1-D store TLB array */
+ else if(ramindex == L1_D_store_TLB_array)
+ {
+ printk("\nL1-D store TLB array\n");
+ for(entries = 0 ; entries < 32 ; entries++)
+ {
+ get_tlb_array((((u8)ramindex) << 24) + entries, &DL1Data0, &DL1Data1, &DL1Data2, &DL1Data3);
+ printk("entries[%d], DL1Data0=%08x, DL1Data1=%08x, DL1Data2=%08x, DL1Data3=%08x\n", entries, DL1Data0, DL1Data1, DL1Data2, DL1Data3 & 0x3f);
+ }
+ }
+ /* L2 tag RAM */
+ else if(ramindex == L2_tag_RAM)
+ {
+ printk("Not implemented yet\n");
+ }
+ /* L2 data RAM */
+ else if(ramindex == L2_data_RAM)
+ {
+ printk("Not implemented yet\n");
+ }
+ /* L2 snoop tag RAM */
+ else if(ramindex == L2_snoop_tag_RAM)
+ {
+ printk("Not implemented yet\n");
+ }
+ /* L2 data ECC RAM */
+ else if(ramindex == L2_data_ECC_RAM)
+ {
+ printk("Not implemented yet\n");
+ }
+ /* L2 dirty RAM */
+ else if(ramindex == L2_dirty_RAM)
+ {
+ printk("Not implemented yet\n");
+ }
+ /* L2 TLB array */
+ else if(ramindex == L2_TLB_RAM)
+ {
+ printk("\nL2 TLB array\n");
+ for(ways = 0 ; ways < 4 ; ways++)
+ {
+ for(entries = 0 ; entries < 512 ; entries++)
+ {
+ get_tlb_array((ramindex << 24) + (ways << 18) + entries, &DL1Data0, &DL1Data1, &DL1Data2, &DL1Data3);
+ printk("ways[%d]:entries[%d], DL1Data0=%08x, DL1Data1=%08x, DL1Data2=%08x, DL1Data3=%08x\n", ways, entries, DL1Data0, DL1Data1, DL1Data2, DL1Data3);
+ }
+ }
+ }
+
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "Succeeded...\n");
+
+ if (ret < PAGE_SIZE - 1)
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "\n");
+ else
+ {
+ buf[PAGE_SIZE-2] = '\n';
+ buf[PAGE_SIZE-1] = '\0';
+ ret = PAGE_SIZE-1;
+ }
+ return ret;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static ssize_t set_dtlb(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct kbase_device *kbdev;
+ kbdev = dev_get_drvdata(dev);
+
+ if (!kbdev)
+ return -ENODEV;
+
+ if (sysfs_streq("L1_I_tag_RAM", buf)) {
+ ramindex = L1_I_tag_RAM;
+ } else if (sysfs_streq("L1_I_data_RAM", buf)) {
+ ramindex = L1_I_data_RAM;
+ } else if (sysfs_streq("L1_I_BTB_RAM", buf)) {
+ ramindex = L1_I_BTB_RAM;
+ } else if (sysfs_streq("L1_I_GHB_RAM", buf)) {
+ ramindex = L1_I_GHB_RAM;
+ } else if (sysfs_streq("L1_I_TLB_RAM", buf)) {
+ ramindex = L1_I_TLB_RAM;
+ } else if (sysfs_streq("L1_I_indirect_predictor_RAM", buf)) {
+ ramindex = L1_I_indirect_predictor_RAM;
+ } else if (sysfs_streq("L1_D_tag_RAM", buf)) {
+ ramindex = L1_D_tag_RAM;
+ } else if (sysfs_streq("L1_D_data_RAM", buf)) {
+ ramindex = L1_D_data_RAM;
+ } else if (sysfs_streq("L1_D_load_TLB_array", buf)) {
+ ramindex = L1_D_load_TLB_array;
+ } else if (sysfs_streq("L1_D_store_TLB_array", buf)) {
+ ramindex = L1_D_store_TLB_array;
+ } else if (sysfs_streq("L2_tag_RAM", buf)) {
+ ramindex = L2_tag_RAM;
+ } else if (sysfs_streq("L2_data_RAM", buf)) {
+ ramindex = L2_data_RAM;
+ } else if (sysfs_streq("L2_snoop_tag_RAM", buf)) {
+ ramindex = L2_snoop_tag_RAM;
+ } else if (sysfs_streq("L2_data_ECC_RAM", buf)) {
+ ramindex = L2_data_ECC_RAM;
+ } else if (sysfs_streq("L2_dirty_RAM", buf)) {
+ ramindex = L2_dirty_RAM;
+ } else if (sysfs_streq("L2_TLB_RAM", buf)) {
+ ramindex = L2_TLB_RAM;
+ } else {
+ printk("Invalid value....\n\n");
+ printk("Available options are one of below\n");
+ printk("L1_I_tag_RAM, L1_I_data_RAM, L1_I_BTB_RAM\n");
+ printk("L1_I_GHB_RAM, L1_I_TLB_RAM, L1_I_indirect_predictor_RAM\n");
+ printk("L1_D_tag_RAM, L1_D_data_RAM, L1_D_load_TLB_array, L1_D_store_TLB_array\n");
+ printk("L2_tag_RAM, L2_data_RAM, L2_snoop_tag_RAM, L2_data_ECC_RAM\n");
+ printk("L2_dirty_RAM, L2_TLB_RAM\n");
+ }
+
+ return count;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static ssize_t show_vol(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct kbase_device *kbdev;
+ ssize_t ret = 0;
+ int vol;
+
+ kbdev = dev_get_drvdata(dev);
+
+ if (!kbdev)
+ return -ENODEV;
+
+ kbase_platform_get_voltage(dev, &vol);
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "Current operating voltage for vithar = %d", vol);
+
+ if (ret < PAGE_SIZE - 1)
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "\n");
+ else
+ {
+ buf[PAGE_SIZE-2] = '\n';
+ buf[PAGE_SIZE-1] = '\0';
+ ret = PAGE_SIZE-1;
+ }
+
+ return ret;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static int get_clkout_cmu_top(int *val)
+{
+ *val = __raw_readl(/*EXYNOS5_CLKOUT_CMU_TOP*/EXYNOS_CLKREG(0x10A00));
+ if((*val & 0x1f) == 0xB) /* CLKOUT is ACLK_400 in CLKOUT_CMU_TOP */
+ return 1;
+ else
+ return 0;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static void set_clkout_for_3d(void)
+{
+ int tmp;
+
+ tmp = 0x0;
+ tmp |= 0x1000B; /* ACLK_400 selected */
+ tmp |= 9 << 8; /* divided by (9 + 1) */
+ __raw_writel(tmp, /*EXYNOS5_CLKOUT_CMU_TOP*/EXYNOS_CLKREG(0x10A00));
+
+#ifdef PMU_XCLKOUT_SET
+ exynos5_pmu_xclkout_set(1, XCLKOUT_CMU_TOP);
+#else /* PMU_XCLKOUT_SET */
+ tmp = 0x0;
+ tmp |= 7 << 8; /* CLKOUT_CMU_TOP selected */
+ __raw_writel(tmp, /*S5P_PMU_DEBUG*/S5P_PMUREG(0x0A00));
+#endif /* PMU_XCLKOUT_SET */
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static ssize_t show_clkout(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct kbase_device *kbdev;
+ ssize_t ret = 0;
+ int val;
+
+ kbdev = dev_get_drvdata(dev);
+
+ if (!kbdev)
+ return -ENODEV;
+
+ if(get_clkout_cmu_top(&val))
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "Current CLKOUT is g3d divided by 10, CLKOUT_CMU_TOP=0x%x", val);
+ else
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "Current CLKOUT is not g3d, CLKOUT_CMU_TOP=0x%x", val);
+
+ if (ret < PAGE_SIZE - 1)
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "\n");
+ else
+ {
+ buf[PAGE_SIZE-2] = '\n';
+ buf[PAGE_SIZE-1] = '\0';
+ ret = PAGE_SIZE-1;
+ }
+
+ return ret;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static ssize_t set_clkout(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct kbase_device *kbdev;
+ kbdev = dev_get_drvdata(dev);
+
+ if (!kbdev)
+ return -ENODEV;
+
+ if (sysfs_streq("3d", buf)) {
+ set_clkout_for_3d();
+ } else {
+ printk("invalid val (only 3d is accepted\n");
+ }
+
+ return count;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static ssize_t show_dvfs(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct kbase_device *kbdev;
+ ssize_t ret = 0;
+
+ kbdev = dev_get_drvdata(dev);
+
+ if (!kbdev)
+ return -ENODEV;
+
+#ifdef CONFIG_VITHAR_DVFS
+ if(kbdev->pm.metrics.timer_active == MALI_FALSE)
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "G3D DVFS is off");
+ else
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "G3D DVFS is on");
+#else /* CONFIG_VITHAR_DVFS */
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "G3D DVFS is disabled");
+#endif /* CONFIG_VITHAR_DVFS */
+
+ if (ret < PAGE_SIZE - 1)
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "\n");
+ else
+ {
+ buf[PAGE_SIZE-2] = '\n';
+ buf[PAGE_SIZE-1] = '\0';
+ ret = PAGE_SIZE-1;
+ }
+
+ return ret;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static ssize_t set_dvfs(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
+{
+#ifdef CONFIG_VITHAR_DVFS
+ osk_error ret;
+ int vol;
+#endif /* CONFIG_VITHAR_DVFS */
+ struct kbase_device *kbdev;
+ kbdev = dev_get_drvdata(dev);
+
+ if (!kbdev)
+ return -ENODEV;
+
+#ifdef CONFIG_VITHAR_DVFS
+ if (sysfs_streq("off", buf)) {
+ if(kbdev->pm.metrics.timer_active == MALI_FALSE)
+ return count;
+
+ osk_spinlock_irq_lock(&kbdev->pm.metrics.lock);
+ kbdev->pm.metrics.timer_active = MALI_FALSE;
+ osk_spinlock_irq_unlock(&kbdev->pm.metrics.lock);
+
+ osk_timer_stop(&kbdev->pm.metrics.timer);
+
+ kbase_platform_get_default_voltage(dev, &vol);
+ if(vol != 0)
+ kbase_platform_set_voltage(dev, vol);
+ kbase_platform_dvfs_set_clock(kbdev,VITHAR_DEFAULT_CLOCK / 1000000);
+ } else if (sysfs_streq("on", buf)) {
+ if(kbdev->pm.metrics.timer_active == MALI_TRUE )
+ return count;
+
+ osk_spinlock_irq_lock(&kbdev->pm.metrics.lock);
+ kbdev->pm.metrics.timer_active = MALI_TRUE;
+ osk_spinlock_irq_unlock(&kbdev->pm.metrics.lock);
+
+ ret = osk_timer_start(&kbdev->pm.metrics.timer, KBASE_PM_DVFS_FREQUENCY);
+ if (ret != OSK_ERR_NONE)
+ {
+ printk("osk_timer_start failed\n");
+ }
+ } else {
+ printk("invalid val -only [on, off] is accepted\n");
+ }
+#else /* CONFIG_VITHAR_DVFS */
+ printk("G3D DVFS is disabled\n");
+#endif /* CONFIG_VITHAR_DVFS */
+
+ return count;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#define KBASE_PM_DVFS_FREQUENCY 100
+
+#define MALI_DVFS_DEBUG 0
+#define MALI_DVFS_START_MAX_STEP 1
+#define MALI_DVFS_STEP 6
+#define MALI_DVFS_KEEP_STAY_CNT 10
+#define CONFIG_VITHAR_FREQ_LOCK
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static ssize_t show_lock_dvfs(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct kbase_device *kbdev;
+ ssize_t ret = 0;
+#ifdef CONFIG_VITHAR_DVFS
+ unsigned int locked_level = -1;
+#endif /* CONFIG_VITHAR_DVFS */
+
+ kbdev = dev_get_drvdata(dev);
+
+ if (!kbdev)
+ return -ENODEV;
+
+#ifdef CONFIG_VITHAR_DVFS
+ locked_level = mali_get_dvfs_upper_locked_freq();
+ if( locked_level > 0 )
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "Current Upper Lock Level = %dMhz", locked_level );
+ else
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "Unset the Upper Lock Level");
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "\nPossible settings : 450, 400, 266, 160, 100, If you want to unlock : 533");
+
+#else /* CONFIG_VITHAR_DVFS */
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "G3D DVFS is disabled. You can not setting the Upper Lock level.");
+#endif /* CONFIG_VITHAR_DVFS */
+
+ if (ret < PAGE_SIZE - 1)
+ ret += snprintf(buf+ret, PAGE_SIZE-ret, "\n");
+ else
+ {
+ buf[PAGE_SIZE-2] = '\n';
+ buf[PAGE_SIZE-1] = '\0';
+ ret = PAGE_SIZE-1;
+ }
+
+ return ret;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+static ssize_t set_lock_dvfs(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct kbase_device *kbdev;
+ kbdev = dev_get_drvdata(dev);
+
+ if (!kbdev)
+ return -ENODEV;
+
+#ifdef CONFIG_VITHAR_DVFS
+#if (MALI_DVFS_STEP == 6)
+ if (sysfs_streq("533", buf)) {
+ mali_dvfs_freq_unlock();
+ } else if (sysfs_streq("450", buf)) {
+ mali_dvfs_freq_lock(4);
+ } else if (sysfs_streq("400", buf)) {
+ mali_dvfs_freq_lock(3);
+ } else if (sysfs_streq("266", buf)) {
+ mali_dvfs_freq_lock(2);
+ } else if (sysfs_streq("160", buf)) {
+ mali_dvfs_freq_lock(1);
+ } else if (sysfs_streq("100", buf)) {
+ mali_dvfs_freq_lock(0);
+ } else {
+ dev_err(dev, "set_clock: invalid value\n");
+ dev_err(dev, "Possible settings : 450, 400, 266, 160, 100, If you want to unlock : 533\n");
+ return -ENOENT;
+ }
+#elif (MALI_DVFS_STEP == 2)
+ if (sysfs_streq("533", buf)) {
+ mali_dvfs_freq_unlock();
+ } else if (sysfs_streq("266", buf)) {
+ mali_dvfs_freq_lock(0);
+ } else {
+ dev_err(dev, "set_clock: invalid value\n");
+ dev_err(dev, "Possible settings : 450, 400, 266, 160, 100, If you want to unlock : 533\n");
+ return -ENOENT;
+ }
+#endif /* MALI_DVFS_STEP */
+#else /* CONFIG_VITHAR_DVFS */
+ printk("G3D DVFS is disabled. You can not setting the Upper Lock level.\n");
+#endif /* CONFIG_VITHAR_DVFS */
+
+ return count;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+/** The sysfs files @c clock, @c fbdev, etc.
+ *
+ * These are used for obtaining information such as the vithar operating clock and the framebuffer address.
+ */
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+DEVICE_ATTR(clock, S_IRUGO|S_IWUSR, show_clock, set_clock);
+DEVICE_ATTR(fbdev, S_IRUGO, show_fbdev, NULL);
+DEVICE_ATTR(dtlb, S_IRUGO|S_IWUSR, show_dtlb, set_dtlb);
+DEVICE_ATTR(vol, S_IRUGO, show_vol, NULL);
+DEVICE_ATTR(clkout, S_IRUGO|S_IWUSR, show_clkout, set_clkout);
+DEVICE_ATTR(dvfs, S_IRUGO|S_IWUSR, show_dvfs, set_dvfs);
+DEVICE_ATTR(dvfs_lock, S_IRUGO|S_IWUSR, show_lock_dvfs, set_lock_dvfs);
+
+static int kbase_platform_create_sysfs_file(struct device *dev)
+{
+ if (device_create_file(dev, &dev_attr_clock))
+ {
+ dev_err(dev, "Couldn't create sysfs file [clock]\n");
+ goto out;
+ }
+
+ if (device_create_file(dev, &dev_attr_fbdev))
+ {
+ dev_err(dev, "Couldn't create sysfs file [fbdev]\n");
+ goto out;
+ }
+
+ if (device_create_file(dev, &dev_attr_dtlb))
+ {
+ dev_err(dev, "Couldn't create sysfs file [dtlb]\n");
+ goto out;
+ }
+
+ if (device_create_file(dev, &dev_attr_vol))
+ {
+ dev_err(dev, "Couldn't create sysfs file [vol]\n");
+ goto out;
+ }
+
+ if (device_create_file(dev, &dev_attr_clkout))
+ {
+ dev_err(dev, "Couldn't create sysfs file [clkout]\n");
+ goto out;
+ }
+
+ if (device_create_file(dev, &dev_attr_dvfs))
+ {
+ dev_err(dev, "Couldn't create sysfs file [dvfs]\n");
+ goto out;
+ }
+
+ if (device_create_file(dev, &dev_attr_dvfs_lock))
+ {
+ dev_err(dev, "Couldn't create sysfs file [dvfs_lock]\n");
+ goto out;
+ }
+ return 0;
+out:
+ return -ENOENT;
+}
+
+void kbase_platform_remove_sysfs_file(struct device *dev)
+{
+ device_remove_file(dev, &dev_attr_clock);
+ device_remove_file(dev, &dev_attr_fbdev);
+ device_remove_file(dev, &dev_attr_dtlb);
+ device_remove_file(dev, &dev_attr_vol);
+ device_remove_file(dev, &dev_attr_clkout);
+ device_remove_file(dev, &dev_attr_dvfs);
+ device_remove_file(dev, &dev_attr_dvfs_lock);
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#include "osk/include/mali_osk_lock_order.h"
+
+#ifdef CONFIG_PM_RUNTIME
+static void kbase_platform_runtime_term(struct kbase_device *kbdev)
+{
+ pm_runtime_disable(kbdev->osdev.dev);
+}
+#endif /* CONFIG_PM_RUNTIME */
+
+#ifdef CONFIG_PM_RUNTIME
+extern void pm_runtime_init(struct device *dev);
+
+static mali_error kbase_platform_runtime_init(struct kbase_device *kbdev)
+{
+ pm_runtime_init(kbdev->osdev.dev);
+ pm_suspend_ignore_children(kbdev->osdev.dev, true);
+ pm_runtime_enable(kbdev->osdev.dev);
+ return MALI_ERROR_NONE;
+}
+#endif /* CONFIG_PM_RUNTIME */
+
+
+mali_error kbase_platform_init(kbase_device *kbdev)
+{
+ struct exynos_context *platform;
+
+ platform = osk_malloc(sizeof(struct exynos_context));
+
+ if(NULL == platform)
+ {
+ return MALI_ERROR_OUT_OF_MEMORY;
+ }
+
+ kbdev->platform_context = (void *) platform;
+
+ platform->cmu_pmu_status = 0;
+ spin_lock_init(&platform->cmu_pmu_lock);
+
+ if(kbase_platform_power_clock_init(kbdev))
+ {
+ goto clock_init_fail;
+ }
+
+#ifdef CONFIG_REGULATOR
+ if(kbase_platform_regulator_init())
+ {
+ goto regulator_init_fail;
+ }
+#endif /* CONFIG_REGULATOR */
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+	if(kbase_platform_create_sysfs_file(kbdev->osdev.dev))
+	{
+		goto create_sysfs_file_fail;
+	}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#ifdef CONFIG_VITHAR_DVFS
+ kbase_platform_dvfs_init(kbdev);
+#endif /* CONFIG_VITHAR_DVFS */
+
+ /* Enable power */
+ kbase_platform_cmu_pmu_control(kbdev, 1);
+ return MALI_ERROR_NONE;
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+create_sysfs_file_fail:
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+#ifdef CONFIG_REGULATOR
+ kbase_platform_regulator_disable();
+#endif /* CONFIG_REGULATOR */
+regulator_init_fail:
+clock_init_fail:
+ osk_free(platform);
+
+ return MALI_ERROR_FUNCTION_FAILED;
+}
+
+void kbase_platform_term(kbase_device *kbdev)
+{
+ struct exynos_context *platform;
+
+ platform = (struct exynos_context *) kbdev->platform_context;
+
+#ifdef CONFIG_VITHAR_DVFS
+ kbase_platform_dvfs_term();
+#endif /* CONFIG_VITHAR_DVFS */
+
+ /* Disable power */
+ kbase_platform_cmu_pmu_control(kbdev, 0);
+#ifdef CONFIG_REGULATOR
+ kbase_platform_regulator_disable();
+#endif /* CONFIG_REGULATOR */
+ osk_free(kbdev->platform_context);
+ kbdev->platform_context = 0;
+ return;
+}
+
+#ifdef CONFIG_REGULATOR
+static struct regulator *g3d_regulator=NULL;
+#ifdef CONFIG_VITHAR_HWVER_R0P0
+static int mali_gpu_vol = 1250000; /* 1.25V @ 533 MHz */
+#else
+static int mali_gpu_vol = 1050000; /* 1.05V @ 266 MHz */
+#endif /* CONFIG_VITHAR_HWVER_R0P0 */
+#endif /* CONFIG_REGULATOR */
+
+#ifdef CONFIG_VITHAR_DVFS
+typedef struct _mali_dvfs_info{
+ unsigned int voltage;
+ unsigned int clock;
+ int min_threshold;
+ int max_threshold;
+}mali_dvfs_info;
+
+typedef struct _mali_dvfs_status_type{
+ kbase_device *kbdev;
+ int step;
+ int utilisation;
+ int keepcnt;
+ uint noutilcnt;
+#ifdef CONFIG_VITHAR_FREQ_LOCK
+ int upper_lock;
+ int under_lock;
+#endif /* CONFIG_VITHAR_FREQ_LOCK */
+}mali_dvfs_status;
+
+static struct workqueue_struct *mali_dvfs_wq = 0;
+int mali_dvfs_control=0;
+osk_spinlock mali_dvfs_spinlock;
+
+
+/*dvfs status*/
+static mali_dvfs_status mali_dvfs_status_current;
+static const mali_dvfs_info mali_dvfs_infotbl[MALI_DVFS_STEP]=
+{
+#if (MALI_DVFS_STEP == 6)
+ {937500/*750000*/, 100, 0, 40},
+	{937500/*812500*/, 160, 30, 60},
+ {937500, 266, 50, 70},
+ {1100000, 400, 60, 80},
+ {1150000, 450, 70, 90},
+ {1250000, 533, 80, 100}
+#elif (MALI_DVFS_STEP == 2)
+ {937500, 266, 0, 55},
+ {1250000, 533, 45, 100}
+#else /* MALI_DVFS_STEP */
+#error no table
+#endif /* MALI_DVFS_STEP */
+};
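+/* The overlapping min/max thresholds give the governor hysteresis. Worked
+ * example for the 6-step table: at step 2 (266 MHz) utilisation above 70%
+ * raises the step to 3 (400 MHz); to come back down, utilisation must stay
+ * below step 3's min_threshold of 60% for MALI_DVFS_KEEP_STAY_CNT polls, so
+ * the governor does not oscillate around a single boundary. */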
+
+static void kbase_platform_dvfs_set_vol(unsigned int vol)
+{
+ static int _vol = -1;
+
+ if (_vol == vol)
+ return;
+
+
+ switch(vol)
+ {
+ case 1250000:
+ case 1150000:
+ case 1100000:
+ case 1000000:
+ case 937500:
+ case 812500:
+ case 750000:
+ kbase_platform_set_voltage(NULL, vol);
+ break;
+ default:
+ return;
+ }
+
+ _vol = vol;
+
+#if MALI_DVFS_DEBUG
+	printk("dvfs_set_vol %duV\n", vol);
+#endif /* MALI_DVFS_DEBUG */
+ return;
+}
+
+#ifdef CONFIG_VITHAR_FREQ_LOCK
+int mali_get_dvfs_upper_locked_freq(void)
+{
+ unsigned int locked_level = -1;
+
+ osk_spinlock_lock(&mali_dvfs_spinlock);
+ locked_level = mali_dvfs_infotbl[mali_dvfs_status_current.upper_lock].clock;
+ osk_spinlock_unlock(&mali_dvfs_spinlock);
+
+ return locked_level;
+}
+
+int mali_get_dvfs_under_locked_freq(void)
+{
+ unsigned int locked_level = -1;
+
+ osk_spinlock_lock(&mali_dvfs_spinlock);
+ locked_level = mali_dvfs_infotbl[mali_dvfs_status_current.under_lock].clock;
+ osk_spinlock_unlock(&mali_dvfs_spinlock);
+
+ return locked_level;
+}
+
+
+int mali_dvfs_freq_lock(int level)
+{
+ osk_spinlock_lock(&mali_dvfs_spinlock);
+ mali_dvfs_status_current.upper_lock = level;
+ osk_spinlock_unlock(&mali_dvfs_spinlock);
+ return 0;
+}
+void mali_dvfs_freq_unlock(void)
+{
+ osk_spinlock_lock(&mali_dvfs_spinlock);
+ mali_dvfs_status_current.upper_lock = -1;
+ osk_spinlock_unlock(&mali_dvfs_spinlock);
+}
+
+int mali_dvfs_freq_under_lock(int level)
+{
+ osk_spinlock_lock(&mali_dvfs_spinlock);
+ mali_dvfs_status_current.under_lock = level;
+ osk_spinlock_unlock(&mali_dvfs_spinlock);
+ return 0;
+}
+void mali_dvfs_freq_under_unlock(void)
+{
+ osk_spinlock_lock(&mali_dvfs_spinlock);
+ mali_dvfs_status_current.under_lock = -1;
+ osk_spinlock_unlock(&mali_dvfs_spinlock);
+}
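+/* upper_lock/under_lock hold indices into mali_dvfs_infotbl; -1 means
+ * unlocked. set_lock_dvfs above maps MHz strings to these indices, e.g.
+ * writing "266" with the 6-step table calls mali_dvfs_freq_lock(2) and caps
+ * DVFS at 266 MHz. */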
+#endif /* CONFIG_VITHAR_FREQ_LOCK */
+
+void kbase_platform_dvfs_set_level(int level)
+{
+ static int level_prev=-1;
+
+ if (level == level_prev)
+ return;
+
+ if (WARN_ON(level >= MALI_DVFS_STEP))
+ panic("invalid level");
+
+ if (level > level_prev) {
+ kbase_platform_dvfs_set_vol(mali_dvfs_infotbl[level].voltage);
+ kbase_platform_dvfs_set_clock(mali_dvfs_status_current.kbdev, mali_dvfs_infotbl[level].clock);
+ }else{
+ kbase_platform_dvfs_set_clock(mali_dvfs_status_current.kbdev, mali_dvfs_infotbl[level].clock);
+ kbase_platform_dvfs_set_vol(mali_dvfs_infotbl[level].voltage);
+ }
+ level_prev = level;
+}
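+/* Note the ordering above: when raising the operating point the voltage is
+ * increased before the clock so the new frequency never runs undervolted;
+ * when lowering it, the clock is reduced first for the same reason. */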
+
+static void mali_dvfs_event_proc(struct work_struct *w)
+{
+ mali_dvfs_status dvfs_status;
+
+ osk_spinlock_lock(&mali_dvfs_spinlock);
+ dvfs_status = mali_dvfs_status_current;
+ osk_spinlock_unlock(&mali_dvfs_spinlock);
+
+#if MALI_DVFS_START_MAX_STEP
+	/* If load appears after a long idle period, jump straight to the second-highest step. */
+ if (dvfs_status.utilisation > 10 && dvfs_status.noutilcnt > 20) {
+ dvfs_status.step=MALI_DVFS_STEP-2;
+ dvfs_status.utilisation = 100;
+ }
+#endif /* MALI_DVFS_START_MAX_STEP */
+
+ if (dvfs_status.utilisation > mali_dvfs_infotbl[dvfs_status.step].max_threshold)
+ {
+ OSK_ASSERT(dvfs_status.step < MALI_DVFS_STEP);
+ dvfs_status.step++;
+ dvfs_status.keepcnt=0;
+ }else if ((dvfs_status.step>0) &&
+ (dvfs_status.utilisation < mali_dvfs_infotbl[dvfs_status.step].min_threshold)) {
+ dvfs_status.keepcnt++;
+ if (dvfs_status.keepcnt > MALI_DVFS_KEEP_STAY_CNT)
+ {
+ OSK_ASSERT(dvfs_status.step > 0);
+ dvfs_status.step--;
+ dvfs_status.keepcnt=0;
+ }
+ }else{
+ dvfs_status.keepcnt=0;
+ }
+
+#ifdef CONFIG_VITHAR_FREQ_LOCK
+	/* A lock level of 0 is valid; -1 means unlocked */
+	if ((dvfs_status.upper_lock >= 0)&&(dvfs_status.step > dvfs_status.upper_lock)) {
+		dvfs_status.step = dvfs_status.upper_lock;
+		if ((dvfs_status.under_lock >= 0)&&(dvfs_status.under_lock > dvfs_status.upper_lock)) {
+			dvfs_status.under_lock = dvfs_status.upper_lock;
+		}
+	}
+	if (dvfs_status.under_lock >= 0) {
+		if (dvfs_status.step < dvfs_status.under_lock)
+			dvfs_status.step = dvfs_status.under_lock;
+	}
+#endif /* CONFIG_VITHAR_FREQ_LOCK */
+
+ kbase_platform_dvfs_set_level(dvfs_status.step);
+
+#if MALI_DVFS_START_MAX_STEP
+ if (dvfs_status.utilisation == 0) {
+ dvfs_status.noutilcnt++;
+ } else {
+ dvfs_status.noutilcnt=0;
+ }
+#endif /* MALI_DVFS_START_MAX_STEP */
+
+#if MALI_DVFS_DEBUG
+ printk("[mali_dvfs] utilisation: %d step: %d[%d,%d] cnt: %d\n",
+ dvfs_status.utilisation, dvfs_status.step,
+ mali_dvfs_infotbl[dvfs_status.step].min_threshold,
+ mali_dvfs_infotbl[dvfs_status.step].max_threshold, dvfs_status.keepcnt);
+#endif /* MALI_DVFS_DEBUG */
+
+ osk_spinlock_lock(&mali_dvfs_spinlock);
+ mali_dvfs_status_current=dvfs_status;
+ osk_spinlock_unlock(&mali_dvfs_spinlock);
+
+}
+
+static DECLARE_WORK(mali_dvfs_work, mali_dvfs_event_proc);
+
+int kbase_platform_dvfs_event(struct kbase_device *kbdev, u32 utilisation)
+{
+ osk_spinlock_lock(&mali_dvfs_spinlock);
+ mali_dvfs_status_current.utilisation = utilisation;
+ osk_spinlock_unlock(&mali_dvfs_spinlock);
+ queue_work_on(0, mali_dvfs_wq, &mali_dvfs_work);
+
+	/* TODO: add error handling here */
+ return MALI_TRUE;
+}
+
+int kbase_platform_dvfs_get_control_status(void)
+{
+ return mali_dvfs_control;
+}
+
+int kbase_platform_dvfs_init(kbase_device *kbdev)
+{
+	/* Default status.
+	 * TODO: initialise these values using the appropriate query functions.
+	 */
+ if (!mali_dvfs_wq)
+ mali_dvfs_wq = create_singlethread_workqueue("mali_dvfs");
+
+ osk_spinlock_init(&mali_dvfs_spinlock,OSK_LOCK_ORDER_PM_METRICS);
+
+	/* TODO: add error handling here */
+ osk_spinlock_lock(&mali_dvfs_spinlock);
+ mali_dvfs_status_current.kbdev = kbdev;
+ mali_dvfs_status_current.utilisation = 100;
+ mali_dvfs_status_current.step = MALI_DVFS_STEP-1;
+#ifdef CONFIG_VITHAR_FREQ_LOCK
+ mali_dvfs_status_current.upper_lock = -1;
+ mali_dvfs_status_current.under_lock = -1;
+#endif /* CONFIG_VITHAR_FREQ_LOCK */
+ osk_spinlock_unlock(&mali_dvfs_spinlock);
+
+ return MALI_TRUE;
+}
+
+void kbase_platform_dvfs_term(void)
+{
+ if (mali_dvfs_wq)
+ destroy_workqueue(mali_dvfs_wq);
+
+ mali_dvfs_wq = NULL;
+}
+#endif /* CONFIG_VITHAR_DVFS */
+
+int kbase_platform_regulator_init(void)
+{
+#ifdef CONFIG_REGULATOR
+ g3d_regulator = regulator_get(NULL, "vdd_g3d");
+ if(IS_ERR(g3d_regulator))
+ {
+ printk("[kbase_platform_regulator_init] failed to get vithar regulator\n");
+ return -1;
+ }
+
+ if(regulator_enable(g3d_regulator) != 0)
+ {
+ printk("[kbase_platform_regulator_init] failed to enable vithar regulator\n");
+ return -1;
+ }
+
+ if(regulator_set_voltage(g3d_regulator, mali_gpu_vol, mali_gpu_vol) != 0)
+ {
+ printk("[kbase_platform_regulator_init] failed to set vithar operating voltage [%d]\n", mali_gpu_vol);
+ return -1;
+ }
+#endif /* CONFIG_REGULATOR */
+
+ return 0;
+}
+
+int kbase_platform_regulator_disable(void)
+{
+#ifdef CONFIG_REGULATOR
+ if(!g3d_regulator)
+ {
+ printk("[kbase_platform_regulator_disable] g3d_regulator is not initialized\n");
+ return -1;
+ }
+
+ if(regulator_disable(g3d_regulator) != 0)
+ {
+ printk("[kbase_platform_regulator_disable] failed to disable g3d regulator\n");
+ return -1;
+ }
+#endif /* CONFIG_REGULATOR */
+ return 0;
+}
+
+int kbase_platform_regulator_enable(struct device *dev)
+{
+#ifdef CONFIG_REGULATOR
+ if(!g3d_regulator)
+ {
+ printk("[kbase_platform_regulator_enable] g3d_regulator is not initialized\n");
+ return -1;
+ }
+
+ if(regulator_enable(g3d_regulator) != 0)
+ {
+ printk("[kbase_platform_regulator_enable] failed to enable g3d regulator\n");
+ return -1;
+ }
+#endif /* CONFIG_REGULATOR */
+ return 0;
+}
+
+int kbase_platform_get_default_voltage(struct device *dev, int *vol)
+{
+#ifdef CONFIG_REGULATOR
+ *vol = mali_gpu_vol;
+#else /* CONFIG_REGULATOR */
+ *vol = 0;
+#endif /* CONFIG_REGULATOR */
+ return 0;
+}
+
+#ifdef CONFIG_VITHAR_DEBUG_SYS
+int kbase_platform_get_voltage(struct device *dev, int *vol)
+{
+#ifdef CONFIG_REGULATOR
+ if(!g3d_regulator)
+ {
+ printk("[kbase_platform_get_voltage] g3d_regulator is not initialized\n");
+ return -1;
+ }
+
+ *vol = regulator_get_voltage(g3d_regulator);
+#else /* CONFIG_REGULATOR */
+ *vol = 0;
+#endif /* CONFIG_REGULATOR */
+ return 0;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS */
+
+#if defined CONFIG_VITHAR_DEBUG_SYS || defined CONFIG_VITHAR_DVFS
+int kbase_platform_set_voltage(struct device *dev, int vol)
+{
+#ifdef CONFIG_REGULATOR
+ if(!g3d_regulator)
+ {
+ printk("[kbase_platform_set_voltage] g3d_regulator is not initialized\n");
+ return -1;
+ }
+
+ if(regulator_set_voltage(g3d_regulator, vol, vol) != 0)
+ {
+ printk("[kbase_platform_set_voltage] failed to set voltage\n");
+ return -1;
+ }
+#endif /* CONFIG_REGULATOR */
+ return 0;
+}
+#endif /* CONFIG_VITHAR_DEBUG_SYS || CONFIG_VITHAR_DVFS */
+
+void kbase_platform_dvfs_set_clock(kbase_device *kbdev, int freq)
+{
+ static struct clk * mout_gpll = NULL;
+ static struct clk * fin_gpll = NULL;
+ static struct clk * fout_gpll = NULL;
+ static int _freq = -1;
+ static unsigned long gpll_rate_prev = 0;
+ unsigned long gpll_rate = 0, aclk_400_rate = 0;
+ unsigned long tmp = 0;
+ struct exynos_context *platform;
+
+
+ if (!kbdev)
+ panic("oops");
+
+ platform = (struct exynos_context *) kbdev->platform_context;
+ if(NULL == platform)
+ {
+ panic("oops");
+ }
+
+ if (mout_gpll==NULL) {
+ mout_gpll = clk_get(kbdev->osdev.dev, "mout_gpll");
+ fin_gpll = clk_get(kbdev->osdev.dev, "ext_xtal");
+ fout_gpll = clk_get(kbdev->osdev.dev, "fout_gpll");
+ if(IS_ERR(mout_gpll) || IS_ERR(fin_gpll) || IS_ERR(fout_gpll))
+ panic("clk_get ERROR");
+ }
+
+ if(platform->sclk_g3d == 0)
+ return;
+
+ if (freq == _freq)
+ return;
+
+ switch(freq)
+ {
+ case 533:
+ gpll_rate = 533000000;
+ aclk_400_rate = 533000000;
+ break;
+ case 450:
+ gpll_rate = 450000000;
+ aclk_400_rate = 450000000;
+ break;
+ case 400:
+ gpll_rate = 800000000;
+ aclk_400_rate = 400000000;
+ break;
+ case 266:
+ gpll_rate = 800000000;
+ aclk_400_rate = 267000000;
+ break;
+ case 160:
+ gpll_rate = 800000000;
+ aclk_400_rate = 160000000;
+ break;
+ case 100:
+ gpll_rate = 800000000;
+ aclk_400_rate = 100000000;
+ break;
+ default:
+ return;
+ }
+
+	/* If the GPLL rate changed, reprogram the PLL and wait for it to lock */
+ if( gpll_rate != gpll_rate_prev) {
+		/* for a stable clock input */
+ clk_set_rate(platform->sclk_g3d, 100000000);
+ clk_set_parent(mout_gpll, fin_gpll);
+
+ /*change gpll*/
+ clk_set_rate( fout_gpll, gpll_rate );
+
+ /*restore parent*/
+ clk_set_parent(mout_gpll, fout_gpll);
+ gpll_rate_prev = gpll_rate;
+ }
+
+ _freq = freq;
+ clk_set_rate(platform->sclk_g3d, aclk_400_rate);
+
+	/* Wait until the clock divider status shows the clock is stable */
+ do {
+ tmp = __raw_readl(/*EXYNOS5_CLKDIV_STAT_TOP0*/EXYNOS_CLKREG(0x10610));
+ } while (tmp & 0x1000000);
+#ifdef CONFIG_VITHAR_DVFS
+#if MALI_DVFS_DEBUG
+ printk("aclk400 %u[%d]\n", (unsigned int)clk_get_rate(platform->sclk_g3d),mali_dvfs_status_current.utilisation);
+	printk("dvfs_set_clock GPLL: %lu Hz, ACLK_400: %lu Hz\n", gpll_rate, aclk_400_rate);
+#endif /* MALI_DVFS_DEBUG */
+#endif /* CONFIG_VITHAR_DVFS */
+ return;
+}
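+/* All operating points at or below 400 MHz are derived from an 800 MHz GPLL
+ * through the sclk_g3d divider (e.g. 800/2 = 400, 800/3 ~= 267, 800/5 = 160,
+ * 800/8 = 100 MHz), while 533 and 450 MHz reprogram the GPLL itself - which
+ * is why only those switches pay the PLL relock sequence above. */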
+
+/**
+ * Exynos5 alternative dvfs_callback implementation.
+ * Instead of:
+ *	action = kbase_pm_get_dvfs_action(kbdev);
+ * use this:
+ *	kbase_platform_dvfs_event(kbdev, kbase_pm_get_dvfs_utilisation(kbdev));
+ */
+
+#ifdef CONFIG_VITHAR_DVFS
+int kbase_pm_get_dvfs_utilisation(kbase_device *kbdev)
+{
+ int utilisation=0;
+ osk_ticks now = osk_time_now();
+
+ OSK_ASSERT(kbdev != NULL);
+
+ osk_spinlock_irq_lock(&kbdev->pm.metrics.lock);
+
+ if (kbdev->pm.metrics.gpu_active)
+ {
+ kbdev->pm.metrics.time_busy += osk_time_elapsed(kbdev->pm.metrics.time_period_start, now);
+ kbdev->pm.metrics.time_period_start = now;
+ }
+ else
+ {
+ kbdev->pm.metrics.time_idle += osk_time_elapsed(kbdev->pm.metrics.time_period_start, now);
+ kbdev->pm.metrics.time_period_start = now;
+ }
+
+ if (kbdev->pm.metrics.time_idle + kbdev->pm.metrics.time_busy == 0)
+ {
+		/* No data, so report zero utilisation */
+ goto out;
+ }
+
+ utilisation = (100*kbdev->pm.metrics.time_busy) / (kbdev->pm.metrics.time_idle + kbdev->pm.metrics.time_busy);
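+	/* e.g. 30 ms busy and 10 ms idle in the window gives (100*30)/40 = 75 */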
+
+out:
+
+ kbdev->pm.metrics.time_idle = 0;
+ kbdev->pm.metrics.time_busy = 0;
+
+ osk_spinlock_irq_unlock(&kbdev->pm.metrics.lock);
+
+ return utilisation;
+}
+#endif
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#include <linux/ioport.h>
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_kbase_defs.h>
+#include <kbase/src/linux/mali_kbase_config_linux.h>
+#include <ump/ump_common.h>
+
+#include "mali_kbase_cpu_vexpress.h"
+
+/* Versatile Express (VE) configuration defaults shared between config_attributes[]
+ * and config_attributes_hw_issue_8408[]. Settings are not shared for
+ * JS_HARD_STOP_TICKS_SS and JS_RESET_TICKS_SS.
+ */
+#define KBASE_VE_MEMORY_PER_PROCESS_LIMIT 512 * 1024 * 1024UL /* 512MB */
+#define KBASE_VE_UMP_DEVICE UMP_DEVICE_Z_SHIFT
+#define KBASE_VE_MEMORY_OS_SHARED_MAX 768 * 1024 * 1024UL /* 768MB */
+#define KBASE_VE_MEMORY_OS_SHARED_PERF_GPU KBASE_MEM_PERF_SLOW
+#define KBASE_VE_GPU_FREQ_KHZ_MAX 5000
+#define KBASE_VE_GPU_FREQ_KHZ_MIN 5000
+
+#define KBASE_VE_JS_SCHEDULING_TICK_NS_DEBUG 15000000u /* 15ms, an aggressive tick for testing purposes. This will reduce performance significantly */
+#define KBASE_VE_JS_SOFT_STOP_TICKS_DEBUG 1 /* between 15ms and 30ms before soft-stop a job */
+#define KBASE_VE_JS_HARD_STOP_TICKS_SS_DEBUG 333 /* 5s before hard-stop */
+#define KBASE_VE_JS_HARD_STOP_TICKS_SS_8401_DEBUG 2000 /* 30s before hard-stop, for a certain GLES2 test at 128x128 (bound by combined vertex+tiler job) - for issue 8401 */
+#define KBASE_VE_JS_HARD_STOP_TICKS_NSS_DEBUG 100000 /* 1500s (25mins) before NSS hard-stop */
+#define KBASE_VE_JS_RESET_TICKS_SS_DEBUG 500 /* 7.5s before resetting GPU */
+#define KBASE_VE_JS_RESET_TICKS_SS_8401_DEBUG 3000 /* 45s before resetting GPU, for a certain GLES2 test at 128x128 (bound by combined vertex+tiler job) - for issue 8401 */
+#define KBASE_VE_JS_RESET_TICKS_NSS_DEBUG 100166 /* 1502s before resetting GPU */
+
+#define KBASE_VE_JS_SCHEDULING_TICK_NS 2500000000u /* 2.5s */
+#define KBASE_VE_JS_SOFT_STOP_TICKS 1 /* 2.5s before soft-stop a job */
+#define KBASE_VE_JS_HARD_STOP_TICKS_SS 2 /* 5s before hard-stop */
+#define KBASE_VE_JS_HARD_STOP_TICKS_SS_8401 12 /* 30s before hard-stop, for a certain GLES2 test at 128x128 (bound by combined vertex+tiler job) - for issue 8401 */
+#define KBASE_VE_JS_HARD_STOP_TICKS_NSS 600 /* 1500s before NSS hard-stop */
+#define KBASE_VE_JS_RESET_TICKS_SS 3 /* 7.5s before resetting GPU */
+#define KBASE_VE_JS_RESET_TICKS_SS_8401 18 /* 45s before resetting GPU, for a certain GLES2 test at 128x128 (bound by combined vertex+tiler job) - for issue 8401 */
+#define KBASE_VE_JS_RESET_TICKS_NSS 601 /* 1502s before resetting GPU */
+
+#define KBASE_VE_JS_RESET_TIMEOUT_MS 3000 /* 3s before cancelling stuck jobs */
+#define KBASE_VE_JS_CTX_TIMESLICE_NS 1000000 /* 1ms - an aggressive timeslice for testing purposes (causes lots of scheduling out for >4 ctxs) */
+#define KBASE_VE_SECURE_BUT_LOSS_OF_PERFORMANCE (uintptr_t)MALI_FALSE /* By default we prefer performance over security on r0p0-15dev0 and earlier */
+#define KBASE_VE_POWER_MANAGEMENT_CALLBACKS (uintptr_t)&pm_callbacks
+#define KBASE_VE_MEMORY_RESOURCE_ZBT (uintptr_t)&lt_zbt
+#define KBASE_VE_MEMORY_RESOURCE_DDR (uintptr_t)&lt_ddr
+#define KBASE_VE_CPU_SPEED_FUNC (uintptr_t)&kbase_get_vexpress_cpu_clock_speed
+
+/* Set this to 1 to enable dedicated memory banks */
+#define T6F1_ZBT_DDR_ENABLED 0
+#define HARD_RESET_AT_POWER_OFF 0
+
+static kbase_io_resources io_resources =
+{
+ .job_irq_number = 68,
+ .mmu_irq_number = 69,
+ .gpu_irq_number = 70,
+ .io_memory_region =
+ {
+ .start = 0xFC010000,
+ .end = 0xFC010000 + (4096 * 5) - 1
+ }
+};
+
+#if T6F1_ZBT_DDR_ENABLED
+
+static kbase_attribute lt_zbt_attrs[] =
+{
+ {
+ KBASE_MEM_ATTR_PERF_CPU,
+ KBASE_MEM_PERF_SLOW
+ },
+ {
+ KBASE_MEM_ATTR_END,
+ 0
+ }
+};
+
+static kbase_memory_resource lt_zbt =
+{
+ .base = 0xFD000000,
+ .size = 16 * 1024 * 1024UL /* 16MB */,
+ .attributes = lt_zbt_attrs,
+ .name = "T604 ZBT memory"
+};
+
+
+static kbase_attribute lt_ddr_attrs[] =
+{
+ {
+ KBASE_MEM_ATTR_PERF_CPU,
+ KBASE_MEM_PERF_SLOW
+ },
+ {
+ KBASE_MEM_ATTR_END,
+ 0
+ }
+};
+
+static kbase_memory_resource lt_ddr =
+{
+ .base = 0xE0000000,
+ .size = 256 * 1024 * 1024UL /* 256MB */,
+ .attributes = lt_ddr_attrs,
+ .name = "T604 DDR memory"
+};
+
+#endif /* T6F1_ZBT_DDR_ENABLED */
+
+static int pm_callback_power_on(kbase_device *kbdev)
+{
+ /* Nothing is needed on VExpress, but we may have destroyed GPU state (if the below HARD_RESET code is active) */
+ return 1;
+}
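+/* Returning 1 here appears to tell the core driver that GPU state may have
+ * been lost while powered off (see HARD_RESET_AT_POWER_OFF below), so the
+ * GPU registers must be reinitialised on power up. */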
+
+static void pm_callback_power_off(kbase_device *kbdev)
+{
+#if HARD_RESET_AT_POWER_OFF
+ /* Cause a GPU hard reset to test whether we have actually idled the GPU
+ * and that we properly reconfigure the GPU on power up.
+ * Usually this would be dangerous, but if the GPU is working correctly it should
+ * be completely safe as the GPU should not be active at this point.
+ * However this is disabled normally because it will most likely interfere with
+ * bus logging etc.
+ */
+ kbase_os_reg_write(kbdev, GPU_CONTROL_REG(GPU_COMMAND), GPU_COMMAND_HARD_RESET);
+#endif
+}
+
+static kbase_pm_callback_conf pm_callbacks =
+{
+ .power_on_callback = pm_callback_power_on,
+ .power_off_callback = pm_callback_power_off
+};
+
+/* Please keep table config_attributes in sync with config_attributes_hw_issue_8408 */
+static kbase_attribute config_attributes[] =
+{
+ {
+ KBASE_CONFIG_ATTR_MEMORY_PER_PROCESS_LIMIT,
+ KBASE_VE_MEMORY_PER_PROCESS_LIMIT
+ },
+ {
+ KBASE_CONFIG_ATTR_UMP_DEVICE,
+ KBASE_VE_UMP_DEVICE
+ },
+
+ {
+ KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_MAX,
+ KBASE_VE_MEMORY_OS_SHARED_MAX
+ },
+
+ {
+ KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_PERF_GPU,
+ KBASE_VE_MEMORY_OS_SHARED_PERF_GPU
+ },
+
+#if T6F1_ZBT_DDR_ENABLED
+ {
+ KBASE_CONFIG_ATTR_MEMORY_RESOURCE,
+ KBASE_VE_MEMORY_RESOURCE_ZBT
+ },
+
+ {
+ KBASE_CONFIG_ATTR_MEMORY_RESOURCE,
+ KBASE_VE_MEMORY_RESOURCE_DDR
+ },
+#endif /* T6F1_ZBT_DDR_ENABLED */
+
+ {
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MAX,
+ KBASE_VE_GPU_FREQ_KHZ_MAX
+ },
+
+ {
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MIN,
+ KBASE_VE_GPU_FREQ_KHZ_MIN
+ },
+
+#if MALI_DEBUG
+/* Use more aggressive scheduling timeouts in debug builds for testing purposes */
+ {
+ KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS,
+ KBASE_VE_JS_SCHEDULING_TICK_NS_DEBUG
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS,
+ KBASE_VE_JS_SOFT_STOP_TICKS_DEBUG
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS,
+ KBASE_VE_JS_HARD_STOP_TICKS_SS_DEBUG
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS,
+ KBASE_VE_JS_HARD_STOP_TICKS_NSS_DEBUG
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_RESET_TICKS_SS,
+ KBASE_VE_JS_RESET_TICKS_SS_DEBUG
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_RESET_TICKS_NSS,
+ KBASE_VE_JS_RESET_TICKS_NSS_DEBUG
+ },
+#else /* MALI_DEBUG */
+/* In release builds same as the defaults but scaled for 5MHz FPGA */
+ {
+ KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS,
+ KBASE_VE_JS_SCHEDULING_TICK_NS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS,
+ KBASE_VE_JS_SOFT_STOP_TICKS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS,
+ KBASE_VE_JS_HARD_STOP_TICKS_SS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS,
+ KBASE_VE_JS_HARD_STOP_TICKS_NSS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_RESET_TICKS_SS,
+ KBASE_VE_JS_RESET_TICKS_SS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_RESET_TICKS_NSS,
+ KBASE_VE_JS_RESET_TICKS_NSS
+ },
+#endif /* MALI_DEBUG */
+ {
+ KBASE_CONFIG_ATTR_JS_RESET_TIMEOUT_MS,
+ KBASE_VE_JS_RESET_TIMEOUT_MS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_CTX_TIMESLICE_NS,
+ KBASE_VE_JS_CTX_TIMESLICE_NS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_POWER_MANAGEMENT_CALLBACKS,
+ KBASE_VE_POWER_MANAGEMENT_CALLBACKS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_CPU_SPEED_FUNC,
+ KBASE_VE_CPU_SPEED_FUNC
+ },
+
+ {
+ KBASE_CONFIG_ATTR_SECURE_BUT_LOSS_OF_PERFORMANCE,
+ KBASE_VE_SECURE_BUT_LOSS_OF_PERFORMANCE
+ },
+
+ {
+ KBASE_CONFIG_ATTR_GPU_IRQ_THROTTLE_TIME_US,
+ 20
+ },
+
+ {
+ KBASE_CONFIG_ATTR_END,
+ 0
+ }
+};
+
+/* As the config_attributes array above, except with the different settings
+ * for JS_HARD_STOP_TICKS_SS and JS_RESET_TICKS_SS that are needed for
+ * BASE_HW_ISSUE_8408.
+ */
+kbase_attribute config_attributes_hw_issue_8408[] =
+{
+ {
+ KBASE_CONFIG_ATTR_MEMORY_PER_PROCESS_LIMIT,
+ KBASE_VE_MEMORY_PER_PROCESS_LIMIT
+ },
+ {
+ KBASE_CONFIG_ATTR_UMP_DEVICE,
+ KBASE_VE_UMP_DEVICE
+ },
+
+ {
+ KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_MAX,
+ KBASE_VE_MEMORY_OS_SHARED_MAX
+ },
+
+ {
+ KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_PERF_GPU,
+ KBASE_VE_MEMORY_OS_SHARED_PERF_GPU
+ },
+
+#if T6F1_ZBT_DDR_ENABLED
+ {
+ KBASE_CONFIG_ATTR_MEMORY_RESOURCE,
+ KBASE_VE_MEMORY_RESOURCE_ZBT
+ },
+
+ {
+ KBASE_CONFIG_ATTR_MEMORY_RESOURCE,
+ KBASE_VE_MEMORY_RESOURCE_DDR
+ },
+#endif /* T6F1_ZBT_DDR_ENABLED */
+
+ {
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MAX,
+ KBASE_VE_GPU_FREQ_KHZ_MAX
+ },
+
+ {
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MIN,
+ KBASE_VE_GPU_FREQ_KHZ_MIN
+ },
+
+#if MALI_DEBUG
+/* Use more aggressive scheduling timeouts in debug builds for testing purposes */
+ {
+ KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS,
+ KBASE_VE_JS_SCHEDULING_TICK_NS_DEBUG
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS,
+ KBASE_VE_JS_SOFT_STOP_TICKS_DEBUG
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS,
+ KBASE_VE_JS_HARD_STOP_TICKS_SS_8401_DEBUG
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS,
+ KBASE_VE_JS_HARD_STOP_TICKS_NSS_DEBUG
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_RESET_TICKS_SS,
+ KBASE_VE_JS_RESET_TICKS_SS_8401_DEBUG
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_RESET_TICKS_NSS,
+ KBASE_VE_JS_RESET_TICKS_NSS_DEBUG
+ },
+#else /* MALI_DEBUG */
+/* In release builds same as the defaults but scaled for 5MHz FPGA */
+ {
+ KBASE_CONFIG_ATTR_JS_SCHEDULING_TICK_NS,
+ KBASE_VE_JS_SCHEDULING_TICK_NS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS,
+ KBASE_VE_JS_SOFT_STOP_TICKS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS,
+ KBASE_VE_JS_HARD_STOP_TICKS_SS_8401
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS,
+ KBASE_VE_JS_HARD_STOP_TICKS_NSS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_RESET_TICKS_SS,
+ KBASE_VE_JS_RESET_TICKS_SS_8401
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_RESET_TICKS_NSS,
+ KBASE_VE_JS_RESET_TICKS_NSS
+ },
+#endif /* MALI_DEBUG */
+ {
+ KBASE_CONFIG_ATTR_JS_RESET_TIMEOUT_MS,
+ KBASE_VE_JS_RESET_TIMEOUT_MS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_JS_CTX_TIMESLICE_NS,
+ KBASE_VE_JS_CTX_TIMESLICE_NS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_POWER_MANAGEMENT_CALLBACKS,
+ KBASE_VE_POWER_MANAGEMENT_CALLBACKS
+ },
+
+ {
+ KBASE_CONFIG_ATTR_CPU_SPEED_FUNC,
+ KBASE_VE_CPU_SPEED_FUNC
+ },
+
+ {
+ KBASE_CONFIG_ATTR_SECURE_BUT_LOSS_OF_PERFORMANCE,
+ KBASE_VE_SECURE_BUT_LOSS_OF_PERFORMANCE
+ },
+
+ {
+ KBASE_CONFIG_ATTR_END,
+ 0
+ }
+};
+
+kbase_platform_config platform_config =
+{
+ .attributes = config_attributes,
+ .io_resources = &io_resources,
+ .midgard_type = KBASE_MALI_T6F1
+};
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#include <linux/io.h>
+#include <kbase/src/common/mali_kbase.h>
+#include "mali_kbase_cpu_vexpress.h"
+
+#define HZ_IN_MHZ (1000000)
+
+#define CORETILE_EXPRESS_A9X4_SCC_START (0x100E2000)
+#define MOTHERBOARD_SYS_CFG_START (0x10000000)
+#define SYS_CFGDATA_OFFSET (0x000000A0)
+#define SYS_CFGCTRL_OFFSET (0x000000A4)
+#define SYS_CFGSTAT_OFFSET (0x000000A8)
+
+#define SYS_CFGCTRL_START_BIT_VALUE (1u << 31)
+#define READ_REG_BIT_VALUE (0 << 30)
+#define DCC_DEFAULT_BIT_VALUE (0 << 26)
+#define SYS_CFG_OSC_FUNC_BIT_VALUE (1 << 20)
+#define SITE_DEFAULT_BIT_VALUE (1 << 16)
+#define BOARD_STACK_POS_DEFAULT_BIT_VALUE (0 << 12)
+#define DEVICE_DEFAULT_BIT_VALUE (2 << 0)
+#define SYS_CFG_COMPLETE_BIT_VALUE (1 << 0)
+#define SYS_CFG_ERROR_BIT_VALUE (1 << 1)
+
+#define FEED_REG_BIT_MASK (0x0F)
+#define FCLK_PA_DIVIDE_BIT_SHIFT (0x03)
+#define FCLK_PB_DIVIDE_BIT_SHIFT (0x07)
+#define FCLK_PC_DIVIDE_BIT_SHIFT (0x0B)
+#define AXICLK_PA_DIVIDE_BIT_SHIFT (0x0F)
+#define AXICLK_PB_DIVIDE_BIT_SHIFT (0x13)
+
+#define IS_SINGLE_BIT_SET(val,pos) (val&(1<<pos))
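+/* Illustrative only (the real divider values are read from the SCC register
+ * below): with CFGRW0[0] set, PA = 3 in CFGRW0[6:3] and PB = 1 in
+ * CFGRW0[10:7], a 24 MHz OSC2 gives FCLK = 24 * (3+1) / (1+1) = 48 MHz. */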
+
+/**
+ * kbase_get_vexpress_cpu_clock_speed
+ * @brief Retrieves the CPU clock speed.
+ * The implementation is platform specific.
+ * @param[out] cpu_clock - the CPU clock speed in MHz
+ * @return 0 on success, 1 otherwise
+*/
+int kbase_get_vexpress_cpu_clock_speed(u32* cpu_clock)
+{
+ int result = 0;
+ u32 reg_val = 0;
+ u32 osc2_value = 0;
+ u32 pa_divide = 0;
+ u32 pb_divide = 0;
+ u32 pc_divide = 0;
+ void* volatile pSysCfgReg = 0;
+ void* volatile pSCCReg = 0;
+
+	/* Initialise the value in case something goes wrong */
+ *cpu_clock = 0;
+
+ /* Map CPU register into virtual memory */
+ pSysCfgReg = ioremap(MOTHERBOARD_SYS_CFG_START, 0x1000);
+ if (pSysCfgReg == NULL)
+ {
+ result = 1;
+
+ goto pSysCfgReg_map_failed;
+ }
+
+ pSCCReg = ioremap(CORETILE_EXPRESS_A9X4_SCC_START, 0x1000);
+ if (pSCCReg == NULL)
+ {
+ result = 1;
+
+ goto pSCCReg_map_failed;
+ }
+
+ /*Read SYS regs - OSC2*/
+ reg_val = readl(pSysCfgReg + SYS_CFGCTRL_OFFSET);
+
+	/* Verify that no other request is in progress */
+ if(!(reg_val&SYS_CFGCTRL_START_BIT_VALUE ))
+ {
+		/* Reset the CFGSTAT reg */
+ writel(0,(pSysCfgReg + SYS_CFGSTAT_OFFSET));
+
+ writel( SYS_CFGCTRL_START_BIT_VALUE | READ_REG_BIT_VALUE | DCC_DEFAULT_BIT_VALUE |
+ SYS_CFG_OSC_FUNC_BIT_VALUE | SITE_DEFAULT_BIT_VALUE |
+ BOARD_STACK_POS_DEFAULT_BIT_VALUE | DEVICE_DEFAULT_BIT_VALUE,
+ (pSysCfgReg + SYS_CFGCTRL_OFFSET));
+ /* Wait for the transaction to complete */
+ while( !(readl(pSysCfgReg + SYS_CFGSTAT_OFFSET)&SYS_CFG_COMPLETE_BIT_VALUE));
+		/* Read the SYS_CFGSTAT register to get the status of the submitted transaction */
+ reg_val = readl(pSysCfgReg + SYS_CFGSTAT_OFFSET);
+
+ /*------------------------------------------------------------------------------------------*/
+ /* Check for possible errors*/
+ if(reg_val & SYS_CFG_ERROR_BIT_VALUE)
+ {
+ /* Error while setting register*/
+ result = 1;
+ }
+ else
+ {
+ osc2_value = readl(pSysCfgReg + SYS_CFGDATA_OFFSET );
+ /* Read the SCC CFGRW0 register*/
+ reg_val = readl(pSCCReg);
+
+			/*
+			 Select the appropriate feed:
+			 CFGRW0[0] - CLKOB
+			 CFGRW0[1] - CLKOC
+			 CFGRW0[2] - FACLK (CLKB from the AXICLK PLL)
+			*/
+ /* Calculate the FCLK*/
+ if(IS_SINGLE_BIT_SET(reg_val,0)) /*CFGRW0[0] - CLKOB*/
+ {
+ /* CFGRW0[6:3]*/
+ pa_divide =((reg_val&(FEED_REG_BIT_MASK<<FCLK_PA_DIVIDE_BIT_SHIFT))>>FCLK_PA_DIVIDE_BIT_SHIFT);
+ /* CFGRW0[10:7]*/
+ pb_divide =((reg_val&(FEED_REG_BIT_MASK<<FCLK_PB_DIVIDE_BIT_SHIFT))>>FCLK_PB_DIVIDE_BIT_SHIFT);
+ *cpu_clock = osc2_value * (pa_divide + 1) / (pb_divide +1);
+ }
+ else
+ {
+ if(IS_SINGLE_BIT_SET(reg_val,1))/*CFGRW0[1] - CLKOC*/
+ {
+ /* CFGRW0[6:3]*/
+ pa_divide = ((reg_val&(FEED_REG_BIT_MASK<<FCLK_PA_DIVIDE_BIT_SHIFT))>>FCLK_PA_DIVIDE_BIT_SHIFT);
+ /* CFGRW0[14:11]*/
+ pc_divide = ((reg_val&(FEED_REG_BIT_MASK<<FCLK_PC_DIVIDE_BIT_SHIFT)) >> FCLK_PC_DIVIDE_BIT_SHIFT);
+ *cpu_clock = osc2_value * (pa_divide + 1) / (pc_divide + 1);
+ }
+ else
+ if(IS_SINGLE_BIT_SET(reg_val,2))/*CFGRW0[2] - FACLK*/
+ {
+ /* CFGRW0[18:15]*/
+ pa_divide = ((reg_val&(FEED_REG_BIT_MASK<<AXICLK_PA_DIVIDE_BIT_SHIFT)) >>AXICLK_PA_DIVIDE_BIT_SHIFT);
+ /* CFGRW0[22:19]*/
+ pb_divide = ((reg_val&(FEED_REG_BIT_MASK<<AXICLK_PB_DIVIDE_BIT_SHIFT))>>AXICLK_PB_DIVIDE_BIT_SHIFT);
+ *cpu_clock = osc2_value * (pa_divide + 1) / (pb_divide +1);
+ }
+ else
+ {
+ result = 1;
+ }
+ }
+ }
+ }
+ else
+ {
+ result = 1;
+ }
+
+	/* Convert the result from Hz to MHz. */
+ *cpu_clock /= HZ_IN_MHZ;
+
+ /* Unmap memory*/
+ iounmap(pSCCReg);
+
+pSCCReg_map_failed:
+ iounmap(pSysCfgReg);
+
+pSysCfgReg_map_failed:
+ return result;
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _KBASE_CPU_VEXPRESS_H_
+#define _KBASE_CPU_VEXPRESS_H_
+
+/**
+ * Versatile Express implementation of @ref kbase_cpuprops_clock_speed_function.
+ */
+int kbase_get_vexpress_cpu_clock_speed(u32* cpu_clock);
+
+#endif /* _KBASE_CPU_VEXPRESS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+#include <kbase/src/linux/mali_kbase_config_linux.h>
+#include <osk/mali_osk.h>
+
+#if !MALI_LICENSE_IS_GPL || MALI_FAKE_PLATFORM_DEVICE
+
+void kbasep_config_parse_io_resources(const kbase_io_resources *io_resources, struct resource *linux_resources)
+{
+ OSK_ASSERT(io_resources != NULL);
+ OSK_ASSERT(linux_resources != NULL);
+
+ OSK_MEMSET(linux_resources, 0, PLATFORM_CONFIG_RESOURCE_COUNT * sizeof(struct resource));
+
+ linux_resources[0].start = io_resources->io_memory_region.start;
+ linux_resources[0].end = io_resources->io_memory_region.end;
+ linux_resources[0].flags = IORESOURCE_MEM;
+
+ linux_resources[1].start = linux_resources[1].end = io_resources->job_irq_number;
+ linux_resources[1].flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL;
+
+ linux_resources[2].start = linux_resources[2].end = io_resources->mmu_irq_number;
+ linux_resources[2].flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL;
+
+ linux_resources[3].start = linux_resources[3].end = io_resources->gpu_irq_number;
+ linux_resources[3].flags = IORESOURCE_IRQ | IORESOURCE_IRQ_HIGHLEVEL;
+}
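+/* The resulting array would typically be handed to the platform-device
+ * registration code (e.g. platform_device_add_resources()) when the fake
+ * platform device is created - an assumption based on the
+ * MALI_FAKE_PLATFORM_DEVICE guard, not something done in this file. */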
+
+#endif /* !MALI_LICENSE_IS_GPL || MALI_FAKE_PLATFORM_DEVICE */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _KBASE_CONFIG_LINUX_H_
+#define _KBASE_CONFIG_LINUX_H_
+
+#include <kbase/mali_kbase_config.h>
+#include <linux/ioport.h>
+
+#define PLATFORM_CONFIG_RESOURCE_COUNT 4
+#define PLATFORM_CONFIG_IRQ_RES_COUNT 3
+
+#if !MALI_LICENSE_IS_GPL || MALI_FAKE_PLATFORM_DEVICE
+/**
+ * @brief Convert data in kbase_io_resources struct to Linux-specific resources
+ *
+ * The function converts the data in a kbase_io_resources struct to an array of Linux resource structures. Note that it
+ * assumes the size of the linux_resources array is at least PLATFORM_CONFIG_RESOURCE_COUNT.
+ * Resources are put in a fixed order: I/O memory region, job IRQ, MMU IRQ, GPU IRQ.
+ *
+ * @param[in] io_resource Input IO resource data
+ * @param[out] linux_resources Pointer to output array of Linux resource structures
+ */
+void kbasep_config_parse_io_resources(const kbase_io_resources *io_resource, struct resource *linux_resources);
+#endif /* !MALI_LICENSE_IS_GPL || MALI_FAKE_PLATFORM_DEVICE */
+
+
+#endif /* _KBASE_CONFIG_LINUX_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_core_linux.c
+ * Base kernel driver init.
+ */
+
+#include <osk/mali_osk.h>
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/common/mali_kbase_uku.h>
+#include <kbase/src/common/mali_midg_regmap.h>
+#include <kbase/src/linux/mali_kbase_mem_linux.h>
+#include <kbase/src/linux/mali_kbase_config_linux.h>
+#include <uk/mali_ukk.h>
+#if MALI_NO_MALI
+#include "mali_kbase_model_linux.h"
+#endif
+
+#ifdef CONFIG_KDS
+#include <kds/include/linux/kds.h>
+#include <linux/anon_inodes.h>
+#include <linux/syscalls.h>
+#endif
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/poll.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#if MALI_LICENSE_IS_GPL
+#include <linux/platform_device.h>
+#include <linux/miscdevice.h>
+#endif
+#include <linux/list.h>
+#include <linux/semaphore.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/compat.h> /* is_compat_task */
+#include <kbase/src/common/mali_kbase_8401_workaround.h>
+#include <kbase/src/common/mali_kbase_hw.h>
+
+#if MALI_LICENSE_IS_GPL && MALI_CUSTOMER_RELEASE == 0 && MALI_COVERAGE == 0
+#include <linux/pci.h>
+#define MALI_PCI_DEVICE
+#endif
+
+#define JOB_IRQ_TAG 0
+#define MMU_IRQ_TAG 1
+#define GPU_IRQ_TAG 2
+
+struct kbase_irq_table
+{
+ u32 tag;
+ irq_handler_t handler;
+};
+#if MALI_UNIT_TEST
+kbase_exported_test_data shared_kernel_test_data;
+EXPORT_SYMBOL(shared_kernel_test_data);
+#endif /* MALI_UNIT_TEST */
+
+static const char kbase_drv_name[] = KBASE_DRV_NAME;
+
+static int kbase_dev_nr;
+
+#if MALI_LICENSE_IS_GPL
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,36)
+static DEFINE_SEMAPHORE(kbase_dev_list_lock);
+#else
+static DECLARE_MUTEX(kbase_dev_list_lock);
+#endif
+static LIST_HEAD(kbase_dev_list);
+
+KBASE_EXPORT_TEST_API(kbase_dev_list_lock)
+KBASE_EXPORT_TEST_API(kbase_dev_list)
+#endif
+
+#if MALI_LICENSE_IS_GPL == 0
+#include <linux/cdev.h> /* character device definitions */
+
+/* By default the module uses any available major, but it's possible to set it at load time to a specific number */
+int mali_major = 0;
+module_param(mali_major, int, S_IRUGO); /* r--r--r-- */
+MODULE_PARM_DESC(mali_major, "Device major number");
+
+struct mali_linux_device
+{
+ struct cdev cdev;
+};
+
+/* The global variable containing the global device data */
+static struct mali_linux_device mali_linux_device;
+
+static char mali_dev_name[] = KBASE_DRV_NAME; /* should be const, but the functions we call require non-const */
+
+#undef dev_err
+#undef dev_info
+#undef dev_dbg
+#define dev_err(dev,msg,...) do { printk(KERN_ERR KBASE_DRV_NAME " error: "); printk(msg, ## __VA_ARGS__); } while(0)
+#define dev_info(dev,msg,...) do { printk(KERN_INFO KBASE_DRV_NAME " info: "); printk(msg, ## __VA_ARGS__); } while(0)
+#define dev_dbg(dev,msg,...) do { printk(KERN_DEBUG KBASE_DRV_NAME " debug: "); printk(msg, ## __VA_ARGS__); } while(0)
+#define dev_name(dev) "MALI"
+
+/* STATIC */ struct kbase_device *g_kbdev;
+KBASE_EXPORT_TEST_API(g_kbdev);
+
+#endif
+
+
+#if MALI_LICENSE_IS_GPL
+#define KERNEL_SIDE_DDK_VERSION_STRING "K:" MALI_RELEASE_NAME "(GPL)"
+#else
+#define KERNEL_SIDE_DDK_VERSION_STRING "K:" MALI_RELEASE_NAME
+#endif /* MALI_LICENSE_IS_GPL */
+
+static INLINE void __compile_time_asserts( void )
+{
+ CSTD_COMPILE_TIME_ASSERT( sizeof(KERNEL_SIDE_DDK_VERSION_STRING) <= KBASE_GET_VERSION_BUFFER_SIZE);
+}
+
+#ifdef CONFIG_KDS
+
+typedef struct kbasep_kds_resource_set_file_data
+{
+ struct kds_resource_set * lock;
+}kbasep_kds_resource_set_file_data;
+
+static int kds_resource_release(struct inode *inode, struct file *file);
+
+static const struct file_operations kds_resource_fops =
+{
+ .release = kds_resource_release
+};
+
+typedef struct kbase_kds_resource_list_data
+{
+ struct kds_resource ** kds_resources;
+ unsigned long * kds_access_bitmap;
+ int num_elems;
+}kbase_kds_resource_list_data;
+
+
+static int kds_resource_release(struct inode *inode, struct file *file)
+{
+ struct kbasep_kds_resource_set_file_data *data;
+
+ data = (struct kbasep_kds_resource_set_file_data *)file->private_data;
+ if ( NULL != data )
+ {
+ if ( NULL != data->lock )
+ {
+ kds_resource_set_release( &data->lock );
+ }
+ osk_free( data );
+ }
+ return 0;
+}
+
+mali_error kbasep_kds_allocate_resource_list_data( kbase_context * kctx,
+ base_external_resource *ext_res,
+ int num_elems,
+ kbase_kds_resource_list_data * resources_list )
+{
+ base_external_resource *res = ext_res;
+ int res_id;
+
+ /* assume we have to wait for all */
+ resources_list->kds_resources = osk_malloc(sizeof(struct kds_resource *) * num_elems);
+ if ( NULL == resources_list->kds_resources )
+ {
+ return MALI_ERROR_OUT_OF_MEMORY;
+ }
+
+ resources_list->kds_access_bitmap = osk_calloc(sizeof(unsigned long) * ((num_elems + OSK_BITS_PER_LONG - 1) / OSK_BITS_PER_LONG));
+ if (NULL == resources_list->kds_access_bitmap)
+ {
+		osk_free(resources_list->kds_resources);
+ return MALI_ERROR_OUT_OF_MEMORY;
+ }
+
+ for (res_id = 0; res_id < num_elems; res_id++, res++ )
+ {
+ int exclusive;
+ kbase_va_region * reg;
+ struct kds_resource * kds_res = NULL;
+
+ exclusive = res->ext_resource & BASE_EXT_RES_ACCESS_EXCLUSIVE;
+ reg = kbase_region_lookup(kctx, res->ext_resource & ~BASE_EXT_RES_ACCESS_EXCLUSIVE);
+
+ /* did we find a matching region object? */
+ if (NULL == reg)
+ {
+ break;
+ }
+
+ switch (reg->imported_type)
+ {
+#if MALI_USE_UMP == 1
+ case BASE_TMEM_IMPORT_TYPE_UMP:
+ kds_res = ump_dd_kds_resource_get(reg->imported_metadata.ump_handle);
+ break;
+#endif /*MALI_USE_UMP == 1*/
+ default:
+ break;
+ }
+
+ /* no kds resource for the region ? */
+ if (!kds_res)
+ {
+ break;
+ }
+
+ resources_list->kds_resources[res_id] = kds_res;
+
+ if (exclusive)
+ {
+ osk_bitarray_set_bit(res_id, resources_list->kds_access_bitmap);
+ }
+ }
+
+ /* did the loop run to completion? */
+ if (res_id == num_elems)
+ {
+ return MALI_ERROR_NONE;
+ }
+
+ /* Clean up as the resource list is not valid. */
+ osk_free( resources_list->kds_resources );
+ osk_free( resources_list->kds_access_bitmap );
+
+ return MALI_ERROR_FUNCTION_FAILED;
+}
+
+mali_bool kbasep_validate_kbase_pointer( kbase_pointer * p )
+{
+#ifdef CONFIG_COMPAT
+ if (is_compat_task())
+ {
+ if ( p->compat_value == 0 )
+ {
+ return MALI_FALSE;
+ }
+ }
+ else
+ {
+#endif /* CONFIG_COMPAT */
+ if ( NULL == p->value )
+ {
+ return MALI_FALSE;
+ }
+#ifdef CONFIG_COMPAT
+ }
+#endif /* CONFIG_COMPAT */
+ return MALI_TRUE;
+}
+
+mali_error kbase_external_buffer_lock(kbase_context * kctx, ukk_call_context *ukk_ctx, kbase_uk_ext_buff_kds_data *args, u32 args_size)
+{
+ base_external_resource *ext_res_copy;
+ size_t ext_resource_size;
+ mali_error return_error = MALI_ERROR_FUNCTION_FAILED;
+ int fd;
+
+ if (args_size != sizeof(kbase_uk_ext_buff_kds_data))
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ /* Check user space has provided valid data */
+ if ( !kbasep_validate_kbase_pointer(&args->external_resource) ||
+ !kbasep_validate_kbase_pointer(&args->file_descriptor) ||
+ (0 == args->num_res))
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ ext_resource_size = sizeof( base_external_resource ) * args->num_res;
+ ext_res_copy = (base_external_resource *)osk_malloc( ext_resource_size );
+
+ if ( NULL != ext_res_copy )
+ {
+ base_external_resource * __user ext_res_user;
+ int * __user file_descriptor_user;
+#ifdef CONFIG_COMPAT
+ if (is_compat_task())
+ {
+ ext_res_user = args->external_resource.compat_value;
+ file_descriptor_user = args->file_descriptor.compat_value;
+ }
+ else
+ {
+#endif /* CONFIG_COMPAT */
+ ext_res_user = args->external_resource.value;
+ file_descriptor_user = args->file_descriptor.value;
+#ifdef CONFIG_COMPAT
+ }
+#endif /* CONFIG_COMPAT */
+
+ /* Copy the external resources to lock from user space */
+ if ( MALI_ERROR_NONE == ukk_copy_from_user( ext_resource_size, ext_res_copy, ext_res_user ) )
+ {
+ /* Allocate data to be stored in the file */
+ kbasep_kds_resource_set_file_data * fdata = osk_malloc( sizeof( kbasep_kds_resource_set_file_data));
+
+ if ( NULL != fdata )
+ {
+ kbase_kds_resource_list_data resource_list_data;
+ /* Parse given elements and create resource and access lists */
+ return_error = kbasep_kds_allocate_resource_list_data( kctx, ext_res_copy, args->num_res, &resource_list_data );
+ if ( MALI_ERROR_NONE == return_error )
+ {
+ fdata->lock = NULL;
+
+ fd = anon_inode_getfd( "kds_ext", &kds_resource_fops, fdata, 0 );
+
+ return_error = ukk_copy_to_user( sizeof( fd ), file_descriptor_user, &fd );
+
+ /* If the file descriptor was valid and we successfully copied it to user space, then we
+ * can try and lock the requested kds resources.
+ */
+ if ( ( fd >= 0 ) && ( MALI_ERROR_NONE == return_error ) )
+ {
+ struct kds_resource_set * lock;
+
+ lock = kds_waitall(args->num_res,
+ resource_list_data.kds_access_bitmap,
+ resource_list_data.kds_resources, KDS_WAIT_BLOCKING );
+
+ if (IS_ERR_OR_NULL(lock))
+ {
+ return_error = MALI_ERROR_FUNCTION_FAILED;
+ }
+ else
+ {
+ return_error = MALI_ERROR_NONE;
+ fdata->lock = lock;
+ }
+ }
+ else
+ {
+ return_error = MALI_ERROR_FUNCTION_FAILED;
+ }
+
+ osk_free( resource_list_data.kds_resources );
+ osk_free( resource_list_data.kds_access_bitmap );
+ }
+
+ if ( MALI_ERROR_NONE != return_error )
+ {
+				/* If the file was opened successfully then close it, which will clean up
+				 * the file data; otherwise we clean up the file data ourselves. */
+ if ( fd >= 0 )
+ {
+ sys_close(fd);
+ }
+ else
+ {
+ osk_free( fdata );
+ }
+ }
+ }
+ else
+ {
+ return_error = MALI_ERROR_OUT_OF_MEMORY;
+ }
+ }
+ osk_free( ext_res_copy );
+ }
+ return return_error;
+}
+#endif
+
+
+static mali_error kbase_dispatch(ukk_call_context * const ukk_ctx, void * const args, u32 args_size)
+{
+ struct kbase_context *kctx;
+ struct kbase_device *kbdev;
+ uk_header *ukh = args;
+ u32 id;
+
+ OSKP_ASSERT( ukh != NULL );
+
+ kctx = CONTAINER_OF(ukk_session_get(ukk_ctx), kbase_context, ukk_session);
+ kbdev = kctx->kbdev;
+ id = ukh->id;
+ ukh->ret = MALI_ERROR_NONE; /* Be optimistic */
+
+ switch(id)
+ {
+ case KBASE_FUNC_TMEM_ALLOC:
+ {
+ kbase_uk_tmem_alloc *tmem = args;
+ struct kbase_va_region *reg;
+
+ if (sizeof(*tmem) != args_size)
+ {
+ goto bad_size;
+ }
+
+ reg = kbase_tmem_alloc(kctx, tmem->vsize, tmem->psize,
+ tmem->extent, tmem->flags, tmem->is_growable);
+ if (reg)
+ {
+ tmem->gpu_addr = reg->start_pfn << OSK_PAGE_SHIFT;
+ }
+ else
+ {
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ break;
+ }
+
+ case KBASE_FUNC_TMEM_IMPORT:
+ {
+ kbase_uk_tmem_import * tmem_import = args;
+ struct kbase_va_region *reg;
+ int * __user phandle;
+ int handle;
+
+ if (sizeof(*tmem_import) != args_size)
+ {
+ goto bad_size;
+ }
+#ifdef CONFIG_COMPAT
+ if (is_compat_task())
+ {
+ phandle = tmem_import->phandle.compat_value;
+ }
+ else
+ {
+#endif /* CONFIG_COMPAT */
+ phandle = tmem_import->phandle.value;
+#ifdef CONFIG_COMPAT
+ }
+#endif /* CONFIG_COMPAT */
+
+ /* code should be in kbase_tmem_import and its helpers, but uk dropped its get_user abstraction */
+ switch (tmem_import->type)
+ {
+#if MALI_USE_UMP == 1
+ case BASE_TMEM_IMPORT_TYPE_UMP:
+ get_user(handle, phandle);
+ break;
+#endif /* MALI_USE_UMP == 1 */
+ case BASE_TMEM_IMPORT_TYPE_UMM:
+ get_user(handle, phandle);
+ break;
+ default:
+ goto bad_type;
+ break;
+ }
+
+ reg = kbase_tmem_import(kctx, tmem_import->type, handle, &tmem_import->pages);
+
+ if (reg)
+ {
+ tmem_import->gpu_addr = reg->start_pfn << OSK_PAGE_SHIFT;
+ }
+ else
+ {
+bad_type:
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ break;
+ }
+ case KBASE_FUNC_PMEM_ALLOC:
+ {
+ kbase_uk_pmem_alloc *pmem = args;
+ struct kbase_va_region *reg;
+
+ if (sizeof(*pmem) != args_size)
+ {
+ goto bad_size;
+ }
+
+ reg = kbase_pmem_alloc(kctx, pmem->vsize, pmem->flags,
+ &pmem->cookie);
+ if (!reg)
+ {
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ break;
+ }
+
+ case KBASE_FUNC_MEM_FREE:
+ {
+ kbase_uk_mem_free *mem = args;
+
+ if (sizeof(*mem) != args_size)
+ {
+ goto bad_size;
+ }
+
+ if ((mem->gpu_addr & BASE_MEM_TAGS_MASK)&&(mem->gpu_addr >= OSK_PAGE_SIZE))
+ {
+ OSK_PRINT_WARN(OSK_BASE_MEM, "kbase_dispatch case KBASE_FUNC_MEM_FREE: mem->gpu_addr: passed parameter is invalid");
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ break;
+ }
+
+ if (kbase_mem_free(kctx, mem->gpu_addr))
+ {
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ break;
+ }
+
+ case KBASE_FUNC_JOB_SUBMIT:
+ {
+ kbase_uk_job_submit * job = args;
+
+ if (sizeof(*job) != args_size)
+ {
+ goto bad_size;
+ }
+
+ if (MALI_ERROR_NONE != kbase_jd_submit(kctx, job))
+ {
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ break;
+ }
+
+ case KBASE_FUNC_SYNC:
+ {
+ kbase_uk_sync_now *sn = args;
+
+ if (sizeof(*sn) != args_size)
+ {
+ goto bad_size;
+ }
+
+ if (sn->sset.basep_sset.mem_handle & BASE_MEM_TAGS_MASK)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MEM, "kbase_dispatch case KBASE_FUNC_SYNC: sn->sset.basep_sset.mem_handle: passed parameter is invalid");
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ break;
+ }
+
+ if (MALI_ERROR_NONE != kbase_sync_now(kctx, &sn->sset))
+ {
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ break;
+ }
+
+ case KBASE_FUNC_POST_TERM:
+ {
+ kbase_event_close(kctx);
+ break;
+ }
+
+ case KBASE_FUNC_HWCNT_SETUP:
+ {
+ kbase_uk_hwcnt_setup * setup = args;
+
+ if (sizeof(*setup) != args_size)
+ {
+ goto bad_size;
+ }
+ if (MALI_ERROR_NONE != kbase_instr_hwcnt_setup(kctx, setup))
+ {
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ break;
+ }
+
+ case KBASE_FUNC_HWCNT_DUMP:
+ {
+ /* args ignored */
+ if (MALI_ERROR_NONE != kbase_instr_hwcnt_dump(kctx))
+ {
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ break;
+ }
+
+ case KBASE_FUNC_HWCNT_CLEAR:
+ {
+ /* args ignored */
+ if (MALI_ERROR_NONE != kbase_instr_hwcnt_clear(kctx))
+ {
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ break;
+ }
+
+ case KBASE_FUNC_CPU_PROPS_REG_DUMP:
+ {
+ kbase_uk_cpuprops * setup = args;
+
+ if (sizeof(*setup) != args_size)
+ {
+ goto bad_size;
+ }
+
+ if (MALI_ERROR_NONE != kbase_cpuprops_uk_get_props(kctx,setup))
+ {
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ break;
+ }
+
+ case KBASE_FUNC_GPU_PROPS_REG_DUMP:
+ {
+ kbase_uk_gpuprops * setup = args;
+
+ if (sizeof(*setup) != args_size)
+ {
+ goto bad_size;
+ }
+
+ if (MALI_ERROR_NONE != kbase_gpuprops_uk_get_props(kctx, setup))
+ {
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ break;
+ }
+
+ case KBASE_FUNC_TMEM_RESIZE:
+ {
+ kbase_uk_tmem_resize *resize = args;
+ if (sizeof(*resize) != args_size)
+ {
+ goto bad_size;
+ }
+ if (resize->gpu_addr & BASE_MEM_TAGS_MASK)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MEM, "kbase_dispatch case KBASE_FUNC_TMEM_RESIZE: resize->gpu_addr: passed parameter is invalid");
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ break;
+ }
+
+ ukh->ret = kbase_tmem_resize(kctx, resize->gpu_addr, resize->delta, &resize->size, &resize->result_subcode);
+ break;
+ }
+
+ case KBASE_FUNC_FIND_CPU_MAPPING:
+ {
+ kbase_uk_find_cpu_mapping *find = args;
+ struct kbase_cpu_mapping *map;
+
+ if (sizeof(*find) != args_size)
+ {
+ goto bad_size;
+ }
+ if (find->gpu_addr & BASE_MEM_TAGS_MASK)
+ {
+ OSK_PRINT_WARN(OSK_BASE_MEM, "kbase_dispatch case KBASE_FUNC_FIND_CPU_MAPPING: find->gpu_addr: passed parameter is invalid");
+ goto out_bad;
+ }
+
+ OSKP_ASSERT( find != NULL );
+ if ( find->size > SIZE_MAX || find->cpu_addr > UINTPTR_MAX )
+ {
+ map = NULL;
+ }
+ else
+ {
+ map = kbasep_find_enclosing_cpu_mapping( kctx,
+ find->gpu_addr,
+ (osk_virt_addr)(uintptr_t)find->cpu_addr,
+ (size_t)find->size );
+ }
+
+ if ( NULL != map )
+ {
+ find->uaddr = PTR_TO_U64( map->uaddr );
+ find->nr_pages = map->nr_pages;
+ find->page_off = map->page_off;
+ }
+ else
+ {
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ break;
+ }
+ case KBASE_FUNC_GET_VERSION:
+ {
+ kbase_uk_get_ddk_version *get_version = (kbase_uk_get_ddk_version *)args;
+
+ if (sizeof(*get_version) != args_size)
+ {
+ goto bad_size;
+ }
+
+ /* version buffer size check is made in compile time assert */
+ OSK_MEMCPY(get_version->version_buffer, KERNEL_SIDE_DDK_VERSION_STRING,
+ sizeof(KERNEL_SIDE_DDK_VERSION_STRING));
+ get_version->version_string_size = sizeof(KERNEL_SIDE_DDK_VERSION_STRING);
+ break;
+ }
+#ifdef CONFIG_KDS
+ case KBASE_FUNC_EXT_BUFFER_LOCK:
+ {
+ ukh->ret = kbase_external_buffer_lock( kctx, ukk_ctx,(kbase_uk_ext_buff_kds_data *)args, args_size );
+ break;
+ }
+#endif
+ case KBASE_FUNC_SET_FLAGS:
+ {
+ kbase_uk_set_flags *kbase_set_flags = (kbase_uk_set_flags *)args;
+
+ if (sizeof(*kbase_set_flags) != args_size)
+ {
+ goto bad_size;
+ }
+
+ if (MALI_ERROR_NONE != kbase_context_set_create_flags(kctx, kbase_set_flags->create_flags))
+ {
+ ukh->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ break;
+ }
+
+#if MALI_UNIT_TEST
+ case KBASE_FUNC_SET_TEST_DATA:
+ {
+ kbase_uk_set_test_data *set_data = args;
+
+ shared_kernel_test_data = set_data->test_data;
+ shared_kernel_test_data.kctx = kctx;
+ shared_kernel_test_data.mm = (void*)current->mm;
+ ukh->ret = MALI_ERROR_NONE;
+ break;
+ }
+#endif /* MALI_UNIT_TEST */
+#if MALI_ERROR_INJECT_ON
+ case KBASE_FUNC_INJECT_ERROR:
+ {
+			kbase_error_params params = ((kbase_uk_error_params*)args)->params;
+			/* reg_op_lock is a spinlock; serialise register access while injecting the error */
+			osk_spinlock_lock(&kbdev->osdev.reg_op_lock);
+			ukh->ret = job_atom_inject_error(&params);
+			osk_spinlock_unlock(&kbdev->osdev.reg_op_lock);
+
+ break;
+ }
+#endif /*MALI_ERROR_INJECT_ON*/
+#if MALI_NO_MALI
+ case KBASE_FUNC_MODEL_CONTROL:
+ {
+			kbase_model_control_params params = ((kbase_uk_model_control_params*)args)->params;
+			/* reg_op_lock is a spinlock; serialise register access for the model control call */
+			osk_spinlock_lock(&kbdev->osdev.reg_op_lock);
+			ukh->ret = midg_model_control(kbdev->osdev.model, &params);
+			osk_spinlock_unlock(&kbdev->osdev.reg_op_lock);
+ break;
+ }
+#endif /* MALI_NO_MALI */
+ default:
+ dev_err(kbdev->osdev.dev, "unknown syscall %08x", ukh->id);
+ goto out_bad;
+ }
+
+ return MALI_ERROR_NONE;
+
+bad_size:
+ dev_err(kbdev->osdev.dev, "Wrong syscall size (%d) for %08x\n", args_size, ukh->id);
+out_bad:
+ return MALI_ERROR_FUNCTION_FAILED;
+}
+
+#if MALI_LICENSE_IS_GPL
+static struct kbase_device *to_kbase_device(struct device *dev)
+{
+ return dev_get_drvdata(dev);
+}
+#endif /* MALI_LICENSE_IS_GPL */
+
+/* Find a particular kbase device (as specified by minor number), or find the "first" device if -1 is specified */
+struct kbase_device *kbase_find_device(int minor)
+{
+ struct kbase_device *kbdev = NULL;
+#if MALI_LICENSE_IS_GPL
+ struct list_head *entry;
+
+ down(&kbase_dev_list_lock);
+ list_for_each(entry, &kbase_dev_list)
+ {
+ struct kbase_device *tmp;
+
+ tmp = list_entry(entry, struct kbase_device, osdev.entry);
+ if (tmp->osdev.mdev.minor == minor || minor == -1)
+ {
+ kbdev = tmp;
+ get_device(kbdev->osdev.dev);
+ break;
+ }
+ }
+ up(&kbase_dev_list_lock);
+#else
+ kbdev = g_kbdev;
+#endif
+
+ return kbdev;
+}
+
+EXPORT_SYMBOL(kbase_find_device);
+
+
+
+
+
+void kbase_release_device(struct kbase_device *kbdev)
+{
+#if MALI_LICENSE_IS_GPL
+ put_device(kbdev->osdev.dev);
+#endif
+}
+
+EXPORT_SYMBOL(kbase_release_device);
+
+static int kbase_open(struct inode *inode, struct file *filp)
+{
+ struct kbase_device *kbdev = NULL;
+ struct kbase_context *kctx;
+ int ret = 0;
+
+ /* Enforce that the driver is opened with O_CLOEXEC so that execve() automatically
+ * closes the file descriptor in a child process.
+ */
+ if (0 == (filp->f_flags & O_CLOEXEC))
+ {
+ printk(KERN_ERR KBASE_DRV_NAME " error: O_CLOEXEC flag not set\n");
+ return -EINVAL;
+ }
+
+ kbdev = kbase_find_device(iminor(inode));
+
+ if (!kbdev)
+ return -ENODEV;
+
+ kctx = kbase_create_context(kbdev);
+ if (!kctx)
+ {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ if (MALI_ERROR_NONE != ukk_session_init(&kctx->ukk_session, kbase_dispatch, BASE_UK_VERSION_MAJOR, BASE_UK_VERSION_MINOR))
+ {
+ kbase_destroy_context(kctx);
+ ret = -EFAULT;
+ goto out;
+ }
+
+ init_waitqueue_head(&kctx->osctx.event_queue);
+ filp->private_data = kctx;
+
+ dev_dbg(kbdev->osdev.dev, "created base context\n");
+ return 0;
+
+out:
+ kbase_release_device(kbdev);
+ return ret;
+}
+
+static int kbase_release(struct inode *inode, struct file *filp)
+{
+ struct kbase_context *kctx = filp->private_data;
+ struct kbase_device *kbdev = kctx->kbdev;
+
+ ukk_session_term(&kctx->ukk_session);
+ filp->private_data = NULL;
+ kbase_destroy_context(kctx);
+
+ dev_dbg(kbdev->osdev.dev, "deleted base context\n");
+ kbase_release_device(kbdev);
+ return 0;
+}
+
+static long kbase_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ u64 msg[(UKK_CALL_MAX_SIZE+7)>>3]; /* alignment fixup */
+ u32 size = _IOC_SIZE(cmd);
+ ukk_call_context ukk_ctx;
+ struct kbase_context *kctx = filp->private_data;
+
+ if (size > UKK_CALL_MAX_SIZE) return -ENOTTY;
+
+ if (0 != copy_from_user(&msg, (void *)arg, size))
+ {
+ return -EFAULT;
+ }
+
+ ukk_call_prepare(&ukk_ctx, &kctx->ukk_session);
+
+ if (MALI_ERROR_NONE != ukk_dispatch(&ukk_ctx, &msg, size))
+ {
+ return -EFAULT;
+ }
+
+ if (0 != copy_to_user((void *)arg, &msg, size))
+ {
+ pr_err("failed to copy results of UK call back to user space\n");
+ return -EFAULT;
+ }
+ return 0;
+}
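+/* The ioctl path is a thin shim: the message is copied into a kernel buffer,
+ * dispatched through the UK layer (ultimately reaching kbase_dispatch()
+ * above), and the possibly-updated message is copied back to user space. */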
+
+static ssize_t kbase_read(struct file *filp, char __user *buf,
+ size_t count, loff_t *f_pos)
+{
+ struct kbase_context *kctx = filp->private_data;
+ base_jd_event uevent;
+
+ if (count < sizeof(uevent))
+ {
+ return -ENOBUFS;
+ }
+
+ while (kbase_event_dequeue(kctx, &uevent))
+ {
+ if (filp->f_flags & O_NONBLOCK)
+ {
+ return -EAGAIN;
+ }
+
+ if (wait_event_interruptible(kctx->osctx.event_queue,
+ kbase_event_pending(kctx)))
+ {
+ return -ERESTARTSYS;
+ }
+ }
+
+ if (copy_to_user(buf, &uevent, sizeof(uevent)))
+ {
+ return -EFAULT;
+ }
+
+ return sizeof(uevent);
+}
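+
+/* Sketch of the intended userspace consumption pattern for the event queue
+ * served by kbase_read() above (the loop itself is illustrative):
+ *
+ *   struct pollfd pfd = { .fd = mali_fd, .events = POLLIN };
+ *   base_jd_event ev;
+ *   while (poll(&pfd, 1, -1) > 0)
+ *       if (read(mali_fd, &ev, sizeof(ev)) == sizeof(ev))
+ *           handle_event(&ev);             // hypothetical handler
+ *
+ * read() returns exactly sizeof(base_jd_event) per event, blocks unless
+ * O_NONBLOCK is set, and fails with -ENOBUFS for undersized buffers.
+ */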
+
+static unsigned int kbase_poll(struct file *filp, poll_table *wait)
+{
+ struct kbase_context *kctx = filp->private_data;
+
+ poll_wait(filp, &kctx->osctx.event_queue, wait);
+ if (kbase_event_pending(kctx))
+ {
+ return POLLIN | POLLRDNORM;
+ }
+
+ return 0;
+}
+
+void kbase_event_wakeup(kbase_context *kctx)
+{
+ OSK_ASSERT(kctx);
+
+ wake_up_interruptible(&kctx->osctx.event_queue);
+}
+KBASE_EXPORT_TEST_API(kbase_event_wakeup)
+
+int kbase_check_flags(int flags)
+{
+ /* Enforce that the driver keeps the O_CLOEXEC flag so that execve() always
+ * closes the file descriptor in a child process.
+ */
+ if (0 == (flags & O_CLOEXEC))
+ {
+ return -EINVAL;
+ }
+
+ return 0;
+}
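+
+/* The O_CLOEXEC requirement enforced above means userspace must open the
+ * device node like this (device name is illustrative):
+ *
+ *   int fd = open("/dev/mali0", O_RDWR | O_CLOEXEC);
+ *
+ * Opening without O_CLOEXEC is rejected with -EINVAL so that a forked child
+ * that exec()s cannot inherit a live GPU context by accident.
+ */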
+
+static const struct file_operations kbase_fops =
+{
+ .owner = THIS_MODULE,
+ .open = kbase_open,
+ .release = kbase_release,
+ .read = kbase_read,
+ .poll = kbase_poll,
+ .unlocked_ioctl = kbase_ioctl,
+ .mmap = kbase_mmap,
+ .check_flags = kbase_check_flags,
+};
+
+#if !MALI_NO_MALI
+void kbase_os_reg_write(kbase_device *kbdev, u16 offset, u32 value)
+{
+ writel(value, kbdev->osdev.reg + offset);
+}
+
+u32 kbase_os_reg_read(kbase_device *kbdev, u16 offset)
+{
+ return readl(kbdev->osdev.reg + offset);
+}
+
+static void *kbase_tag(void *ptr, u32 tag)
+{
+ return (void *)(((uintptr_t) ptr) | tag);
+}
+
+static void *kbase_untag(void *ptr)
+{
+ return (void *)(((uintptr_t) ptr) & ~3);
+}
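+
+/* kbase_tag()/kbase_untag() pack a 2-bit interrupt index (JOB/MMU/GPU) into
+ * the low bits of the kbase_device pointer passed as the request_irq cookie;
+ * this is safe because the structure is more than 4-byte aligned. Each
+ * handler recovers the device with kbase_untag(data), and the tag lets the
+ * same device pointer be registered once per IRQ line.
+ */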
+
+static irqreturn_t kbase_job_irq_handler(int irq, void *data)
+{
+ struct kbase_device *kbdev = kbase_untag(data);
+ u32 val;
+
+ osk_spinlock_irq_lock(&kbdev->pm.gpu_powered_lock);
+
+ if (!kbdev->pm.gpu_powered)
+ {
+ /* GPU is turned off - IRQ is not for us */
+ osk_spinlock_irq_unlock(&kbdev->pm.gpu_powered_lock);
+ return IRQ_NONE;
+ }
+
+ val = kbase_reg_read(kbdev, JOB_CONTROL_REG(JOB_IRQ_STATUS), NULL);
+
+ osk_spinlock_irq_unlock(&kbdev->pm.gpu_powered_lock);
+
+ if (!val)
+ {
+ return IRQ_NONE;
+ }
+
+ dev_dbg(kbdev->osdev.dev, "%s: irq %d irqstatus 0x%x\n", __func__, irq, val);
+
+ kbase_job_done(kbdev, val);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t kbase_mmu_irq_handler(int irq, void *data)
+{
+ struct kbase_device *kbdev = kbase_untag(data);
+ u32 val;
+
+ osk_spinlock_irq_lock(&kbdev->pm.gpu_powered_lock);
+
+ if (!kbdev->pm.gpu_powered)
+ {
+ /* GPU is turned off - IRQ is not for us */
+ osk_spinlock_irq_unlock(&kbdev->pm.gpu_powered_lock);
+ return IRQ_NONE;
+ }
+
+ val = kbase_reg_read(kbdev, MMU_REG(MMU_IRQ_STATUS), NULL);
+
+ osk_spinlock_irq_unlock(&kbdev->pm.gpu_powered_lock);
+
+ if (!val)
+ {
+ return IRQ_NONE;
+ }
+
+ dev_dbg(kbdev->osdev.dev, "%s: irq %d irqstatus 0x%x\n", __func__, irq, val);
+
+ kbase_mmu_interrupt(kbdev, val);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t kbase_gpu_irq_handler(int irq, void *data)
+{
+ struct kbase_device *kbdev = kbase_untag(data);
+ u32 val;
+
+ osk_spinlock_irq_lock(&kbdev->pm.gpu_powered_lock);
+
+ if (!kbdev->pm.gpu_powered)
+ {
+ /* GPU is turned off - IRQ is not for us */
+ osk_spinlock_irq_unlock(&kbdev->pm.gpu_powered_lock);
+ return IRQ_NONE;
+ }
+
+ val = kbase_reg_read(kbdev, GPU_CONTROL_REG(GPU_IRQ_STATUS), NULL);
+
+ osk_spinlock_irq_unlock(&kbdev->pm.gpu_powered_lock);
+
+ if (!val)
+ {
+ return IRQ_NONE;
+ }
+
+ dev_dbg(kbdev->osdev.dev, "%s: irq %d irqstatus 0x%x\n", __func__, irq, val);
+
+ kbase_gpu_interrupt(kbdev, val);
+
+ return IRQ_HANDLED;
+}
+
+static irq_handler_t kbase_handler_table[] = {
+ [JOB_IRQ_TAG] = kbase_job_irq_handler,
+ [MMU_IRQ_TAG] = kbase_mmu_irq_handler,
+ [GPU_IRQ_TAG] = kbase_gpu_irq_handler,
+};
+
+static int kbase_install_interrupts(kbase_device *kbdev)
+{
+ struct kbase_os_device *osdev = &kbdev->osdev;
+ u32 nr = ARRAY_SIZE(kbase_handler_table);
+ int err;
+ u32 i;
+
+ BUG_ON(nr > PLATFORM_CONFIG_IRQ_RES_COUNT); /* Only 3 interrupts! */
+
+ for (i = 0; i < nr; i++)
+ {
+ err = request_irq(osdev->irqs[i].irq,
+ kbase_handler_table[i],
+ osdev->irqs[i].flags | IRQF_SHARED,
+ dev_name(osdev->dev),
+ kbase_tag(kbdev, i));
+ if (err)
+ {
+ dev_err(osdev->dev, "Can't request interrupt %d (index %d)\n", osdev->irqs[i].irq, i);
+ goto release;
+ }
+ }
+
+ return 0;
+
+release:
+ while (i-- > 0)
+ {
+ free_irq(osdev->irqs[i].irq, kbase_tag(kbdev, i));
+ }
+
+ return err;
+}
+
+static void kbase_release_interrupts(kbase_device *kbdev)
+{
+ struct kbase_os_device *osdev = &kbdev->osdev;
+ u32 nr = ARRAY_SIZE(kbase_handler_table);
+ u32 i;
+
+ for (i = 0; i < nr; i++)
+ {
+ if (osdev->irqs[i].irq)
+ {
+ free_irq(osdev->irqs[i].irq, kbase_tag(kbdev, i));
+ }
+ }
+}
+
+void kbase_synchronize_irqs(kbase_device *kbdev)
+{
+ struct kbase_os_device *osdev = &kbdev->osdev;
+ u32 nr = ARRAY_SIZE(kbase_handler_table);
+ u32 i;
+
+ for (i = 0; i < nr; i++)
+ {
+ if (osdev->irqs[i].irq)
+ {
+ synchronize_irq(osdev->irqs[i].irq);
+ }
+ }
+}
+#endif
+
+#if MALI_LICENSE_IS_GPL
+
+/** Show callback for the @c power_policy sysfs file.
+ *
+ * This function is called to get the contents of the @c power_policy sysfs
+ * file. This is a list of the available policies with the currently active one
+ * surrounded by square brackets.
+ *
+ * @param dev The device this sysfs file is for
+ * @param attr The attributes of the sysfs file
+ * @param buf The output buffer for the sysfs file contents
+ *
+ * @return The number of bytes output to @c buf.
+ */
+static ssize_t show_policy(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct kbase_device *kbdev;
+ const struct kbase_pm_policy *current_policy;
+ const struct kbase_pm_policy * const *policy_list;
+ int policy_count;
+ int i;
+ ssize_t ret = 0;
+
+ kbdev = to_kbase_device(dev);
+
+ if (!kbdev)
+ {
+ return -ENODEV;
+ }
+
+ current_policy = kbase_pm_get_policy(kbdev);
+
+ policy_count = kbase_pm_list_policies(&policy_list);
+
+ for(i=0; i<policy_count && ret<PAGE_SIZE; i++)
+ {
+ if (policy_list[i] == current_policy)
+ {
+ ret += scnprintf(buf+ret, PAGE_SIZE - ret, "[%s] ", policy_list[i]->name);
+ }
+ else
+ {
+ ret += scnprintf(buf+ret, PAGE_SIZE - ret, "%s ", policy_list[i]->name);
+ }
+ }
+
+ if (ret < PAGE_SIZE - 1)
+ {
+ ret += scnprintf(buf+ret, PAGE_SIZE-ret, "\n");
+ }
+ else
+ {
+ buf[PAGE_SIZE-2] = '\n';
+ buf[PAGE_SIZE-1] = '\0';
+ ret = PAGE_SIZE-1;
+ }
+
+ return ret;
+}
+
+/** Store callback for the @c power_policy sysfs file.
+ *
+ * This function is called when the @c power_policy sysfs file is written to.
+ * It matches the requested policy against the available policies and if a
+ * matching policy is found calls @ref kbase_pm_set_policy to change the
+ * policy.
+ *
+ * @param dev The device this sysfs file is for
+ * @param attr The attributes of the sysfs file
+ * @param buf The value written to the sysfs file
+ * @param count The number of bytes written to the sysfs file
+ *
+ * @return @c count if the function succeeded. An error code on failure.
+ */
+static ssize_t set_policy(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct kbase_device *kbdev;
+ const struct kbase_pm_policy *new_policy = NULL;
+ const struct kbase_pm_policy * const *policy_list;
+ int policy_count;
+ int i;
+
+ kbdev = to_kbase_device(dev);
+
+ if (!kbdev)
+ {
+ return -ENODEV;
+ }
+
+ policy_count = kbase_pm_list_policies(&policy_list);
+
+ for(i=0; i<policy_count; i++)
+ {
+ if (sysfs_streq(policy_list[i]->name, buf))
+ {
+ new_policy = policy_list[i];
+ break;
+ }
+ }
+
+ if (!new_policy)
+ {
+ dev_err(dev, "power_policy: policy not found\n");
+ return -EINVAL;
+ }
+
+ kbase_pm_set_policy(kbdev, new_policy);
+ return count;
+}
+
+/** The sysfs file @c power_policy.
+ *
+ * This is used for obtaining information about the available policies,
+ * determining which policy is currently active, and changing the active
+ * policy.
+ */
+DEVICE_ATTR(power_policy, S_IRUGO|S_IWUSR, show_policy, set_policy);
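+
+/* Illustrative shell interaction with the attribute above (the available
+ * policy names depend on the policies compiled in; "demand" and "always_on"
+ * are only examples):
+ *
+ *   # cat /sys/devices/.../power_policy
+ *   [demand] always_on
+ *   # echo always_on > /sys/devices/.../power_policy
+ */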
+
+#if MALI_LICENSE_IS_GPL && (MALI_CUSTOMER_RELEASE == 0)
+/** Store callback for the @c js_timeouts sysfs file.
+ *
+ * This function is called when the @c js_timeouts sysfs file is written to.
+ * The file expects five values separated by whitespace. The values
+ * correspond to KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS,
+ * KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS, KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS,
+ * KBASE_CONFIG_ATTR_JS_RESET_TICKS_SS and KBASE_CONFIG_ATTR_JS_RESET_TICKS_NSS
+ * (in that order), with the difference that the js_timeouts
+ * values are expressed in MILLISECONDS.
+ *
+ * The js_timeouts sysfs file allows the values currently in
+ * use by the job scheduler to be overridden. Note that a value needs to
+ * be non-zero for it to override the current job scheduler value.
+ *
+ * @param dev The device this sysfs file is for
+ * @param attr The attributes of the sysfs file
+ * @param buf The value written to the sysfs file
+ * @param count The number of bytes written to the sysfs file
+ *
+ * @return @c count if the function succeeded. An error code on failure.
+ */
+static ssize_t set_js_timeouts(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct kbase_device *kbdev;
+ int items;
+ unsigned long js_soft_stop_ms;
+ unsigned long js_hard_stop_ms_ss;
+ unsigned long js_hard_stop_ms_nss;
+ unsigned long js_reset_ms_ss;
+ unsigned long js_reset_ms_nss;
+
+ kbdev = to_kbase_device(dev);
+ if (!kbdev)
+ {
+ return -ENODEV;
+ }
+
+ items = sscanf(buf, "%lu %lu %lu %lu %lu", &js_soft_stop_ms, &js_hard_stop_ms_ss, &js_hard_stop_ms_nss, &js_reset_ms_ss, &js_reset_ms_nss);
+ if (items == 5)
+ {
+ u64 ticks;
+
+ ticks = js_soft_stop_ms * 1000000ULL;
+ osk_divmod6432(&ticks, kbdev->js_data.scheduling_tick_ns);
+ kbdev->js_soft_stop_ticks = ticks;
+
+ ticks = js_hard_stop_ms_ss * 1000000ULL;
+ osk_divmod6432(&ticks, kbdev->js_data.scheduling_tick_ns);
+ kbdev->js_hard_stop_ticks_ss = ticks;
+
+ ticks = js_hard_stop_ms_nss * 1000000ULL;
+ osk_divmod6432(&ticks, kbdev->js_data.scheduling_tick_ns);
+ kbdev->js_hard_stop_ticks_nss = ticks;
+
+ ticks = js_reset_ms_ss * 1000000ULL;
+ osk_divmod6432(&ticks, kbdev->js_data.scheduling_tick_ns);
+ kbdev->js_reset_ticks_ss = ticks;
+
+ ticks = js_reset_ms_nss * 1000000ULL;
+ osk_divmod6432(&ticks, kbdev->js_data.scheduling_tick_ns);
+ kbdev->js_reset_ticks_nss = ticks;
+
+ dev_info(kbdev->osdev.dev, "Overriding KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS with %lu ticks (%lu ms)\n", (unsigned long)kbdev->js_soft_stop_ticks, js_soft_stop_ms);
+ dev_info(kbdev->osdev.dev, "Overriding KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS with %lu ticks (%lu ms)\n", (unsigned long)kbdev->js_hard_stop_ticks_ss, js_hard_stop_ms_ss);
+ dev_info(kbdev->osdev.dev, "Overriding KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS with %lu ticks (%lu ms)\n", (unsigned long)kbdev->js_hard_stop_ticks_nss, js_hard_stop_ms_nss);
+ dev_info(kbdev->osdev.dev, "Overriding KBASE_CONFIG_ATTR_JS_RESET_TICKS_SS with %lu ticks (%lu ms)\n", (unsigned long)kbdev->js_reset_ticks_ss, js_reset_ms_ss);
+ dev_info(kbdev->osdev.dev, "Overriding KBASE_CONFIG_ATTR_JS_RESET_TICKS_NSS with %lu ticks (%lu ms)\n", (unsigned long)kbdev->js_reset_ticks_nss, js_reset_ms_nss);
+
+ return count;
+ }
+ else
+ {
+ dev_err(kbdev->osdev.dev, "Couldn't process js_timeouts write operation.\nUse format "
+ "<soft_stop_ms> <hard_stop_ms_ss> <hard_stop_ms_nss> <reset_ms_ss> <reset_ms_nss>\n");
+ return -EINVAL;
+ }
+}
+
+/** Show callback for the @c js_timeouts sysfs file.
+ *
+ * This function is called to get the contents of the @c js_timeouts sysfs
+ * file. It returns the values most recently written to the js_timeouts sysfs
+ * file. If the file has not been written yet, the values will be 0.
+ *
+ * @param dev The device this sysfs file is for
+ * @param attr The attributes of the sysfs file
+ * @param buf The output buffer for the sysfs file contents
+ *
+ * @return The number of bytes output to @c buf.
+ */
+static ssize_t show_js_timeouts(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct kbase_device *kbdev;
+ ssize_t ret;
+ u64 ms;
+ unsigned long js_soft_stop_ms;
+ unsigned long js_hard_stop_ms_ss;
+ unsigned long js_hard_stop_ms_nss;
+ unsigned long js_reset_ms_ss;
+ unsigned long js_reset_ms_nss;
+
+ kbdev = to_kbase_device(dev);
+ if (!kbdev)
+ {
+ return -ENODEV;
+ }
+
+ ms = (u64)kbdev->js_soft_stop_ticks * kbdev->js_data.scheduling_tick_ns;
+ osk_divmod6432(&ms, 1000000UL);
+ js_soft_stop_ms = (unsigned long)ms;
+
+ ms = (u64)kbdev->js_hard_stop_ticks_ss * kbdev->js_data.scheduling_tick_ns;
+ osk_divmod6432(&ms, 1000000UL);
+ js_hard_stop_ms_ss = (unsigned long)ms;
+
+ ms = (u64)kbdev->js_hard_stop_ticks_nss * kbdev->js_data.scheduling_tick_ns;
+ osk_divmod6432(&ms, 1000000UL);
+ js_hard_stop_ms_nss = (unsigned long)ms;
+
+ ms = (u64)kbdev->js_reset_ticks_ss * kbdev->js_data.scheduling_tick_ns;
+ osk_divmod6432(&ms, 1000000UL);
+ js_reset_ms_ss = (unsigned long)ms;
+
+ ms = (u64)kbdev->js_reset_ticks_nss * kbdev->js_data.scheduling_tick_ns;
+ osk_divmod6432(&ms, 1000000UL);
+ js_reset_ms_nss = (unsigned long)ms;
+
+ ret = scnprintf(buf, PAGE_SIZE, "%lu %lu %lu %lu %lu\n", js_soft_stop_ms, js_hard_stop_ms_ss, js_hard_stop_ms_nss, js_reset_ms_ss, js_reset_ms_nss);
+
+ if (ret >= PAGE_SIZE)
+ {
+ buf[PAGE_SIZE-2] = '\n';
+ buf[PAGE_SIZE-1] = '\0';
+ ret = PAGE_SIZE-1;
+ }
+
+ return ret;
+}
+/** The sysfs file @c js_timeouts.
+ *
+ * This is used to override the current job scheduler values for
+ * KBASE_CONFIG_ATTR_JS_SOFT_STOP_TICKS
+ * KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_SS
+ * KBASE_CONFIG_ATTR_JS_HARD_STOP_TICKS_NSS
+ * KBASE_CONFIG_ATTR_JS_RESET_TICKS_SS
+ * KBASE_CONFIG_ATTR_JS_RESET_TICKS_NSS.
+ */
+DEVICE_ATTR(js_timeouts, S_IRUGO|S_IWUSR, show_js_timeouts, set_js_timeouts);
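+
+/* Worked example for the ms-to-ticks conversion used above: assuming a
+ * scheduling_tick_ns of 100000000 (100 ms per tick, an illustrative value),
+ * writing "500 1000 30000 3000 45000" stores
+ * 500 * 1000000 / 100000000 = 5 soft-stop ticks, and likewise for the other
+ * four fields. Reading the file performs the inverse conversion back to ms.
+ */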
+#endif /* MALI_LICENSE_IS_GPL && (MALI_CUSTOMER_RELEASE == 0) */
+
+#if MALI_DEBUG
+static ssize_t set_js_softstop_always(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct kbase_device *kbdev;
+ int items;
+ int softstop_always;
+
+ kbdev = to_kbase_device(dev);
+ if (!kbdev)
+ {
+ return -ENODEV;
+ }
+
+ items = sscanf(buf, "%d", &softstop_always);
+ if ((items == 1) && ((softstop_always == 0) || (softstop_always == 1)))
+ {
+ kbdev->js_data.softstop_always = (mali_bool) softstop_always;
+
+ dev_info(kbdev->osdev.dev, "Support for softstop on a single context: %s\n", (kbdev->js_data.softstop_always == MALI_FALSE)? "Disabled" : "Enabled");
+ return count;
+ }
+ else
+ {
+ dev_err(kbdev->osdev.dev, "Couldn't process js_softstop_always write operation.\nUse format "
+ "<soft_stop_always>\n");
+ return -EINVAL;
+ }
+}
+
+static ssize_t show_js_softstop_always(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct kbase_device *kbdev;
+ ssize_t ret;
+
+ kbdev = to_kbase_device(dev);
+ if (!kbdev)
+ {
+ return -ENODEV;
+ }
+
+ ret = scnprintf(buf, PAGE_SIZE, "%d\n", kbdev->js_data.softstop_always);
+
+ if (ret >= PAGE_SIZE)
+ {
+ buf[PAGE_SIZE-2] = '\n';
+ buf[PAGE_SIZE-1] = '\0';
+ ret = PAGE_SIZE-1;
+ }
+
+ return ret;
+}
+/**
+ * By default, soft-stops are disabled when only a single context is present.
+ * The ability to enable soft-stops when only a single context is present can
+ * be used for debug and unit-testing purposes (see the CL t6xx_stress_1
+ * unit-test for an example of how this feature is used).
+ */
+DEVICE_ATTR(js_softstop_always, S_IRUGO|S_IWUSR, show_js_softstop_always, set_js_softstop_always);
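+
+/* Illustrative usage (debug builds only):
+ *
+ *   # echo 1 > /sys/devices/.../js_softstop_always
+ *   # cat /sys/devices/.../js_softstop_always
+ *   1
+ */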
+#endif /* MALI_DEBUG */
+
+#if MALI_DEBUG
+typedef void (kbasep_debug_command_func)( kbase_device * );
+
+typedef enum
+{
+ KBASEP_DEBUG_COMMAND_DUMPTRACE,
+
+ /* This must be the last enum */
+ KBASEP_DEBUG_COMMAND_COUNT
+} kbasep_debug_command_code;
+
+typedef struct kbasep_debug_command
+{
+ char *str;
+ kbasep_debug_command_func *func;
+} kbasep_debug_command;
+
+/** Debug commands supported by the driver */
+static const kbasep_debug_command debug_commands[] =
+{
+ {
+ .str = "dumptrace",
+ .func = &kbasep_trace_dump,
+ }
+};
+
+/** Show callback for the @c debug_command sysfs file.
+ *
+ * This function is called to get the contents of the @c debug_command sysfs
+ * file. This is a list of the available debug commands, separated by newlines.
+ *
+ * @param dev The device this sysfs file is for
+ * @param attr The attributes of the sysfs file
+ * @param buf The output buffer for the sysfs file contents
+ *
+ * @return The number of bytes output to @c buf.
+ */
+static ssize_t show_debug(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct kbase_device *kbdev;
+ int i;
+ ssize_t ret = 0;
+
+ kbdev = to_kbase_device(dev);
+
+ if (!kbdev)
+ {
+ return -ENODEV;
+ }
+
+ for(i=0; i<KBASEP_DEBUG_COMMAND_COUNT && ret<PAGE_SIZE; i++)
+ {
+ ret += scnprintf(buf+ret, PAGE_SIZE - ret, "%s\n", debug_commands[i].str);
+ }
+
+ if (ret >= PAGE_SIZE)
+ {
+ buf[PAGE_SIZE-2] = '\n';
+ buf[PAGE_SIZE-1] = '\0';
+ ret = PAGE_SIZE-1;
+ }
+
+ return ret;
+}
+
+/** Store callback for the @c debug_command sysfs file.
+ *
+ * This function is called when the @c debug_command sysfs file is written to.
+ * It matches the requested command against the available commands, and if
+ * a matching command is found calls the associated function from
+ * @ref debug_commands to issue the command.
+ *
+ * @param dev The device this sysfs file is for
+ * @param attr The attributes of the sysfs file
+ * @param buf The value written to the sysfs file
+ * @param count The number of bytes written to the sysfs file
+ *
+ * @return @c count if the function succeeded. An error code on failure.
+ */
+static ssize_t issue_debug(struct device *dev, struct device_attribute *attr, const char *buf, size_t count)
+{
+ struct kbase_device *kbdev;
+ int i;
+
+ kbdev = to_kbase_device(dev);
+
+ if (!kbdev)
+ {
+ return -ENODEV;
+ }
+
+ for(i=0; i<KBASEP_DEBUG_COMMAND_COUNT; i++)
+ {
+ if (sysfs_streq(debug_commands[i].str, buf))
+ {
+ debug_commands[i].func( kbdev );
+ return count;
+ }
+ }
+
+ /* Debug Command not found */
+ dev_err(dev, "debug_command: command not known\n");
+ return -EINVAL;
+}
+
+/** The sysfs file @c debug_command.
+ *
+ * This is used to issue general debug commands to the device driver.
+ * Reading it will produce a list of debug commands, separated by newlines.
+ * Writing to it with one of those commands will issue said command.
+ */
+DEVICE_ATTR(debug_command, S_IRUGO|S_IWUSR, show_debug, issue_debug);
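+
+/* Illustrative usage, using the one command registered above:
+ *
+ *   # cat /sys/devices/.../debug_command
+ *   dumptrace
+ *   # echo dumptrace > /sys/devices/.../debug_command
+ */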
+#endif /*MALI_DEBUG*/
+#endif /*MALI_LICENSE_IS_GPL*/
+
+static int kbase_common_reg_map(kbase_device *kbdev)
+{
+ struct kbase_os_device *osdev = &kbdev->osdev;
+ int err = -ENOMEM;
+
+ osdev->reg_res = request_mem_region(osdev->reg_start,
+ osdev->reg_size,
+#if MALI_LICENSE_IS_GPL
+ dev_name(osdev->dev)
+#else
+ mali_dev_name
+#endif
+ );
+ if (!osdev->reg_res)
+ {
+ dev_err(osdev->dev, "Register window unavailable\n");
+ err = -EIO;
+ goto out_region;
+ }
+
+ osdev->reg = ioremap(osdev->reg_start, osdev->reg_size);
+ if (!osdev->reg)
+ {
+ dev_err(osdev->dev, "Can't remap register window\n");
+ err = -EINVAL;
+ goto out_ioremap;
+ }
+
+ return 0;
+
+out_ioremap:
+ release_resource(osdev->reg_res);
+ kfree(osdev->reg_res);
+out_region:
+ return err;
+}
+
+static void kbase_common_reg_unmap(kbase_device *kbdev)
+{
+ struct kbase_os_device *osdev = &kbdev->osdev;
+
+ iounmap(osdev->reg);
+ release_resource(osdev->reg_res);
+ kfree(osdev->reg_res);
+}
+
+static int kbase_common_device_init(kbase_device *kbdev)
+{
+ struct kbase_os_device *osdev = &kbdev->osdev;
+ int err = -ENOMEM;
+ mali_error mali_err;
+ enum
+ {
+ inited_mem = (1u << 0),
+ inited_job_slot = (1u << 1),
+ inited_pm = (1u << 2),
+ inited_js = (1u << 3),
+ inited_irqs = (1u << 4)
+#if MALI_LICENSE_IS_GPL
+ ,inited_debug = (1u << 5)
+ ,inited_js_softstop = (1u << 6)
+#endif
+#if MALI_CUSTOMER_RELEASE == 0
+ ,inited_js_timeouts = (1u << 7)
+#endif
+ /* BASE_HW_ISSUE_8401 */
+ ,inited_workaround = (1u << 8)
+ };
+
+ int inited = 0;
+
+#if MALI_LICENSE_IS_GPL
+ dev_set_drvdata(osdev->dev, kbdev);
+
+ osdev->mdev.minor = MISC_DYNAMIC_MINOR;
+ osdev->mdev.name = osdev->devname;
+ osdev->mdev.fops = &kbase_fops;
+ osdev->mdev.parent = get_device(osdev->dev);
+#endif
+
+ scnprintf(osdev->devname, DEVNAME_SIZE, "%s%d", kbase_drv_name, kbase_dev_nr++);
+
+#if MALI_LICENSE_IS_GPL
+ if (misc_register(&osdev->mdev))
+ {
+ dev_err(osdev->dev, "Couldn't register misc dev %s\n", osdev->devname);
+ err = -EINVAL;
+ goto out_misc;
+ }
+
+ if (device_create_file(osdev->dev, &dev_attr_power_policy))
+ {
+ dev_err(osdev->dev, "Couldn't create power_policy sysfs file\n");
+ goto out_file;
+ }
+
+ down(&kbase_dev_list_lock);
+ list_add(&osdev->entry, &kbase_dev_list);
+ up(&kbase_dev_list_lock);
+ dev_info(osdev->dev, "Probed as %s\n", dev_name(osdev->mdev.this_device));
+#endif
+
+ mali_err = kbase_pm_init(kbdev);
+ if (MALI_ERROR_NONE != mali_err)
+ {
+ goto out_partial;
+ }
+ inited |= inited_pm;
+
+ mali_err = kbase_mem_init(kbdev);
+ if (MALI_ERROR_NONE != mali_err)
+ {
+ goto out_partial;
+ }
+ inited |= inited_mem;
+
+ mali_err = kbase_job_slot_init(kbdev);
+ if (MALI_ERROR_NONE != mali_err)
+ {
+ goto out_partial;
+ }
+ inited |= inited_job_slot;
+
+ mali_err = kbasep_js_devdata_init(kbdev);
+ if (MALI_ERROR_NONE != mali_err)
+ {
+ goto out_partial;
+ }
+ inited |= inited_js;
+
+ err = kbase_install_interrupts(kbdev);
+ if (err)
+ {
+ goto out_partial;
+ }
+ inited |= inited_irqs;
+
+#if MALI_LICENSE_IS_GPL
+#if MALI_DEBUG
+ if (device_create_file(osdev->dev, &dev_attr_debug_command))
+ {
+ dev_err(osdev->dev, "Couldn't create debug_command sysfs file\n");
+ goto out_partial;
+ }
+ inited |= inited_debug;
+
+ if (device_create_file(osdev->dev, &dev_attr_js_softstop_always))
+ {
+ dev_err(osdev->dev, "Couldn't create js_softstop_always sysfs file\n");
+ goto out_partial;
+ }
+ inited |= inited_js_softstop;
+#endif /* MALI_DEBUG */
+#if MALI_CUSTOMER_RELEASE == 0
+ if (device_create_file(osdev->dev, &dev_attr_js_timeouts))
+ {
+ dev_err(osdev->dev, "Couldn't create js_timeouts sysfs file\n");
+ goto out_partial;
+ }
+ inited |= inited_js_timeouts;
+#endif /* MALI_CUSTOMER_RELEASE */
+#endif /*MALI_LICENSE_IS_GPL*/
+
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8401))
+ {
+ if (MALI_ERROR_NONE != kbasep_8401_workaround_init(kbdev))
+ {
+ goto out_partial;
+ }
+ inited |= inited_workaround;
+ }
+
+ mali_err = kbase_pm_powerup(kbdev);
+ if (MALI_ERROR_NONE == mali_err)
+ {
+ return 0;
+ }
+
+out_partial:
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8401))
+ {
+ if (inited & inited_workaround)
+ {
+ kbasep_8401_workaround_term(kbdev);
+ }
+ }
+
+#if MALI_LICENSE_IS_GPL
+#if MALI_CUSTOMER_RELEASE == 0
+ if (inited & inited_js_timeouts)
+ {
+ device_remove_file(kbdev->osdev.dev, &dev_attr_js_timeouts);
+ }
+#endif
+#if MALI_DEBUG
+ if (inited & inited_js_softstop)
+ {
+ device_remove_file(kbdev->osdev.dev, &dev_attr_js_softstop_always);
+ }
+
+ if (inited & inited_debug)
+ {
+ device_remove_file(kbdev->osdev.dev, &dev_attr_debug_command);
+ }
+#endif
+#endif /*MALI_LICENSE_IS_GPL*/
+
+ if (inited & inited_js)
+ {
+ kbasep_js_devdata_halt(kbdev);
+ }
+ if (inited & inited_job_slot)
+ {
+ kbase_job_slot_halt(kbdev);
+ }
+ if (inited & inited_mem)
+ {
+ kbase_mem_halt(kbdev);
+ }
+ if (inited & inited_pm)
+ {
+ kbase_pm_halt(kbdev);
+ }
+
+ if (inited & inited_irqs)
+ {
+ kbase_release_interrupts(kbdev);
+ }
+
+ if (inited & inited_js)
+ {
+ kbasep_js_devdata_term(kbdev);
+ }
+ if (inited & inited_job_slot)
+ {
+ kbase_job_slot_term(kbdev);
+ }
+ if (inited & inited_mem)
+ {
+ kbase_mem_term(kbdev);
+ }
+ if (inited & inited_pm)
+ {
+ kbase_pm_term(kbdev);
+ }
+
+#if MALI_LICENSE_IS_GPL
+ down(&kbase_dev_list_lock);
+ list_del(&osdev->entry);
+ up(&kbase_dev_list_lock);
+
+ device_remove_file(kbdev->osdev.dev, &dev_attr_power_policy);
+out_file:
+ misc_deregister(&kbdev->osdev.mdev);
+out_misc:
+ put_device(osdev->dev);
+#endif
+ return err;
+}
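+
+/* kbase_common_device_init() above uses the 'inited' bitmask so that the
+ * out_partial error path can unwind exactly the subsystems that were brought
+ * up, in reverse order (halt first, then term), regardless of which init
+ * step failed. New init steps should add a flag to the enum and mirror the
+ * teardown in both out_partial and kbase_common_device_remove().
+ */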
+
+#if MALI_LICENSE_IS_GPL
+static int kbase_platform_device_probe(struct platform_device *pdev)
+{
+ struct kbase_device *kbdev;
+ kbase_device_info *dev_info;
+ struct kbase_os_device *osdev;
+ struct resource *reg_res;
+ kbase_attribute *platform_data;
+ int err;
+ int i;
+ struct mali_base_gpu_core_props *core_props;
+#if MALI_NO_MALI
+ mali_error mali_err;
+#endif
+
+ dev_info = (kbase_device_info *)pdev->id_entry->driver_data;
+ kbdev = kbase_device_alloc();
+ if (!kbdev)
+ {
+ dev_err(&pdev->dev, "Can't allocate device\n");
+ err = -ENOMEM;
+ goto out;
+ }
+
+#if MALI_NO_MALI
+ mali_err = midg_device_create(kbdev);
+ if (MALI_ERROR_NONE != mali_err)
+ {
+ dev_err(&pdev->dev, "Can't initialize dummy model\n");
+ err = -ENOMEM;
+ goto out_midg;
+ }
+#endif
+
+ osdev = &kbdev->osdev;
+ osdev->dev = &pdev->dev;
+ platform_data = (kbase_attribute *)osdev->dev->platform_data;
+
+ if (NULL == platform_data)
+ {
+ dev_err(osdev->dev, "Platform data not specified\n");
+ err = -ENOENT;
+ goto out_free_dev;
+ }
+
+ if (MALI_TRUE != kbasep_validate_configuration_attributes(kbdev, platform_data))
+ {
+ dev_err(osdev->dev, "Configuration attributes failed to validate\n");
+ err = -EINVAL;
+ goto out_free_dev;
+ }
+ kbdev->config_attributes = platform_data;
+
+ /* 3 IRQ resources */
+ for (i = 0; i < 3; i++)
+ {
+ struct resource *irq_res;
+
+ irq_res = platform_get_resource(pdev, IORESOURCE_IRQ, i);
+ if (!irq_res)
+ {
+ dev_err(osdev->dev, "No IRQ resource at index %d\n", i);
+ err = -ENOENT;
+ goto out_free_dev;
+ }
+
+ osdev->irqs[i].irq = irq_res->start;
+ osdev->irqs[i].flags = (irq_res->flags & IRQF_TRIGGER_MASK);
+ }
+
+ /* the first memory resource is the physical address of the GPU registers */
+ reg_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!reg_res)
+ {
+ dev_err(&pdev->dev, "Invalid register resource\n");
+ err = -ENOENT;
+ goto out_free_dev;
+ }
+
+ osdev->reg_start = reg_res->start;
+ osdev->reg_size = resource_size(reg_res);
+
+ err = kbase_common_reg_map(kbdev);
+ if (err)
+ {
+ goto out_free_dev;
+ }
+
+ if (MALI_ERROR_NONE != kbase_device_init(kbdev, dev_info))
+ {
+ dev_err(&pdev->dev, "Can't initialize device\n");
+ err = -ENOMEM;
+ goto out_reg_unmap;
+ }
+
+#if MALI_USE_UMP == 1
+ kbdev->memdev.ump_device_id = kbasep_get_config_value(kbdev, platform_data,
+ KBASE_CONFIG_ATTR_UMP_DEVICE);
+#endif /* MALI_USE_UMP == 1 */
+
+ kbdev->memdev.per_process_memory_limit = kbasep_get_config_value(kbdev, platform_data,
+ KBASE_CONFIG_ATTR_MEMORY_PER_PROCESS_LIMIT);
+
+ /* obtain min/max configured gpu frequencies */
+ core_props = &(kbdev->gpu_props.props.core_props);
+ core_props->gpu_freq_khz_min = kbasep_get_config_value(kbdev, platform_data,
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MIN);
+ core_props->gpu_freq_khz_max = kbasep_get_config_value(kbdev, platform_data,
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MAX);
+ kbdev->gpu_props.irq_throttle_time_us = kbasep_get_config_value(kbdev, platform_data,
+ KBASE_CONFIG_ATTR_GPU_IRQ_THROTTLE_TIME_US);
+
+ err = kbase_register_memory_regions(kbdev, (kbase_attribute *)osdev->dev->platform_data);
+ if (err)
+ {
+ dev_err(osdev->dev, "Failed to register memory regions\n");
+ goto out_term_dev;
+ }
+
+ err = kbase_common_device_init(kbdev);
+ if (err)
+ {
+ dev_err(osdev->dev, "Failed kbase_common_device_init\n");
+ goto out_term_dev;
+ }
+ return 0;
+
+out_term_dev:
+ kbase_device_term(kbdev);
+out_reg_unmap:
+ kbase_common_reg_unmap(kbdev);
+out_free_dev:
+#if MALI_NO_MALI
+ midg_device_destroy(kbdev);
+out_midg:
+#endif /* MALI_NO_MALI */
+ kbase_device_free(kbdev);
+out:
+ return err;
+}
+#endif /* MALI_LICENSE_IS_GPL */
+
+static int kbase_common_device_remove(struct kbase_device *kbdev)
+{
+ if (kbase_hw_has_issue(kbdev, BASE_HW_ISSUE_8401))
+ {
+ kbasep_8401_workaround_term(kbdev);
+ }
+#if MALI_LICENSE_IS_GPL
+ /* Remove the sys power policy file */
+ device_remove_file(kbdev->osdev.dev, &dev_attr_power_policy);
+#if MALI_DEBUG
+ device_remove_file(kbdev->osdev.dev, &dev_attr_js_softstop_always);
+ device_remove_file(kbdev->osdev.dev, &dev_attr_debug_command);
+#endif
+#endif
+
+ kbasep_js_devdata_halt(kbdev);
+ kbase_job_slot_halt(kbdev);
+ kbase_mem_halt(kbdev);
+ kbase_pm_halt(kbdev);
+
+ kbase_release_interrupts(kbdev);
+
+ kbasep_js_devdata_term(kbdev);
+ kbase_job_slot_term(kbdev);
+ kbase_mem_term(kbdev);
+ kbase_pm_term(kbdev);
+
+#if MALI_LICENSE_IS_GPL
+ down(&kbase_dev_list_lock);
+ list_del(&kbdev->osdev.entry);
+ up(&kbase_dev_list_lock);
+ misc_deregister(&kbdev->osdev.mdev);
+ put_device(kbdev->osdev.dev);
+#endif
+ kbase_common_reg_unmap(kbdev);
+ kbase_device_term(kbdev);
+#if MALI_NO_MALI
+ midg_device_destroy(kbdev);
+#endif /* MALI_NO_MALI */
+ kbase_device_free(kbdev);
+
+ return 0;
+}
+
+
+#if MALI_LICENSE_IS_GPL
+static int kbase_platform_device_remove(struct platform_device *pdev)
+{
+ struct kbase_device *kbdev = to_kbase_device(&pdev->dev);
+
+ if (!kbdev)
+ {
+ return -ENODEV;
+ }
+
+ return kbase_common_device_remove(kbdev);
+}
+
+/** Suspend callback from the OS.
+ *
+ * This is called by Linux when the device should suspend.
+ *
+ * @param dev The device to suspend
+ *
+ * @return A standard Linux error code
+ */
+static int kbase_device_suspend(struct device *dev)
+{
+ struct kbase_device *kbdev = to_kbase_device(dev);
+
+ if (!kbdev)
+ {
+ return -ENODEV;
+ }
+
+ /* Send the event to the power policy */
+ kbase_pm_send_event(kbdev, KBASE_PM_EVENT_SYSTEM_SUSPEND);
+
+ /* Wait for the policy to suspend the device */
+ kbase_pm_wait_for_power_down(kbdev);
+
+ return 0;
+}
+
+/** Resume callback from the OS.
+ *
+ * This is called by Linux when the device should resume from suspension.
+ *
+ * @param dev The device to resume
+ *
+ * @return A standard Linux error code
+ */
+static int kbase_device_resume(struct device *dev)
+{
+ struct kbase_device *kbdev = to_kbase_device(dev);
+
+ if (!kbdev)
+ {
+ return -ENODEV;
+ }
+
+ /* Send the event to the power policy */
+ kbase_pm_send_event(kbdev, KBASE_PM_EVENT_SYSTEM_RESUME);
+
+ /* Wait for the policy to resume the device */
+ kbase_pm_wait_for_power_up(kbdev);
+
+ return 0;
+}
+
+#define kbdev_info(x) ((kernel_ulong_t)&kbase_dev_info[(x)])
+
+static struct platform_device_id kbase_platform_id_table[] =
+{
+ {
+ .name = "mali-t6xm",
+ .driver_data = kbdev_info(KBASE_MALI_T6XM),
+ },
+ {
+ .name = "mali-t6f1",
+ .driver_data = kbdev_info(KBASE_MALI_T6F1),
+ },
+ {
+ .name = "mali-t601",
+ .driver_data = kbdev_info(KBASE_MALI_T601),
+ },
+ {
+ .name = "mali-t604",
+ .driver_data = kbdev_info(KBASE_MALI_T604),
+ },
+ {
+ .name = "mali-t608",
+ .driver_data = kbdev_info(KBASE_MALI_T608),
+ },
+ {},
+};
+
+MODULE_DEVICE_TABLE(platform, kbase_platform_id_table);
+
+/** The power management operations for the platform driver.
+ */
+static struct dev_pm_ops kbase_pm_ops =
+{
+ .suspend = kbase_device_suspend,
+ .resume = kbase_device_resume,
+};
+
+static struct platform_driver kbase_platform_driver =
+{
+ .probe = kbase_platform_device_probe,
+ .remove = kbase_platform_device_remove,
+ .driver =
+ {
+ .name = kbase_drv_name,
+ .owner = THIS_MODULE,
+ .pm = &kbase_pm_ops,
+ },
+ .id_table = kbase_platform_id_table,
+};
+
+#endif /* MALI_LICENSE_IS_GPL */
+
+#if MALI_LICENSE_IS_GPL && MALI_FAKE_PLATFORM_DEVICE
+static struct platform_device *mali_device;
+#endif /* MALI_LICENSE_IS_GPL && MALI_FAKE_PLATFORM_DEVICE */
+
+#ifdef MALI_PCI_DEVICE
+static kbase_attribute pci_attributes[] =
+{
+ {
+ KBASE_CONFIG_ATTR_MEMORY_PER_PROCESS_LIMIT,
+ 512 * 1024 * 1024UL /* 512MB */
+ },
+#if MALI_USE_UMP == 1
+ {
+ KBASE_CONFIG_ATTR_UMP_DEVICE,
+ UMP_DEVICE_Z_SHIFT
+ },
+#endif /* MALI_USE_UMP == 1 */
+ {
+ KBASE_CONFIG_ATTR_MEMORY_OS_SHARED_MAX,
+ 768 * 1024 * 1024UL /* 768MB */
+ },
+ {
+ KBASE_CONFIG_ATTR_END,
+ 0
+ }
+};
+
+static int kbase_pci_device_probe(struct pci_dev *pdev,
+ const struct pci_device_id *pci_id)
+{
+ const kbase_device_info *dev_info;
+ kbase_device *kbdev;
+ kbase_os_device *osdev;
+ kbase_attribute *platform_data;
+ struct mali_base_gpu_core_props *core_props;
+ int err;
+#if MALI_NO_MALI
+ mali_error mali_err;
+#endif
+
+ dev_info = &kbase_dev_info[pci_id->driver_data];
+ kbdev = kbase_device_alloc();
+ if (!kbdev)
+ {
+ dev_err(&pdev->dev, "Can't allocate device\n");
+ err = -ENOMEM;
+ goto out;
+ }
+
+#if MALI_NO_MALI
+ mali_err = midg_device_create(kbdev);
+ if (MALI_ERROR_NONE != mali_err)
+ {
+ dev_err(&pdev->dev, "Can't initialize dummy model\n");
+ err = -ENOMEM;
+ goto out_midg;
+ }
+#endif
+
+ osdev = &kbdev->osdev;
+ osdev->dev = &pdev->dev;
+ platform_data = (kbase_attribute *)osdev->dev->platform_data;
+
+ err = pci_enable_device(pdev);
+ if (err)
+ {
+ goto out_free_dev;
+ }
+
+ osdev->reg_start = pci_resource_start(pdev, 0);
+ osdev->reg_size = pci_resource_len(pdev, 0);
+ if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM))
+ {
+ err = -EINVAL;
+ goto out_disable;
+ }
+
+ err = kbase_common_reg_map(kbdev);
+ if (err)
+ {
+ goto out_disable;
+ }
+
+ if (MALI_ERROR_NONE != kbase_device_init(kbdev, dev_info))
+ {
+ dev_err(&pdev->dev, "Can't initialize device\n");
+ err = -ENOMEM;
+ goto out_reg_unmap;
+ }
+
+ osdev->irqs[0].irq = pdev->irq;
+ osdev->irqs[1].irq = pdev->irq;
+ osdev->irqs[2].irq = pdev->irq;
+
+ pci_set_master(pdev);
+
+ if (MALI_TRUE != kbasep_validate_configuration_attributes(kbdev, pci_attributes))
+ {
+ err = -EINVAL;
+ goto out_term_dev;
+ }
+ /* Use the platform data passed in instead of the pci attributes */
+ kbdev->config_attributes = platform_data;
+
+#if MALI_USE_UMP == 1
+ kbdev->memdev.ump_device_id = kbasep_get_config_value(kbdev, pci_attributes,
+ KBASE_CONFIG_ATTR_UMP_DEVICE);
+#endif /* MALI_USE_UMP == 1 */
+
+ kbdev->memdev.per_process_memory_limit = kbasep_get_config_value(kbdev, pci_attributes,
+ KBASE_CONFIG_ATTR_MEMORY_PER_PROCESS_LIMIT);
+
+ err = kbase_register_memory_regions(kbdev, pci_attributes);
+ if (err)
+ {
+ goto out_term_dev;
+ }
+
+ /* obtain min/max configured gpu frequencies */
+ core_props = &(kbdev->gpu_props.props.core_props);
+ core_props->gpu_freq_khz_min = kbasep_get_config_value(kbdev, platform_data,
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MIN);
+ core_props->gpu_freq_khz_max = kbasep_get_config_value(kbdev, platform_data,
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MAX);
+ kbdev->gpu_props.irq_throttle_time_us = kbasep_get_config_value(kbdev, platform_data,
+ KBASE_CONFIG_ATTR_GPU_IRQ_THROTTLE_TIME_US);
+
+ err = kbase_common_device_init(kbdev);
+ if (err)
+ {
+ goto out_term_dev;
+ }
+
+ return 0;
+
+out_term_dev:
+ kbase_device_term(kbdev);
+out_reg_unmap:
+ kbase_common_reg_unmap(kbdev);
+out_disable:
+ pci_disable_device(pdev);
+out_free_dev:
+#if MALI_NO_MALI
+ midg_device_destroy(kbdev);
+out_midg:
+#endif /* MALI_NO_MALI */
+ kbase_device_free(kbdev);
+out:
+ return err;
+}
+
+static void kbase_pci_device_remove(struct pci_dev *pdev)
+{
+ struct kbase_device *kbdev = to_kbase_device(&pdev->dev);
+
+ if (!kbdev)
+ {
+ return;
+ }
+
+ kbase_common_device_remove(kbdev);
+ pci_disable_device(pdev);
+}
+
+static DEFINE_PCI_DEVICE_TABLE(kbase_pci_id_table) =
+{
+ { PCI_DEVICE(0x13b5, 0x6956), 0, 0, KBASE_MALI_T6XM },
+ {},
+};
+
+MODULE_DEVICE_TABLE(pci, kbase_pci_id_table);
+
+static struct pci_driver kbase_pci_driver =
+{
+ .name = KBASE_DRV_NAME,
+ .probe = kbase_pci_device_probe,
+ .remove = kbase_pci_device_remove,
+ .id_table = kbase_pci_id_table,
+};
+#endif /* MALI_PCI_DEVICE */
+
+#if MALI_LICENSE_IS_GPL
+static int __init kbase_driver_init(void)
+{
+ int err;
+#if MALI_FAKE_PLATFORM_DEVICE
+ kbase_platform_config *config;
+ int attribute_count;
+ struct resource resources[PLATFORM_CONFIG_RESOURCE_COUNT];
+
+ config = kbasep_get_platform_config();
+ attribute_count = kbasep_get_config_attribute_count(config->attributes);
+ mali_device = platform_device_alloc( kbasep_midgard_type_to_string(config->midgard_type), 0);
+ if (mali_device == NULL)
+ {
+ return -ENOMEM;
+ }
+
+ kbasep_config_parse_io_resources(config->io_resources, resources);
+ err = platform_device_add_resources(mali_device, resources, PLATFORM_CONFIG_RESOURCE_COUNT);
+ if (err)
+ {
+ platform_device_put(mali_device);
+ mali_device = NULL;
+ return err;
+ }
+
+ err = platform_device_add_data(mali_device, config->attributes, attribute_count * sizeof(config->attributes[0]));
+ if (err)
+ {
+ platform_device_put(mali_device);
+ mali_device = NULL;
+ return err;
+ }
+
+ err = platform_device_add(mali_device);
+ if (err)
+ {
+ platform_device_put(mali_device);
+ mali_device = NULL;
+ return err;
+ }
+
+#endif /* MALI_FAKE_PLATFORM_DEVICE */
+ err = platform_driver_register(&kbase_platform_driver);
+ if (err)
+ {
+ return err;
+ }
+
+#ifdef MALI_PCI_DEVICE
+ err = pci_register_driver(&kbase_pci_driver);
+ if (err)
+ {
+ platform_driver_unregister(&kbase_platform_driver);
+ return err;
+ }
+#endif
+
+ return 0;
+}
+#else
+static int __init kbase_driver_init(void)
+{
+ kbase_platform_config *config;
+ struct kbase_device *kbdev;
+ const kbase_device_info *dev_info;
+ kbase_os_device *osdev;
+ int err;
+ dev_t dev = 0;
+ struct mali_base_gpu_core_props *core_props;
+#if MALI_NO_MALI
+ mali_error mali_err;
+#endif
+
+ if (0 == mali_major)
+ {
+ /* auto select a major */
+ err = alloc_chrdev_region(&dev, 0, 1, mali_dev_name);
+ mali_major = MAJOR(dev);
+ }
+ else
+ {
+ /* use load time defined major number */
+ dev = MKDEV(mali_major, 0);
+ err = register_chrdev_region(dev, 1, mali_dev_name);
+ }
+
+ if (0 != err)
+ {
+ goto out_region;
+ }
+
+ memset(&mali_linux_device, 0, sizeof(mali_linux_device));
+
+ /* initialize our char dev data */
+ cdev_init(&mali_linux_device.cdev, &kbase_fops);
+ mali_linux_device.cdev.owner = THIS_MODULE;
+ mali_linux_device.cdev.ops = &kbase_fops;
+
+ /* register char dev with the kernel */
+ err = cdev_add(&mali_linux_device.cdev, dev, 1/*count*/);
+ if (0 != err)
+ {
+ goto out_cdev_add;
+ }
+
+ config = kbasep_get_platform_config();
+
+ dev_info = &kbase_dev_info[config->midgard_type];
+ kbdev = kbase_device_alloc();
+ if (!kbdev)
+ {
+ pr_err("Can't allocate device\n");
+ err = -ENOMEM;
+ goto out_kbdev_alloc;
+ }
+
+#if MALI_NO_MALI
+ mali_err = midg_device_create(kbdev);
+ if (MALI_ERROR_NONE != mali_err)
+ {
+ pr_err("Can't initialize dummy model\n");
+ err = -ENOMEM;
+ goto out_midg;
+ }
+#endif
+
+ osdev = &kbdev->osdev;
+ osdev->dev = &mali_linux_device.cdev;
+ osdev->reg_start = config->io_resources->io_memory_region.start;
+ osdev->reg_size = config->io_resources->io_memory_region.end - config->io_resources->io_memory_region.start + 1;
+
+ err = kbase_common_reg_map(kbdev);
+ if (err)
+ {
+ goto out_free_dev;
+ }
+
+ if (MALI_ERROR_NONE != kbase_device_init(kbdev, dev_info))
+ {
+ pr_err("Can't initialize device\n");
+ err = -ENOMEM;
+ goto out_reg_unmap;
+ }
+
+ if (MALI_TRUE != kbasep_validate_configuration_attributes(kbdev, config->attributes))
+ {
+ err = -EINVAL;
+ goto out_device_init;
+ }
+
+ kbdev->config_attributes = config->attributes;
+
+ osdev->irqs[0].irq = config->io_resources->job_irq_number;
+ osdev->irqs[1].irq = config->io_resources->mmu_irq_number;
+ osdev->irqs[2].irq = config->io_resources->gpu_irq_number;
+
+ kbdev->memdev.per_process_memory_limit = kbasep_get_config_value(kbdev, config->attributes,
+ KBASE_CONFIG_ATTR_MEMORY_PER_PROCESS_LIMIT);
+
+#if MALI_USE_UMP == 1
+ kbdev->memdev.ump_device_id = kbasep_get_config_value(kbdev, config->attributes, KBASE_CONFIG_ATTR_UMP_DEVICE);
+#endif /* MALI_USE_UMP == 1 */
+
+ /* obtain min/max configured gpu frequencies */
+ core_props = &(kbdev->gpu_props.props.core_props);
+ core_props->gpu_freq_khz_min = kbasep_get_config_value(kbdev, config->attributes,
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MIN);
+ core_props->gpu_freq_khz_max = kbasep_get_config_value(kbdev, config->attributes,
+ KBASE_CONFIG_ATTR_GPU_FREQ_KHZ_MAX);
+ kbdev->gpu_props.irq_throttle_time_us = kbasep_get_config_value(kbdev, config->attributes,
+ KBASE_CONFIG_ATTR_GPU_IRQ_THROTTLE_TIME_US);
+
+ err = kbase_register_memory_regions(kbdev, config->attributes);
+ if (err)
+ {
+ goto out_device_init;
+ }
+
+ err = kbase_common_device_init(kbdev);
+ if (0 != err)
+ {
+ goto out_device_init;
+ }
+
+ g_kbdev = kbdev;
+
+ return 0;
+
+out_device_init:
+ kbase_device_term(kbdev);
+ g_kbdev = NULL;
+out_reg_unmap:
+ kbase_common_reg_unmap(kbdev);
+out_free_dev:
+#if MALI_NO_MALI
+ midg_device_destroy(kbdev);
+out_midg:
+#endif /* MALI_NO_MALI */
+ kbase_device_free(kbdev);
+out_kbdev_alloc:
+ cdev_del(&mali_linux_device.cdev);
+out_cdev_add:
+ unregister_chrdev_region(dev, 1);
+out_region:
+ return err;
+}
+
+#endif /* MALI_LICENSE_IS_GPL */
+
+static void __exit kbase_driver_exit(void)
+{
+#if MALI_LICENSE_IS_GPL
+#ifdef MALI_PCI_DEVICE
+ pci_unregister_driver(&kbase_pci_driver);
+#endif
+ platform_driver_unregister(&kbase_platform_driver);
+#if MALI_FAKE_PLATFORM_DEVICE
+ if (mali_device)
+ {
+ platform_device_unregister(mali_device);
+ }
+#endif
+#else
+ dev_t dev = MKDEV(mali_major, 0);
+ struct kbase_device *kbdev = g_kbdev;
+
+ if (!kbdev)
+ {
+ return;
+ }
+
+ kbase_common_device_remove(kbdev);
+
+ /* unregister char device */
+ cdev_del(&mali_linux_device.cdev);
+
+ /* free major */
+ unregister_chrdev_region(dev, 1);
+#endif
+}
+
+module_init(kbase_driver_init);
+module_exit(kbase_driver_exit);
+
+#if MALI_LICENSE_IS_GPL
+MODULE_LICENSE("GPL");
+#else
+MODULE_LICENSE("Proprietary");
+#endif
+
+#if MALI_GATOR_SUPPORT
+/* Create the trace points (otherwise we just get code to call a tracepoint) */
+#define CREATE_TRACE_POINTS
+#include "mali_linux_trace.h"
+
+void kbase_trace_mali_pm_status(u32 event, u64 value)
+{
+ trace_mali_pm_status(event, value);
+}
+
+void kbase_trace_mali_pm_power_off(u32 event, u64 value)
+{
+ trace_mali_pm_power_off(event, value);
+}
+
+void kbase_trace_mali_pm_power_on(u32 event, u64 value)
+{
+ trace_mali_pm_power_on(event, value);
+}
+
+void kbase_trace_mali_job_slots_event(u32 event)
+{
+ trace_mali_job_slots_event(event);
+}
+
+void kbase_trace_mali_page_fault_insert_pages(int event, u32 value)
+{
+ trace_mali_page_fault_insert_pages(event, value);
+}
+
+void kbase_trace_mali_mmu_as_in_use(int event)
+{
+ trace_mali_mmu_as_in_use(event);
+}
+
+void kbase_trace_mali_mmu_as_released(int event)
+{
+ trace_mali_mmu_as_released(event);
+}
+
+void kbase_trace_mali_total_alloc_pages_change(long long int event)
+{
+ trace_mali_total_alloc_pages_change(event);
+}
+#endif
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_linux.h
+ * Base kernel APIs, Linux implementation.
+ */
+
+#ifndef _KBASE_LINUX_H_
+#define _KBASE_LINUX_H_
+
+/* All things that are needed for the Linux port. */
+#if MALI_LICENSE_IS_GPL
+#include <linux/platform_device.h>
+#include <linux/miscdevice.h>
+#endif
+#include <linux/list.h>
+#include <linux/module.h>
+
+typedef struct kbase_os_context
+{
+ u64 cookies;
+ osk_dlist reg_pending;
+ wait_queue_head_t event_queue;
+} kbase_os_context;
+
+
+#define DEVNAME_SIZE 16
+
+typedef struct kbase_os_device
+{
+#if MALI_LICENSE_IS_GPL
+ struct list_head entry;
+ struct device *dev;
+ struct miscdevice mdev;
+#else
+ struct cdev *dev;
+#endif
+ u64 reg_start;
+ size_t reg_size;
+ void __iomem *reg;
+ struct resource *reg_res;
+ struct {
+ int irq;
+ int flags;
+ } irqs[3];
+ char devname[DEVNAME_SIZE];
+
+#if MALI_NO_MALI
+ void *model;
+ struct kmem_cache *irq_slab;
+ osk_workq irq_workq;
+ osk_atomic serving_job_irq;
+ osk_atomic serving_gpu_irq;
+ osk_atomic serving_mmu_irq;
+ osk_spinlock reg_op_lock;
+#endif
+} kbase_os_device;
+
+#define KBASE_OS_SUPPORT 1
+
+#if defined(MALI_KERNEL_TEST_API)
+#if (1 == MALI_KERNEL_TEST_API)
+#define KBASE_EXPORT_TEST_API(func) EXPORT_SYMBOL(func);
+#else
+#define KBASE_EXPORT_TEST_API(func)
+#endif
+#else
+#define KBASE_EXPORT_TEST_API(func)
+#endif
+
+#define KBASE_EXPORT_SYMBOL(func) EXPORT_SYMBOL(func);
+
+#endif /* _KBASE_LINUX_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_mem_linux.c
+ * Base kernel memory APIs, Linux implementation.
+ */
+
+/* #define DEBUG 1 */
+
+#include <linux/kernel.h>
+#include <linux/bug.h>
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/dma-mapping.h>
+
+#include <kbase/src/common/mali_kbase.h>
+#include <kbase/src/linux/mali_kbase_mem_linux.h>
+
+struct kbase_va_region *kbase_pmem_alloc(struct kbase_context *kctx, u32 size,
+ u32 flags, u16 *pmem_cookie)
+{
+ struct kbase_va_region *reg;
+ u16 cookie;
+
+ OSK_ASSERT(kctx != NULL);
+ OSK_ASSERT(pmem_cookie != NULL);
+
+ if (0 == size)
+ {
+ goto out1;
+ }
+
+ if (!kbase_check_alloc_flags(flags))
+ {
+ goto out1;
+ }
+
+ reg = kbase_alloc_free_region(kctx, 0, size, KBASE_REG_ZONE_PMEM);
+ if (!reg)
+ goto out1;
+
+ reg->flags &= ~KBASE_REG_FREE;
+
+ kbase_update_region_flags(reg, flags, MALI_FALSE);
+
+ if (kbase_alloc_phy_pages(reg, size, size))
+ goto out2;
+
+ reg->nr_alloc_pages = size;
+ reg->extent = 0;
+
+ kbase_gpu_vm_lock(kctx);
+ if (!kctx->osctx.cookies)
+ goto out3;
+
+ cookie = __ffs(kctx->osctx.cookies);
+ kctx->osctx.cookies &= ~(1UL << cookie);
+ reg->flags &= ~KBASE_REG_COOKIE_MASK;
+ reg->flags |= KBASE_REG_COOKIE(cookie);
+
+ OSK_DLIST_PUSH_FRONT(&kctx->osctx.reg_pending, reg,
+ struct kbase_va_region, link);
+
+ *pmem_cookie = cookie;
+ kbase_gpu_vm_unlock(kctx);
+
+ return reg;
+
+out3:
+ kbase_gpu_vm_unlock(kctx);
+ kbase_free_phy_pages(reg);
+out2:
+ osk_free(reg);
+out1:
+ return NULL;
+
+}
+KBASE_EXPORT_TEST_API(kbase_pmem_alloc)
+
+/*
+ * Callback for munmap(). PMEM receives a special treatment, as it
+ * frees the memory at the same time it gets unmapped. This avoids the
+ * map/unmap race where map reuses a memory range that has been
+ * unmapped from CPU, but still mapped on GPU.
+ */
+STATIC void kbase_cpu_vm_close(struct vm_area_struct *vma)
+{
+ struct kbase_va_region *reg = vma->vm_private_data;
+ kbase_context *kctx = reg->kctx;
+ mali_error err;
+
+ kbase_gpu_vm_lock(kctx);
+
+ err = kbase_cpu_free_mapping(reg, vma);
+ if (!err &&
+ (reg->flags & KBASE_REG_ZONE_MASK) == KBASE_REG_ZONE_PMEM)
+ {
+ kbase_mem_free_region(kctx, reg);
+ }
+
+ kbase_gpu_vm_unlock(kctx);
+}
+KBASE_EXPORT_TEST_API(kbase_cpu_vm_close)
+
+static const struct vm_operations_struct kbase_vm_ops = {
+ .close = kbase_cpu_vm_close,
+};
+
+static int kbase_cpu_mmap(struct kbase_va_region *reg, struct vm_area_struct *vma, void *kaddr, u32 nr_pages)
+{
+ struct kbase_cpu_mapping *map;
+ u64 start_off = vma->vm_pgoff - reg->start_pfn;
+ osk_phy_addr *page_array;
+ int err = 0;
+ int i;
+
+ map = osk_calloc(sizeof(*map));
+ if (!map)
+ {
+ WARN_ON(1);
+ err = -ENOMEM;
+ goto out;
+ }
+
+ /*
+ * VM_DONTCOPY - don't make this mapping available in fork'ed processes
+ * VM_DONTEXPAND - disable mremap on this region
+ * VM_RESERVED & VM_IO - disables paging
+ * VM_MIXEDMAP - Support mixing struct page*s and raw pfns.
+ * This is needed to support using the dedicated and
+ * the OS based memory backends together.
+ */
+ /*
+ * This will need updating to propagate coherency flags
+ * See MIDBASE-1057
+ */
+ vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND | VM_RESERVED | VM_IO | VM_MIXEDMAP;
+ vma->vm_ops = &kbase_vm_ops;
+ vma->vm_private_data = reg;
+
+ page_array = kbase_get_phy_pages(reg);
+
+ if (!(reg->flags & KBASE_REG_CPU_CACHED))
+ {
+ /* We can't map vmalloc'd memory uncached.
+ * Other memory will have been returned from
+ * osk_phy_pages_alloc which should have done the cache
+ * maintenance necessary to support an uncached mapping
+ */
+ BUG_ON(kaddr);
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+ }
+
+ if (!kaddr)
+ {
+ for (i = 0; i < nr_pages; i++)
+ {
+ err = vm_insert_mixed(vma, vma->vm_start + (i << OSK_PAGE_SHIFT), page_array[i + start_off] >> OSK_PAGE_SHIFT);
+ WARN_ON(err);
+ if (err)
+ break;
+ }
+ }
+ else
+ {
+ /* vmalloc remapping is easy... */
+ err = remap_vmalloc_range(vma, kaddr, 0);
+ WARN_ON(err);
+ }
+
+ if (err)
+ {
+ osk_free(map);
+ goto out;
+ }
+
+ map->uaddr = (osk_virt_addr)vma->vm_start;
+ map->nr_pages = nr_pages;
+ map->page_off = start_off;
+ map->private = vma;
+
+ OSK_DLIST_PUSH_FRONT(&reg->map_list, map,
+ struct kbase_cpu_mapping, link);
+
+out:
+ return err;
+}
+
+static int kbase_rb_mmap(struct kbase_context *kctx,
+ struct vm_area_struct *vma,
+ struct kbase_va_region **reg,
+ void **kmap_addr)
+{
+ struct kbase_va_region *new_reg;
+ void *kaddr;
+ u32 nr_pages;
+ size_t size;
+ int err = 0;
+ mali_error m_err = MALI_ERROR_NONE;
+
+ pr_debug("in kbase_rb_mmap\n");
+ size = (vma->vm_end - vma->vm_start);
+ nr_pages = size >> OSK_PAGE_SHIFT;
+
+ if (kctx->jctx.pool_size < size)
+ {
+ err = -EINVAL;
+ goto out;
+ }
+
+ kaddr = kctx->jctx.pool;
+
+ new_reg = kbase_alloc_free_region(kctx, 0, nr_pages, KBASE_REG_ZONE_PMEM);
+ if (!new_reg)
+ {
+ err = -ENOMEM;
+ WARN_ON(1);
+ goto out;
+ }
+
+ new_reg->flags &= ~KBASE_REG_FREE;
+ new_reg->flags |= KBASE_REG_IS_RB | KBASE_REG_CPU_CACHED;
+
+ m_err = kbase_add_va_region(kctx, new_reg, vma->vm_start, nr_pages, 1);
+ if (MALI_ERROR_NONE != m_err)
+ {
+ pr_debug("kbase_rb_mmap: kbase_add_va_region failed\n");
+ /* Free allocated new_reg */
+ kbase_free_alloced_region(new_reg);
+ err = -ENOMEM;
+ goto out;
+ }
+
+ *kmap_addr = kaddr;
+ *reg = new_reg;
+
+ pr_debug("kbase_rb_mmap done\n");
+ return 0;
+
+out:
+ return err;
+}
+
+static int kbase_trace_buffer_mmap(struct kbase_context * kctx, struct vm_area_struct * vma, struct kbase_va_region **reg, void **kaddr)
+{
+ struct kbase_va_region *new_reg;
+ u32 nr_pages;
+ size_t size;
+ int err = 0;
+ u32 * tb;
+
+ pr_debug("in %s\n", __func__);
+ size = (vma->vm_end - vma->vm_start);
+ nr_pages = size >> OSK_PAGE_SHIFT;
+
+ if (!kctx->jctx.tb)
+ {
+ tb = osk_vmalloc(size);
+ if (NULL == tb)
+ {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ kbase_device_trace_buffer_install(kctx, tb, size);
+ }
+ else
+ {
+ err = -EINVAL;
+ goto out;
+ }
+
+ *kaddr = kctx->jctx.tb;
+
+ new_reg = kbase_alloc_free_region(kctx, 0, nr_pages, KBASE_REG_ZONE_PMEM);
+ if (!new_reg)
+ {
+ err = -ENOMEM;
+ WARN_ON(1);
+ goto out_disconnect;
+ }
+
+ new_reg->flags &= ~KBASE_REG_FREE;
+ new_reg->flags |= KBASE_REG_IS_TB | KBASE_REG_CPU_CACHED;
+
+ if (MALI_ERROR_NONE != kbase_add_va_region(kctx, new_reg, vma->vm_start, nr_pages, 1))
+ {
+ err = -ENOMEM;
+ WARN_ON(1);
+ goto out_va_region;
+ }
+
+ *reg = new_reg;
+
+ /* map read only, noexec */
+ vma->vm_flags &= ~(VM_WRITE|VM_EXEC);
+ /* the rest of the flags is added by the cpu_mmap handler */
+
+ pr_debug("%s done\n", __func__);
+ return 0;
+
+out_va_region:
+ kbase_free_alloced_region(new_reg);
+out_disconnect:
+ kbase_device_trace_buffer_uninstall(kctx);
+ osk_vfree(tb);
+out:
+ return err;
+
+}
+
+static int kbase_mmu_dump_mmap( struct kbase_context *kctx,
+ struct vm_area_struct *vma,
+ struct kbase_va_region **reg,
+ void **kmap_addr )
+{
+ struct kbase_va_region *new_reg;
+ void *kaddr;
+ u32 nr_pages;
+ size_t size;
+ int err = 0;
+
+ pr_debug("in kbase_mmu_dump_mmap\n");
+ size = (vma->vm_end - vma->vm_start);
+ nr_pages = size >> OSK_PAGE_SHIFT;
+
+ kaddr = kbase_mmu_dump(kctx, nr_pages);
+
+ if (!kaddr)
+ {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ new_reg = kbase_alloc_free_region(kctx, 0, nr_pages, KBASE_REG_ZONE_PMEM);
+ if (!new_reg)
+ {
+ err = -ENOMEM;
+ WARN_ON(1);
+ goto out;
+ }
+
+ new_reg->flags &= ~KBASE_REG_FREE;
+ new_reg->flags |= KBASE_REG_IS_MMU_DUMP | KBASE_REG_CPU_CACHED;
+
+ if (MALI_ERROR_NONE != kbase_add_va_region(kctx, new_reg, vma->vm_start, nr_pages, 1))
+ {
+ err = -ENOMEM;
+ WARN_ON(1);
+ goto out_va_region;
+ }
+
+ *kmap_addr = kaddr;
+ *reg = new_reg;
+
+ pr_debug("kbase_mmu_dump_mmap done\n");
+ return 0;
+
+out_va_region:
+ kbase_free_alloced_region(new_reg);
+out:
+ return err;
+}
+
+/* must be called with the gpu vm lock held */
+
+struct kbase_va_region * kbase_lookup_cookie(struct kbase_context * kctx, mali_addr64 cookie)
+{
+ struct kbase_va_region * reg;
+ mali_addr64 test_cookie;
+
+ OSK_ASSERT(kctx != NULL);
+
+ test_cookie = KBASE_REG_COOKIE(cookie);
+
+ OSK_DLIST_FOREACH(&kctx->osctx.reg_pending, struct kbase_va_region, link, reg)
+ {
+ if ((reg->flags & KBASE_REG_COOKIE_MASK) == test_cookie)
+ {
+ return reg;
+ }
+ }
+
+ return NULL; /* not found */
+}
+KBASE_EXPORT_TEST_API(kbase_lookup_cookie)
+
+void kbase_unlink_cookie(struct kbase_context * kctx, mali_addr64 cookie, struct kbase_va_region * reg)
+{
+ OSKP_ASSERT(kctx != NULL);
+ OSKP_ASSERT(reg != NULL);
+ OSKP_ASSERT(MALI_TRUE == OSK_DLIST_MEMBER_OF(&kctx->osctx.reg_pending, reg, link));
+ OSKP_ASSERT(KBASE_REG_COOKIE(cookie) == (reg->flags & KBASE_REG_COOKIE_MASK));
+ OSKP_ASSERT((kctx->osctx.cookies & (1UL << cookie)) == 0);
+
+ OSK_DLIST_REMOVE(&kctx->osctx.reg_pending, reg, link);
+ kctx->osctx.cookies |= (1UL << cookie); /* mark as resolved */
+}
+
+KBASE_EXPORT_TEST_API(kbase_unlink_cookie)
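+
+/* Cookie lifecycle, as implemented above and in kbase_pmem_alloc(): the
+ * allocator reserves a free bit in osctx.cookies and parks the region on
+ * reg_pending; userspace then mmap()s the device with the cookie as the page
+ * offset, e.g. (illustrative):
+ *
+ *   void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
+ *                  mali_fd, (off_t)cookie << PAGE_SHIFT);
+ *
+ * kbase_mmap() resolves the cookie via kbase_lookup_cookie(), unlinks it
+ * with kbase_unlink_cookie() and maps the region at the faulting VMA.
+ */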
+
+void kbase_os_mem_map_lock(struct kbase_context * kctx)
+{
+ struct mm_struct * mm = current->mm;
+ (void)kctx;
+ down_read(&mm->mmap_sem);
+}
+
+void kbase_os_mem_map_unlock(struct kbase_context * kctx)
+{
+ struct mm_struct * mm = current->mm;
+ (void)kctx;
+ up_read(&mm->mmap_sem);
+}
+
+int kbase_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct kbase_context *kctx = file->private_data;
+ struct kbase_va_region *reg;
+ void *kaddr = NULL;
+ u32 nr_pages;
+ int err = 0;
+
+ pr_debug("kbase_mmap\n");
+ nr_pages = (vma->vm_end - vma->vm_start) >> OSK_PAGE_SHIFT;
+
+ if (0 == nr_pages)
+ {
+ err = -EINVAL;
+ goto out;
+ }
+
+ kbase_gpu_vm_lock(kctx);
+
+ if (vma->vm_pgoff == KBASE_REG_COOKIE_RB)
+ {
+ /* Reserve offset 0 for the shared ring-buffer */
+ if ((err = kbase_rb_mmap(kctx, vma, &reg, &kaddr)))
+ goto out_unlock;
+
+ pr_debug("kbase_rb_mmap ok\n");
+ goto map;
+ }
+ else if (vma->vm_pgoff == KBASE_REG_COOKIE_TB)
+ {
+ err = kbase_trace_buffer_mmap(kctx, vma, &reg, &kaddr);
+ if (0 != err)
+ goto out_unlock;
+ pr_debug("kbase_trace_buffer_mmap ok\n");
+ goto map;
+ }
+ else if (vma->vm_pgoff == KBASE_REG_COOKIE_MMU_DUMP)
+ {
+ /* MMU dump */
+ if ((err = kbase_mmu_dump_mmap(kctx, vma, &reg, &kaddr)))
+ goto out_unlock;
+
+ goto map;
+ }
+
+ if (vma->vm_pgoff < OSK_PAGE_SIZE) /* first page is reserved for cookie resolution */
+ {
+ /* PMEM stuff, fetch the right region */
+ reg = kbase_lookup_cookie(kctx, vma->vm_pgoff);
+
+ if (NULL != reg)
+ {
+ if (reg->nr_pages != nr_pages)
+ {
+ /* incorrect mmap size */
+ /* leave the cookie for a potential later mapping, or to be reclaimed later when the context is freed */
+ err = -ENOMEM;
+ goto out_unlock;
+ }
+
+ kbase_unlink_cookie(kctx, vma->vm_pgoff, reg);
+
+ if (MALI_ERROR_NONE != kbase_gpu_mmap(kctx, reg, vma->vm_start, nr_pages, 1))
+ {
+ /* Unable to map in GPU space. Recover from kbase_unlink_cookie */
+ OSK_DLIST_PUSH_FRONT(&kctx->osctx.reg_pending, reg, struct kbase_va_region, link);
+ kctx->osctx.cookies &= ~(1UL << vma->vm_pgoff);
+ WARN_ON(1);
+ err = -ENOMEM;
+ goto out_unlock;
+ }
+
+ /*
+ * Overwrite the offset with the
+ * region start_pfn, so we effectively
+ * map from offset 0 in the region.
+ */
+ vma->vm_pgoff = reg->start_pfn;
+ goto map;
+ }
+
+ err = -ENOMEM;
+ goto out_unlock;
+ }
+ else if (vma->vm_pgoff < KBASE_REG_ZONE_EXEC_BASE)
+ {
+ /* invalid offset as it identifies an already mapped pmem */
+ err = -ENOMEM;
+ goto out_unlock;
+ }
+ else
+ {
+ u32 zone;
+
+ /* TMEM case or EXEC case */
+ if (vma->vm_pgoff < KBASE_REG_ZONE_TMEM_BASE)
+ {
+ zone = KBASE_REG_ZONE_EXEC;
+ }
+ else
+ {
+ zone = KBASE_REG_ZONE_TMEM;
+ }
+
+ OSK_DLIST_FOREACH(&kctx->reg_list,
+ struct kbase_va_region, link, reg)
+ {
+ if (reg->start_pfn <= vma->vm_pgoff &&
+ (reg->start_pfn + reg->nr_alloc_pages) >= (vma->vm_pgoff + nr_pages) &&
+ (reg->flags & (KBASE_REG_ZONE_MASK | KBASE_REG_FREE | KBASE_REG_NO_CPU_MAP)) == zone)
+ {
+ /* Match! */
+ goto map;
+ }
+
+ }
+
+ err = -ENOMEM;
+ goto out_unlock;
+ }
+map:
+ err = kbase_cpu_mmap(reg, vma, kaddr, nr_pages);
+
+ if (vma->vm_pgoff == KBASE_REG_COOKIE_MMU_DUMP) {
+ /* MMU dump - userspace should now have a reference on
+ * the pages, so we can now free the kernel mapping */
+ osk_vfree(kaddr);
+ }
+out_unlock:
+ kbase_gpu_vm_unlock(kctx);
+out:
+ if (err)
+ {
+ pr_err("mmap failed %d\n", err);
+ }
+ return err;
+}
+KBASE_EXPORT_TEST_API(kbase_mmap)
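+
+/*
+ * Illustrative sketch (not driver code): the userspace side of the cookie
+ * protocol implemented by kbase_mmap() above. The allocation call that
+ * returns 'cookie' and the 'kbase_fd' descriptor are hypothetical
+ * placeholders; only the offset encoding (the cookie is passed as the mmap
+ * offset, in pages) follows from the code above.
+ *
+ *   void *cpu_va = mmap(NULL, nr_pages * page_size,
+ *                       PROT_READ | PROT_WRITE, MAP_SHARED,
+ *                       kbase_fd, (off_t)cookie * page_size);
+ *
+ * A mapping whose size does not match the region fails with ENOMEM and the
+ * cookie stays valid for a later attempt, as handled above.
+ */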
+
+mali_error kbase_create_os_context(kbase_os_context *osctx)
+{
+ OSK_ASSERT(osctx != NULL);
+
+ OSK_DLIST_INIT(&osctx->reg_pending);
+ osctx->cookies = ~KBASE_REG_RESERVED_COOKIES;
+ init_waitqueue_head(&osctx->event_queue);
+
+ return MALI_ERROR_NONE;
+}
+KBASE_EXPORT_TEST_API(kbase_create_os_context)
+
+static void kbase_reg_pending_dtor(struct kbase_va_region *reg)
+{
+ kbase_free_phy_pages(reg);
+ pr_info("Freeing pending unmapped region\n");
+ osk_free(reg);
+}
+
+void kbase_destroy_os_context(kbase_os_context *osctx)
+{
+ OSK_ASSERT(osctx != NULL);
+
+ OSK_DLIST_EMPTY_LIST(&osctx->reg_pending, struct kbase_va_region,
+ link, kbase_reg_pending_dtor);
+}
+KBASE_EXPORT_TEST_API(kbase_destroy_os_context)
+
+void *kbase_va_alloc(kbase_context *kctx, u32 size)
+{
+ void *va;
+ u32 pages = ((size-1) >> OSK_PAGE_SHIFT) + 1;
+ struct kbase_va_region *reg;
+ osk_phy_addr *page_array;
+ u32 flags = BASE_MEM_PROT_CPU_RD | BASE_MEM_PROT_CPU_WR |
+ BASE_MEM_PROT_GPU_RD | BASE_MEM_PROT_GPU_WR;
+ int i;
+
+ OSK_ASSERT(kctx != NULL);
+
+ if (size == 0)
+ {
+ goto err;
+ }
+
+ va = osk_vmalloc(size);
+ if (!va)
+ {
+ goto err;
+ }
+
+ kbase_gpu_vm_lock(kctx);
+
+ reg = kbase_alloc_free_region(kctx, 0, pages, KBASE_REG_ZONE_PMEM);
+ if (!reg)
+ {
+ goto vm_unlock;
+ }
+
+ reg->flags &= ~KBASE_REG_FREE;
+ kbase_update_region_flags(reg, flags, MALI_FALSE);
+
+ reg->nr_alloc_pages = pages;
+ reg->extent = 0;
+
+ page_array = osk_vmalloc(pages * sizeof(*page_array));
+ if (!page_array)
+ {
+ goto free_reg;
+ }
+
+ for (i = 0; i < pages; i++)
+ {
+ uintptr_t addr;
+ struct page *page;
+ addr = (uintptr_t)va + (i << OSK_PAGE_SHIFT);
+ page = vmalloc_to_page((void *)addr);
+ page_array[i] = PFN_PHYS(page_to_pfn(page));
+ }
+
+ kbase_set_phy_pages(reg, page_array);
+
+ if (kbase_gpu_mmap(kctx, reg, (uintptr_t)va, pages, 1))
+ {
+ goto free_array;
+ }
+
+ kbase_gpu_vm_unlock(kctx);
+
+ return va;
+
+free_array:
+ osk_vfree(page_array);
+free_reg:
+ osk_free(reg);
+vm_unlock:
+ kbase_gpu_vm_unlock(kctx); /* release the lock taken above */
+ osk_vfree(va);
+err:
+ return NULL;
+}
+KBASE_EXPORT_SYMBOL(kbase_va_alloc)
+
+void kbase_va_free(kbase_context *kctx, void *va)
+{
+ struct kbase_va_region *reg;
+ osk_phy_addr *page_array;
+ mali_error err;
+
+ OSK_ASSERT(kctx != NULL);
+ OSK_ASSERT(va != NULL);
+
+ kbase_gpu_vm_lock(kctx);
+
+ reg = kbase_validate_region(kctx, (uintptr_t)va);
+ OSK_ASSERT(reg);
+
+ err = kbase_gpu_munmap(kctx, reg);
+ OSK_ASSERT(err == MALI_ERROR_NONE);
+
+ page_array = kbase_get_phy_pages(reg);
+ osk_vfree(page_array);
+
+ osk_free(reg);
+
+ kbase_gpu_vm_unlock(kctx);
+
+ osk_vfree(va);
+}
+KBASE_EXPORT_SYMBOL(kbase_va_free)
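+
+/*
+ * Minimal usage sketch for the pair above (illustrative only): kbase_va_alloc()
+ * returns a buffer mapped at the same virtual address for the kernel and the
+ * GPU, so the returned pointer doubles as the GPU VA.
+ *
+ *   void *buf = kbase_va_alloc(kctx, 4096);
+ *   if (buf)
+ *   {
+ *       memset(buf, 0, 4096);      // CPU access through the kernel mapping
+ *       // ... program the GPU with (uintptr_t)buf as its VA ...
+ *       kbase_va_free(kctx, buf);
+ *   }
+ */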
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010, 2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_kbase_mem_linux.h
+ * Base kernel memory APIs, Linux implementation.
+ */
+
+#ifndef _KBASE_MEM_LINUX_H_
+#define _KBASE_MEM_LINUX_H_
+
+struct kbase_va_region *kbase_pmem_alloc(struct kbase_context *kctx, u32 size,
+ u32 flags, u16 *pmem_cookie);
+int kbase_mmap(struct file *file, struct vm_area_struct *vma);
+
+/** @brief Allocate memory from kernel space and map it onto the GPU
+ *
+ * @param kctx The context used for the allocation/mapping
+ * @param size The size of the allocation in bytes
+ * @return the VA for kernel space and GPU MMU
+ */
+void *kbase_va_alloc(kbase_context *kctx, u32 size);
+
+/** @brief Free/unmap memory allocated by kbase_va_alloc
+ *
+ * @param kctx The context used for the allocation/mapping
+ * @param va The VA returned by kbase_va_alloc
+ */
+void kbase_va_free(kbase_context *kctx, void *va);
+
+#endif /* _KBASE_MEM_LINUX_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#if !defined(_TRACE_MALI_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_MALI_H
+
+#include <linux/stringify.h>
+#include <linux/tracepoint.h>
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM mali
+#define TRACE_SYSTEM_STRING __stringify(TRACE_SYSTEM)
+#define TRACE_INCLUDE_FILE mali_linux_trace
+
+/**
+ * mali_job_slots_event - called from mali_kbase_core_linux.c
+ * @event_id: ORed together bitfields representing a type of event, made with the GATOR_MAKE_EVENT() macro.
+ */
+TRACE_EVENT(mali_job_slots_event,
+
+ TP_PROTO(unsigned int event_id),
+
+ TP_ARGS(event_id),
+
+ TP_STRUCT__entry(
+ __field( int, event_id )
+ ),
+
+ TP_fast_assign(
+ __entry->event_id = event_id;
+ ),
+
+ TP_printk("event=%d", __entry->event_id)
+);
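+
+/*
+ * Illustrative note (standard kernel tracepoint usage, not something this
+ * file defines): one compilation unit defines CREATE_TRACE_POINTS before
+ * including this header, after which callers emit the event with the
+ * generated trace_mali_job_slots_event() function, e.g.:
+ *
+ *   trace_mali_job_slots_event(event_id);
+ *
+ * where event_id is built with the GATOR_MAKE_EVENT() macro mentioned above.
+ */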
+
+/**
+ * mali_timeline_event - not currently used
+ * @event_id: ORed together bitfields representing a type of event, made with the GATOR_MAKE_EVENT() macro.
+ */
+TRACE_EVENT(mali_timeline_event,
+
+ TP_PROTO(unsigned int event_id),
+
+ TP_ARGS(event_id),
+
+ TP_STRUCT__entry(
+ __field( int, event_id )
+ ),
+
+ TP_fast_assign(
+ __entry->event_id = event_id;
+ ),
+
+ TP_printk("event=%d", __entry->event_id)
+);
+
+/**
+ * mali_hw_counter - not currently used
+ */
+TRACE_EVENT(mali_hw_counter,
+
+ TP_PROTO(unsigned int event_id, unsigned int value),
+
+ TP_ARGS(event_id, value),
+
+ TP_STRUCT__entry(
+ __field( int, event_id )
+ __field( int, value )
+ ),
+
+ TP_fast_assign(
+ __entry->event_id = event_id;
+ __entry->value = value;
+ ),
+
+ TP_printk("event %d = %d", __entry->event_id, __entry->value)
+);
+
+/**
+ * mali_pm_status - Called by mali_kbase_pm_driver.c
+ * @event_id: core type (shader, tiler, l2 cache, l3 cache)
+ * @value: 64-bit bitmask reporting the power status of the cores (1 = ON, 0 = OFF)
+ */
+TRACE_EVENT(mali_pm_status,
+
+ TP_PROTO(unsigned int event_id, unsigned long long value),
+
+ TP_ARGS(event_id, value),
+
+ TP_STRUCT__entry(
+ __field( unsigned int, event_id )
+ __field( unsigned long long, value )
+ ),
+
+ TP_fast_assign(
+ __entry->event_id = event_id;
+ __entry->value = value;
+ ),
+
+ TP_printk("event %u = %llu", __entry->event_id, __entry->value)
+);
+
+/**
+ * mali_pm_power_on - Called by mali_kbase_pm_driver.c
+ * @event_id: core type (shader, tiler, l2 cache, l3 cache)
+ * @value: 64-bit bitmask reporting the cores to power up
+ */
+TRACE_EVENT(mali_pm_power_on,
+
+ TP_PROTO(unsigned int event_id, unsigned long long value),
+
+ TP_ARGS(event_id, value),
+
+ TP_STRUCT__entry(
+ __field( unsigned int, event_id )
+ __field( unsigned long long, value )
+ ),
+
+ TP_fast_assign(
+ __entry->event_id = event_id;
+ __entry->value = value;
+ ),
+
+ TP_printk("event %u = %llu", __entry->event_id, __entry->value)
+);
+
+/**
+ * mali_pm_power_off - Called by mali_kbase_pm_driver.c
+ * @event_id: core type (shader, tiler, l2 cache, l3 cache)
+ * @value: 64-bit bitmask reporting the cores to power down
+ */
+TRACE_EVENT(mali_pm_power_off,
+
+ TP_PROTO(unsigned int event_id, unsigned long long value),
+
+ TP_ARGS(event_id, value),
+
+ TP_STRUCT__entry(
+ __field( unsigned int, event_id )
+ __field( unsigned long long, value )
+ ),
+
+ TP_fast_assign(
+ __entry->event_id = event_id;
+ __entry->value = value;
+ ),
+
+ TP_printk("event %u = %llu", __entry->event_id, __entry->value)
+);
+
+/**
+ * mali_page_fault_insert_pages - Called by page_fault_worker();
+ * it reports an MMU page fault resulting in new pages being mapped.
+ * @event_id: MMU address space number.
+ * @value: number of newly allocated pages
+ */
+TRACE_EVENT(mali_page_fault_insert_pages,
+
+ TP_PROTO(int event_id, unsigned long value),
+
+ TP_ARGS(event_id, value),
+
+ TP_STRUCT__entry(
+ __field( int, event_id )
+ __field( unsigned long, value )
+ ),
+
+ TP_fast_assign(
+ __entry->event_id = event_id;
+ __entry->value = value;
+ ),
+
+ TP_printk("event %d = %lu", __entry->event_id, __entry->value)
+);
+
+/**
+ * mali_mmu_as_in_use - Called by assign_and_activate_kctx_addr_space();
+ * it reports that a certain MMU address space is now in use.
+ * @event_id: MMU address space number.
+ */
+TRACE_EVENT(mali_mmu_as_in_use,
+
+ TP_PROTO(int event_id),
+
+ TP_ARGS(event_id),
+
+ TP_STRUCT__entry(
+ __field( int, event_id )
+ ),
+
+ TP_fast_assign(
+ __entry->event_id = event_id;
+ ),
+
+ TP_printk("event=%d", __entry->event_id)
+);
+
+/**
+ * mali_mmu_as_released - Called by kbasep_js_runpool_release_ctx_internal();
+ * it reports that a certain MMU address space has been released.
+ * @event_id: MMU address space number.
+ */
+TRACE_EVENT(mali_mmu_as_released,
+
+ TP_PROTO(int event_id),
+
+ TP_ARGS(event_id),
+
+ TP_STRUCT__entry(
+ __field( int, event_id )
+ ),
+
+ TP_fast_assign(
+ __entry->event_id = event_id;
+ ),
+
+ TP_printk("event=%d", __entry->event_id)
+);
+
+/**
+ * mali_total_alloc_pages_change - Called by kbase_mem_usage_request_pages()
+ * and by kbase_mem_usage_release_pages();
+ * it reports that the total number of allocated pages has changed.
+ * @event_id: number of pages to be added or subtracted (according to the sign).
+ */
+TRACE_EVENT(mali_total_alloc_pages_change,
+
+ TP_PROTO(long long int event_id),
+
+ TP_ARGS(event_id),
+
+ TP_STRUCT__entry(
+ __field( long long int, event_id )
+ ),
+
+ TP_fast_assign(
+ __entry->event_id = event_id;
+ ),
+
+ TP_printk("event=%lld", __entry->event_id)
+);
+
+/**
+ * mali_sw_counter - not currently used
+ * @event_id: counter id
+ */
+TRACE_EVENT(mali_sw_counter,
+
+ TP_PROTO(unsigned int event_id, signed long long value),
+
+ TP_ARGS(event_id, value),
+
+ TP_STRUCT__entry(
+ __field( int, event_id )
+ __field( long long, value )
+ ),
+
+ TP_fast_assign(
+ __entry->event_id = event_id;
+ __entry->value = value;
+ ),
+
+ TP_printk("event %d = %lld", __entry->event_id, __entry->value)
+);
+
+#endif /* _TRACE_MALI_H */
+
+#undef TRACE_INCLUDE_PATH
+#undef linux
+#define TRACE_INCLUDE_PATH MALI_KBASE_SRC_LINUX_PATH
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _BASE_MEM_PRIV_H_
+#define _BASE_MEM_PRIV_H_
+
+#define BASE_SYNCSET_OP_MSYNC (1U << 0)
+#define BASE_SYNCSET_OP_CSYNC (1U << 1)
+
+/*
+ * This structure describes a basic memory coherency operation.
+ * It can either be:
+ * @li a sync from CPU to Memory:
+ * - type = ::BASE_SYNCSET_OP_MSYNC
+ * - mem_handle = a handle to the memory object on which the operation
+ * is taking place
+ * - user_addr = the address of the range to be synced
+ * - size = the amount of data to be synced, in bytes
+ * - offset is ignored.
+ * @li a sync from Memory to CPU:
+ * - type = ::BASE_SYNCSET_OP_CSYNC
+ * - mem_handle = a handle to the memory object on which the operation
+ * is taking place
+ * - user_addr = the address of the range to be synced
+ * - size = the amount of data to be synced, in bytes.
+ * - offset is ignored.
+ */
+typedef struct basep_syncset
+{
+ mali_addr64 mem_handle;
+ u64 user_addr;
+ u32 size;
+ u8 type;
+} basep_syncset;
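+
+/*
+ * Illustrative sketch: filling in a CPU-to-memory sync for one buffer. The
+ * handle and range values are placeholders; the field meanings follow the
+ * description above.
+ *
+ *   basep_syncset ss;
+ *
+ *   ss.mem_handle = buffer_handle;            // memory object being synced
+ *   ss.user_addr  = (u64)(uintptr_t)user_ptr; // start of the range
+ *   ss.size       = length_in_bytes;          // amount of data to sync
+ *   ss.type       = BASE_SYNCSET_OP_MSYNC;    // CPU -> Memory
+ */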
+
+#endif
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010, 2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+#ifndef _BASE_VENDOR_SPEC_FUNC_H_
+#define _BASE_VENDOR_SPEC_FUNC_H_
+
+#include <malisw/mali_stdtypes.h>
+
+mali_error kbase_get_vendor_specific_cpu_clock_speed(u32*);
+
+#endif /*_BASE_VENDOR_SPEC_FUNC_H_*/
--- /dev/null
+# Copyright:
+# ----------------------------------------------------------------------------
+# This confidential and proprietary software may be used only as authorized
+# by a licensing agreement from ARM Limited.
+# (C) COPYRIGHT 2010-2012 ARM Limited, ALL RIGHTS RESERVED
+# The entire notice above must be reproduced on all authorized copies and
+# copies may only be made to the extent permitted by a licensing agreement
+# from ARM Limited.
+# ----------------------------------------------------------------------------
+#
+
+import os
+import re
+import sys
+Import('env')
+
+scheduling_policy = 'cfs'
+mock_test = 0
+
+if env['error_inject'] == '1':
+ env.Append( CPPDEFINES = {'MALI_ERROR_INJECT_ON' : 1} )
+elif env['error_inject'] == '2':
+ env.Append( CPPDEFINES = {'MALI_ERROR_INJECT_ON' : 2} )
+else:
+ env['error_inject'] = 0
+ env.Append( CPPDEFINES = {'MALI_ERROR_INJECT_ON' : 0} )
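+
+# Illustrative invocation (option names are those read from 'env' in this
+# script; the values are examples only):
+#   scons os=linux backend=kernel platform_config=vexpress error_inject=0 unit=0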
+
+if env['hwver'] == 'none':
+ env['mali_kbasep_model']='1'
+else:
+ env['mali_kbasep_model']='0'
+env.Append( CPPDEFINES = { 'MALI_KBASEP_MODEL': env['mali_kbasep_model'] } )
+
+if env['os'] == 'linux' or env['os'] == 'android':
+ if env['backend'] == 'kernel':
+ if env['v'] != '1':
+ env['MAKECOMSTR'] = '[MAKE] ${SOURCE.dir}'
+
+ # Fake platform is a transient solution for GPL drivers running on kernels that do not provide configuration via platform data.
+ # For such kernels fake_platform_device should be set to 1. For kernels providing platform data and for
+ # the commercial driver fake_platform_device should be set to 0.
+ if int(env['mali_license_is_gpl']) == 1:
+ fake_platform_device = 1
+ else:
+ fake_platform_device = 0
+
+ # Source files required for kbase.
+ kbase_src = [Glob('#kbase/src/common/*.c'), Glob('#kbase/src/linux/*.c'), Glob('#kbase/src/common/*.h'), Glob('#kbase/src/linux/*.h')]
+
+ if Glob('#kbase/tests/internal/src/mock') and env['unit'] == '1':
+ kbase_src += [Glob('#kbase/tests/internal/src/mock/*.c')]
+ mock_test = 1
+
+ # we need platform config for commercial version of the driver and for GPL version using fake platform
+ if int(env['mali_license_is_gpl']) == 0 or fake_platform_device==1:
+ # Check if we are compiling for PBX
+ linux_config_file = os.path.normpath(os.environ['KDIR']) + '/.config'
+ search_term = '^[\ ]*CONFIG_MACH_REALVIEW_PBX[\ ]*=[\ ]*y'
+ REALVIEW_PBX = 0
+ for line in open(linux_config_file, 'r'):
+ if re.search(search_term, line):
+ REALVIEW_PBX = 1
+ break
+ if REALVIEW_PBX == 1 and env['platform_config'] == 'vexpress':
+ sys.stderr.write("WARNING: Building for a PBX kernel but with platform_config=vexpress\n")
+ # if the platform config file is in the tpip directory then use that, otherwise use the default config directory
+ if Glob('#kbase/src/linux/config/tpip/*%s.c' % (env['platform_config'])):
+ kbase_src += Glob('#kbase/src/linux/config/tpip/*%s.c' % (env['platform_config']))
+ else:
+ kbase_src += Glob('#kbase/src/linux/config/*%s.c' % (env['platform_config']))
+
+ # Note: cleaning via the Linux kernel build system does not yet work
+ if env.GetOption('clean') :
+ makeAction=Action("cd ${SOURCE.dir}/.. && make clean", '$MAKECOMSTR')
+ else:
+ if env['os'] == 'android':
+ env['android'] = 1
+ else:
+ env['android'] = 0
+
+ if env['unit'] == '1':
+ env['kernel_test'] = 1
+ else:
+ env['kernel_test'] = 0
+ makeAction=Action("cd ${SOURCE.dir}/.. && make PLATFORM=${platform} MALI_KBASEP_MODEL=${mali_kbasep_model} MALI_ERROR_INJECT_ON=${error_inject} MALI_BACKEND_KERNEL=1 MALI_NO_MALI=${no_mali} MALI_USE_UMP=${ump} MALI_DEBUG=${debug} MALI_ANDROID=${android} MALI_BASE_TRACK_MEMLEAK=${base_qa} MALI_KERNEL_TEST_API=${kernel_test} MALI_KBASE_SCHEDULING_POLICY=%s MALI_UNIT_TEST=${unit} MALI_INFINITE_CACHE=${infinite_cache} MALI_LICENSE_IS_GPL=${mali_license_is_gpl} MALI_PLATFORM_CONFIG=${platform_config} MALI_ERROR_INJECT_ON=${error_inject} MALI_UNCACHED=${no_syncsets} MALI_RELEASE_NAME=\"${mali_release_name}\" MALI_FAKE_PLATFORM_DEVICE=%s MALI_MOCK_TEST=%s MALI_GATOR_SUPPORT=${gator} MALI_CUSTOMER_RELEASE=${release} MALI_INSTRUMENTATION_LEVEL=${instr} MALI_COVERAGE=${coverage} && cp mali_kbase.ko $STATIC_LIB_PATH/mali_kbase.ko" % (scheduling_policy, fake_platform_device, mock_test), '$MAKECOMSTR')
+
+ cmd = env.Command('$STATIC_LIB_PATH/mali_kbase.ko', kbase_src, [makeAction])
+
+ if int(env['mali_license_is_gpl']) == 1:
+ env.Depends('$STATIC_LIB_PATH/mali_kbase.ko', '$STATIC_LIB_PATH/kds.ko')
+
+ env.Depends('$STATIC_LIB_PATH/mali_kbase.ko', '$STATIC_LIB_PATH/libosk.a')
+ # need Module.symvers from ukk.ko and ump.ko builds
+ env.Depends('$STATIC_LIB_PATH/mali_kbase.ko', '$STATIC_LIB_PATH/ukk.ko')
+ if int(env['ump']) == 1:
+ env.Depends('$STATIC_LIB_PATH/mali_kbase.ko', '$STATIC_LIB_PATH/ump.ko')
+
+ # Until we fathom out how to invoke the Linux build system to clean, we can use Clean
+ # to remove generated files.
+ patterns = ['*.mod.c', '*.o', '*.ko', '*.a', '.*.cmd', 'modules.order', '.tmp_versions', 'Module.symvers']
+
+ for p in patterns:
+ Clean(cmd, Glob('#kbase/src/%s' % p))
+ Clean(cmd, Glob('#kbase/src/linux/%s' % p))
+ Clean(cmd, Glob('#kbase/src/linux/config/%s' % p))
+ Clean(cmd, Glob('#kbase/src/linux/config/tpip/%s' % p))
+ Clean(cmd, Glob('#kbase/src/common/%s' % p))
+ Clean(cmd, Glob('#kbase/tests/internal/src/mock/%s' % p))
+
+ env.ProgTarget('kbase', cmd)
+
+ env.AppendUnique(BASE=['cutils_list'])
+ else:
+ common_source = [
+ 'common/mali_kbase_cache_policy.c',
+ 'common/mali_kbase_mem.c',
+ 'common/mali_kbase_mmu.c',
+ 'common/mali_kbase_jd.c',
+ 'common/mali_kbase_jm.c',
+ 'common/mali_kbase_js.c',
+ 'common/mali_kbase_js_affinity.c',
+ 'common/mali_kbase_js_ctx_attr.c',
+ 'common/mali_kbase_js_policy_%s.c' % (scheduling_policy),
+ 'common/mali_kbase_pm.c',
+ 'common/mali_kbase_cpuprops.c',
+ 'common/mali_kbase_gpuprops.c',
+ 'common/mali_kbase_event.c',
+ 'common/mali_kbase_context.c',
+ 'common/mali_kbase_pm.c',
+ 'common/mali_kbase_pm_driver.c',
+ 'common/mali_kbase_pm_metrics.c',
+ 'common/mali_kbase_pm_always_on.c',
+ 'common/mali_kbase_pm_demand.c',
+ 'common/mali_kbase_device.c',
+ 'common/mali_kbase_config.c',
+ 'common/mali_kbase_security.c',
+ 'common/mali_kbase_instr.c',
+ 'common/mali_kbase_8401_workaround.c',
+ 'common/mali_kbase_softjobs.c',
+ 'common/mali_kbase_hw.c',
+ 'userspace/mali_kbase_core_userspace.c',
+ 'userspace/mali_kbase_model_userspace.c',
+ 'userspace/mali_kbase_mem_userspace.c',
+ 'userspace/mali_kbase_ump.c',
+ 'userspace/mali_kbase_pm_metrics_userspace.c'
+ ]
+
+ if Glob('#kbase/tests/internal/src/mock') and env['unit'] == '1':
+ common_source += ['../tests/internal/src/mock/mali_kbase_pm_driver_mock.c']
+ mock_test = 1
+
+ os_source = []
+
+ if env['os'] in ['linux']:
+ pass
+ else:
+ sys.stderr.write("*** Unsupported OS: %s\n" % env['os'])
+ Exit(1)
+
+ env.Append( CPPDEFINES = {'MALI_KBASE_USERSPACE' : 1} )
+ env.Append( CPPDEFINES = {'MALI_MOCK_TEST' : mock_test} )
+
+ if env['backend'] == 'user' and env['no_mali'] == '1':
+ hwsim_source = ['common/mali_kbase_model_dummy.c',
+ 'common/mali_kbase_model_error_generator.c']
+ env.AppendUnique(BASE=['cutils_list', 'kbase'])
+ else:
+ # Unpack and extract the model - will only work on x86 Linux
+ if env['arch'] == 'x86_64':
+ hostbits = '64'
+ else:
+ hostbits = '32'
+
+ # Create a builder to handle extracting the model binary from the tarball. Using a builder,
+ # we can define a custom COMSTR to give less verbose output if requested.
+ extract = Action('tar xzf $SOURCE --strip-components 4 Rexported/lib/x86_rhe5_%s/%s/libMidgardModel.so -O > $TARGET' % (hostbits, env['model']), "$EXTRACTCOMSTR")
+ extract_builder = Builder(action = extract)
+ env.Append(BUILDERS = {'Extract' : extract_builder})
+ if not int(env['v']):
+ env.Append(EXTRACTCOMSTR = '[EXTRACT] $TARGET')
+
+ # Any builds dependent on the target "model" will cause the binary to be extracted
+ # Note that to maintain compatibility with existing build files that expect to link against
+ # the static version, we extract to $STATIC_LIB_PATH too.
+ model = env.Extract('$STATIC_LIB_PATH/libMidgardModel.so','#model/model.tgz')
+ model_dlib = env.Extract('$SHARED_LIB_PATH/libMidgardModel.so','#model/model.tgz')
+ env.Depends(model, model_dlib)
+
+ # Create an action that can be used as a post-action, to install the model whenever it is unpacked,
+ # provided that the command-line option "libs_install" has been set. We also add a Clean method
+ # to delete the installed model when the extracted model is cleaned.
+ if env.has_key('libs_install'):
+ a = Action("mkdir -p {libs}; cp $STATIC_LIB_PATH/libMidgardModel.so {libs}".format(libs=env['libs_install']), "$COPYCOMSTR")
+ if not int(env['v']):
+ env.Append(COPYCOMSTR = '[COPY] $TARGET')
+ env.AddPostAction(model, a)
+ env.Clean(model, os.path.join(env['libs_install'], "libMidgardModel.so"))
+
+
+ hwsim_source = []
+ env.AppendUnique(
+ BASE=['cutils_list', 'kbase', 'MidgardModel', 'stdc++']
+ )
+ env.Alias('kbase', model)
+
+ cppdefines = dict(env['CPPDEFINES'])
+ if env['unit'] == '1':
+ #make a local definition for STATIC
+ cppdefines.update( {'STATIC':''} )
+
+ libs=env.StaticLibrary( '$STATIC_LIB_PATH/kbase', [common_source, os_source, hwsim_source], CPPDEFINES = cppdefines)
+ env.LibTarget('kbase', libs)
--- /dev/null
+obj-y += src/
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_atomics.h
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ATOMICS_H_
+#define _OSK_ATOMICS_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @defgroup oskatomic Atomic Access
+ *
+ * @anchor oskatomic_important
+ * @par Important Information on Atomic variables
+ *
+ * Atomic variables are objects that can be modified by only one thread at a time.
+ * For use in SMP systems, strongly ordered access is enforced using memory
+ * barriers.
+ *
+ * An atomic variable implements an unsigned integer counter which is exactly
+ * 32 bits long. Arithmetic on it is the same as on u32 values, which is the
+ * arithmetic of integers modulo 2^32. For example, incrementing past
+ * 0xFFFFFFFF rolls over to 0, decrementing past 0 rolls over to
+ * 0xFFFFFFFF. That is, overflow is a well defined condition (unlike signed
+ * integer arithmetic in C).
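+ *
+ * For instance (an illustrative sketch using the atomic operations declared
+ * below):
+@code
+osk_atomic a;
+
+osk_atomic_set( &a, 0xFFFFFFFFu );
+osk_atomic_inc( &a ); /* rolls over: the value is now 0 */
+osk_atomic_dec( &a ); /* rolls back: the value is now 0xFFFFFFFF */
+@endcode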
+ */
+/** @{ */
+
+/** @brief Subtract a value from an atomic variable and return the new value.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a atom parameter.
+ *
+ * @note Please refer to @see oskatomic_important Important Information on Atomic
+ * variables.
+ *
+ * @param atom pointer to an atomic variable
+ * @param value value to subtract from \a atom
+ * @return value of atomic variable after \a value has been subtracted from it.
+ */
+OSK_STATIC_INLINE u32 osk_atomic_sub(osk_atomic *atom, u32 value);
+
+/** @brief Add a value to an atomic variable and return the new value.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a atom parameter.
+ *
+ * @note Please refer to @see oskatomic_important Important Information on Atomic
+ * variables.
+ *
+ * @param atom pointer to an atomic variable
+ * @param value value to add to \a atom
+ * @return value of atomic variable after \a value has been added to it.
+ */
+OSK_STATIC_INLINE u32 osk_atomic_add(osk_atomic *atom, u32 value);
+
+/** @brief Decrement an atomic variable and return its decremented value.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a atom parameter.
+ *
+ * @note Please refer to @see oskatomic_important Important Information on Atomic
+ * variables.
+ *
+ * @param atom pointer to an atomic variable
+ * @return decremented value of atomic variable
+ */
+OSK_STATIC_INLINE u32 osk_atomic_dec(osk_atomic *atom);
+
+/** @brief Increment an atomic variable and return its incremented value.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a atom parameter.
+ *
+ * @note Please refer to @see oskatomic_important Important Information on Atomic
+ * variables.
+ *
+ * @param atom pointer to an atomic variable
+ * @return incremented value of atomic variable
+ */
+OSK_STATIC_INLINE u32 osk_atomic_inc(osk_atomic *atom);
+
+/** @brief Sets the value of an atomic variable.
+ *
+ * Note: if the value of the atomic variable is set as part of a read-modify-write
+ * operation and multiple threads have access to the atomic variable at that time,
+ * please use osk_atomic_compare_and_swap() instead, which can ensure no other
+ * process changed the atomic variable during the read-modify-write operation.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a atom parameter.
+ *
+ * @note Please refer to @see oskatomic_important Important Information on Atomic
+ * variables.
+ *
+ * @param atom pointer to an atomic variable
+ * @param value the value to set
+ */
+OSK_STATIC_INLINE void osk_atomic_set(osk_atomic *atom, u32 value);
+
+/** @brief Return the value of an atomic variable.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a atom parameter.
+ *
+ * @note Please refer to @see oskatomic_important Important Information on Atomic
+ * variables.
+ *
+ * @param atom pointer to an atomic variable
+ * @return value of the atomic variable
+ */
+OSK_STATIC_INLINE u32 osk_atomic_get(osk_atomic *atom);
+
+/** @brief Compare the value of an atomic variable, and atomically exchange it
+ * if the comparison succeeds.
+ *
+ * This function implements the Atomic Compare-And-Swap operation (CAS) which
+ * allows atomically performing a read-modify-write operation on atomic variables.
+ * The CAS operation is suited for implementing synchronization primitives such
+ * as semaphores and mutexes, as well as lock-free and wait-free algorithms.
+ *
+ * It atomically does the following: compare \a atom with \a old_value and sets \a
+ * atom to \a new_value if the comparison was true.
+ *
+ * Regardless of the outcome of the comparison, the initial value of \a atom is
+ * returned - hence the reason for this being a 'swap' operation. If the value
+ * returned is equal to \a old_value, then the atomic operation succeeded. Any
+ * other value shows that the atomic operation failed, and should be repeated
+ * based upon the returned value.
+ *
+ * For example:
+@code
+typedef struct my_data
+{
+ osk_atomic index;
+ object objects[10];
+} my_data;
+my_data data;
+u32 index, old_index, new_index;
+
+// Updates the index into an array of objects based on the current indexed object.
+// If another process updated the index in the mean time, the index will not be
+// updated and we try again based on the updated index.
+
+index = osk_atomic_get(&data.index);
+do {
+ old_index = index;
+ new_index = calc_new_index(&data.objects[old_index]);
+ index = osk_atomic_compare_and_swap(&data.index, old_index, new_index);
+} while (index != old_index);
+
+@endcode
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a atom parameter.
+ *
+ * @note Please refer to @see oskatomic_important Important Information on Atomic
+ * variables.
+ *
+ * @param atom pointer to an atomic variable
+ * @param old_value The value to make the comparison with \a atom
+ * @param new_value The value to atomically write to atom, depending on whether
+ * the comparison succeeded.
+ * @return The \em initial value of \a atom, before the operation commenced.
+ */
+OSK_STATIC_INLINE u32 osk_atomic_compare_and_swap(osk_atomic * atom, u32 old_value, u32 new_value) CHECK_RESULT;
+
+/** @} */ /* end group oskatomic */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+/* pull in the arch header with the implementation */
+#include <osk/mali_osk_arch_atomics.h>
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _OSK_ATOMICS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_bitops.h
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_BITOPS_H_
+#define _OSK_BITOPS_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+#include <osk/mali_osk_arch_bitops.h>
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/** @defgroup oskbitops Bit-operations
+ *
+ * These bit-operations do not work atomically, and so locks must be used if
+ * atomicity is required.
+ *
+ * Reference implementations for Little Endian are provided, and so it should
+ * not normally be necessary to re-implement these. Efficient bit-twiddling
+ * techniques are used where possible, implemented in portable C.
+ *
+ * Note that these reference implementations rely on osk_clz() being
+ * implemented.
+ *
+ * @{
+ */
+
+/**
+ * @brief Tests if a bit is set in an unsigned long value (internal function)
+ * @param[in] bit bit number to test [0..OSK_BITS_PER_LONG-1], starting from the (Little-endian) least significant bit
+ * @param[in] value unsigned long value
+ * @return zero if bit was clear, non-zero if set. Do not rely on the return
+ * value being related to the actual word under test.
+ */
+OSK_STATIC_INLINE unsigned long oskp_test_bit(unsigned long bit, unsigned long value)
+{
+ OSK_ASSERT( bit < OSK_BITS_PER_LONG );
+
+ return value & (1UL << bit);
+}
+
+/**
+ * @brief Find the first zero bit in an unsigned long value
+ * @param[in] value unsigned long value
+ * @return a positive number [0..OSK_BITS_PER_LONG-1], starting from the least significant bit,
+ * indicating the first zero bit found, or a negative number if no zero bit was found.
+ */
+CHECK_RESULT OSK_STATIC_INLINE long oskp_find_first_zero_bit(unsigned long value)
+{
+ unsigned long inverted;
+ unsigned long negated;
+ unsigned long isolated;
+ unsigned long leading_zeros;
+
+ /* Begin with xxx...x0yyy...y, where ys are 1, number of ys is in range 0..31/63 */
+ inverted = ~value; /* zzz...z1000...0 */
+ /* Using count_trailing_zeros on inverted value -
+ * See ARM System Developers Guide for details of count_trailing_zeros */
+
+ /* Isolate the zero: it is preceded by a run of 1s, so add 1 to it */
+ negated = (unsigned long)-inverted; /* -a == ~a + 1 (mod 2^n) for n-bit numbers */
+ /* negated = xxx...x1000...0 */
+
+ isolated = negated & inverted; /* xxx...x1000...0 & zzz...z1000...0, zs are ~xs */
+ /* And so the first zero bit is in the same position as the 1 == number of 1s that preceded it
+ * Note that the output is zero if value was all 1s */
+
+ leading_zeros = osk_clz( isolated );
+
+ return (OSK_BITS_PER_LONG - 1) - leading_zeros;
+}
+
+/**
+ * @brief Clear a bit in a sequence of unsigned longs
+ * @param[in] nr bit number to clear, starting from the (Little-endian) least
+ * significant bit
+ * @param[in,out] addr starting point for counting.
+ */
+OSK_STATIC_INLINE void osk_bitarray_clear_bit(unsigned long nr, unsigned long *addr )
+{
+ OSK_ASSERT(NULL != addr);
+ addr += nr / OSK_BITS_PER_LONG; /* find the correct word */
+ nr = nr & (OSK_BITS_PER_LONG - 1); /* The bit number within the word */
+ *addr &= ~(1UL << nr);
+}
+
+/**
+ * @brief Set a bit in a sequence of unsigned longs
+ * @param[in] nr bit number to set, starting from the (Little-endian) least
+ * significant bit
+ * @param[in,out] addr starting point for counting.
+ */
+OSK_STATIC_INLINE void osk_bitarray_set_bit(unsigned long nr, unsigned long *addr)
+{
+ OSK_ASSERT(NULL != addr);
+ addr += nr / OSK_BITS_PER_LONG; /* find the correct word */
+ nr = nr & (OSK_BITS_PER_LONG - 1); /* The bit number within the word */
+ *addr |= (1UL << nr);
+}
+
+/**
+ * @brief Test a bit in a sequence of unsigned longs
+ * @param[in] nr bit number to test, starting from the (Little-endian) least
+ * significant bit
+ * @param[in,out] addr starting point for counting.
+ * @return zero if bit was clear, non-zero if set. Do not rely on the return
+ * value being related to the actual word under test.
+ */
+CHECK_RESULT OSK_STATIC_INLINE unsigned long osk_bitarray_test_bit(unsigned long nr, unsigned long *addr)
+{
+ OSK_ASSERT(NULL != addr);
+ addr += nr / OSK_BITS_PER_LONG; /* find the correct word */
+ nr = nr & (OSK_BITS_PER_LONG - 1); /* The bit number within the word */
+ return *addr & (1UL << nr);
+}
+
+/**
+ * @brief Find the first zero bit in a sequence of unsigned longs
+ * @param[in] addr starting point for search.
+ * @param[in] maxbit the maximum number of bits to search
+ * @return the number of the first zero bit found, or maxbit if none were found
+ * in the specified range.
+ */
+CHECK_RESULT unsigned long osk_bitarray_find_first_zero_bit(const unsigned long *addr, unsigned long maxbit);
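+
+/**
+ * Illustrative sketch (not part of the API): the common "allocate a free
+ * slot" pattern built from the primitives above. MAXBITS is a placeholder,
+ * and locking is the caller's responsibility since these operations are not
+ * atomic.
+@code
+unsigned long slots[MAXBITS / OSK_BITS_PER_LONG] = { 0 };
+unsigned long slot;
+
+slot = osk_bitarray_find_first_zero_bit( slots, MAXBITS );
+if ( MAXBITS != slot )
+{
+	osk_bitarray_set_bit( slot, slots ); /* claim the free slot */
+}
+@endcode
+ */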
+
+/**
+ * @brief Find the first set bit in an unsigned long
+ * @param val value to find first set bit in
+ * @return the number of the set bit found (starting from 0), -1 if no bits set
+ */
+CHECK_RESULT OSK_STATIC_INLINE long osk_find_first_set_bit(unsigned long val)
+{
+ return (OSK_BITS_PER_LONG - 1) - osk_clz( val & -val );
+}
+
+/**
+ * @brief Count leading zeros in an unsigned long
+ *
+ * Same behavior as ARM CLZ instruction.
+ *
+ * Returns the number of binary zero bits before the first (most significant)
+ * binary one bit in \a val.
+ *
+ * If \a val is zero, this function returns the number of bits in an unsigned
+ * long, i.e. sizeof(unsigned long) * 8.
+ *
+ * @param val unsigned long value to count leading zeros in
+ * @return the number of leading zeros
+ */
+CHECK_RESULT OSK_STATIC_INLINE long osk_clz(unsigned long val);
+
+/**
+ * @brief Count leading zeros in an u64
+ *
+ * Same behavior as ARM CLZ instruction.
+ *
+ * Returns the number of binary zero bits before the first (most significant)
+ * binary one bit in \a val.
+ *
+ * If \a val is zero, this function returns the number of bits in an u64,
+ * i.e. sizeof(u64) * 8 = 64.
+ *
+ * Note that on platforms where an unsigned long is 64 bits then this is the same as osk_clz.
+ *
+ * @param val value to count leading zeros in
+ * @return the number of leading zeros
+ */
+CHECK_RESULT OSK_STATIC_INLINE long osk_clz_64(u64 val);
+
+/**
+ * @brief Count the number of bits set in an unsigned long
+ *
+ * This returns the number of bits set in an unsigned long value.
+ *
+ * @param val The value to count bits set in
+ * @return The number of bits set in \c val.
+ */
+OSK_STATIC_INLINE int osk_count_set_bits(unsigned long val) CHECK_RESULT;
+
+/**
+ * @brief Count the number of bits set in an u64
+ *
+ * This returns the number of bits set in a u64 value.
+ *
+ * @param val The value to count bits set in
+ * @return The number of bits set in \c val.
+ */
+CHECK_RESULT OSK_STATIC_INLINE int osk_count_set_bits64(u64 val)
+{
+ return osk_count_set_bits(val & U32_MAX)
+ + osk_count_set_bits((val >> 32) & U32_MAX);
+}
+
+/** @} */ /* end group oskbitops */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _OSK_BITOPS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_credentials.h
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_CREDENTIALS_H_
+#define _OSK_CREDENTIALS_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @defgroup oskcredentials Credentials Access
+ */
+/** @{ */
+
+/** @brief Check if the caller is privileged.
+ *
+ * @return MALI_TRUE if the caller is privileged.
+ */
+OSK_STATIC_INLINE mali_bool osk_is_privileged(void);
+
+#define OSK_PROCESS_PRIORITY_MIN ( -20 )
+#define OSK_PROCESS_PRIORITY_MAX ( 19 )
+
+typedef struct osk_process_priority
+{
+ /* MALI_TRUE if process is using a realtime scheduling policy */
+ mali_bool is_realtime;
+ /* The process priority in the range of OSK_PROCESS_PRIORITY_MIN
+ and OSK_PROCESS_PRIORITY_MAX. */
+ int priority;
+} osk_process_priority;
+
+/** @brief Check if the caller is using a realtime scheduling policy
+ *
+ * @return MALI_TRUE if process is running a realtime policy.
+ */
+OSK_STATIC_INLINE mali_bool osk_is_policy_realtime(void);
+
+/** @brief Retrieve the calling process priority and policy
+ *
+ * @param[out] prio structure to contain the process policy type
+ * and priority number
+ */
+OSK_STATIC_INLINE void osk_get_process_priority(osk_process_priority *prio);
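+
+/**
+ * Illustrative sketch: querying the caller's scheduling information with the
+ * functions above.
+@code
+osk_process_priority prio;
+
+osk_get_process_priority( &prio );
+if ( MALI_TRUE == prio.is_realtime )
+{
+	/* caller runs under a realtime policy */
+}
+else
+{
+	/* prio.priority lies in [OSK_PROCESS_PRIORITY_MIN, OSK_PROCESS_PRIORITY_MAX] */
+}
+@endcode
+ */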
+
+/** @} */ /* end group oskcredentials */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+/* pull in the arch header with the implementation */
+#include <osk/mali_osk_arch_credentials.h>
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _OSK_CREDENTIALS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _OSK_DEBUG_H_
+#define _OSK_DEBUG_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+#include <stdarg.h>
+#include <malisw/mali_malisw.h>
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @addtogroup oskdebug Debug
+ *
+ * OSK debug macros for asserts and debug messages. Mimics CDBG functionality.
+ *
+ * @{
+ */
+
+/**
+ * @brief OSK module IDs
+ */
+typedef enum
+{
+ OSK_UNKNOWN = 0, /**< @brief Unknown module */
+ OSK_OSK, /**< @brief ID of OSK module */
+ OSK_UKK, /**< @brief ID of UKK module */
+ OSK_BASE_MMU, /**< @brief ID of Base MMU */
+ OSK_BASE_JD, /**< @brief ID of Base Job Dispatch */
+ OSK_BASE_JM, /**< @brief ID of Base Job Manager */
+ OSK_BASE_CORE, /**< @brief ID of Base Core */
+ OSK_BASE_MEM, /**< @brief ID of Base Memory */
+ OSK_BASE_EVENT, /**< @brief ID of Base Event */
+ OSK_BASE_CTX, /**< @brief ID of Base Context */
+ OSK_BASE_PM, /**< @brief ID of Base Power Management */
+ OSK_UMP, /**< @brief ID of UMP module */
+ OSK_MODULES_ALL /**< @brief Select all the modules at once / Also gives the number of modules in the enum */
+} osk_module;
+
+/**
+ * Debug messages are sent to a particular channel (info, warn or error) or to all channels
+ */
+#define OSK_CHANNEL_INFO OSKP_CHANNEL_INFO /**< @brief No output*/
+#define OSK_CHANNEL_WARN OSKP_CHANNEL_WARN /**< @brief Standard output*/
+#define OSK_CHANNEL_ERROR OSKP_CHANNEL_ERROR /**< @brief Error output*/
+#define OSK_CHANNEL_RAW OSKP_CHANNEL_RAW /**< @brief Raw output*/
+#define OSK_CHANNEL_ALL OSKP_CHANNEL_ALL /**< @brief All the channels at the same time*/
+
+/** Function type that is called on an OSK_ASSERT() or OSK_ASSERT_MSG() */
+typedef void (osk_debug_assert_hook)( void * );
+
+typedef struct oskp_debug_assert_cb
+{
+ osk_debug_assert_hook *func;
+ void *param;
+} oskp_debug_assert_cb;
+
+/**
+ * @def OSK_DISABLE_ASSERTS
+ *
+ * @brief Indicates whether asserts are in use and evaluate their
+ * expressions. 0 indicates they are, any other value indicates that they are
+ * not.
+ */
+
+/**
+ * @def OSK_ASSERT_MSG(expr, ...)
+ * @brief Prints the given message if @a expr is false
+ *
+ * @note This macro does nothing if the flag @see OSK_DISABLE_ASSERTS is set to 1
+ *
+ * @param expr Boolean expression
+ * @param ... Message to display when @a expr is false, as a format string followed by format arguments.
+ * The format string and format arguments needs to be enclosed by parentheses.
+ * See oskp_validate_format_string for a list of supported format specifiers.
+ */
+#define OSK_ASSERT_MSG(expr, ...) OSKP_ASSERT_MSG(expr, __VA_ARGS__)
+
+/**
+ * @def OSK_ASSERT(expr)
+ * @brief Prints the expression @a expr if @a expr is false
+ *
+ * @note This macro does nothing if the flag @see OSK_DISABLE_ASSERTS is set to 1
+ *
+ * @param expr Boolean expression
+ */
+#define OSK_ASSERT(expr) OSKP_ASSERT(expr)
+
+/**
+ * @def OSK_INTERNAL_ASSERT(expr)
+ * @brief Asserts if @a expr is false.
+ * This assert function is for internal use of OSK functions which themselves are used to implement
+ * the OSK_ASSERT functionality. These functions should use OSK_INTERNAL_ASSERT which does not use
+ * any OSK functions to prevent ending up in a recursive loop.
+ *
+ * @note This macro does nothing if the flag @see OSK_DISABLE_ASSERTS is set to 1
+ *
+ * @param expr Boolean expression
+ */
+#define OSK_INTERNAL_ASSERT(expr) OSKP_INTERNAL_ASSERT(expr)
+
+/**
+ * @def OSK_DEBUG_CODE( X )
+ * @brief Executes the code inside the macro only in debug mode
+ *
+ * @param X Code to compile only in debug mode.
+ */
+#define OSK_DEBUG_CODE( X ) OSKP_DEBUG_CODE( X )
+
+/**
+ * @def OSK_PRINT(module, ...)
+ * @brief Prints given message
+ *
+ * Example:
+ * @code OSK_PRINT(OSK_BASE_MEM, " %d blocks could not be allocated", mem_allocated); @endcode will print:
+ * \n
+ * "10 blocks could not be allocated\n"
+ *
+ * @param module Name of the module which prints the message.
+ * @param ... Format string followed by a varying number of parameters
+ * See oskp_validate_format_string for a list of supported format specifiers.
+ */
+#define OSK_PRINT(module, ...) OSKP_PRINT_RAW(module, __VA_ARGS__)
+
+/**
+ * @def OSKP_PRINT_INFO(module, ...)
+ * @brief Prints "MALI<INFO,module_name>: " followed by the given message.
+ *
+ * Example:
+ * @code OSK_PRINT_INFO(OSK_BASE_MEM, " %d blocks could not be allocated", mem_allocated); @endcode will print:
+ * \n
+ * "MALI<INFO,BASE_MEM>: 10 blocks could not be allocated"\n
+ *
+ * @note Only gets compiled in for debug builds
+ *
+ * @param module Name of the module which prints the message.
+ * @param ... Format string followed by a varying number of parameters
+ * See oskp_validate_format_string for a list of supported format specifiers.
+ */
+#define OSK_PRINT_INFO(module, ...) OSKP_PRINT_INFO(module, __VA_ARGS__)
+
+/**
+ * @def OSK_PRINT_WARN(module, ...)
+ * @brief Prints "MALI<WARN,module_name>: " followed by the given message.
+ *
+ * Example:
+ * @code OSK_PRINT_WARN(OSK_BASE_MEM, " %d blocks could not be allocated", mem_allocated); @endcode will print:
+ * \n
+ * "MALI<WARN,BASE_MEM>: 10 blocks could not be allocated"\n
+ *
+ * @note Only gets compiled in for debug builds
+ *
+ * @param module Name of the module which prints the message.
+ * @param ... Format string followed by a varying number of parameters
+ * See oskp_validate_format_string for a list of supported format specifiers.
+ */
+#define OSK_PRINT_WARN(module, ...) OSKP_PRINT_WARN(module, __VA_ARGS__)
+
+/**
+ * @def OSK_PRINT_ERROR(module, ...)
+ * @brief Prints "MALI<ERROR,module_name>: " followed by the given message.
+ *
+ * Example:
+ * @code OSK_PRINT_ERROR(OSK_BASE_MEM, " %d blocks could not be allocated", mem_allocated); @endcode will print:
+ * \n
+ * "MALI<ERROR,BASE_MEM>: 10 blocks could not be allocated"\n
+ *
+ * @param module Name of the module which prints the message.
+ * @param ... Format string followed by a varying number of parameters
+ * See oskp_validate_format_string for a list of supported format specifiers.
+ */
+#define OSK_PRINT_ERROR(module, ...) OSKP_PRINT_ERROR(module, __VA_ARGS__)
+
+/**
+ * @def OSK_PRINT_ALLOW(module, channel)
+ * @brief Allow the given module to print on the given channel
+ * @note If @see OSK_USE_RUNTIME_CONFIG is disabled then this macro doesn't do anything
+ * @note Only gets compiled in for debug builds
+ * @param module is a @see osk_module
+ * @param channel is one of @see OSK_CHANNEL_INFO, @see OSK_CHANNEL_WARN, @see OSK_CHANNEL_ERROR,
+ * @see OSK_CHANNEL_ALL
+ * @return MALI_TRUE if the module is allowed to print on the channel.
+ */
+#define OSK_PRINT_ALLOW(module, channel) OSKP_PRINT_ALLOW(module, channel)
+
+/**
+ * @def OSK_PRINT_BLOCK(module, channel)
+ * @brief Prevent the given module from printing on the given channel
+ * @note If @see OSK_USE_RUNTIME_CONFIG is disabled then this macro doesn't do anything
+ * @note Only gets compiled in for debug builds
+ * @param module is a @see osk_module
+ * @param channel is one of @see OSK_CHANNEL_INFO, @see OSK_CHANNEL_WARN, @see OSK_CHANNEL_ERROR,
+ * @see OSK_CHANNEL_ALL
+ * @return MALI_TRUE if the module is allowed to print on the channel.
+ */
+#define OSK_PRINT_BLOCK(module, channel) OSKP_PRINT_BLOCK(module, channel)
+
+/**
+ * @brief Register a function to call on ASSERT
+ *
+ * Such functions will \b only be called during Debug mode, and for debugging
+ * features \b only. Do not rely on them to be called in general use.
+ *
+ * To disable the hook, supply NULL to \a func.
+ *
+ * @note This function is not thread-safe, and should only be used to
+ * register/deregister once in the module's lifetime.
+ *
+ * @param[in] func the function to call when an assert is triggered.
+ * @param[in] param the parameter to pass to \a func when calling it
+ */
+void osk_debug_assert_register_hook( osk_debug_assert_hook *func, void *param );
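+
+/*
+ * Illustrative sketch: registering (and later removing) an assert hook. The
+ * hook name and its context are hypothetical.
+@code
+static void my_assert_hook( void *param )
+{
+	/* e.g. dump the state referenced by param, or flag the failure */
+}
+
+osk_debug_assert_register_hook( my_assert_hook, &my_context );
+/* ... debug session ... */
+osk_debug_assert_register_hook( NULL, NULL ); /* disable the hook */
+@endcode
+ */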
+
+/**
+ * @brief Call a debug assert hook previously registered with osk_debug_assert_register_hook()
+ *
+ * @note This function is not thread-safe with respect to multiple threads
+ * registering functions and parameters with
+ * osk_debug_assert_register_hook(). Otherwise, thread safety is the
+ * responsibility of the registered hook.
+ */
+void oskp_debug_assert_call_hook( void );
+
+/**
+ * @brief Convert a module id into a module name.
+ *
+ * @param module ID of the module to convert
+ * @note module names are stored in : @see oskp_str_modules.
+ * @return the name of the given module ID as a string of characters.
+ */
+const char* oskp_module_to_str(const osk_module module);
+
+/**
+ * @brief Validate the format string
+ *
+ * Validates the printf style format string against the formats
+ * that are supported by the OSK_PRINT macros. If an invalid
+ * format is used, a warning message is printed identifying
+ * the unsupported format specifier.
+ *
+ * Supported length and specifiers in the format string are:
+ *
+ * "d", "ld", "lld",
+ * "x", "lx", "llx",
+ * "X", "lX", "llX",
+ * "p",
+ * "c",
+ * "s"
+ *
+ * Notes:
+ * - in release builds this function does nothing.
+ * - this function takes a variable number of arguments to
+ * ease using it with variadic macros. Only the format
+ * argument is used though.
+ *
+ * @param format format string
+ *
+ */
+void oskp_validate_format_string(const char *format, ...);
+
+/**
+ * @brief printf-style string formatting.
+ *
+ * Refer to the cutils specification for restrictions on the format string.
+ *
+ * @param str output buffer
+ * @param size size of the output buffer in bytes (incl. eos)
+ * @param format the format string
+ * See oskp_validate_format_string for a list of supported
+ * format specifiers.
+ * @param [in] ... The variadic arguments
+ *
+ * @return The number of characters written on success, or a negative value
+ * on failure.
+ */
+s32 osk_snprintf(char *str, size_t size, const char *format, ...);
+
+/**
+ * @brief Get thread information for the current thread
+ *
+ * The information is for debug purposes only. For example, the current CPU for
+ * the thread could've changed by the time you access the returned information.
+ *
+ * On systems that support 64-bit thread IDs, the thread ID will be
+ * truncated. Therefore, this only gives an approximate guide as to which thread
+ * is making the call.
+ *
+ * @param[out] thread_id first 32-bits of the current thread's ID
+ * @param[out] cpu_nr the CPU that the thread was probably executing on at the
+ * time of the call.
+ */
+OSK_STATIC_INLINE void osk_debug_get_thread_info( u32 *thread_id, u32 *cpu_nr );
+
+/* @} */ /* end group oskdebug */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+#include <osk/mali_osk_arch_debug.h>
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _OSK_DEBUG_H_ */
--- /dev/null
+/*
+ * Copyright:
+ * ----------------------------------------------------------------------------
+ * This confidential and proprietary software may be used only as authorized
+ * by a licensing agreement from ARM Limited.
+ * (C) COPYRIGHT 2011 ARM Limited, ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorized copies and
+ * copies may only be made to the extent permitted by a licensing agreement
+ * from ARM Limited.
+ * ----------------------------------------------------------------------------
+ */
+
+#ifndef _OSK_FAILURE_H_
+#define _OSK_FAILURE_H_
+/** @file mali_osk_failure.h
+ *
+ * Base kernel side failure simulation mechanism interface.
+ *
+ * Provides a mechanism to simulate failure of
+ * functions which use the OSK_SIMULATE_FAILURE macro. This is intended to
+ * exercise error-handling paths during testing.
+ */
+#include <malisw/mali_malisw.h>
+#include "osk/include/mali_osk_debug.h"
+
+/**
+ * @addtogroup osk
+ * @{
+ */
+
+/**
+ * @addtogroup osk_failure Simulated failure
+ * @{
+ */
+
+/**
+ * @addtogroup osk_failure_public Public
+ * @{
+ */
+/**
+ * @brief Decide whether or not to simulate a failure in a given module
+ *
+ * Functions that can return a failure indication should use this macro to
+ * decide whether to do so in cases where no genuine failure occurred. This
+ * allows testing of error-handling paths in callers of those functions. A
+ * module ID must be specified to ensure that failures are only simulated in
+ * those modules for which they have been enabled.
+ *
+ * If it evaluates as MALI_TRUE, a message may be printed giving the location
+ * of the macro usage: the actual behavior is defined by @ref OSK_ON_FAIL.
+ *
+ * A break point set on the oskp_simulate_failure function will halt execution
+ * before this macro evaluates as MALI_TRUE.
+ *
+ * @param[in] module Numeric ID of the module using the macro
+ *
+ * @return MALI_FALSE if execution should continue as normal; otherwise
+ * a failure should be simulated by the code using this macro.
+ *
+ * @note Unless simulation of failures was enabled at compilation time, this
+ * macro always evaluates as MALI_FALSE.
+ */
+#if OSK_SIMULATE_FAILURES
+#define OSK_SIMULATE_FAILURE( module ) \
+ ( OSKP_SIMULATE_FAILURE_IS_ENABLED( (module), OSK_CHANNEL_INFO ) && \
+ oskp_is_failure_on() &&\
+ oskp_simulate_failure( module, OSKP_PRINT_TRACE, OSKP_PRINT_FUNCTION ) )
+#else
+#define OSK_SIMULATE_FAILURE( module ) \
+ ( CSTD_NOP( module ), MALI_FALSE )
+#endif
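+
+/*
+ * Illustrative sketch: how a fallible function is expected to use the macro.
+ * The function and its underlying allocator are hypothetical; OSK_OSK is one
+ * of the module IDs from mali_osk_debug.h.
+@code
+void *oskp_example_alloc( size_t size )
+{
+	if ( OSK_SIMULATE_FAILURE( OSK_OSK ) )
+	{
+		return NULL; /* simulate an allocation failure */
+	}
+	return real_alloc( size ); /* hypothetical underlying allocator */
+}
+@endcode
+ */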
+
+
+/**
+ * @brief Get the number of potential failures
+ *
+ * This function can be used to find out the total number of potential
+ * failures during a test, before using @ref osk_set_failure_range to
+ * set the number of successes to allow. This allows testing of error-
+ * handling paths to be parallelized (in different processes) by sub-
+ * dividing the range of successes to allow before provoking a failure.
+ *
+ * @return The number of times the @ref OSK_SIMULATE_FAILURE macro has been
+ * evaluated since the counter was last reset.
+ */
+u64 osk_get_potential_failures( void );
+
+/**
+ * @brief Set the range of failures to simulate
+ *
+ * This function configures a range of potential failures to be tested by
+ * simulating actual failure. The @ref OSK_SIMULATE_FAILURE macro will
+ * evaluate as MALI_FALSE for the first @p start evaluations after the range
+ * is set; then as MALI_TRUE for the next @p end - @p start evaluations;
+ * finally, as MALI_FALSE after being evaluated @p end times (until the
+ * mechanism is reset). @p end must be greater than or equal to @p start.
+ *
+ * This function also resets the count of successes allowed so far.
+ *
+ * @param[in] start Number of potential failures to count before simulating
+ * the first failure, or U64_MAX to never fail.
+ * @param[in] end Number of potential failures to count before allowing
+ * resumption of success, or U64_MAX to fail all after
+ * @p start.
+ */
+void osk_set_failure_range( u64 start, u64 end );
+
+/**
+ * @brief Find out whether a failure was simulated
+ *
+ * This function can be used to find out whether an apparent failure was
+ * genuine or simulated by @ref OSK_SIMULATE_FAILURE macro.
+ *
+ * @return MALI_FALSE unless a failure was simulated since the last call to
+ * the @ref osk_set_failure_range function.
+ * @since 2.3
+ */
+mali_bool osk_failure_simulated( void );
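+
+/*
+ * Illustrative sketch of the work-flow described above: count the potential
+ * failure sites in one clean run, then provoke each in turn. run_test() is a
+ * placeholder for the code under test.
+@code
+u64 i, total;
+
+osk_set_failure_range( U64_MAX, U64_MAX ); /* never fail: counting run */
+run_test();
+total = osk_get_potential_failures();
+
+for ( i = 0; i < total; i++ )
+{
+	osk_set_failure_range( i, i + 1 ); /* fail only the (i+1)-th site */
+	run_test();
+	OSK_ASSERT( MALI_FALSE != osk_failure_simulated() );
+}
+@endcode
+ */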
+
+/** @} */
+/* end public*/
+
+/**
+ * @addtogroup osk_failure_private Private
+ * @{
+ */
+
+/**
+ * @brief Decide whether or not to simulate a failure
+ *
+ * @param[in] module Numeric ID of the module that can fail
+ * @param[in] trace Pointer to string giving the location in the source code
+ * @param[in] function Pointer to name of the calling function
+ *
+ * @return MALI_FALSE if execution should continue as normal; otherwise
+ * a failure should be simulated by the calling code.
+ */
+
+mali_bool oskp_simulate_failure( osk_module module,
+ const char *trace,
+ const char *function );
+mali_bool oskp_is_failure_on(void);
+void oskp_failure_init( void );
+void oskp_failure_term( void );
+/** @} */
+/* end osk_failure_private group*/
+
+/** @} */
+/* end osk_failure group*/
+
+/** @} */
+/* end osk group*/
+
+
+
+
+#endif /* _OSK_FAILURE_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_lists.h
+ * Implementation of the OS abstraction layer for the kernel device driver.
+ * Note that the OSK list implementation is copied from the CUTILS
+ * doubly linked list (DLIST) implementation.
+ */
+
+#ifndef _OSK_LISTS_H_
+#define _OSK_LISTS_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+#include <osk/mali_osk_common.h>
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @addtogroup osk_dlist Doubly-linked list
+ * @{
+ */
+/**
+ * @addtogroup osk_dlist_public Public
+ * @{
+ */
+/**
+ * @brief Item of a list
+ *
+ * @note Can be integrated inside a wider structure.
+ */
+typedef struct osk_dlist_item
+{
+ struct
+ {
+ struct osk_dlist_item *next; /**< @private */
+ struct osk_dlist_item *prev; /**< @private */
+ }oskp; /**< @private*/
+}osk_dlist_item;
+
+/**
+ * @brief Doubly-linked list
+ */
+typedef struct osk_dlist
+{
+ struct
+ {
+ struct osk_dlist_item *front; /**< @private */
+ struct osk_dlist_item *back; /**< @private */
+ }oskp; /**< @private*/
+}osk_dlist;
+
+/**
+ * @brief Test if @c container_ptr is the back of the list
+ *
+ * @param [in] container_ptr Pointer to the container to test.
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @note An assert is triggered if @a container_ptr is NULL.
+ * @note If @c attribute is invalid then the behavior is undefined.
+ *
+ * @return Returns MALI_TRUE if @c container_ptr is the back of the list.
+ */
+#define OSK_DLIST_IS_BACK(container_ptr, attribute)\
+ (NULL == (OSK_CHECK_PTR(container_ptr))->attribute.oskp.next)
+
+/**
+ * @brief Test if @c container_ptr is the front of the list
+ *
+ * @param [in] container_ptr Pointer to the container to test.
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @note An assert is triggered if @a container_ptr is NULL.
+ * @note If @c attribute is invalid then the behavior is undefined.
+ *
+ * @return Returns MALI_TRUE if @c container_ptr is the front of the list.
+ */
+#define OSK_DLIST_IS_FRONT(container_ptr, attribute)\
+ (NULL == (OSK_CHECK_PTR(container_ptr))->attribute.oskp.prev)
+
+/**
+ * @brief Test if @c container_ptr is valid
+ *
+ * @param [in] container_ptr Pointer to the container to test.
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @note If @c attribute is invalid then the behavior is undefined.
+ *
+ * @return Returns MALI_TRUE if @c container_ptr is valid or MALI_FALSE otherwise.
+ */
+#define OSK_DLIST_IS_VALID(container_ptr, attribute)\
+ ( NULL != (container_ptr) )
+
+/**
+ * @brief Return the next item in the list
+ *
+ * @param [in] container_ptr Pointer to an item of type @c type
+ * @param [in] type Type of the container
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @return A pointer to the next container item, or @c NULL.
+ *
+ * @note If this macro evaluates as null then the back of the list has been reached.
+ * @note An assert is triggered if @a container_ptr is NULL.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+#define OSK_DLIST_NEXT(container_ptr, type, attribute)\
+ ( OSK_DLIST_IS_BACK( container_ptr, attribute ) ?\
+ NULL : CONTAINER_OF( (container_ptr)->attribute.oskp.next, type, attribute ) )
+
+/**
+ * @brief Return MALI_TRUE if the list is empty
+ *
+ * @param [in] osk_dlist_ptr Pointer to the @c osk_dlist to test.
+ *
+ * @note An assert is triggered if @a osk_dlist_ptr is NULL.
+ *
+ * @return Returns MALI_TRUE if @c osk_dlist_ptr is an empty list.
+ */
+#define OSK_DLIST_IS_EMPTY(osk_dlist_ptr)\
+ (NULL == OSK_CHECK_PTR(osk_dlist_ptr)->oskp.front)
+
+/**
+ * @brief Return the previous item in the list
+ *
+ * @param [in] container_ptr Pointer to an item of type @c type
+ * @param [in] type Type of the container
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @return A pointer to the previous container item, or @c NULL.
+ *
+ * @note If this macro evaluates as null then the front of the list has been reached.
+ * @note An assert is triggered if @a container_ptr is NULL.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+#define OSK_DLIST_PREV(container_ptr, type, attribute)\
+ ( OSK_DLIST_IS_FRONT( container_ptr, attribute ) ?\
+ NULL : CONTAINER_OF( (container_ptr)->attribute.oskp.prev, type, attribute) )
+
+/**
+ * @brief Return the front container of the list
+ *
+ * @param [in] osk_dlist_ptr Pointer to a list
+ * @param [in] type Type of the list container
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @return A pointer to the front container item, or @c NULL.
+ *
+ * @note If this macro evaluates as null then the list is empty.
+ * @note An assert is triggered if @a osk_dlist_ptr is NULL.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+#define OSK_DLIST_FRONT(osk_dlist_ptr, type, attribute)\
+ ( OSK_CHECK_PTR( osk_dlist_ptr )->oskp.front == NULL ?\
+ NULL : CONTAINER_OF( (osk_dlist_ptr)->oskp.front, type, attribute ) )
+
+/**
+ * @brief Check whether or not @c container_ptr is a member of @c osk_dlist_ptr.
+ *
+ * @param [in] osk_dlist_ptr Pointer to a list
+ * @param [in] container_ptr Pointer to the item to check.
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @return MALI_TRUE if @c container_ptr is a member of @c osk_dlist_ptr, MALI_FALSE if not.
+ *
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL.
+ * @note An assert is triggered if @c container_to_remove_ptr is NULL.
+ * @note If @c attribute is invalid then the behavior is undefined.
+ */
+#define OSK_DLIST_MEMBER_OF(osk_dlist_ptr, container_ptr, attribute)\
+ oskp_dlist_member_of(osk_dlist_ptr, &(OSK_CHECK_PTR(container_ptr))->attribute)
+
+/**
+ * @brief Return the back container of the list
+ *
+ * @param [in] osk_dlist_ptr Pointer to a list
+ * @param [in] type Type of the list container
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @return A pointer to the back container item, or @c NULL.
+ *
+ * @note If this macro evaluates as null then the list is empty.
+ * @note An assert is triggered if @a osk_dlist_ptr is NULL.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+#define OSK_DLIST_BACK(osk_dlist_ptr, type, attribute)\
+ ( OSK_CHECK_PTR( osk_dlist_ptr )->oskp.back == NULL ?\
+ NULL : CONTAINER_OF( (osk_dlist_ptr)->oskp.back, type, attribute) )
+
+/**
+ * @brief Initialize a list
+ *
+ * @param [out] osk_dlist_ptr Pointer to a osk_dlist
+ *
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL.
+ */
+#define OSK_DLIST_INIT(osk_dlist_ptr)\
+ do\
+ {\
+ OSK_CHECK_PTR(osk_dlist_ptr); \
+ (osk_dlist_ptr)->oskp.front = NULL; \
+ (osk_dlist_ptr)->oskp.back = NULL;\
+ }while(MALI_FALSE)
+
+/**
+ * @brief Append @c container_to_insert_ptr at the back of @c osk_dlist_ptr
+ *
+ * The front and the back of the list are automatically adjusted.
+ *
+ * @param [in, out] osk_dlist_ptr Pointer to a list
+ * @param [in, out] container_to_insert_ptr Pointer to an item to insert of type @c type.
+ * @param [in] type Type of the list container
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL.
+ * @note An assert is triggered if @c container_to_insert_ptr is NULL or if it already belongs to the list.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+#define OSK_DLIST_PUSH_BACK(osk_dlist_ptr, container_to_insert_ptr, type, attribute)\
+ OSK_DLIST_INSERT_BEFORE(osk_dlist_ptr, container_to_insert_ptr, NULL, type, attribute)
+
+/**
+ * @brief Insert @c container_to_insert_ptr at the front of @c osk_dlist_ptr
+ *
+ * The front and the back of the list are automatically adjusted.
+ *
+ * @param [in, out] osk_dlist_ptr Pointer to a list
+ * @param [in, out] container_to_insert_ptr Pointer to an item to insert of type @c type.
+ * @param [in] type Type of the list container
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL.
+ * @note An assert is triggered if @c container_to_insert_ptr is NULL or if it already belongs to the list.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+#define OSK_DLIST_PUSH_FRONT(osk_dlist_ptr, container_to_insert_ptr, type, attribute)\
+ OSK_DLIST_INSERT_AFTER(osk_dlist_ptr, container_to_insert_ptr, NULL, type, attribute)
+
+ /**
+ * @brief Remove the back of @c osk_dlist_ptr and return the element just removed
+ *
+ * The front and the back of the list are automatically adjusted.
+ *
+ * @param [in, out] osk_dlist_ptr Pointer to a list
+ * @param [in] type Type of the list container
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @return A pointer to a container item.
+ *
+ * @note If @c OSK_DLIST_IS_VALID returns MALI_FALSE when testing the returned pointer then the list is empty
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL or empty.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+#define OSK_DLIST_POP_BACK(osk_dlist_ptr, type, attribute)\
+ CONTAINER_OF(\
+ oskp_dlist_remove(\
+ osk_dlist_ptr, \
+ &OSK_CHECK_PTR( OSK_DLIST_BACK(osk_dlist_ptr, type, attribute) )->attribute), \
+ type, \
+ attribute)
+
+ /**
+ * @brief Remove the front of @c osk_dlist_ptr and return the element just removed
+ *
+ * The front and the back of the list are automatically adjusted.
+ *
+ * @note The list must contain at least one item.
+ *
+ * @param [in, out] osk_dlist_ptr Pointer to a list
+ * @param [in] type Type of the list container
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @return A pointer to a container item.
+ *
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL or empty.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+#define OSK_DLIST_POP_FRONT(osk_dlist_ptr, type, attribute)\
+ CONTAINER_OF(\
+ oskp_dlist_remove(\
+ osk_dlist_ptr, \
+ &OSK_CHECK_PTR( OSK_DLIST_FRONT(osk_dlist_ptr, type, attribute) )->attribute), \
+ type, \
+ attribute)
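+
+/**
+ * @par Example
+ * A minimal usage sketch for the public list API. The @c my_node container
+ * type and its @c link member are hypothetical, for illustration only:
+ * @code
+ * typedef struct my_node
+ * {
+ *     int value;
+ *     osk_dlist_item link;  // list linkage embedded in a wider structure
+ * } my_node;
+ *
+ * osk_dlist list;
+ * my_node a, b;
+ *
+ * OSK_DLIST_INIT( &list );
+ * OSK_DLIST_PUSH_BACK( &list, &a, my_node, link );
+ * OSK_DLIST_PUSH_BACK( &list, &b, my_node, link );
+ *
+ * // POP_FRONT asserts on an empty list, so test for emptiness first
+ * if ( MALI_FALSE == OSK_DLIST_IS_EMPTY( &list ) )
+ * {
+ *     my_node *front = OSK_DLIST_POP_FRONT( &list, my_node, link );
+ * }
+ * @endcode
+ */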
+
+/**
+ * @brief Append @c container_to_insert_ptr after @c container_pos_ptr in @c osk_dlist_ptr
+ *
+ * @note Insert the new element at the list front if @c container_pos_ptr is NULL.
+ *
+ * The front and the back of the list are automatically adjusted.
+ *
+ * @param [in, out] osk_dlist_ptr Pointer to a list
+ * @param [in, out] container_to_insert_ptr Pointer to an item to insert of type @c type.
+ * @param [in, out] container_pos_ptr Pointer to the item of type @c type after which inserting the new item.
+ * @param [in] type Type of the list container
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL.
+ * @note An assert is triggered if @c container_pos_ptr is not NULL and not a member of the list.
+ * @note An assert is triggered if @c container_to_insert_ptr is NULL or if it already belongs to the list.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+#define OSK_DLIST_INSERT_AFTER(osk_dlist_ptr, container_to_insert_ptr, container_pos_ptr, type, attribute)\
+ oskp_dlist_insert_after(\
+ osk_dlist_ptr, \
+ &(OSK_CHECK_PTR(container_to_insert_ptr))->attribute, \
+ &((type*)container_pos_ptr)->attribute, \
+ NULL == container_pos_ptr)
+/**
+ * @brief Append @c container_to_insert_ptr before @c container_pos_ptr in @c osk_dlist_ptr
+ *
+ * @note Insert the new element at the list back if @c container_pos_ptr is NULL.
+ *
+ * The front and the back of the list are automatically adjusted.
+ *
+ * @param [in, out] osk_dlist_ptr Pointer to a list
+ * @param [in, out] container_to_insert_ptr Pointer to an item to insert of type @c type.
+ * @param [in, out] container_pos_ptr Pointer to the item of type @c type before which inserting the new item.
+ * @param [in] type Type of the list container
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL.
+ * @note An assert is triggered if @c container_pos_ptr is not NULL and not a member of the list.
+ * @note An assert is triggered if @c container_to_insert_ptr is NULL or if it already belongs to the list.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+
+#define OSK_DLIST_INSERT_BEFORE(osk_dlist_ptr, container_to_insert_ptr, container_pos_ptr, type, attribute)\
+ oskp_dlist_insert_before(\
+ osk_dlist_ptr, \
+ &(OSK_CHECK_PTR(container_to_insert_ptr))->attribute, \
+ &((type*)container_pos_ptr)->attribute, \
+ NULL == container_pos_ptr)
+
+/**
+ * @brief Remove an item container from a doubly-linked list and return a pointer to the element
+ * which was next in the list.
+ *
+ * The front and the back of the list are automatically adjusted.
+ *
+ * @param [in, out] osk_dlist_ptr Pointer to a list
+ * @param [in, out] container_to_remove_ptr Pointer to an item to remove of type @c type.
+ * @param [in] type Type of the list container
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @return A pointer to the item container that was immediately after the one
+ * removed from the list, or @c NULL.
+ *
+ * @note If this macro evaluates as null then the back of the list has been reached.
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL.
+ * @note An assert is triggered if @c container_to_remove_ptr is NULL or if it doesn't belong to the list.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ *
+ * @pre @p osk_dlist_ptr must have been initialized by @ref OSK_DLIST_INIT.
+ * @pre @p container_to_remove_ptr must be a member of list @p osk_dlist_ptr.
+ * @post @p container_to_remove_ptr is no longer a member of list @p osk_dlist_ptr.
+ *
+ */
+
+#define OSK_DLIST_REMOVE_AND_RETURN_NEXT(osk_dlist_ptr, container_to_remove_ptr, type, attribute)\
+ ( OSK_DLIST_IS_BACK( container_to_remove_ptr, attribute ) ?\
+ ( oskp_dlist_remove( osk_dlist_ptr, &( container_to_remove_ptr )->attribute ), NULL ) :\
+ CONTAINER_OF( oskp_dlist_remove_and_return_next( osk_dlist_ptr,\
+ &( container_to_remove_ptr )->attribute ),\
+ type,\
+ attribute ) )
+
+/**
+ * @brief Remove an item container from a doubly-linked list.
+ *
+ * The front and the back of the list are automatically adjusted.
+ *
+ * @param [in, out] osk_dlist_ptr Pointer to a list
+ * @param [in, out] container_to_remove_ptr Pointer to an item to remove of type @c type.
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @note An assert error is triggered if @c osk_dlist_ptr is NULL.
+ * @note An assert error is triggered if @c container_to_remove_ptr is NULL or if it doesn't belong to the list.
+ * @note If @c attribute is invalid then the behavior is undefined.
+ *
+ * @pre @p osk_dlist_ptr must have been initialized by @ref OSK_DLIST_INIT.
+ * @pre @p container_to_remove_ptr must be a member of list @p osk_dlist_ptr.
+ * @post @p container_to_remove_ptr is no longer a member of list @p osk_dlist_ptr.
+ */
+#define OSK_DLIST_REMOVE(osk_dlist_ptr, container_to_remove_ptr, attribute)\
+ oskp_dlist_remove_item(osk_dlist_ptr, &((OSK_CHECK_PTR(container_to_remove_ptr))->attribute) )
+
+/**
+ * @brief Remove an item container from a doubly-linked list and return a pointer to the element which was the
+ * previous one in the list.
+ *
+ * The front and the back of the list are automatically adjusted.
+ *
+ * @param [in, out] osk_dlist_ptr Pointer to a list
+ * @param [in, out] container_to_remove_ptr Pointer to an item to remove of type @c type.
+ * @param [in] type Type of the list container
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ *
+ * @return A pointer to the item container that was immediately before the one
+ * removed from the list, or @c NULL.
+ *
+ * @note If this macro evaluates as null then the front of the list has been reached.
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL.
+ * @note An assert is triggered if @c container_to_remove_ptr is NULL or if it doesn't belong to the list.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ *
+ * @pre @p osk_dlist_ptr must have been initialized by @ref OSK_DLIST_INIT.
+ * @pre @p container_to_remove_ptr must be a member of list @p osk_dlist_ptr.
+ * @post @p container_to_remove_ptr is no longer a member of list @p osk_dlist_ptr.
+ */
+
+#define OSK_DLIST_REMOVE_AND_RETURN_PREV(osk_dlist_ptr, container_to_remove_ptr, type, attribute)\
+ ( OSK_DLIST_IS_FRONT( container_to_remove_ptr, attribute ) ?\
+ ( oskp_dlist_remove( osk_dlist_ptr, &( container_to_remove_ptr )->attribute ), NULL ) :\
+ CONTAINER_OF( oskp_dlist_remove_and_return_prev( osk_dlist_ptr,\
+ &( container_to_remove_ptr )->attribute ),\
+ type,\
+ attribute ) )
+
+
+/**
+ * @brief Remove and call the destructor function for every item in the list, walking from start to end.
+ *
+ * @param [in, out] osk_dlist_ptr Pointer to the list to empty
+ * @param [in] type Type of the list container.
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ * @param [in] destructor_func Destructor function called for every item present in the list.
+ *
+ * This function has to be of the form void func(type* item);
+ *
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL.
+ * @note An assert is triggered if @c destructor_func is NULL.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+#define OSK_DLIST_EMPTY_LIST(osk_dlist_ptr, type, attribute, destructor_func)\
+ do\
+ {\
+ type* oskp_it;\
+ OSK_ASSERT(NULL != osk_dlist_ptr); \
+ OSK_ASSERT(NULL != destructor_func); \
+ oskp_it = OSK_DLIST_FRONT(osk_dlist_ptr, type, attribute);\
+ while ( oskp_it != NULL )\
+ {\
+ type* to_delete = oskp_it;\
+ oskp_it = OSK_DLIST_REMOVE_AND_RETURN_NEXT(osk_dlist_ptr, oskp_it, type, attribute);\
+ destructor_func(to_delete);\
+ }\
+ }while(MALI_FALSE)
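+
+/**
+ * @par Example
+ * A sketch of emptying a list with a destructor; @c my_node, its @c link
+ * member and the body of the destructor are hypothetical:
+ * @code
+ * static void my_node_destroy( my_node *node )
+ * {
+ *     // release resources owned by the node; deallocation elided
+ * }
+ *
+ * OSK_DLIST_EMPTY_LIST( &list, my_node, link, my_node_destroy );
+ * @endcode
+ */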
+
+/**
+ * @brief Remove and call the destructor function for every item in the list, walking from the end to the front.
+ *
+ * @param [in, out] osk_dlist_ptr Pointer to the list to empty
+ * @param [in] type Type of the list container.
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ * @param [in] destructor_func Destructor function called for every item present in the list.
+ *
+ * This function has to be of the form void func(type* item);
+ *
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL.
+ * @note An assert is triggered if @c destructor_func is NULL.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+
+#define OSK_DLIST_EMPTY_LIST_REVERSE(osk_dlist_ptr, type, attribute, destructor_func)\
+ do\
+ {\
+ type* oskp_it;\
+ OSK_ASSERT(NULL != osk_dlist_ptr); \
+ OSK_ASSERT(NULL != destructor_func); \
+ oskp_it = OSK_DLIST_BACK(osk_dlist_ptr, type, attribute);\
+ while ( oskp_it != NULL )\
+ {\
+ type* to_delete = oskp_it;\
+ oskp_it = OSK_DLIST_REMOVE_AND_RETURN_PREV(osk_dlist_ptr, oskp_it, type, attribute);\
+ destructor_func(to_delete);\
+ }\
+ }while(MALI_FALSE)
+
+
+
+/**
+ * @brief Iterate forward through each container item of the given list
+ *
+ * @param [in, out] osk_dlist_ptr Pointer to a list
+ * @param [in] type Container type of the list
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ * @param [out] container_iterator Iterator variable of type "type*" to use to iterate through the list.
+ *
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+#define OSK_DLIST_FOREACH(osk_dlist_ptr, type, attribute, container_iterator)\
+ OSK_ASSERT(NULL != osk_dlist_ptr); \
+ for(\
+ container_iterator = OSK_DLIST_FRONT(osk_dlist_ptr, type, attribute);\
+ NULL != container_iterator; \
+ container_iterator = OSK_DLIST_NEXT(container_iterator, type, attribute))
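+
+/**
+ * @par Example
+ * A sketch of forward iteration; @c my_node, its @c link member and
+ * @c process() are hypothetical:
+ * @code
+ * my_node *it;
+ *
+ * OSK_DLIST_FOREACH( &list, my_node, link, it )
+ * {
+ *     process( it );  // must not remove the current item while iterating
+ * }
+ * @endcode
+ */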
+
+/**
+ * @brief Reverse iterate through each container item of the given list
+ *
+ * @param [in, out] osk_dlist_ptr Pointer to a list
+ * @param [in] type Container type of the list
+ * @param [in] attribute Attribute of the container of type @c osk_dlist_item
+ * @param [out] container_iterator Iterator variable of type "type*" to use to iterate through the list.
+ *
+ * @note An assert is triggered if @c osk_dlist_ptr is NULL.
+ * @note If @c type or @c attribute is invalid then the behavior is undefined.
+ */
+#define OSK_DLIST_FOREACH_REVERSE(osk_dlist_ptr, type, attribute, container_iterator)\
+ OSK_ASSERT(NULL != osk_dlist_ptr); \
+ for(\
+ container_iterator = OSK_DLIST_BACK(osk_dlist_ptr, type, attribute);\
+ NULL != container_iterator; \
+ container_iterator = OSK_DLIST_PREV(container_iterator, type, attribute))
+
+/**
+ * @}
+ */
+/* End osk_dlist_public */
+/**
+ * @addtogroup osk_dlist_private Private
+ * @{
+ */
+
+/**
+ * @brief Insert a new item after an existing one.
+ *
+ * @param [in, out] list_ptr Pointer to the list the new item is going to be added to.
+ * @param [in, out] item_to_insert New item to insert in the list.
+ * @param [in, out] position Position after which to add the new item.
+ * @param [in] insert_at_front If this argument is equal to MALI_TRUE then @c position is ignored and the
+ * new item is added to the front.
+ *
+ * @note An assert is triggered if @c list_ptr is NULL.
+ * @note An assert is triggered if @c insert_at_front is MALI_FALSE and @c position is NULL.
+ */
+OSK_STATIC_INLINE void oskp_dlist_insert_after(osk_dlist * const list_ptr, osk_dlist_item * const item_to_insert,
+ osk_dlist_item * const position, const mali_bool insert_at_front);
+
+/**
+ * @brief Insert a new item before an existing one.
+ *
+ * @param [in, out] list_ptr Pointer to the list the new item is going to be added to.
+ * @param [in, out] item_to_insert New item to insert in the list.
+ * @param [in, out] position Position before which to add the new item.
+ * @param [in] insert_at_back If this argument is equal to MALI_TRUE then @c position is ignored and the new
+ * item is added to the back
+ *
+ * @note An assert is triggered if @c list_ptr is NULL.
+ * @note An assert is triggered if @c insert_at_back is MALI_FALSE and @c position is NULL.
+ */
+
+OSK_STATIC_INLINE void oskp_dlist_insert_before(osk_dlist * const list_ptr, osk_dlist_item* const item_to_insert,
+ osk_dlist_item * const position, const mali_bool insert_at_back);
+
+/**
+ * @brief Remove a given item from the list and return the item which was next in the list
+ *
+ * @param [in, out] list_ptr List from which the item needs to be removed
+ * @param [in, out] item_to_remove Item to remove from the list
+ *
+ * @return A pointer to the item which was next in the list. Return NULL if the back has just been removed.
+ *
+ * @note An assert is triggered if @c list_ptr is NULL.
+ * @note An assert is triggered if @c item_to_remove is not a member of @c list_ptr
+ *
+ */
+OSK_STATIC_INLINE osk_dlist_item* oskp_dlist_remove_and_return_next(osk_dlist * const list_ptr,
+ osk_dlist_item * const item_to_remove) CHECK_RESULT;
+
+/**
+ * @brief Remove a given item from the list and return the item which was previous in the list
+ *
+ * @param [in, out] list_ptr List from which the item needs to be removed
+ * @param [in, out] item_to_remove Item to remove from the list
+ *
+ * @return A pointer to the item which was previous in the list. Return NULL if the front has just been removed.
+ *
+ * @note An assert is triggered if @c list_ptr is NULL.
+ * @note An assert is triggered if @c item_to_remove is not a member of @c list_ptr
+ */
+OSK_STATIC_INLINE osk_dlist_item* oskp_dlist_remove_and_return_prev(osk_dlist * const list_ptr,
+ osk_dlist_item * const item_to_remove) CHECK_RESULT;
+
+/**
+ * @brief Remove a given item from the list and return it.
+ *
+ * @param [in, out] list_ptr List from which the item needs to be removed
+ * @param [in, out] item_to_remove Item to remove from the list
+ *
+ * @return A pointer to the item which has been removed from the list.
+ *
+ * @note An assert is triggered if @c list_ptr is NULL.
+ * @note An assert is triggered if @c item_to_remove is not a member of @c list_ptr
+ */
+
+OSK_STATIC_INLINE osk_dlist_item* oskp_dlist_remove(osk_dlist * const list_ptr,
+ osk_dlist_item * const item_to_remove);
+
+/**
+ * @brief Check that @c item is a member of the @c list
+ *
+ * @param [in] list Metadata of the list
+ * @param [in] item Item to check
+ *
+ * @note An assert error is triggered if @c list is NULL.
+ *
+ * @return MALI_TRUE if @c item is a member of @c list or MALI_FALSE otherwise.
+ */
+OSK_STATIC_INLINE mali_bool oskp_dlist_member_of(const osk_dlist* const list, const osk_dlist_item* const item) CHECK_RESULT;
+
+/**
+ * @brief Remove @c item_to_remove from @c front
+ *
+ * @param [in, out] front List from which the item needs to be removed
+ * @param [in, out] item_to_remove Item to remove from the list.
+ *
+ * @note An assert is triggered if @c front is NULL.
+ * @note An assert is triggered if @c item_to_remove is not a member of @c front
+ */
+OSK_STATIC_INLINE void oskp_dlist_remove_item(osk_dlist* const front, osk_dlist_item* const item_to_remove);
+
+/**
+ * @}
+ */
+/* end osk_dlist_private */
+/**
+ * @}
+ */
+/* end osk_dlist group */
+
+/**
+ * @addtogroup osk_dlist Doubly-linked list
+ * @{
+ */
+/**
+ * @addtogroup osk_dlist_private Private
+ * @{
+ */
+
+CHECK_RESULT OSK_STATIC_INLINE mali_bool oskp_dlist_member_of(const osk_dlist* const list, const osk_dlist_item* const item)
+{
+ mali_bool return_value = MALI_FALSE;
+ const osk_dlist_item* it;
+
+ OSK_ASSERT(NULL != list);
+
+ it = list->oskp.front;
+ while(NULL != it)
+ {
+ if(item == it)
+ {
+ return_value = MALI_TRUE;
+ break;
+ }
+
+ it = it->oskp.next;
+ }
+ return return_value;
+}
+
+OSK_STATIC_INLINE void oskp_dlist_insert_before(osk_dlist * const front, osk_dlist_item * const item_to_insert,
+ osk_dlist_item * const position, const mali_bool insert_at_back)
+{
+ OSK_ASSERT(NULL != front);
+ OSK_ASSERT(NULL != item_to_insert);
+ OSK_ASSERT((insert_at_back == MALI_TRUE) || (NULL != position));
+ OSK_ASSERT(MALI_FALSE == oskp_dlist_member_of(front, item_to_insert));
+
+ if(insert_at_back)
+ {
+ item_to_insert->oskp.prev = front->oskp.back;
+
+ /*if there are some other items in the list, update their links.*/
+ if(NULL != front->oskp.back)
+ {
+ front->oskp.back->oskp.next = item_to_insert;
+ }
+ item_to_insert->oskp.next = NULL;
+ front->oskp.back = item_to_insert;
+ }
+ else
+ {
+ /* insertion at a position which is not the back*/
+ OSK_ASSERT(MALI_FALSE != oskp_dlist_member_of(front, position));
+
+ item_to_insert->oskp.prev = position->oskp.prev;
+ item_to_insert->oskp.next = position;
+ position->oskp.prev = item_to_insert;
+
+ /*if there are some other items in the list, update their links.*/
+ if(NULL != item_to_insert->oskp.prev)
+ {
+ item_to_insert->oskp.prev->oskp.next = item_to_insert;
+ }
+
+ }
+
+ /* Did the inserted element become the new front? */
+ if(front->oskp.front == item_to_insert->oskp.next)
+ {
+ front->oskp.front = item_to_insert;
+ }
+}
+
+OSK_STATIC_INLINE
+void oskp_dlist_insert_after(osk_dlist * const front, osk_dlist_item * const item_to_insert,
+ osk_dlist_item * const position, mali_bool insert_at_front)
+{
+ OSK_ASSERT(NULL != front);
+ OSK_ASSERT(NULL != item_to_insert);
+ OSK_ASSERT((insert_at_front == MALI_TRUE) || (NULL != position));
+ OSK_ASSERT(MALI_FALSE == oskp_dlist_member_of(front, item_to_insert));
+
+ if(insert_at_front)
+ {
+ item_to_insert->oskp.next = front->oskp.front;
+
+ /*if there are some other items in the list, update their links.*/
+ if(NULL != front->oskp.front)
+ {
+ front->oskp.front->oskp.prev = item_to_insert;
+ }
+ item_to_insert->oskp.prev = NULL;
+ front->oskp.front = item_to_insert;
+ }
+ else
+ {
+ /* insertion at a position which is not the front */
+ OSK_ASSERT(MALI_FALSE != oskp_dlist_member_of(front, position));
+
+ item_to_insert->oskp.next = position->oskp.next;
+ item_to_insert->oskp.prev = position;
+ position->oskp.next = item_to_insert;
+
+ /*if the item has not been inserted at the back, then update the links of the next item*/
+ if(NULL != item_to_insert->oskp.next)
+ {
+ item_to_insert->oskp.next->oskp.prev = item_to_insert;
+ }
+ }
+
+ /* Is the inserted item the new back? */
+ if(front->oskp.back == item_to_insert->oskp.prev)
+ {
+ front->oskp.back = item_to_insert;
+ }
+}
+
+OSK_STATIC_INLINE
+void oskp_dlist_remove_item(osk_dlist* const front, osk_dlist_item* const item_to_remove)
+{
+ OSK_ASSERT(NULL != front);
+ OSK_ASSERT(NULL != item_to_remove);
+ OSK_ASSERT(MALI_TRUE == oskp_dlist_member_of(front, item_to_remove));
+
+ /* if the item to remove is the current front*/
+ if( front->oskp.front == item_to_remove )
+ {
+ /* then make the front point to the next item*/
+ front->oskp.front = item_to_remove->oskp.next;
+ }
+ else
+ {
+ /* otherwise make the previous item point to the next one */
+ item_to_remove->oskp.prev->oskp.next = item_to_remove->oskp.next;
+ }
+
+ /* if the item to remove is the current back*/
+ if(front->oskp.back == item_to_remove)
+ {
+ /* then make the back point to the previous item*/
+ front->oskp.back = item_to_remove->oskp.prev;
+ }
+ else
+ {
+ /* otherwise make the next item point to the previous one */
+ item_to_remove->oskp.next->oskp.prev = item_to_remove->oskp.prev;
+ }
+
+ item_to_remove->oskp.next = NULL;
+ item_to_remove->oskp.prev = NULL;
+}
+
+OSK_STATIC_INLINE
+osk_dlist_item* oskp_dlist_remove(osk_dlist * const front, osk_dlist_item * const item_to_remove)
+{
+ oskp_dlist_remove_item(front, item_to_remove);
+
+ item_to_remove->oskp.next = NULL;
+ item_to_remove->oskp.prev = NULL;
+
+ return item_to_remove;
+}
+
+
+CHECK_RESULT OSK_STATIC_INLINE
+osk_dlist_item* oskp_dlist_remove_and_return_next(osk_dlist * const front,
+ osk_dlist_item * const item_to_remove)
+{
+ osk_dlist_item *next;
+
+ OSK_ASSERT(NULL != front);
+ OSK_ASSERT(NULL != item_to_remove);
+
+ next = item_to_remove->oskp.next;
+ oskp_dlist_remove_item(front, item_to_remove);
+ return next;
+}
+
+CHECK_RESULT OSK_STATIC_INLINE
+osk_dlist_item* oskp_dlist_remove_and_return_prev(osk_dlist * const front,
+ osk_dlist_item * const item_to_remove)
+{
+ osk_dlist_item *prev;
+
+ OSK_ASSERT(NULL != front);
+ OSK_ASSERT(NULL != item_to_remove);
+
+ prev = item_to_remove->oskp.prev;
+ oskp_dlist_remove_item(front, item_to_remove);
+ return prev;
+}
+
+/**
+ * @}
+ */
+/* end osk_dlist_private */
+
+/**
+ * @}
+ */
+/* end osk_dlist group */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _OSK_LISTS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _OSK_LOCK_ORDER_H_
+#define _OSK_LOCK_ORDER_H_
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @addtogroup oskmutex_lockorder
+ * @{
+ */
+
+/**
+ * @anchor oskmutex_lockorder
+ * @par Lock ordering for Mutexes and Spinlocks
+ *
+ * When an OSK Rwlock, Mutex or Spinlock is initialized, it is given a locking order.
+ * This is a number that is checked in QA builds to detect possible deadlock
+ * conditions. The order is checked when a thread calls
+ * osk_rwlock_read_lock() / osk_rwlock_write_lock() / osk_mutex_lock() /
+ * osk_spinlock_lock() / osk_spinlock_irq_lock(). If the calling
+ * thread already holds a lock with an order less than that of the object being
+ * locked, an assertion failure will occur.
+ *
+ * Lock ordering must be respected between OSK Rwlocks, Mutexes, and Spinlocks.
+ * That is, when obtaining an OSK Rwlock, Mutex or Spinlock, its lock order
+ * must be lower than any other OSK Rwlock, Mutex or Spinlock held by the current thread.
+ *
+ */
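+
+/**
+ * @par Example
+ * A sketch of order-respecting acquisition. The two mutexes and their chosen
+ * orders are illustrative only; the mutex API itself is declared in the OSK
+ * locks header:
+ * @code
+ * osk_mutex jctx_lock;   // initialized with order OSK_LOCK_ORDER_JCTX
+ * osk_mutex queue_lock;  // initialized with order OSK_LOCK_ORDER_QUEUE
+ *
+ * osk_mutex_lock( &jctx_lock );    // higher order value: obtained first
+ * osk_mutex_lock( &queue_lock );   // lower order value: obtained second
+ * osk_mutex_unlock( &queue_lock );
+ * osk_mutex_unlock( &jctx_lock );
+ * @endcode
+ */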
+/** @{ */
+
+typedef enum
+{
+ /**
+ * Reserved mutex order, indicating that the mutex will be the last to be
+ * locked, and all other OSK mutexes are obtained before this one.
+ *
+ * All other lock orders must be after this one, because we use this to
+ * ASSERT that lock orders are >= OSK_LOCK_ORDER_LAST
+ */
+ OSK_LOCK_ORDER_LAST = 0,
+
+ /**
+ * Lock order for umpp_descriptor_mapping.
+ *
+ * This lock is always obtained last: no other locks are obtained whilst
+ * operating on a descriptor mapping, and so this should be as high as
+ * possible in this enum (lower in number) than any other lock held by UMP.
+ *
+ * It can have the same order as any other lock in UMP that is always
+ * obtained last.
+ */
+ OSK_LOCK_ORDER_UMP_DESCRIPTOR_MAPPING,
+
+ /**
+ * Lock order for mutex protecting umpp_device::secure_id_map (this is in
+ * the 'single global UMP device').
+ *
+ * This must be obtained after (lower in number than) the
+ * OSK_LOCK_ORDER_UMP_SESSION_LOCK, since the allocation is often looked up
+ * in secure_id_map while manipulating the umpp_session::memory_usage list.
+ */
+ OSK_LOCK_ORDER_UMP_IDMAP_LOCK,
+
+ /**
+ * Lock order for mutex protecting the umpp_session::memory_usage list
+ */
+ OSK_LOCK_ORDER_UMP_SESSION_LOCK,
+
+
+ /**
+ * Lock order for the OSK failure simulation state (see the osk_failure module)
+ */
+ OSK_LOCK_ORDER_OSK_FAILURE,
+
+ /**
+ * For the power management metrics system
+ */
+ OSK_LOCK_ORDER_PM_METRICS,
+
+ /**
+ * For fast queue management, with very little processing and
+ * no other lock held within the critical section.
+ */
+ OSK_LOCK_ORDER_QUEUE = OSK_LOCK_ORDER_PM_METRICS,
+
+ /**
+ * For register trace buffer access in kernel space
+ */
+
+ OSK_LOCK_ORDER_TB,
+
+ /**
+ * For KBASE_TRACE_ADD<...> macros
+ */
+ OSK_LOCK_ORDER_TRACE,
+
+ /**
+ * For modification of the MMU mask register, which is done as a read-modify-write
+ */
+ OSK_LOCK_ORDER_MMU_MASK,
+ /**
+ * For access and modification to the power state of a device
+ */
+ OSK_LOCK_ORDER_POWER_MGMT = OSK_LOCK_ORDER_MMU_MASK,
+
+ /**
+ * For access to active_count in kbase_pm_device_data
+ */
+ OSK_LOCK_ORDER_POWER_MGMT_ACTIVE = OSK_LOCK_ORDER_POWER_MGMT,
+
+ /**
+ * For access to gpu_cycle_counter_requests in kbase_pm_device_data
+ */
+ OSK_LOCK_ORDER_POWER_MGMT_GPU_CYCLE_COUNTER,
+ /**
+ * For the resources used during MMU pf or low-level job handling
+ */
+ OSK_LOCK_ORDER_JS_RUNPOOL_IRQ,
+
+ /**
+ * For job slot management
+ *
+ * This is an IRQ lock, and so must be held after all sleeping locks
+ */
+ OSK_LOCK_ORDER_JSLOT,
+
+ /**
+ * For hardware counters collection setup
+ */
+ OSK_LOCK_ORDER_HWCNT,
+
+ /**
+ * For use when zapping a context (see kbase_jd_zap_context)
+ */
+ OSK_LOCK_ORDER_JD_ZAP_CONTEXT,
+
+ /**
+ * AS lock, used to access kbase_as structure.
+ *
+ * This must be held after:
+ * - Job Scheduler Run Pool lock (OSK_LOCK_ORDER_JS_RUNPOOL)
+ *
+ * This is an IRQ lock, and so must be held after all sleeping locks
+ *
+ * @since OSU 1.9
+ */
+ OSK_LOCK_ORDER_AS,
+
+ /**
+ * Job Scheduling Run Pool lock
+ *
+ * This must be held after:
+ * - Job Scheduling Context Lock (OSK_LOCK_ORDER_JS_CTX)
+ * - Job Slot management lock (OSK_LOCK_ORDER_JSLOT)
+ *
+ * This is an IRQ lock, and so must be held after all sleeping locks
+ *
+ */
+ OSK_LOCK_ORDER_JS_RUNPOOL,
+
+
+ /**
+ * Job Scheduling Policy Queue lock
+ *
+ * This must be held after Job Scheduling Context Lock (OSK_LOCK_ORDER_JS_CTX).
+ *
+ * Currently, there's no restriction on holding this at the same time as the JSLOT/JS_RUNPOOL locks - but, this doesn't happen anyway.
+ *
+ */
+ OSK_LOCK_ORDER_JS_QUEUE,
+
+ /**
+ * Job Scheduling Context Lock
+ *
+ * This must be held after Job Dispatch lock (OSK_LOCK_ORDER_JCTX), but before:
+ * - The Job Slot lock (OSK_LOCK_ORDER_JSLOT)
+ * - The Run Pool lock (OSK_LOCK_ORDER_JS_RUNPOOL)
+ * - The Policy Queue lock (OSK_LOCK_ORDER_JS_QUEUE)
+ *
+ * In addition, it must be held before the VM Region Lock (OSK_LOCK_ORDER_MEM_REG),
+ * because at some point we need to modify the MMU registers to update the address
+ * space on scheduling in the context.
+ *
+ */
+ OSK_LOCK_ORDER_JS_CTX,
+
+ /**
+ * For memory mapping management
+ */
+ OSK_LOCK_ORDER_MEM_REG,
+
+ /**
+ * For job dispatch management
+ */
+ OSK_LOCK_ORDER_JCTX,
+
+ /**
+ * Register queue lock for model
+ */
+ OSK_LOCK_ORDER_BASE_REG_QUEUE,
+
+ /**
+ * Reserved mutex order, indicating that the mutex will be the first to be
+ * locked, and all other OSK mutexes are obtained after this one.
+ *
+ * All other lock orders must be before this one, because we use this to
+ * ASSERT that lock orders are <= OSK_LOCK_ORDER_FIRST
+ */
+ OSK_LOCK_ORDER_FIRST
+} osk_lock_order;
+
+/** @} */
+
+/** @} */ /* end group oskmutex_lockorder */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _OSK_LOCK_ORDER_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _OSK_LOCKS_H_
+#define _OSK_LOCKS_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @defgroup osklocks Mutual Exclusion
+ *
+ * A read/write lock (rwlock) is used to control access to a shared resource,
+ * where multiple threads are allowed to read from the shared resource, but
+ * only one thread is allowed to write to the shared resource at any one time.
+ * A thread must specify the type of access (read/write) when locking the
+ * rwlock. If a rwlock is locked for write access, other threads that attempt
+ * to lock the same rwlock will block. If a rwlock is locked for read access,
+ * threads that attempt to lock the rwlock for write access will block until
+ * all threads with read access have unlocked the rwlock.
+ *
+ * @note If an OS does not provide a synchronisation object to implement a
+ * rwlock, an OSK mutex can be used instead for its implementation. This would
+ * only allow one reader or writer to access the shared resources at any one
+ * time.
+ *
+ * A mutex is used to control access to a shared resource, where only one
+ * thread is allowed access at any one time. A thread must lock the mutex
+ * to gain access; other threads that attempt to lock the same mutex will
+ * block. Mutexes can only be unlocked by the thread that holds the lock.
+ *
+ * @note OSK mutexes are intended for use in a situation where access to the
+ * shared resource is likely to be contended. OSK mutexes make use of the
+ * mutual exclusion primitives provided by the target OS, which often
+ * are considered "heavyweight".
+ *
+ * Spinlocks are also used to control access to a shared resource and
+ * enforce that only one thread has access at any one time. They differ from
+ * OSK mutexes in that they poll the lock until it is obtained. This makes a
+ * spinlock especially suited for contexts where you are not allowed to block
+ * while waiting for access to the shared resource. An OSK mutex could not be
+ * used in such a context as it can block while trying to obtain the mutex.
+ *
+ * A spinlock should be held for the minimum time possible, as in the contended
+ * case threads will not sleep but poll and therefore use CPU-cycles.
+ *
+ * While holding a spinlock, you must not sleep. You must not obtain a rwlock,
+ * mutex or do anything else that might block your thread. This is to prevent another
+ * thread trying to lock the same spinlock while your thread holds the spinlock,
+ * which could take a very long time (as it requires your thread to get scheduled
+ * in again and unlock the spinlock) or could even deadlock your system.
+ *
+ * Spinlocks are considered 'lightweight': in the uncontended case, the lock
+ * can be obtained quickly. In lightly-contended cases on multiprocessor
+ * systems, the lock can be obtained quickly without resorting to
+ * "heavyweight" OS primitives.
+ *
+ * Two types of spinlocks are provided. A type that is safe to use when sharing
+ * a resource with an interrupt service routine, and one that should only be
+ * used to share the resource between threads. The former should be used to
+ * prevent deadlock between a thread that holds a spinlock while an
+ * interrupt occurs and the interrupt service routine trying to obtain the same
+ * spinlock too.
+ *
+ * @anchor oskmutex_spinlockdetails
+ * @par Important details of OSK Spinlocks.
+ *
+ * OSK spinlocks are not intended for high-contention cases. If high-contention
+ * use cases occur frequently for a particular spinlock, then it is wise to
+ * consider using an OSK Mutex instead.
+ *
+ * @note An especially important reason for not using OSK Spinlocks in highly
+ * contended cases is that they defeat the OS's Priority Inheritance mechanisms
+ * that would normally alleviate Priority Inversion problems. This is because
+ * once the spinlock is obtained, the OS usually does not know which thread has
+ * obtained the lock, and so cannot know which thread must have its priority
+ * boosted to alleviate the Priority Inversion.
+ *
+ * As a guide, use a spinlock when CPU-bound for a short period of time
+ * (thousands of cycles). CPU-bound operations include reading/writing of
+ * memory or registers. Do not use a spinlock when IO bound (e.g. user input,
+ * buffered IO reads/writes, calls involving significant device driver IO
+ * calls).
+ */
+/** @{ */
+
+/**
+ * @brief Initialize a mutex
+ *
+ * Initialize a mutex structure. If the function returns successfully, the
+ * mutex is in the unlocked state.
+ *
+ * The caller must allocate the memory for the @see osk_mutex
+ * structure, which is then populated within this function. If the OS-specific
+ * mutex referenced from the structure cannot be initialized, an error is
+ * returned.
+ *
+ * The mutex must be terminated when no longer required, by using
+ * osk_mutex_term(). Otherwise, a resource leak may result in the OS.
+ *
+ * The mutex is initialized with a lock order parameter, \a order. Refer to
+ * @see oskmutex_lockorder for more information on Rwlock/Mutex/Spinlock lock
+ * ordering.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * It is a programming error to attempt to initialize a mutex that is
+ * currently initialized.
+ *
+ * @param[out] lock pointer to an uninitialized mutex structure
+ * @param[in] order the locking order of the mutex
+ * @return OSK_ERR_NONE on success, any other value indicates a failure.
+ */
+OSK_STATIC_INLINE osk_error osk_mutex_init(osk_mutex * const lock, osk_lock_order order) CHECK_RESULT;
+
+/**
+ * @brief Terminate a mutex
+ *
+ * Terminate the mutex pointed to by \a lock, which must be
+ * a pointer to a valid unlocked mutex. When the mutex is terminated, the
+ * OS-specific mutex is freed.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * It is a programming error to attempt to terminate a mutex that is currently
+ * terminated.
+ *
+ * @illegal It is illegal to call osk_mutex_term() on a locked mutex.
+ *
+ * @param[in] lock pointer to a valid mutex structure
+ */
+OSK_STATIC_INLINE void osk_mutex_term(osk_mutex * lock);
+
+/**
+ * @brief Lock a mutex
+ *
+ * Lock the mutex pointed to by \a lock. If the mutex is currently unlocked,
+ * the calling thread returns with the mutex locked. If a second thread
+ * attempts to lock the same mutex, it blocks until the first thread
+ * unlocks the mutex. If two or more threads are blocked waiting on the first
+ * thread to unlock the mutex, it is undefined as to which thread is unblocked
+ * when the first thread unlocks the mutex.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * It is a programming error to lock a mutex or spinlock with an order that is
+ * higher than any mutex or spinlock held by the current thread. Mutexes and
+ * spinlocks must be locked in the order of highest to lowest, to prevent
+ * deadlocks. Refer to @see oskmutex_lockorder for more information.
+ *
+ * It is a programming error to exit a thread while it has a locked mutex.
+ *
+ * It is a programming error to lock a mutex from an ISR context. In an ISR
+ * context you are not allowed to block, which osk_mutex_lock() potentially does.
+ *
+ * @illegal It is illegal to call osk_mutex_lock() on a mutex that is currently
+ * locked by the caller thread. That is, it is illegal for the same thread to
+ * lock a mutex twice, without unlocking it in between.
+ *
+ * @param[in] lock pointer to a valid mutex structure
+ */
+OSK_STATIC_INLINE void osk_mutex_lock(osk_mutex * lock);
+
+/**
+ * @brief Unlock a mutex
+ *
+ * Unlock the mutex pointed to by \a lock. The calling thread must be the
+ * same thread that locked the mutex. If no other threads are waiting on the
+ * mutex to be unlocked, the function returns immediately, with the mutex
+ * unlocked. If one or more threads are waiting on the mutex to be unlocked,
+ * then this function returns, and a thread waiting on the mutex can be
+ * unblocked. It is undefined as to which thread is unblocked.
+ *
+ * @note It is not defined \em when a waiting thread is unblocked. For example,
+ * a thread calling osk_mutex_unlock() followed by osk_mutex_lock() may (or may
+ * not) obtain the lock again, preventing other threads from being
+ * released. Neither the 'immediately releasing', nor the 'delayed releasing'
+ * behavior of osk_mutex_unlock() can be relied upon. If such behavior is
+ * required, then you must implement it yourself, such as by using a second
+ * synchronization primitive.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * @illegal It is illegal for a thread to call osk_mutex_unlock() on a mutex
+ * that it has not locked, even if that mutex is currently locked by another
+ * thread. That is, it is illegal for any thread other than the 'owner' of the
+ * mutex to unlock it. And, you must not unlock an already unlocked mutex.
+ *
+ * @param[in] lock pointer to a valid mutex structure
+ */
+OSK_STATIC_INLINE void osk_mutex_unlock(osk_mutex * lock);
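+
+/**
+ * @par Example
+ * A minimal sketch of the mutex lifecycle; the lock order is chosen purely
+ * for illustration:
+ * @code
+ * osk_mutex lock;
+ *
+ * if ( OSK_ERR_NONE != osk_mutex_init( &lock, OSK_LOCK_ORDER_QUEUE ) )
+ * {
+ *     // handle the initialization failure and bail out
+ * }
+ * osk_mutex_lock( &lock );
+ * // access the shared resource
+ * osk_mutex_unlock( &lock );
+ * osk_mutex_term( &lock );
+ * @endcode
+ */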
+
+/**
+ * @brief Initialize a spinlock
+ *
+ * Initialize a spinlock. If the function returns successfully, the
+ * spinlock is in the unlocked state.
+ *
+ * @note If the spinlock is used for sharing a resource with an interrupt service
+ * routine, use the IRQ safe variant of the spinlock, see osk_spinlock_irq.
+ * The IRQ safe variant should be used in that situation to prevent
+ * deadlock between a thread/ISR that holds a spinlock while an interrupt occurs
+ * and the interrupt service routine trying to obtain the same spinlock too.
+ *
+ * The caller must allocate the memory for the @see osk_spinlock
+ * structure, which is then populated within this function. If the OS-specific
+ * spinlock referenced from the structure cannot be initialized, an error is
+ * returned.
+ *
+ * The spinlock must be terminated when no longer required, by using
+ * osk_spinlock_term(). Otherwise, a resource leak may result in the OS.
+ *
+ * The spinlock is initialized with a lock order parameter, \a order. Refer to
+ * @see oskmutex_lockorder for more information on Rwlock/Mutex/Spinlock lock
+ * ordering.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * It is a programming error to attempt to initialize a spinlock that is
+ * currently initialized.
+ *
+ * @param[out] lock pointer to a spinlock structure
+ * @param[in] order the locking order of the spinlock
+ * @return OSK_ERR_NONE on success, any other value indicates a failure.
+ */
+OSK_STATIC_INLINE osk_error osk_spinlock_init(osk_spinlock * const lock, osk_lock_order order) CHECK_RESULT;
+
+/**
+ * @brief Terminate a spinlock
+ *
+ * Terminate the spinlock pointed to by \a lock, which must be
+ * a pointer to a valid unlocked spinlock. When the spinlock is terminated, the
+ * OS-specific spinlock is freed.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * It is a programming error to attempt to terminate a spinlock that is currently
+ * terminated.
+ *
+ * @illegal It is illegal to call osk_spinlock_term() on a locked spinlock.
+ * @param[in] lock pointer to a valid spinlock structure
+ */
+OSK_STATIC_INLINE void osk_spinlock_term(osk_spinlock * lock);
+
+/**
+ * @brief Lock a spinlock
+ *
+ * Lock the spinlock pointed to by \a lock. If the spinlock is currently unlocked,
+ * the calling thread returns with the spinlock locked. If a second thread
+ * attempts to lock the same spinlock, it polls the spinlock until the first thread
+ * unlocks the spinlock. If two or more threads are polling the spinlock waiting
+ * on the first thread to unlock the spinlock, it is undefined as to which thread
+ * will lock the spinlock when the first thread unlocks the spinlock.
+ *
+ * While the spinlock is locked by the calling thread, the spinlock implementation
+ * should prevent any possible deadlock issues arising from another thread on the
+ * same CPU trying to lock the same spinlock.
+ *
+ * While holding a spinlock, you must not sleep. You must not obtain a rwlock,
+ * mutex or do anything else that might block your thread. This is to prevent another
+ * thread trying to lock the same spinlock while your thread holds the spinlock,
+ * which could take a very long time (as it requires your thread to get scheduled
+ * in again and unlock the spinlock) or could even deadlock your system.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * It is a programming error to lock a spinlock, rwlock or mutex with an order that
+ * is higher than any spinlock, rwlock, or mutex held by the current thread. Spinlocks,
+ * Rwlocks, and Mutexes must be locked in the order of highest to lowest, to prevent
+ * deadlocks. Refer to @see oskmutex_lockorder for more information.
+ *
+ * It is a programming error to exit a thread while it has a locked spinlock.
+ *
+ * It is a programming error to lock a spinlock from an ISR context. Use the IRQ
+ * safe spinlock type instead.
+ *
+ * @illegal It is illegal to call osk_spinlock_lock() on a spinlock that is currently
+ * locked by the caller thread. That is, it is illegal for the same thread to
+ * lock a spinlock twice, without unlocking it in between.
+ *
+ * @param[in] lock pointer to a valid spinlock structure
+ */
+OSK_STATIC_INLINE void osk_spinlock_lock(osk_spinlock * lock);
+
+/**
+ * @brief Unlock a spinlock
+ *
+ * Unlock the spinlock pointed to by \a lock. The calling thread must be the
+ * same thread that locked the spinlock. If no other threads are polling the
+ * spinlock waiting on the spinlock to be unlocked, the function returns
+ * immediately, with the spinlock unlocked. If one or more threads are polling
+ * the spinlock waiting on the spinlock to be unlocked, then this function
+ * returns, and a thread waiting on the spinlock can stop polling and continue
+ * with the spinlock locked. It is undefined as to which thread this is.
+ *
+ * @note It is not defined \em when a waiting thread continues. For example,
+ * a thread calling osk_spinlock_unlock() followed by osk_spinlock_lock() may (or may
+ * not) obtain the spinlock again, preventing other threads from continuing.
+ * Neither the 'immediately releasing', nor the 'delayed releasing'
+ * behavior of osk_spinlock_unlock() can be relied upon. If such behavior is
+ * required, then you must implement it yourself, such as by using a second
+ * synchronization primitive.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * @illegal It is illegal for a thread to call osk_spinlock_unlock() on a spinlock
+ * that it has not locked, even if that spinlock is currently locked by another
+ * thread. That is, it is illegal for any thread other than the 'owner' of the
+ * spinlock to unlock it. And, you must not unlock an already unlocked spinlock.
+ *
+ * @param[in] lock pointer to a valid spinlock structure
+ */
+OSK_STATIC_INLINE void osk_spinlock_unlock(osk_spinlock * lock);
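+
+/**
+ * @par Example
+ * A sketch of a short, CPU-bound critical section; @c shared_counter and
+ * @c counter_lock are illustrative, and the spinlock is assumed to have been
+ * initialized with osk_spinlock_init():
+ * @code
+ * osk_spinlock_lock( &counter_lock );
+ * shared_counter++;   // CPU-bound work only; never sleep or block here
+ * osk_spinlock_unlock( &counter_lock );
+ * @endcode
+ */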
+
+/**
+ * @brief Initialize an IRQ safe spinlock
+ *
+ * Initialize an IRQ safe spinlock. If the function returns successfully, the
+ * spinlock is in the unlocked state.
+ *
+ * This variant of spinlock is used for sharing a resource with an interrupt
+ * service routine. The IRQ safe variant should be used in this situation to
+ * prevent deadlock between a thread/ISR that holds a spinlock while an interrupt
+ * occurs and the interrupt service routine trying to obtain the same spinlock
+ * too. If the spinlock is not used to share a resource with an interrupt service
+ * routine, one should use the osk_spinlock instead of the osk_spinlock_irq
+ * variant, see osk_spinlock_init().
+ *
+ * The caller must allocate the memory for the @see osk_spinlock_irq
+ * structure, which is then populated within this function. If the OS-specific
+ * spinlock referenced from the structure cannot be initialized, an error is
+ * returned.
+ *
+ * The spinlock must be terminated when no longer required, by using
+ * osk_spinlock_irq_term(). Otherwise, a resource leak may result in the OS.
+ *
+ * The spinlock is initialized with a lock order parameter, \a order. Refer to
+ * @see oskmutex_lockorder for more information on Rwlock/Mutex/Spinlock lock
+ * ordering.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * It is a programming error to attempt to initialize a spinlock that is
+ * currently initialized.
+ *
+ * @param[out] lock pointer to an IRQ safe spinlock structure
+ * @param[in] order the locking order of the IRQ safe spinlock
+ * @return OSK_ERR_NONE on success, any other value indicates a failure.
+ */
+OSK_STATIC_INLINE osk_error osk_spinlock_irq_init(osk_spinlock_irq * const lock, osk_lock_order order) CHECK_RESULT;
+
+/**
+ * @brief Terminate an IRQ safe spinlock
+ *
+ * Terminate the IRQ safe spinlock pointed to by \a lock, which must be
+ * a pointer to a valid unlocked IRQ safe spinlock. When the IRQ safe spinlock
+ * is terminated, the OS-specific spinlock is freed.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * It is a programming error to attempt to terminate an IRQ safe spinlock that is
+ * currently terminated.
+ *
+ * @param[in] lock pointer to a valid IRQ safe spinlock structure
+ */
+OSK_STATIC_INLINE void osk_spinlock_irq_term(osk_spinlock_irq * lock);
+
+/**
+ * @brief Lock an IRQ safe spinlock
+ *
+ * Lock the IRQ safe spinlock (from here on referred to as 'spinlock') pointed to
+ * by \a lock. If the spinlock is currently unlocked, the calling thread returns
+ * with the spinlock locked. If a second thread attempts to lock the same spinlock,
+ * it polls the spinlock until the first thread unlocks the spinlock. If two or
+ * more threads are polling the spinlock waiting on the first thread to unlock the
+ * spinlock, it is undefined as to which thread will lock the spinlock when the
+ * first thread unlocks the spinlock.
+ *
+ * While the spinlock is locked by the calling thread, the spinlock implementation
+ * should prevent any possible deadlock issues arising from another thread on the
+ * same CPU trying to lock the same spinlock.
+ *
+ * While holding a spinlock, you must not sleep. You must not obtain a rwlock,
+ * mutex or do anything else that might block your thread. This is to prevent another
+ * thread trying to lock the same spinlock while your thread holds the spinlock,
+ * which could take a very long time (as it requires your thread to get scheduled
+ * in again and unlock the spinlock) or could even deadlock your system.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * It is a programming error to lock a spinlock, rwlock or mutex with an order that
+ * is higher than any spinlock, rwlock, or mutex held by the current thread. Spinlocks,
+ * Rwlocks, and Mutexes must be locked in the order of highest to lowest, to prevent
+ * deadlocks. Refer to @see oskmutex_lockorder for more information.
+ *
+ * It is a programming error to exit a thread while it has a locked spinlock.
+ *
+ * @illegal It is illegal to call osk_spinlock_irq_lock() on a spinlock that is
+ * currently locked by the caller thread. That is, it is illegal for the same thread
+ * to lock a spinlock twice, without unlocking it in between.
+ *
+ * @param[in] lock pointer to a valid IRQ safe spinlock structure
+ */
+OSK_STATIC_INLINE void osk_spinlock_irq_lock(osk_spinlock_irq * lock);
+
+/**
+ * @brief Unlock an IRQ safe spinlock
+ *
+ * Unlock the IRQ safe spinlock (from here on referred to as 'spinlock') pointed to
+ * by \a lock. The calling thread/ISR must be the same thread/ISR that locked the
+ * spinlock. If no other threads/ISRs are polling the spinlock waiting on the spinlock
+ * to be unlocked, the function returns immediately, with the spinlock unlocked. If
+ * one or more threads/ISRs are polling the spinlock waiting on the spinlock to be unlocked,
+ * then this function returns, and a thread/ISR waiting on the spinlock can stop polling
+ * and continue with the spinlock locked. It is undefined as to which thread/ISR this is.
+ *
+ * @note It is not defined \em when a waiting thread/ISR continues. For example,
+ * a thread/ISR calling osk_spinlock_irq_unlock() followed by osk_spinlock_irq_lock() may
+ * (or may not) obtain the spinlock again, preventing other threads from continuing.
+ * Neither the 'immediately releasing', nor the 'delayed releasing'
+ * behavior of osk_spinlock_irq_unlock() can be relied upon. If such behavior is
+ * required, then you must implement it yourself, such as by using a second
+ * synchronization primitive.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * @illegal It is illegal for a thread to call osk_spinlock_irq_unlock() on a spinlock
+ * that it has not locked, even if that spinlock is currently locked by another
+ * thread. That is, it is illegal for any thread other than the 'owner' of the
+ * spinlock to unlock it. And, you must not unlock an already unlocked spinlock.
+ *
+ * @param[in] lock pointer to a valid IRQ safe spinlock structure
+ */
+OSK_STATIC_INLINE void osk_spinlock_irq_unlock(osk_spinlock_irq * lock);
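+
+/* An illustrative usage sketch (not part of the original header): a driver
+ * guarding state shared with its interrupt handler. The names dev, irq_lock
+ * and pending are hypothetical.
+ *
+ * @code
+ * osk_error err;
+ *
+ * err = osk_spinlock_irq_init(&dev->irq_lock, OSK_LOCK_ORDER_LAST);
+ * if (OSK_ERR_NONE != err)
+ * {
+ *     return err;                          // do not use an uninitialized lock
+ * }
+ *
+ * osk_spinlock_irq_lock(&dev->irq_lock);   // short, non-blocking critical section
+ * dev->pending++;
+ * osk_spinlock_irq_unlock(&dev->irq_lock);
+ *
+ * osk_spinlock_irq_term(&dev->irq_lock);   // must be unlocked at this point
+ * @endcode
+ */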
+
+/**
+ * @brief Initialize a rwlock
+ *
+ * Read/write locks allow multiple readers to obtain the lock (shared access),
+ * or one writer to obtain the lock (exclusive access).
+ * Read/write locks are created in an unlocked state.
+ *
+ * Initialize a rwlock structure. If the function returns successfully, the
+ * rwlock is in the unlocked state.
+ *
+ * The caller must allocate the memory for the @see osk_rwlock
+ * structure, which is then populated within this function. If the OS-specific
+ * rwlock referenced from the structure cannot be initialized, an error is
+ * returned.
+ *
+ * The rwlock must be terminated when no longer required, by using
+ * osk_rwlock_term(). Otherwise, a resource leak may result in the OS.
+ *
+ * The rwlock is initialized with a lock order parameter, \a order. Refer to
+ * @see oskmutex_lockorder for more information on Rwlock/Mutex/Spinlock lock
+ * ordering.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * It is a programming error to attempt to initialize a rwlock that is
+ * already initialized.
+ *
+ * @param[out] lock pointer to a rwlock structure
+ * @param[in] order the locking order of the rwlock
+ * @return OSK_ERR_NONE on success, any other value indicates a failure.
+ */
+OSK_STATIC_INLINE osk_error osk_rwlock_init(osk_rwlock * const lock, osk_lock_order order) CHECK_RESULT;
+
+/**
+ * @brief Terminate a rwlock
+ *
+ * Terminate the rwlock pointed to by \a lock, which must be
+ * a pointer to a valid unlocked rwlock. When the rwlock is terminated, the
+ * OS-specific rwlock is freed.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * It is a programming error to attempt to terminate a rwlock that has already
+ * been terminated.
+ *
+ * @illegal It is illegal to call osk_rwlock_term() on a locked rwlock.
+ *
+ * @param[in] lock pointer to a valid rwlock structure
+ */
+OSK_STATIC_INLINE void osk_rwlock_term(osk_rwlock * lock);
+
+/**
+ * @brief Lock a rwlock for read access
+ *
+ * Lock the rwlock pointed to by \a lock for read access. A rwlock may
+ * be locked for read access by multiple threads. If the rwlock is not
+ * locked for exclusive write access, the calling thread returns with the
+ * rwlock locked for read access. If the rwlock is currently locked for
+ * exclusive write access, the calling thread blocks until the thread with
+ * exclusive write access unlocks the rwlock.
+ * If multiple threads are blocked waiting for read access or exclusive
+ * write access to the rwlock, it is undefined as to which thread is
+ * unblocked when the rwlock is unlocked (by the thread with exclusive
+ * write access).
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * It is a programming error to lock a rwlock, mutex or spinlock with an order that is
+ * higher than any rwlock, mutex or spinlock held by the current thread. Rwlocks, mutexes and
+ * spinlocks must be locked in the order of highest to lowest, to prevent
+ * deadlocks. Refer to @see oskmutex_lockorder for more information.
+ *
+ * It is a programming error to exit a thread while it has a locked rwlock.
+ *
+ * It is a programming error to lock a rwlock from an ISR context. In an ISR
+ * context you are not allowed to block, which osk_rwlock_read_lock() potentially does.
+ *
+ * @illegal It is illegal to call osk_rwlock_read_lock() on a rwlock that is currently
+ * locked by the caller thread. That is, it is illegal for the same thread to
+ * lock a rwlock twice, without unlocking it in between.
+ *
+ * @param[in] lock pointer to a valid rwlock structure
+ */
+OSK_STATIC_INLINE void osk_rwlock_read_lock(osk_rwlock * lock);
+
+/**
+ * @brief Unlock a rwlock for read access
+ *
+ * Unlock the rwlock pointed to by \a lock. The calling thread must be the
+ * same thread that locked the rwlock for read access. If no other threads
+ * are waiting on the rwlock to be unlocked, the function returns
+ * immediately, with the rwlock unlocked. If one or more threads are waiting
+ * on the rwlock to be unlocked for write access, and the calling thread
+ * is the last thread holding the rwlock for read access, then this function
+ * returns, and a thread waiting on the rwlock for write access can be
+ * unblocked. It is undefined as to which thread is unblocked.
+ *
+ * @note It is not defined \em when a waiting thread is unblocked. For example,
+ * a thread calling osk_rwlock_read_unlock() followed by osk_rwlock_read_lock()
+ * may (or may not) obtain the lock again, preventing other threads from being
+ * released. Neither the 'immediately releasing', nor the 'delayed releasing'
+ * behavior of osk_rwlock_read_unlock() can be relied upon. If such behavior is
+ * required, then you must implement it yourself, such as by using a second
+ * synchronization primitive.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * @illegal It is illegal for a thread to call osk_rwlock_read_unlock() on a
+ * rwlock that it has not locked, even if that rwlock is currently locked by another
+ * thread. That is, it is illegal for any thread other than the 'owner' of the
+ * rwlock to unlock it. And, you must not unlock an already unlocked rwlock.
+ *
+ * @param[in] lock pointer to a valid rwlock structure
+ */
+OSK_STATIC_INLINE void osk_rwlock_read_unlock(osk_rwlock * lock);
+
+/**
+ * @brief Lock a rwlock for exclusive write access
+ *
+ * Lock the rwlock pointed to by \a lock for exclusive write access. If the
+ * rwlock is currently unlocked, the calling thread returns with the rwlock
+ * locked. If the rwlock is currently locked, the calling thread blocks
+ * until the last thread with read access or the thread with exclusive write
+ * access unlocks the rwlock. If multiple threads are blocked waiting
+ * for exclusive write access to the rwlock, it is undefined as to which
+ * thread is unblocked when the rwlock is unlocked (by either the last thread
+ * with read access or the thread with exclusive write access).
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * It is a programming error to lock a rwlock, mutex or spinlock with an order that is
+ * higher than any rwlock, mutex or spinlock held by the current thread. Rwlocks, mutexes and
+ * spinlocks must be locked in the order of highest to lowest, to prevent
+ * deadlocks. Refer to @see oskmutex_lockorder for more information.
+ *
+ * It is a programming error to exit a thread while it has a locked rwlock.
+ *
+ * It is a programming error to lock a rwlock from an ISR context. In an ISR
+ * context you are not allowed to block, which osk_rwlock_write_lock() potentially does.
+ *
+ * @illegal It is illegal to call osk_rwlock_write_lock() on a rwlock that is currently
+ * locked by the caller thread. That is, it is illegal for the same thread to
+ * lock a rwlock twice, without unlocking it in between.
+ *
+ * @param[in] lock pointer to a valid rwlock structure
+ */
+OSK_STATIC_INLINE void osk_rwlock_write_lock(osk_rwlock * lock);
+
+/**
+ * @brief Unlock a rwlock for exclusive write access
+ *
+ * Unlock the rwlock pointed to by \a lock. The calling thread must be the
+ * same thread that locked the rwlock for exclusive write access. If no
+ * other threads are waiting on the rwlock to be unlocked, the function returns
+ * immediately, with the rwlock unlocked. If one or more threads are waiting
+ * on the rwlock to be unlocked, then this function returns, and a thread
+ * waiting on the rwlock can be unblocked. It is undefined as to which
+ * thread is unblocked.
+ *
+ * @note It is not defined \em when a waiting thread is unblocked. For example,
+ * a thread calling osk_rwlock_write_unlock() followed by osk_rwlock_write_lock()
+ * may (or may not) obtain the lock again, preventing other threads from being
+ * released. Neither the 'immediately releasing', nor the 'delayed releasing'
+ * behavior of osk_rwlock_write_unlock() can be relied upon. If such behavior is
+ * required, then you must implement it yourself, such as by using a second
+ * synchronization primitive.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL)
+ * through the \a lock parameter.
+ *
+ * @illegal It is illegal for a thread to call osk_rwlock_write_unlock() on a
+ * rwlock that it has not locked, even if that rwlock is currently locked by another
+ * thread. That is, it is illegal for any thread other than the 'owner' of the
+ * rwlock to unlock it. And, you must not unlock an already unlocked rwlock.
+ *
+ * @param[in] lock pointer to a valid read/write lock structure
+ */
+OSK_STATIC_INLINE void osk_rwlock_write_unlock(osk_rwlock * lock);
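+
+/* An illustrative usage sketch (not part of the original header): multiple
+ * readers traversing a shared structure while a writer updates it. The names
+ * dev, list_lock, list_count and list_append are hypothetical.
+ *
+ * @code
+ * // reader: shared access, may run concurrently with other readers
+ * osk_rwlock_read_lock(&dev->list_lock);
+ * n = list_count(dev->list);
+ * osk_rwlock_read_unlock(&dev->list_lock);
+ *
+ * // writer: exclusive access, blocks until all readers have unlocked
+ * osk_rwlock_write_lock(&dev->list_lock);
+ * list_append(dev->list, item);
+ * osk_rwlock_write_unlock(&dev->list_lock);
+ * @endcode
+ */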
+
+/* @} */ /* end group osklocks */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+/* pull in the arch header with the implementation */
+#include <osk/mali_osk_arch_locks.h>
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _OSK_LOCKS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the dedicated memory allocator for the kernel device driver
+ */
+
+#ifndef _OSK_LOW_LEVEL_DEDICATED_MEM_H_
+#define _OSK_LOW_LEVEL_DEDICATED_MEM_H_
+
+#ifdef __KERNEL__
+#include <linux/io.h>
+#endif /* __KERNEL__ */
+
+struct oskp_phy_dedicated_allocator
+{
+ /* lock to protect the free map management */
+ osk_mutex lock;
+
+ osk_phy_addr base;
+ u32 num_pages;
+ u32 free_pages;
+
+ unsigned long * free_map;
+};
+
+OSK_STATIC_INLINE osk_error oskp_phy_dedicated_allocator_init(oskp_phy_dedicated_allocator * const allocator,
+ osk_phy_addr mem, u32 nr_pages, const char* name)
+{
+ osk_error error;
+
+ OSK_ASSERT(allocator);
+ OSK_ASSERT(nr_pages > 0);
+ /* Assert if not page aligned */
+ OSK_ASSERT( 0 == (mem & (OSK_PAGE_SIZE-1)) );
+
+ if (!mem)
+ {
+ /* no address to manage specified */
+ return OSK_ERR_FAIL;
+ }
+ else
+ {
+ u32 i;
+
+ /* try to obtain dedicated memory */
+ if(oskp_phy_dedicated_allocator_request_memory(mem, nr_pages, name) != OSK_ERR_NONE)
+ {
+ /* requested memory not available */
+ return OSK_ERR_FAIL;
+ }
+
+ allocator->base = mem;
+ allocator->num_pages = nr_pages;
+ allocator->free_pages = allocator->num_pages;
+
+ error = osk_mutex_init(&allocator->lock, OSK_LOCK_ORDER_LAST );
+ if (OSK_ERR_NONE != error)
+ {
+ return OSK_ERR_FAIL;
+ }
+
+ allocator->free_map = osk_calloc(sizeof(unsigned long) * ((nr_pages + OSK_BITS_PER_LONG - 1) / OSK_BITS_PER_LONG));
+ if (NULL == allocator->free_map)
+ {
+ osk_mutex_term(&allocator->lock);
+ return OSK_ERR_ALLOC;
+ }
+
+ /* correct for nr_pages not being a multiple of OSK_BITS_PER_LONG */
+ for (i = nr_pages; i < ((nr_pages + OSK_BITS_PER_LONG - 1) & ~(OSK_BITS_PER_LONG-1)); i++)
+ {
+ osk_bitarray_set_bit(i, allocator->free_map);
+ }
+
+ return OSK_ERR_NONE;
+ }
+}
+
+OSK_STATIC_INLINE void oskp_phy_dedicated_allocator_term(oskp_phy_dedicated_allocator *allocator)
+{
+ OSK_ASSERT(allocator);
+ OSK_ASSERT(allocator->free_map);
+ oskp_phy_dedicated_allocator_release_memory(allocator->base, allocator->num_pages);
+ osk_free(allocator->free_map);
+ osk_mutex_term(&allocator->lock);
+}
+
+OSK_STATIC_INLINE u32 oskp_phy_dedicated_pages_alloc(oskp_phy_dedicated_allocator *allocator,
+ u32 nr_pages, osk_phy_addr *pages)
+{
+ u32 pages_allocated;
+
+ OSK_ASSERT(pages);
+ OSK_ASSERT(allocator);
+ OSK_ASSERT(allocator->free_map);
+
+ osk_mutex_lock(&allocator->lock);
+
+ for (pages_allocated = 0; pages_allocated < OSK_MIN(nr_pages, allocator->free_pages); pages_allocated++)
+ {
+ u32 pfn;
+ void * mapping;
+
+ pfn = osk_bitarray_find_first_zero_bit(allocator->free_map, allocator->num_pages);
+ /* As the free_pages test passed ffz should never fail */
+ OSK_ASSERT(pfn != allocator->num_pages);
+
+ /* mark as allocated */
+ osk_bitarray_set_bit(pfn, allocator->free_map);
+
+ /* find phys addr of the page */
+ pages[pages_allocated] = allocator->base + (pfn << OSK_PAGE_SHIFT);
+
+#ifdef __KERNEL__
+ /* zero the page */
+ if(OSK_SIMULATE_FAILURE(OSK_OSK))
+ {
+ mapping = NULL;
+ }
+ else
+ {
+ mapping = ioremap_wc(pages[pages_allocated], SZ_4K);
+ }
+#else
+ mapping = osk_kmap(pages[pages_allocated]);
+#endif /* __KERNEL__ */
+
+ if (NULL == mapping)
+ {
+ /* mapping failed: roll back, returning this page and all previously allocated pages to the free map */
+ for (pages_allocated++; pages_allocated > 0; pages_allocated--)
+ {
+ pfn = (pages[pages_allocated-1] - allocator->base) >> OSK_PAGE_SHIFT;
+ osk_bitarray_clear_bit(pfn, allocator->free_map);
+ }
+ break;
+ }
+
+ OSK_MEMSET(mapping, 0x00, OSK_PAGE_SIZE);
+
+ osk_sync_to_memory(pages[pages_allocated], mapping, OSK_PAGE_SIZE);
+#ifdef __KERNEL__
+ iounmap(mapping);
+#else
+ osk_kunmap(pages[pages_allocated], mapping);
+#endif /* __KERNEL__ */
+ }
+
+ allocator->free_pages -= pages_allocated;
+ osk_mutex_unlock(&allocator->lock);
+
+ return pages_allocated;
+}
+
+OSK_STATIC_INLINE void oskp_phy_dedicated_pages_free(oskp_phy_dedicated_allocator *allocator,
+ u32 nr_pages, osk_phy_addr *pages)
+{
+ u32 i;
+
+ OSK_ASSERT(pages);
+ OSK_ASSERT(allocator);
+ OSK_ASSERT(allocator->free_map);
+
+ osk_mutex_lock(&allocator->lock);
+
+ for (i = 0; i < nr_pages; i++)
+ {
+ if (0 != pages[i])
+ {
+ u32 pfn;
+
+ OSK_ASSERT(pages[i] >= allocator->base);
+ OSK_ASSERT(pages[i] < allocator->base + (allocator->num_pages << OSK_PAGE_SHIFT));
+
+ pfn = (pages[i] - allocator->base) >> OSK_PAGE_SHIFT;
+ osk_bitarray_clear_bit(pfn, allocator->free_map);
+
+ allocator->free_pages++;
+
+ pages[i] = 0;
+ }
+ }
+
+ osk_mutex_unlock(&allocator->lock);
+}
+
+#endif /* _OSK_LOW_LEVEL_DEDICATED_MEM_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_low_level_mem.h
+ *
+ * Defines the kernel low level memory abstraction layer for the base
+ * driver.
+ */
+
+#ifndef _OSK_LOW_LEVEL_MEM_H_
+#define _OSK_LOW_LEVEL_MEM_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @addtogroup osklowlevelmem Low level memory
+ *
+ * Provides functions to allocate physical memory and ensure cache coherency.
+ *
+ * @{
+ */
+
+/**
+ * CPU virtual address
+ */
+typedef void *osk_virt_addr;
+
+/**
+ * Physical page allocator
+ */
+typedef struct osk_phy_allocator osk_phy_allocator;
+
+/**
+ * OS physical page allocator
+ */
+typedef struct oskp_phy_os_allocator oskp_phy_os_allocator;
+/**
+ * Dedicated physical page allocator
+ */
+typedef struct oskp_phy_dedicated_allocator oskp_phy_dedicated_allocator;
+
+/**
+ * @brief Initialize a physical page allocator
+ *
+ * The physical page allocator is responsible for allocating physical memory pages of
+ * OSK_PAGE_SIZE bytes each. Pages are allocated through the OS or from a reserved
+ * memory region.
+ *
+ * Physical page allocation through the OS
+ *
+ * If \a mem is 0, up to \a nr_pages pages may be allocated through the OS for use
+ * by a user process. OSs that require allocating CPU virtual address space in order
+ * to allocate physical pages must observe that the CPU virtual address space is
+ * allocated for the current user process and that the physical allocator must always
+ * be used with this same user process.
+ *
+ * If \a mem is 0, and \a nr_pages is 0, a variable number of pages may be allocated
+ * through the OS for use by the kernel (only limited by the available OS memory).
+ * Allocated pages may be mapped into the kernel using osk_kmap(). The use case for
+ * this type of physical allocator is the allocation of physical pages for MMU page
+ * tables. OSs that require allocating CPU virtual address space in order
+ * to allocate physical pages will likely need to manage a list of fixed size virtual
+ * address regions against which pages are committed as more pages are allocated.
+ *
+ * Physical page allocation from a reserved memory region
+ *
+ * If \a mem is not 0, \a mem specifies the physical start address of a physically
+ * contiguous memory region, from which \a nr_pages of pages may be allocated, for
+ * use by a user process. The start address must be aligned to OSK_PAGE_SIZE bytes.
+ * The memory region must not be in use by the OS and must be reserved solely for use by the physical
+ * allocator. OSs that require allocating CPU virtual address space in order
+ * to allocate physical pages must observe that the CPU virtual address space is
+ * allocated for the current user process and that the physical allocator must always
+ * be used with this same user process.
+ *
+ * @param[out] allocator physical allocator to initialize
+ * @param[in] mem Set \a mem to 0 if physical pages should be allocated through the OS,
+ * otherwise \a mem represents the physical address of a reserved
+ * memory region from which pages should be allocated. The physical
+ * address must be OSK_PAGE_SIZE aligned.
+ * @param[in] nr_pages maximum number of physical pages that can be allocated.
+ * If nr_pages > 0, pages are for use in user space.
+ * If nr_pages is 0, a variable number of pages can be allocated
+ * (limited by the available pages from the OS) but the pages are
+ * for use by the kernel and \a mem must be set to 0
+ * (to enable allocating physical pages through the OS).
+ * @param[in] name name of the reserved memory region
+ * @return OSK_ERR_NONE if successful. Any other value indicates failure.
+ */
+OSK_STATIC_INLINE osk_error osk_phy_allocator_init(osk_phy_allocator * const allocator, osk_phy_addr mem, u32 nr_pages, const char* name) CHECK_RESULT;
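+
+/* An illustrative sketch (not part of the original header) of the three
+ * initialization modes described above; the addresses, sizes and names are
+ * hypothetical.
+ *
+ * @code
+ * osk_phy_allocator user_alloc, kernel_alloc, dedicated_alloc;
+ *
+ * // up to 256 pages through the OS, for use by the current user process
+ * err = osk_phy_allocator_init(&user_alloc, 0, 256, NULL);
+ *
+ * // a variable number of pages through the OS, for kernel use (e.g. MMU tables)
+ * err = osk_phy_allocator_init(&kernel_alloc, 0, 0, NULL);
+ *
+ * // 256 pages carved out of a reserved region starting at 0x80000000
+ * err = osk_phy_allocator_init(&dedicated_alloc, 0x80000000UL, 256, "gpu_mem");
+ * @endcode
+ */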
+
+OSK_STATIC_INLINE osk_error oskp_phy_os_allocator_init(oskp_phy_os_allocator * const allocator,
+ osk_phy_addr mem, u32 nr_pages) CHECK_RESULT;
+OSK_STATIC_INLINE osk_error oskp_phy_dedicated_allocator_init(oskp_phy_dedicated_allocator * const allocator,
+ osk_phy_addr mem, u32 nr_pages, const char* name) CHECK_RESULT;
+OSK_STATIC_INLINE osk_error oskp_phy_dedicated_allocator_request_memory(osk_phy_addr mem,u32 nr_pages, const char* name) CHECK_RESULT;
+
+
+/**
+ * @brief Terminate a physical page allocator
+ *
+ * Frees any resources necessary to manage the physical allocator. Any physical pages that
+ * were allocated or mapped by the allocator must have been freed and unmapped earlier.
+ *
+ * Allocating and mapping pages using the terminated allocator is prohibited until
+ * the \a allocator is reinitialized with osk_phy_allocator_init().
+ *
+ * @param[in] allocator initialized physical allocator
+ */
+OSK_STATIC_INLINE void osk_phy_allocator_term(osk_phy_allocator *allocator);
+
+OSK_STATIC_INLINE void oskp_phy_os_allocator_term(oskp_phy_os_allocator *allocator);
+OSK_STATIC_INLINE void oskp_phy_dedicated_allocator_term(oskp_phy_dedicated_allocator *allocator);
+OSK_STATIC_INLINE void oskp_phy_dedicated_allocator_release_memory(osk_phy_addr mem,u32 nr_pages);
+
+/**
+ * @brief Allocate physical pages
+ *
+ * Allocates \a nr_pages physical pages of OSK_PAGE_SIZE each using the physical
+ * allocator \a allocator and stores the physical address of each allocated page
+ * in the \a pages array.
+ *
+ * If the physical allocator was initialized to allocate pages for use by a user
+ * process, the pages need to be allocated in the same user space context as the
+ * physical allocator was initialized in.
+ *
+ * This function may block and cannot be used from ISR context.
+ *
+ * @param[in] allocator initialized physical allocator
+ * @param[in] nr_pages number of physical pages to allocate
+ * @param[out] pages array of \a nr_pages elements storing the physical
+ * address of an allocated page
+ * @return The number of pages successfully allocated,
+ * which might be lower than requested, and may even be zero.
+ */
+OSK_STATIC_INLINE u32 osk_phy_pages_alloc(osk_phy_allocator *allocator, u32 nr_pages, osk_phy_addr *pages) CHECK_RESULT;
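+
+/* An illustrative sketch (not part of the original header) of why the return
+ * value must be checked: fewer pages than requested may be delivered. Handing
+ * the partial set back with osk_phy_pages_free() is one possible recovery.
+ *
+ * @code
+ * osk_phy_addr pages[16];
+ * u32 got = osk_phy_pages_alloc(&alloc, 16, pages);
+ * if (got < 16)
+ * {
+ *     osk_phy_pages_free(&alloc, got, pages); // return the partial allocation
+ *     return OSK_ERR_ALLOC;
+ * }
+ * @endcode
+ */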
+
+OSK_STATIC_INLINE u32 oskp_phy_os_pages_alloc(oskp_phy_os_allocator *allocator,
+ u32 nr_pages, osk_phy_addr *pages) CHECK_RESULT;
+OSK_STATIC_INLINE u32 oskp_phy_dedicated_pages_alloc(oskp_phy_dedicated_allocator *allocator,
+ u32 nr_pages, osk_phy_addr *pages) CHECK_RESULT;
+
+/**
+ * @brief Free physical pages
+ *
+ * Frees physical pages previously allocated by osk_phy_pages_alloc(). The same
+ * arguments used for the allocation need to be specified when freeing them.
+ *
+ * Freeing individual pages of a set of pages allocated by osk_phy_pages_alloc()
+ * is not allowed.
+ *
+ * If the physical allocator was initialized to allocate pages for use by a user
+ * process, the pages need to be freed in the same user space context as the
+ * physical allocator was initialized in.
+ *
+ * The contents of the \a pages array are undefined after osk_phy_pages_free() has
+ * freed the pages.
+ *
+ * @param[in] allocator initialized physical allocator
+ * @param[in] nr_pages number of physical pages to free (as used during the allocation)
+ * @param[in] pages array of \a nr_pages storing the physical address of an
+ * allocated page (as used during the allocation).
+ */
+OSK_STATIC_INLINE void osk_phy_pages_free(osk_phy_allocator *allocator, u32 nr_pages, osk_phy_addr *pages);
+
+OSK_STATIC_INLINE void oskp_phy_os_pages_free(oskp_phy_os_allocator *allocator,
+ u32 nr_pages, osk_phy_addr *pages);
+OSK_STATIC_INLINE void oskp_phy_dedicated_pages_free(oskp_phy_dedicated_allocator *allocator,
+ u32 nr_pages, osk_phy_addr *pages);
+/**
+ * @brief Map a physical page into the kernel
+ *
+ * Maps a physical page that was previously allocated by osk_phy_pages_alloc()
+ * with a physical allocator setup for allocating pages for use by the kernel,
+ * @see osk_phy_allocator_init().
+ *
+ * Notes:
+ * - Kernel virtual memory is limited. Limit the number of pages mapped into
+ * the kernel and limit the duration of the mapping.
+ *
+ * @param[in] page physical address of the page to map
+ * @return CPU virtual address in the kernel, NULL in case of a failure.
+ */
+OSK_STATIC_INLINE void *osk_kmap(osk_phy_addr page) CHECK_RESULT;
+
+/**
+ * @brief Unmap a physical page from the kernel
+ *
+ * Unmaps a previously mapped physical page (with osk_kmap) from the kernel.
+ *
+ * @param[in] page physical address of the page to unmap
+ * @param[in] mapping virtual address of the mapping to unmap
+ */
+OSK_STATIC_INLINE void osk_kunmap(osk_phy_addr page, void * mapping);
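+
+/* An illustrative sketch (not part of the original header): zeroing a
+ * kernel-owned physical page through a temporary kernel mapping, then making
+ * the result visible to other agents with osk_sync_to_memory(). This mirrors
+ * what the dedicated allocator above does when handing out pages.
+ *
+ * @code
+ * void *va = osk_kmap(page);
+ * if (NULL != va)
+ * {
+ *     OSK_MEMSET(va, 0x00, OSK_PAGE_SIZE);
+ *     osk_sync_to_memory(page, va, OSK_PAGE_SIZE);
+ *     osk_kunmap(page, va);
+ * }
+ * @endcode
+ */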
+
+/**
+ * @brief Map a physical page into the kernel
+ *
+ * Maps a physical page that was previously allocated by osk_phy_pages_alloc()
+ * with a physical allocator setup for allocating pages for use by the kernel,
+ * @see osk_phy_allocator_init().
+ *
+ * Notes:
+ * @li Used for mapping a single page for a very short duration
+ * @li The system only supports a limited number of atomic mappings,
+ * so use should be limited
+ * @li The caller must not sleep until after osk_kunmap_atomic() is called.
+ * @li It may be assumed that osk_k[un]map_atomic will not fail.
+ *
+ * @param[in] page physical address of the page to map
+ * @return CPU virtual address in the kernel, NULL in case of a failure.
+ */
+OSK_STATIC_INLINE void *osk_kmap_atomic(osk_phy_addr page) CHECK_RESULT;
+
+/**
+ * @brief Unmap a physical page from the kernel
+ *
+ * Unmaps a previously mapped physical page (with osk_kmap_atomic) from the kernel.
+ *
+ * @param[in] page physical address of the page to unmap
+ * @param[in] mapping virtual address of the mapping to unmap
+ */
+OSK_STATIC_INLINE void osk_kunmap_atomic(osk_phy_addr page, void * mapping);
+
+/**
+ * A pointer to a cache synchronization function, either osk_sync_to_cpu()
+ * or osk_sync_to_memory().
+ */
+typedef void (*osk_sync_kmem_fn)(osk_phy_addr, osk_virt_addr, size_t);
+
+/**
+ * @brief Synchronize a memory area for other system components usage
+ *
+ * Performs the necessary memory coherency operations on a given memory area,
+ * such that after the call, changes in memory are correctly seen by other
+ * system components. Any change made to memory after that call may not be seen
+ * by other system components.
+ *
+ * In effect:
+ * - all CPUs will perform a cache clean operation on their inner & outer data caches
+ * - any write buffers are drained (including that of outer cache controllers)
+ *
+ * This function waits until all operations have completed.
+ *
+ * The area is restricted to one page or less and must not cross a page boundary.
+ * The offset within the page is aligned to cache line size and size is ensured
+ * to be a multiple of the cache line size.
+ *
+ * Both physical and virtual address of the area need to be provided to support OS
+ * cache flushing APIs that either use the virtual or the physical address. When
+ * called from OS specific code it is allowed to only provide the address that
+ * is actually used by the specific OS and leave the other address as 0.
+ *
+ * @param[in] paddr physical address
+ * @param[in] vaddr CPU virtual address valid in the current user VM or the kernel VM
+ * @param[in] sz size of the area, <= OSK_PAGE_SIZE.
+ */
+OSK_STATIC_INLINE void osk_sync_to_memory(osk_phy_addr paddr, osk_virt_addr vaddr, size_t sz);
+
+/**
+ * @brief Synchronize a memory area for CPU usage
+ *
+ * Performs the necessary memory coherency operations on a given memory area,
+ * such that after the call, changes in memory are correctly seen by any CPU.
+ * Any change made to this area by any CPU before this call may be lost.
+ *
+ * In effect:
+ * - all CPUs will perform a cache clean & invalidate operation on their inner &
+ * outer data caches.
+ * - any read buffers are flushed (including those of outer cache controllers)
+ *
+ * @note Strictly only an invalidate operation is required, but by cleaning the
+ * cache too we prevent losing changes made to the memory area due to software
+ * bugs. By having these changes cleaned from the cache we can catch the memory
+ * area getting corrupted with the help of watchpoints. In correct operation the
+ * clean & invalidate operation would not be more expensive than an invalidate
+ * operation. Also note that for security reasons, it is dangerous to expose a
+ * cache 'invalidate only' operation to user space.
+ *
+ * This function waits until all operations have completed.
+ *
+ * The area is restricted to one page or less and must not cross a page boundary.
+ * The offset within the page is aligned to cache line size and size is ensured
+ * to be a multiple of the cache line size.
+ *
+ * Both physical and virtual address of the area need to be provided to support OS
+ * cache flushing APIs that either use the virtual or the physical address. When
+ * called from OS specific code it is allowed to only provide the address that
+ * is actually used by the specific OS and leave the other address as 0.
+ *
+ * @param[in] paddr physical address
+ * @param[in] vaddr CPU virtual address valid in the current user VM or the kernel VM
+ * @param[in] sz size of the area, <= OSK_PAGE_SIZE.
+ */
+OSK_STATIC_INLINE void osk_sync_to_cpu(osk_phy_addr paddr, osk_virt_addr vaddr, size_t sz);
+
+/** @} */ /* end group osklowlevelmem */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+/* pull in the arch header with the implementation */
+#include "mali_osk_low_level_dedicated_mem.h"
+#include <osk/mali_osk_arch_low_level_mem.h>
+
+typedef enum oskp_phy_allocator_type
+{
+ OSKP_PHY_ALLOCATOR_OS,
+ OSKP_PHY_ALLOCATOR_DEDICATED
+} oskp_phy_allocator_type;
+
+struct osk_phy_allocator
+{
+ oskp_phy_allocator_type type;
+ union {
+ struct oskp_phy_dedicated_allocator dedicated;
+ struct oskp_phy_os_allocator os;
+ } data;
+};
+
+
+OSK_STATIC_INLINE osk_error osk_phy_allocator_init(osk_phy_allocator * const allocator, osk_phy_addr mem, u32 nr_pages, const char* name)
+{
+ OSK_ASSERT(allocator);
+ if (mem == 0)
+ {
+ allocator->type = OSKP_PHY_ALLOCATOR_OS;
+ return oskp_phy_os_allocator_init(&allocator->data.os, mem, nr_pages);
+ }
+ else
+ {
+ allocator->type = OSKP_PHY_ALLOCATOR_DEDICATED;
+ return oskp_phy_dedicated_allocator_init(&allocator->data.dedicated, mem, nr_pages, name);
+ }
+}
+
+OSK_STATIC_INLINE void osk_phy_allocator_term(osk_phy_allocator *allocator)
+{
+ OSK_ASSERT(allocator);
+ if (allocator->type == OSKP_PHY_ALLOCATOR_OS)
+ {
+ oskp_phy_os_allocator_term(&allocator->data.os);
+ }
+ else
+ {
+ oskp_phy_dedicated_allocator_term(&allocator->data.dedicated);
+ }
+}
+
+OSK_STATIC_INLINE u32 osk_phy_pages_alloc(osk_phy_allocator *allocator, u32 nr_pages, osk_phy_addr *pages)
+{
+ OSK_ASSERT(allocator);
+ OSK_ASSERT(pages);
+ if (allocator->type != OSKP_PHY_ALLOCATOR_OS && allocator->type != OSKP_PHY_ALLOCATOR_DEDICATED)
+ {
+ return 0;
+ }
+ if (allocator->type == OSKP_PHY_ALLOCATOR_OS)
+ {
+ return oskp_phy_os_pages_alloc(&allocator->data.os, nr_pages, pages);
+ }
+ else
+ {
+ return oskp_phy_dedicated_pages_alloc(&allocator->data.dedicated, nr_pages, pages);
+ }
+}
+
+OSK_STATIC_INLINE void osk_phy_pages_free(osk_phy_allocator *allocator, u32 nr_pages, osk_phy_addr *pages)
+{
+ OSK_ASSERT(allocator);
+ OSK_ASSERT(pages);
+ if (allocator->type == OSKP_PHY_ALLOCATOR_OS)
+ {
+ oskp_phy_os_pages_free(&allocator->data.os, nr_pages, pages);
+ }
+ else
+ {
+ oskp_phy_dedicated_pages_free(&allocator->data.dedicated, nr_pages, pages);
+ }
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _OSK_LOW_LEVEL_MEM_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_math.h
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_MATH_H_
+#define _OSK_MATH_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @addtogroup oskmath Math
+ *
+ * Math related functions for which no common behavior exists across OSs.
+ *
+ * @{
+ */
+
+/**
+ * @brief Divide a 64-bit value with a 32-bit divider
+ *
+ * Performs an (unsigned) integer division of a 64-bit value
+ * with a 32-bit divider and returns the 64-bit result and
+ * 32-bit remainder.
+ *
+ * Provided as part of the OSK as not all OSs support 64-bit
+ * division in a uniform way. Currently required to support
+ * printing 64-bit numbers in the OSK debug message functions.
+ *
+ * @param[in,out] value pointer to a 64-bit value to be divided by
+ * \a divisor. The integer result of the division
+ * is stored in \a value on output.
+ * @param[in] divisor 32-bit divisor
+ * @return 32-bit remainder of the division
+ */
+OSK_STATIC_INLINE u32 osk_divmod6432(u64 *value, u32 divisor);
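+
+/* An illustrative sketch (not part of the original header): splitting a
+ * 64-bit nanosecond timestamp into whole seconds and a nanosecond remainder.
+ *
+ * @code
+ * u64 t = 3500000000ULL;                      // 3.5 seconds in nanoseconds
+ * u32 rem = osk_divmod6432(&t, 1000000000U);
+ * // t is now 3 (whole seconds), rem is 500000000 (remaining nanoseconds)
+ * @endcode
+ */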
+
+/** @} */ /* end group oskmath */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+/* pull in the arch header with the implementation */
+#include <osk/mali_osk_arch_math.h>
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _OSK_MATH_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_mem.h
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_MEM_H_
+#define _OSK_MEM_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+/* pull in the arch header with the implementation */
+#include <osk/mali_osk_arch_mem.h>
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @addtogroup oskmem Memory
+ *
+ * Provides C standard library style memory allocation functions (e.g. malloc, free).
+ *
+ * @{
+ */
+
+/**
+ * @brief Allocate kernel heap memory
+ *
+ * Returns a buffer capable of containing at least \a size bytes. The
+ * contents of the buffer are undefined.
+ *
+ * The buffer is suitably aligned for storage and subsequent access of every
+ * type that the compiler supports. Therefore, the pointer to the start of the
+ * buffer may be cast into any pointer type, and be subsequently accessed from
+ * such a pointer, without loss of information.
+ *
+ * When the buffer is no longer in use, it must be freed with osk_free().
+ * Failure to do so will cause a memory leak.
+ *
+ * @note For the implementor: most toolchains supply memory allocation
+ * functions that meet the compiler's alignment requirements. Therefore, there
+ * is often no need to write code to align the pointer returned by your
+ * system's memory allocator. Refer to your system's memory allocator for more
+ * information (e.g. the malloc() function, if used).
+ *
+ * The buffer can be accessed by all threads in the kernel. You need
+ * not free the buffer from the same thread that allocated the memory.
+ *
+ * May block while allocating memory and is therefore not allowed in interrupt
+ * service routines.
+ *
+ * @illegal It is illegal to call osk_malloc() with \a size == 0.
+ *
+ * @param size Number of bytes to allocate
+ * @return On success, the buffer allocated. NULL on failure.
+ *
+ */
+OSK_STATIC_INLINE void *osk_malloc(size_t size) CHECK_RESULT;
+
+/**
+ * @brief Allocate and zero kernel heap memory
+ *
+ * Returns a buffer capable of containing at least \a size bytes.
+ * The buffer is initialized to zero.
+ *
+ * The buffer is suitably aligned for storage and subsequent access of every
+ * type that the compiler supports. Therefore, the pointer to the start of the
+ * buffer may be cast into any pointer type, and be subsequently accessed from
+ * such a pointer, without loss of information.
+ *
+ * When the buffer is no longer in use, it must be freed with osk_free().
+ * Failure to do so will cause a memory leak.
+ *
+ * @note For the implementor: most toolchains supply memory allocation
+ * functions that meet the compiler's alignment requirements. Therefore, there
+ * is often no need to write code to align the pointer returned by your
+ * system's memory allocator. Refer to your system's memory allocator for more
+ * information (e.g. the malloc() function, if used).
+ *
+ * The buffer can be accessed by all threads in the kernel. You need
+ * not free the buffer from the same thread that allocated the memory.
+ *
+ * May block while allocating memory and is therefore not allowed in interrupt
+ * service routines.
+ *
+ * @illegal It is illegal to call osk_calloc() with \a size == 0.
+ *
+ * @param[in] size number of bytes to allocate
+ * @return On success, the zero initialized buffer allocated. NULL on failure
+ */
+OSK_STATIC_INLINE void *osk_calloc(size_t size);
+
+/**
+ * @brief Free kernel heap memory
+ *
+ * Reclaims the buffer pointed to by the parameter \a ptr for the kernel.
+ * All memory returned from osk_malloc() and osk_calloc() must
+ * be freed before the kernel driver exits. Otherwise, a memory leak will
+ * occur.
+ *
+ * Memory must be freed once. It is an error to free the same non-NULL pointer
+ * more than once.
+ *
+ * It is legal to free the NULL pointer.
+ *
+ * May block while freeing memory and is therefore not allowed in interrupt
+ * service routines.
+ *
+ * @param[in] ptr pointer to memory previously allocated by
+ * osk_malloc() or osk_calloc(). If ptr is NULL no operation
+ * is performed.
+ */
+OSK_STATIC_INLINE void osk_free(void * ptr);
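+
+/* An illustrative sketch (not part of the original header) of the heap
+ * allocation lifecycle; struct my_ctx is hypothetical.
+ *
+ * @code
+ * struct my_ctx *ctx = osk_calloc(sizeof(*ctx));  // zero-initialized
+ * if (NULL == ctx)
+ * {
+ *     return OSK_ERR_ALLOC;
+ * }
+ * // ... use ctx from any kernel thread ...
+ * osk_free(ctx);                                  // free exactly once; NULL is a no-op
+ * @endcode
+ */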
+
+/**
+ * @brief Allocate kernel memory at page granularity suitable for mapping into user space
+ *
+ * Allocates a number of pages from kernel virtual memory to store at least \a size bytes.
+ *
+ * The allocated memory is aligned to OSK_PAGE_SIZE bytes and is allowed to be mapped
+ * into user space with read/write access.
+ *
+ * The allocated memory is initialized to zero to prevent any data leaking from kernel space.
+ * Be aware not to store any kernel objects or pointers here, as these
+ * could be modified by the user at any time.
+ *
+ * If \a size is not a multiple of OSK_PAGE_SIZE, the last page of the allocation is
+ * only partially used. It is not allowed to store any data in the unused area of the
+ * last page.
+ *
+ * @illegal It is illegal to call osk_vmalloc() with \a size == 0.
+ *
+ * May block while allocating memory and is therefore not allowed in interrupt
+ * service routines.
+ *
+ * @param[in] size number of bytes to allocate (will be rounded up to
+ * a multiple of OSK_PAGE_SIZE).
+ * @return pointer to allocated memory, NULL on failure
+ */
+OSK_STATIC_INLINE void *osk_vmalloc(size_t size) CHECK_RESULT;
+
+/**
+ * @brief Free kernel memory
+ *
+ * Releases memory to the kernel, previously allocated with osk_vmalloc().
+ * The same pointer returned from osk_vmalloc() needs to be provided --
+ * freeing portions of an allocation is not allowed.
+ *
+ * May block while freeing memory and is therefore not allowed in interrupt
+ * service routines.
+ *
+ * @param[in] vaddr pointer to memory previously allocated by osk_vmalloc().
+ * If vaddr is NULL no operation is performed.
+ */
+OSK_STATIC_INLINE void osk_vfree(void *vaddr);
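+
+/* An illustrative sketch (not part of the original header): a page-granular
+ * buffer suitable for mapping into user space; the requested size is rounded
+ * up to a whole number of pages.
+ *
+ * @code
+ * void *buf = osk_vmalloc(3 * OSK_PAGE_SIZE + 100); // occupies 4 pages
+ * if (NULL != buf)
+ * {
+ *     // ... map into user space with read/write access, use, unmap ...
+ *     osk_vfree(buf);
+ * }
+ * @endcode
+ */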
+
+#ifndef OSK_MEMSET
+/** @brief Fills memory.
+ *
+ * Sets the first \a size bytes of the block of memory pointed to by \a ptr to
+ * the specified value
+ * @param[out] ptr Pointer to the block of memory to fill.
+ * @param[in] chr Value to be set, passed as an int. The byte written into
+ * memory is (\a chr mod 256), i.e. \a chr converted to an
+ * unsigned char.
+ * @param[in] size Number of bytes to be set to the value.
+ * @return \a ptr is always passed through unmodified
+ *
+ * @note the prototype of the function is:
+ * @code void *OSK_MEMSET( void *ptr, int chr, size_t size ); @endcode
+ */
+#define OSK_MEMSET( ptr, chr, size ) You_must_define_the_OSK_MEMSET_macro_in_the_platform_layer
+
+#error You must define the OSK_MEMSET macro in the mali_osk_arch_mem.h layer.
+
+/* The definition was only provided for documentation purposes; remove it now. */
+#undef OSK_MEMSET
+#endif /* OSK_MEMSET */
+
+#ifndef OSK_MEMCPY
+/** @brief Copies memory.
+ *
+ * Copies the \a len bytes from the buffer pointed by the parameter \a src
+ * directly to the buffer pointed by \a dst.
+ *
+ * @illegal It is illegal to call OSK_MEMCPY with \a src overlapping \a
+ * dst anywhere in \a len bytes.
+ *
+ * @param[out] dst Pointer to the destination array where the content is to be copied.
+ * @param[in] src Pointer to the source of data to be copied.
+ * @param[in] len Number of bytes to copy.
+ * @return \a dst is always passed through unmodified.
+ *
+ * @note the prototype of the function is:
+ * @code void *OSK_MEMCPY( void *dst, CONST void *src, size_t len ); @endcode
+ */
+#define OSK_MEMCPY( dst, src, len ) You_must_define_the_OSK_MEMCPY_macro_in_the_platform_layer
+
+#error You must define the OSK_MEMCPY macro in the mali_osk_arch_mem.h file.
+
+/* The definition was only provided for documentation purposes; remove it now. */
+#undef OSK_MEMCPY
+#endif /* OSK_MEMCPY */
+
+#ifndef OSK_MEMCMP
+/** @brief Compare memory areas
+ *
+ * Compares \a len bytes of the memory areas pointed by the parameter \a s1 and
+ * \a s2.
+ *
+ * @param[in] s1 Pointer to the first area of memory to compare.
+ * @param[in] s2 Pointer to the second area of memory to compare.
+ * @param[in] len Number of bytes to compare.
+ * @return an integer less than, equal to, or greater than zero if the first
+ * \a len bytes of s1 is found, respectively, to be less than, to match, or
+ * be greater than the first \a len bytes of s2.
+ *
+ * @note the prototype of the function is:
+ * @code int OSK_MEMCMP( CONST void *s1, CONST void *s2, size_t len ); @endcode
+ */
+#define OSK_MEMCMP( s1, s2, len ) You_must_define_the_OSK_MEMCMP_macro_in_the_platform_layer
+
+#error You must define the OSK_MEMCMP macro in the mali_osk_arch_mem.h file.
+
+/* The definition was only provided for documentation purposes; remove it now. */
+#undef OSK_MEMCMP
+#endif /* OSK_MEMCMP */
+
+/** @} */ /* end group oskmem */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+/* Include osk alloc wrappers header */
+
+#if (1 == MALI_BASE_TRACK_MEMLEAK)
+#include "osk/include/mali_osk_mem_wrappers.h"
+#endif
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _OSK_MEM_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _OSK_POWER_H_
+#define _OSK_POWER_H_
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @addtogroup oskpower Power management
+ *
+ * Some OS need to approve power state changes to a device, either to
+ * control power to a bus the device resides on, or to give the OS
+ * power manager the chance to see if the power state change is allowed
+ * for the current OS power management policy.
+ *
+ * @{
+ */
+
+/**
+ * @brief Request and perform a change in the power state of the device
+ *
+ * A function to request the OS to perform a change the power state of a device. This
+ * function returns when the power state change has completed.
+ *
+ * This allows the OS to control the power for the bus on which the GPU device resides,
+ * and the OS power manager can verify changing the power state is allowed according to
+ * its own power management policy (the OS may have been informed that an application will
+ * make heavy use of the GPU soon). As a result of the request the OS is likely to
+ * request the GPU device driver to actually perform the power state change (in Windows
+ * CE for instance, the OS power manager will issue an IOCTL_POWER_SET to actually make
+ * the GPU device change the power state).
+ *
+ * The result of the request is either success (the GPU device driver has successfully
+ * completed the power state change for the GPU device), refused (the OS didn't allow
+ * the power state change), or failure (the GPU device driver encountered an error
+ * changing the power state).
+ *
+ * @param[in,out] info OS specific information necessary to control power to the device
+ * @param[in] state power state to switch to (off, idle, or active)
+ * @return OSK_POWER_REQUEST_FINISHED when the driver successfully completed the power
+ * state change for the device, OSK_POWER_REQUEST_FAILED when it failed, or
+ * OSK_POWER_REQUEST_REFUSED when the OS didn't allow the power state change.
+ */
+OSK_STATIC_INLINE osk_power_request_result osk_power_request(osk_power_info *info, osk_power_state state) CHECK_RESULT;
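+
+/* An illustrative sketch (not part of the original header): requesting that
+ * the device become active before submitting work. The osk_power_state value
+ * name below is an assumption about the platform layer.
+ *
+ * @code
+ * switch (osk_power_request(&info, OSK_POWER_STATE_ACTIVE))
+ * {
+ *     case OSK_POWER_REQUEST_FINISHED:
+ *         break;                      // device is powered, safe to submit work
+ *     case OSK_POWER_REQUEST_REFUSED:
+ *         return OSK_ERR_ACCESS;      // OS policy denied the change
+ *     case OSK_POWER_REQUEST_FAILED:
+ *     default:
+ *         return OSK_ERR_FAIL;        // driver failed to change the state
+ * }
+ * @endcode
+ */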
+
+/* @} */ /* end group oskpower */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+/* pull in the arch header with the implementation */
+#include <osk/mali_osk_arch_power.h>
+
+#endif /* _OSK_POWER_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_time.h
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_TIME_H_
+#define _OSK_TIME_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @addtogroup osktime Time
+ *
+ * A set of time functions based on the OS tick timer which allow
+ * - retrieving the current tick count
+ * - determining if a deadline has passed
+ * - calculating the elapsed time between two tick counts
+ * - converting a time interval to a tick count
+ *
+ * Used in combination with scheduling timers, or determining duration of certain actions (timeouts, profiling).
+ * @{
+ */
+
+/**
+ * @brief Get current tick count
+ *
+ * Retrieves the current tick count in the OS native resolution. Use in
+ * combination with osk_time_after() to determine if current time has
+ * passed a deadline, or to determine the elapsed time with osk_time_tickstoms()
+ * and osk_time_after().
+ *
+ * The tick count resolution is 32-bit or higher, supporting a 1000Hz tick count
+ * of 2^32 / (1000*60*60*24) ~= 49.7 days before the counter wraps.
+ *
+ * @return current tick count
+ */
+OSK_STATIC_INLINE osk_ticks osk_time_now(void) CHECK_RESULT;
+
+/**
+ * @brief Convert milliseconds to ticks
+ *
+ * Converts \a ms milliseconds to ticks.
+ *
+ * Intended use of this function is to convert small timeout periods or
+ * calculate nearby deadlines (e.g. osk_time_now() + osk_time_mstoticks(50)).
+ *
+ * Supports converting a period of up to 2^32 / (1000*60*60*24) ~= 49.7 days
+ * in the case of a 1000Hz OS tick timer.
+ *
+ * @param[in] ms number of milliseconds to convert to ticks
+ * @return number of ticks in \a ms milliseconds
+ */
+OSK_STATIC_INLINE u32 osk_time_mstoticks(u32 ms) CHECK_RESULT;
+
+/**
+ * @brief Calculate elapsed time
+ *
+ * Calculates elapsed time in milliseconds between tick \a ticka and \a tickb,
+ * taking into account that the tick counter may have wrapped. Note that
+ * \a tickb must be later than \a ticka in time.
+ *
+ * @param[in] ticka a tick count value (as returned from osk_time_now())
+ * @param[in] tickb a tick count value (as returned from osk_time_now())
+ * @return elapsed time in milliseconds
+ */
+OSK_STATIC_INLINE u32 osk_time_elapsed(osk_ticks ticka, osk_ticks tickb) CHECK_RESULT;
+
+/**
+ * @brief Determines which tick count is later in time
+ *
+ * Determines if \a ticka comes after \a tickb in time. Handles the case where
+ * the tick counter may have wrapped.
+ *
+ * Intended use of this function is to determine if a deadline has passed
+ * or to determine how the difference between two tick count values should
+ * be calculated.
+ *
+ * @param[in] ticka a tick count value (as returned from osk_time_now())
+ * @param[in] tickb a tick count value (as returned from osk_time_now())
+ * @return MALI_TRUE when \a ticka is after \a tickb in time.
+ * @return MALI_FALSE when \a tickb is after \a ticka in time.
+ */
+OSK_STATIC_INLINE mali_bool osk_time_after(osk_ticks ticka, osk_ticks tickb) CHECK_RESULT;
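+
+/* An illustrative sketch (not part of the original header): polling with a
+ * 50 ms deadline built from the primitives above; device_ready is hypothetical.
+ *
+ * @code
+ * osk_ticks start = osk_time_now();
+ * osk_ticks deadline = start + osk_time_mstoticks(50);
+ *
+ * while (!device_ready())
+ * {
+ *     if (osk_time_after(osk_time_now(), deadline))
+ *     {
+ *         return OSK_ERR_FAIL;                    // timed out
+ *     }
+ * }
+ * elapsed_ms = osk_time_elapsed(start, osk_time_now());
+ * @endcode
+ */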
+
+/**
+ * @brief Retrieve current "wall clock" time
+ *
+ * This function returns the current time in a format that userspace can also
+ * produce and allows direct comparison of events in the kernel with events
+ * that userspace controls.
+ *
+ * @param[out] ts an osk_timeval structure that receives the current time
+ */
+OSK_STATIC_INLINE void osk_gettimeofday(osk_timeval *ts);
+
+/* @} */ /* end group osktime */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+/* pull in the arch header with the implementation */
+#include <osk/mali_osk_arch_time.h>
+
+#ifdef __cplusplus
+}
+#endif
+
+
+#endif /* _OSK_TIME_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_timers.h
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_TIMERS_H_
+#define _OSK_TIMERS_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @addtogroup osktimers Timers
+ *
+ * A set of functions to control a one-shot timer. Timer expirations are
+ * set relative to current time at millisecond resolution. A minimum time
+ * period of 1 millisecond needs to be observed -- immediate firing of
+ * timers is not supported. A user-supplied function is called with a
+ * user-supplied argument when the timer expires. The expiration time
+ * of a timer cannot be changed with osk_timer_start() once started - a timer
+ * first needs to be stopped before it can be started again with another
+ * expiration time, though osk_timer_modify() may adjust a pending timer.
+ *
+ * Examples of use: watchdog timeout on job execution duration,
+ * a job progress checker, power management profile timeouts.
+ *
+ * @{
+ */
+
+/**
+ * @brief Initializes a timer
+ *
+ * Initializes a timer. The timer is not started yet.
+ *
+ * @note For timers created on the stack, @ref osk_timer_on_stack_init() should be used.
+ *
+ * The timer may be reinitialized but only after having called osk_timer_term()
+ * on \a tim.
+ *
+ * It is a programming error to pass a NULL pointer for \a tim. This will raise
+ * an assertion in debug builds.
+ *
+ * @param[out] tim an osk timer object to initialize
+ * @return OSK_ERR_NONE on success. Any other value indicates failure.
+ */
+OSK_STATIC_INLINE osk_error osk_timer_init(osk_timer * const tim) CHECK_RESULT;
+
+/**
+ * @brief Initializes a timer on stack
+ *
+ * Initializes a timer created on stack. The timer is not started yet.
+ *
+ * The timer may be reinitialized but only after having called osk_timer_term()
+ * on \a tim.
+ *
+ * It is a programming error to pass a NULL pointer for \a tim. This will raise
+ * an assertion in debug builds.
+ *
+ * @param[out] tim an osk timer object to initialize
+ * @return OSK_ERR_NONE on success. Any other value indicates failure.
+ */
+OSK_STATIC_INLINE osk_error osk_timer_on_stack_init(osk_timer * const tim) CHECK_RESULT;
+
+/**
+ * @brief Starts a timer
+ *
+ * Starts a timer. When the timer expires in \a delay milliseconds, the
+ * registered callback function will be called with the user supplied
+ * argument.
+ *
+ * The callback needs to be registered with osk_timer_callback_set()
+ * at least once during the lifetime of timer \a tim and before starting
+ * the timer.
+ *
+ * You cannot start a timer that has been started already. A timer needs
+ * to be stopped before starting it again.
+ *
+ * A timer cannot expire immediately. The minimum \a delay value is 1.
+ *
+ * A timer may fail to start and it is important to check the result of this
+ * function to prevent waiting for a callback that will never get called.
+ *
+ * It is a programming error to pass a NULL pointer for \a tim or to specify
+ * 0 for \a delay. This will raise an assertion in debug builds.
+ *
+ * @param[in] tim an initialized osk timer object
+ * @param[in] delay timer expiration in milliseconds, at least 1.
+ * @return OSK_ERR_NONE on success. Any other value indicates failure.
+ */
+OSK_STATIC_INLINE osk_error osk_timer_start(osk_timer *tim, u32 delay) CHECK_RESULT;
+
+/**
+ * @brief Starts a timer using a high-resolution parameter
+ *
+ * This is identical to osk_timer_start(), except that the argument is
+ * expressed in nanoseconds.
+ *
+ * @note whilst the parameter is high-resolution, the actual resolution of the
+ * timer may be much more coarse than nanoseconds. In this case, \a delay_ns
+ * will be rounded up to the timer resolution.
+ *
+ * It is a programming error to pass a NULL pointer for \a tim or to specify
+ * 0 for \a delay_ns. This will raise an assertion in debug builds.
+ *
+ * @param[in] tim an initialized osk timer object
+ * @param[in] delay_ns timer expiration in nanoseconds, at least 1.
+ * @return OSK_ERR_NONE on success. Any other value indicates failure.
+ */
+OSK_STATIC_INLINE osk_error osk_timer_start_ns(osk_timer *tim, u64 delay_ns) CHECK_RESULT;
+
+/**
+ * @brief Modifies a timer's timeout
+ *
+ * See \a osk_timer_start for details.
+ *
+ * The only difference from \a osk_timer_start is that if the timer is already
+ * set to expire, it is modified to expire in \a new_delay milliseconds instead.
+ *
+ * It is a programming error to pass a NULL pointer for \a tim or to specify
+ * 0 for \a new_delay. This will raise an assertion in debug builds.
+ *
+ * @param[in] tim an initialized osk timer object
+ * @param[in] new_delay timer expiration in milliseconds, at least 1.
+ * @return OSK_ERR_NONE on success. Any other value indicates failure.
+ */
+OSK_STATIC_INLINE osk_error osk_timer_modify(osk_timer *tim, u32 new_delay) CHECK_RESULT;
+
+/**
+ * @brief Modifies a timer's timeout using a high-resolution parameter
+ *
+ * This is identical to osk_timer_modify(), except that the argument is
+ * expressed in nanoseconds.
+ *
+ * @note whilst the parameter is high-resolution, the actual resolution of the
+ * timer may be much more coarse than nanoseconds. In this case, \a new_delay_ns
+ * will be rounded up to the timer resolution.
+ *
+ * It is a programming error to pass a NULL pointer for \a tim or to specify
+ * 0 for \a new_delay_ns. This will raise an assertion in debug builds.
+ *
+ * @param[in] tim an initialized osk timer object
+ * @param[in] new_delay_ns timer expiration in nanoseconds, at least 1.
+ * @return OSK_ERR_NONE on success. Any other value indicates failure.
+ */
+OSK_STATIC_INLINE osk_error osk_timer_modify_ns(osk_timer *tim, u64 new_delay_ns) CHECK_RESULT;
+
+/**
+ * @brief Stops a timer
+ *
+ * Stops a timer. If the timer already expired this will have no
+ * effect. If the callback for the timer is currently executing,
+ * this function will block on its completion.
+ *
+ * A non-expired timer will have to be stopped before it can be
+ * started again with osk_timer_start().
+ *
+ * It is a programming error to pass a NULL pointer for \a tim. This will raise
+ * an assertion in debug builds.
+ *
+ * @param[in] tim an initialized osk timer object
+ */
+OSK_STATIC_INLINE void osk_timer_stop(osk_timer *tim);
+
+/**
+ * @brief Registers a callback function with a timer
+ *
+ * Registers a callback function to be called when the timer expires.
+ * The callback function is called with the provided \a data argument.
+ * The timer should be stopped when registering the callback.
+ *
+ * The code executing within the timer callback is limited, as it is
+ * assumed to execute in an IRQ context:
+ * - Access to user space is not allowed - there is no process context
+ * - It is not allowed to call any function that may block
+ * - Only spinlocks or atomics may be used to access shared data structures
+ *
+ * If a timer requires more work to be done than can be achieved in an IRQ
+ * context, then it should defer the work to an OSK workqueue.
+ *
+ * It is a programming error to pass a NULL pointer for \a tim or \a callback.
+ * This will raise an assertion in debug builds. NULL is allowed for \a data.
+ *
+ * @param[in] tim an initialized osk timer object
+ * @param[in] callback timer callback function
+ * @param[in] data argument to pass to timer callback
+ */
+OSK_STATIC_INLINE void osk_timer_callback_set(osk_timer *tim, osk_timer_callback callback, void *data);
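+
+/*
+ * Informative example: a minimal usage sketch of the timer API above. The
+ * names my_timer_cb, my_timer and MY_TIMEOUT_MS are hypothetical; only the
+ * osk_timer_* calls are part of this API, and my_timer is assumed to have
+ * been initialized beforehand with the corresponding init call.
+ *
+ * @code
+ * static void my_timer_cb(void *data)
+ * {
+ *     // Runs in IRQ context: must not block or access user space.
+ *     // Defer any heavy lifting to an OSK workqueue.
+ * }
+ *
+ * osk_timer_callback_set(&my_timer, my_timer_cb, NULL);
+ * if (OSK_ERR_NONE == osk_timer_start(&my_timer, MY_TIMEOUT_MS))
+ * {
+ *     // ... later, before termination:
+ *     osk_timer_stop(&my_timer);
+ * }
+ * osk_timer_term(&my_timer);
+ * @endcode
+ */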
+
+/**
+ * @brief Terminates a timer
+ *
+ * Frees any resources allocated for a timer. A timer needs to be
+ * stopped with osk_timer_stop() before it can be terminated.
+ *
+ * @note For timers created on the stack, @ref osk_timer_on_stack_term() should be used.
+ *
+ * A timer may be reinitialized after it has been terminated.
+ *
+ * It is a programming error to pass a NULL pointer for \a tim. This will raise
+ * an assertion in debug builds.
+ *
+ * @param[in] tim an initialized osk timer object
+ */
+OSK_STATIC_INLINE void osk_timer_term(osk_timer *tim);
+
+/**
+ * @brief Terminates a timer on stack
+ *
+ * Frees any resources allocated for a timer created on the stack. A timer needs
+ * to be stopped with osk_timer_stop() before it can be terminated.
+ *
+ * A timer may be reinitialized after it has been terminated.
+ *
+ * It is a programming error to pass a NULL pointer for \a tim. This will raise
+ * an assertion in debug builds.
+ *
+ * @param[in] tim an initialized osk timer object
+ */
+OSK_STATIC_INLINE void osk_timer_on_stack_term(osk_timer *tim);
+
+/* @} */ /* end group osktimers */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+/* pull in the arch header with the implementation */
+#include <osk/mali_osk_arch_timers.h>
+
+#ifdef __cplusplus
+}
+#endif
+
+
+#endif /* _OSK_TIMERS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _OSK_TYPES_H_
+#define _OSK_TYPES_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+typedef enum osk_error
+{
+ OSK_ERR_NONE, /**< Success */
+ OSK_ERR_FAIL, /**< Unclassified failure */
+ OSK_ERR_MAP, /**< Memory mapping operation failed */
+ OSK_ERR_ALLOC, /**< Memory allocation failed */
+ OSK_ERR_ACCESS /**< Permissions to access an object failed */
+} osk_error;
+
+#define OSK_STATIC_INLINE static __inline
+#define OSK_BITS_PER_LONG (8 * sizeof(unsigned long))
+#define OSK_ULONG_MAX (~0UL)
+
+/**
+ * OSK workqueue flags
+ *
+ * Flags specifying the kind of workqueue to create. Flags can be combined.
+ */
+
+/**
+ * By default a work queue guarantees non-reentrance on the same CPU.
+ * When the OSK_WORKQ_NON_REENTRANT flag is set, this guarantee is
+ * extended to all CPUs.
+ */
+#define OSK_WORKQ_NON_REENTRANT (1 << 0)
+/**
+ * Work units submitted to a high priority queue start execution as soon
+ * as resources are available.
+ */
+#define OSK_WORKQ_HIGH_PRIORITY (1 << 1)
+/**
+ * Ensures there is always a thread available to run tasks on this queue. This
+ * flag should be set if the work queue is involved in reclaiming memory when
+ * its work units run.
+ */
+#define OSK_WORKQ_RESCUER (1 << 2)
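+
+/*
+ * Informative example: the flags above may be OR'ed together when creating a
+ * queue with osk_workq_init(). The queue name used here is hypothetical, and
+ * wq is an osk_workq declared elsewhere.
+ *
+ * @code
+ * if (OSK_ERR_NONE != osk_workq_init(&wq, "mali-wq",
+ *                                    OSK_WORKQ_NON_REENTRANT | OSK_WORKQ_HIGH_PRIORITY))
+ * {
+ *     // handle failure
+ * }
+ * @endcode
+ */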
+
+/**
+ * Prototype for a function called when a OSK timer expires. See osk_timer_callback_set()
+ * that registers the callback function with a OSK timer.
+ */
+typedef void (*osk_timer_callback)(void *);
+
+typedef enum osk_power_state
+{
+ OSK_POWER_STATE_OFF, /**< Device is off */
+ OSK_POWER_STATE_IDLE, /**< Device is idle */
+ OSK_POWER_STATE_ACTIVE /**< Device is active */
+} osk_power_state;
+
+typedef enum osk_power_request_result
+{
+ OSK_POWER_REQUEST_FINISHED, /**< The driver successfully completed the power state change for the device */
+ OSK_POWER_REQUEST_FAILED, /**< The driver for the device encountered an error changing the power state */
+ OSK_POWER_REQUEST_REFUSED /**< The OS didn't allow the power state change for the device */
+} osk_power_request_result;
+
+
+#include <osk/mali_osk_arch_types.h>
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _OSK_TYPES_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_waitq.h
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_WAITQ_H_
+#define _OSK_WAITQ_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @addtogroup oskwaitq Wait queue
+ *
+ * A waitqueue is used to wait for a specific condition to become true.
+ * The waitqueue has a flag that needs to be set when the condition
+ * becomes true and cleared when the condition becomes false.
+ *
+ * Threads wait for the specific condition to become true by calling
+ * osk_waitq_wait(). If the condition is already true osk_waitq_wait()
+ * will return immediately.
+ *
+ * When a thread causes the specific condition to become true, it needs
+ * to set the waitqueue flag with osk_waitq_set(), which will wakeup
+ * all threads waiting on the waitqueue.
+ *
+ * When a thread causes the specific condition to become false, it needs
+ * to clear the waitqueue flag with osk_waitq_clear().
+ *
+ * @{
+ */
+
+/**
+ * @brief Initialize a wait queue
+ *
+ * Initializes a waitqueue. The waitqueue flag starts cleared, i.e. the
+ * specific condition associated with the waitqueue is assumed to be false.
+ *
+ * @param[out] wq wait queue to initialize
+ * @return OSK_ERR_NONE on success. Any other value indicates failure.
+ */
+OSK_STATIC_INLINE osk_error osk_waitq_init(osk_waitq * const wq) CHECK_RESULT;
+
+/**
+ * @brief Wait until waitqueue flag is set
+ *
+ * Blocks until a thread signals the waitqueue that the condition has
+ * become true. Use osk_waitq_set() to set the waitqueue flag to signal
+ * the condition has become true. If the condition is already true,
+ * this function will return immediately.
+ *
+ * @param[in] wq initialized waitqueue
+ */
+OSK_STATIC_INLINE void osk_waitq_wait(osk_waitq *wq);
+
+/**
+ * @brief Set the waitqueue flag
+ *
+ * Signals the waitqueue that the condition associated with the waitqueue
+ * has become true. All threads on the waitqueue will be woken up. The
+ * waitqueue flag is set.
+ *
+ * @param[in] wq initialized waitqueue
+ */
+OSK_STATIC_INLINE void osk_waitq_set(osk_waitq *wq);
+
+/**
+ * @brief Clear the waitqueue flag
+ *
+ * Signals the waitqueue that the condition associated with the waitqueue
+ * has become false. The waitqueue flag is reset (cleared).
+ *
+ * @param[in] wq initialized waitqueue
+ */
+OSK_STATIC_INLINE void osk_waitq_clear(osk_waitq *wq);
+
+/**
+ * @brief Terminate a wait queue
+ *
+ * Frees any resources allocated for a waitqueue.
+ *
+ * No threads are allowed to be waiting on the waitqueue when terminating
+ * the waitqueue. If there are waiting threads, they should be woken up
+ * first by setting the waitqueue flag with osk_waitq_set(), after which
+ * they must cease using the waitqueue.
+ *
+ * @param[in] wq initialized waitqueue
+ */
+OSK_STATIC_INLINE void osk_waitq_term(osk_waitq *wq);
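+
+/*
+ * Informative example: a minimal sketch of the waitqueue protocol described
+ * above. Only the osk_waitq_* calls are part of this API.
+ *
+ * @code
+ * osk_waitq wq;
+ * if (OSK_ERR_NONE == osk_waitq_init(&wq))
+ * {
+ *     // Consumer thread: blocks until the flag is set
+ *     osk_waitq_wait(&wq);
+ *
+ *     // Producer thread, once the condition becomes true:
+ *     osk_waitq_set(&wq);   // wakes all waiters
+ *
+ *     // Producer thread, once the condition becomes false again:
+ *     osk_waitq_clear(&wq);
+ *
+ *     osk_waitq_term(&wq);  // no threads may still be waiting here
+ * }
+ * @endcode
+ */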
+
+/* @} */ /* end group oskwaitq */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+/* pull in the arch header with the implementation */
+#include <osk/mali_osk_arch_waitq.h>
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _OSK_WAITQ_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _OSK_WORKQ_H
+#define _OSK_WORKQ_H
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif
+
+/* pull in the arch header with the implementation */
+#include <osk/mali_osk_arch_workq.h>
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @addtogroup base_osk_api
+ * @{
+ */
+
+/**
+ * @addtogroup oskworkqueue Work queue
+ *
+ * A workqueue is a queue of functions that will be invoked by one or more worker threads
+ * at some future time. Functions are invoked in FIFO order by each worker thread. However,
+ * overall execution of work is <b>not guaranteed to occur in FIFO order</b>, because two or
+ * more worker threads may be processing items concurrently from the same work queue.
+ *
+ * Each function that is submitted to the workqueue needs to be represented by a work unit
+ * (osk_workq_work). When a function is invoked, a pointer to the work unit is passed to the
+ * invoked function. A work unit needs to be embedded within the object that the invoked
+ * function needs to operate on, so that the invoked function can determine a pointer to the
+ * object it needs to operate on.
+ *
+ * @{
+ */
+
+/**
+ * @brief Initialize a work queue
+ *
+ * Initializes an empty work queue. One or more threads within the system will
+ * be servicing the work units submitted to the work queue.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL) for the
+ * wq parameter. Passing NULL will assert in debug builds.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL) for the
+ * name parameter. Passing NULL will assert in debug builds.
+ *
+ * It is a programming error to pass a value for flags other than a combination
+ * of the OSK_WORKQ_ constants or 0. Doing so will assert in debug builds.
+ *
+ * @param[out] wq workqueue to initialize
+ * @param[in] name The name for the queue (may be visible in the process list)
+ * @param[in] flags flags specifying behavior of work queue, see OSK_WORKQ_ constants.
+ * @return OSK_ERR_NONE on success. Any other value indicates failure.
+ */
+OSK_STATIC_INLINE osk_error osk_workq_init(osk_workq * const wq, const char *name, u32 flags) CHECK_RESULT;
+
+/**
+ * @brief Terminate a work queue
+ *
+ * Stops accepting new work and waits until the work queue is empty and
+ * all work has been completed, then frees any resources allocated for the workqueue.
+ *
+ * @param[in] wq initialized workqueue
+ */
+OSK_STATIC_INLINE void osk_workq_term(osk_workq *wq);
+
+/**
+ * @brief (Re)initialize a work object
+ *
+ * Sets up a work object to call the given function pointer.
+ * See \a osk_workq_work_init_on_stack if the work object
+ * is allocated on the stack.
+ * The function \a fn needs to match the prototype: void fn(osk_workq_work *).
+ *
+ * It is a programming error to pass an invalid pointer (including NULL) for
+ * any parameter. Passing NULL will assert in debug builds.
+ *
+ * @param[out] wk work unit to be initialized
+ * @param[in] fn function to be invoked at some future time
+ */
+OSK_STATIC_INLINE void osk_workq_work_init(osk_workq_work * const wk, osk_workq_fn fn);
+
+/**
+ * @brief (Re)initialize a work object allocated on the stack
+ *
+ * Sets up a work object to call the given function pointer.
+ * Special version needed for work objects on the stack.
+ * The function \a fn needs to match the prototype: void fn(osk_workq_work *).
+ *
+ * It is a programming error to pass an invalid pointer (including NULL) for
+ * any parameter. Passing NULL will assert in debug builds.
+ *
+ * @param[out] wk work unit to be initialized
+ * @param[in] fn function to be invoked at some future time
+ */
+OSK_STATIC_INLINE void osk_workq_work_init_on_stack(osk_workq_work * const wk, osk_workq_fn fn);
+
+
+/**
+ * @brief Submit work to a work queue
+ *
+ * Adds work (a work unit) to a work queue.
+ *
+ * The work unit (osk_workq_work) represents a function \a fn to be invoked at some
+ * future time. The invoked function \a fn is set via \a osk_workq_work_init or
+ * \a osk_workq_work_init_on_stack if the work object resides on the stack.
+ *
+ * The work unit should be embedded within the object that the invoked function needs
+ * to operate on, so that the invoked function can determine a pointer to the object
+ * it needs to operate on.
+ *
+ * osk_workq_submit() must be callable from IRQ context (it may neither block nor access user space).
+ *
+ * The work unit memory \a wk needs to remain allocated until the function \a fn has been invoked.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL) for
+ * any parameter. Passing NULL will assert in debug builds.
+ *
+ * @param[in] wq initialized workqueue
+ * @param[out] wk initialized work object to submit
+ */
+OSK_STATIC_INLINE void osk_workq_submit(osk_workq *wq, osk_workq_work * const wk);
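+
+/*
+ * Informative example: embedding a work unit in the object it operates on,
+ * as described above. The names my_object, my_work_fn and CONTAINER_OF are
+ * hypothetical (CONTAINER_OF stands in for whatever pointer-recovery helper
+ * the surrounding code provides), and wq is an initialized osk_workq.
+ *
+ * @code
+ * struct my_object
+ * {
+ *     int payload;
+ *     osk_workq_work work; // embedded work unit
+ * };
+ *
+ * static void my_work_fn(osk_workq_work *wk)
+ * {
+ *     struct my_object *obj = CONTAINER_OF(wk, struct my_object, work);
+ *     // ... operate on obj->payload ...
+ * }
+ *
+ * // obj must stay allocated until my_work_fn has run
+ * osk_workq_work_init(&obj->work, my_work_fn);
+ * osk_workq_submit(&wq, &obj->work); // safe from IRQ context
+ * @endcode
+ */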
+
+/**
+ * @brief Flush a work queue
+ *
+ * All work units submitted to \a wq before this call will be complete by the
+ * time this function returns. The work units are guaranteed to be completed
+ * across the pool of worker threads.
+ *
+ * However, if a thread submits new work units to \a wq during the flush, then
+ * this function will not prevent those work units from running, nor will it
+ * guarantee to wait until after those work units are complete.
+ *
+ * Providing that no other thread attempts to submit work units to \a wq during
+ * or after this call, then it is guaranteed that no worker thread is executing
+ * any work from \a wq.
+ *
+ * @note The caller must ensure that they hold no locks that are also obtained
+ * by any work units on \a wq. Otherwise, a deadlock \b will occur.
+ *
+ * @note In addition, you must never call osk_workq_flush() from within any
+ * work unit, since this would cause a deadlock. Whilst it would normally be
+ * possible for a work unit to flush a different work queue, this may still
+ * cause a deadlock when the underlying implementation is using a single
+ * work queue for all work queues in the system.
+ *
+ * It is a programming error to pass an invalid pointer (including NULL) for
+ * any parameter. Passing NULL will assert in debug builds.
+ *
+ * @param[in] wq initialized workqueue to flush
+ */
+OSK_STATIC_INLINE void osk_workq_flush(osk_workq *wq);
+
+/** @} */ /* end group oskworkqueue */
+
+/** @} */ /* end group base_osk_api */
+
+/** @} */ /* end group base_api */
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _OSK_WORKQ_H */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#ifndef _OSK_H_
+#define _OSK_H_
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @defgroup base_osk_api Kernel-side OSK APIs
+ */
+
+/** @} */ /* end group base_api */
+
+#include "include/mali_osk_types.h"
+#include "include/mali_osk_debug.h"
+#if (1 == MALI_BASE_TRACK_MEMLEAK)
+#include "include/mali_osk_failure.h"
+#endif
+#include "include/mali_osk_math.h"
+#include "include/mali_osk_lists.h"
+#include "include/mali_osk_lock_order.h"
+#include "include/mali_osk_locks.h"
+#include "include/mali_osk_atomics.h"
+#include "include/mali_osk_timers.h"
+#include "include/mali_osk_time.h"
+#include "include/mali_osk_bitops.h"
+#include "include/mali_osk_workq.h"
+#include "include/mali_osk_mem.h"
+#include "include/mali_osk_low_level_mem.h"
+#include "include/mali_osk_waitq.h"
+#include "include/mali_osk_power.h"
+#include "include/mali_osk_credentials.h"
+
+#endif /* _OSK_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_common.h
+ * This header defines macros shared by the Common Layer public interfaces for
+ * all utilities, to ensure they are available even if a client does not include
+ * the convenience header mali_osk.h.
+ */
+
+#ifndef _OSK_COMMON_H_
+#define _OSK_COMMON_H_
+
+#include <osk/include/mali_osk_debug.h>
+
+/**
+ * @private
+ * Helper for OSK_CHECK_PTR: asserts in debug builds when \a ptr is NULL and
+ * always returns MALI_FALSE, so the checked pointer is passed straight through.
+ */
+static INLINE mali_bool oskp_ptr_is_null(const void* ptr)
+{
+ CSTD_UNUSED(ptr);
+ OSK_ASSERT(ptr != NULL);
+ return MALI_FALSE;
+}
+
+/**
+ * @brief Check if a pointer is NULL: if so, an assert is triggered; otherwise the pointer itself is returned.
+ *
+ * @param [in] ptr Pointer to test
+ *
+ * @return @c ptr if it's not NULL.
+ */
+#define OSK_CHECK_PTR(ptr)\
+ (oskp_ptr_is_null(ptr) ? NULL : ptr)
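+
+/*
+ * Informative example (my_func and ptr are hypothetical):
+ *
+ * @code
+ * my_func(OSK_CHECK_PTR(ptr)); // asserts in debug builds if ptr is NULL
+ * @endcode
+ */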
+
+#endif /* _OSK_COMMON_H_ */
--- /dev/null
+obj-y += linux/
+obj-y += common/
--- /dev/null
+ccflags-$(CONFIG_VITHAR) += -DMALI_DEBUG=0 -DMALI_HW_TYPE=2 \
+-DMALI_USE_UMP=0 -DMALI_HW_VERSION=r0p0 -DMALI_BASE_TRACK_MEMLEAK=0 \
+-DMALI_ANDROID=1 -DMALI_ERROR_INJECT_ON=0 -DMALI_NO_MALI=0 -DMALI_BACKEND_KERNEL=1 \
+-DMALI_FAKE_PLATFORM_DEVICE=1 -DMALI_MOCK_TEST=0 -DMALI_KERNEL_TEST_API=0 \
+-DMALI_INFINITE_CACHE=0 -DMALI_LICENSE_IS_GPL=1 -DMALI_PLATFORM_CONFIG=exynos5 \
+-DMALI_UNIT_TEST=0 -DMALI_GATOR_SUPPORT=0 -DUMP_LICENSE_IS_GPL=1 \
+-DUMP_SVN_REV_STRING="\"dummy\"" -DMALI_RELEASE_NAME="\"dummy\""
+
+ROOTDIR = $(src)/../../..
+
+ccflags-$(CONFIG_VITHAR) += -I$(ROOTDIR)/kbase
+ccflags-$(CONFIG_VITHAR) += -I$(ROOTDIR)
+
+ccflags-y += -I$(ROOTDIR) -I$(ROOTDIR)/include -I$(ROOTDIR)/osk/src/linux/include -I$(ROOTDIR)/uk/platform_dummy
+ccflags-y += -I$(ROOTDIR)/kbase/midg_gpus/r0p0
+
+
+obj-y += mali_osk_bitops_cmn.o mali_osk_debug_cmn.o
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+
+#include <osk/mali_osk.h>
+
+unsigned long osk_bitarray_find_first_zero_bit(const unsigned long *addr, unsigned long maxbit)
+{
+ unsigned long total;
+
+ OSK_ASSERT(NULL != addr);
+
+ for ( total = 0; total < maxbit; total += OSK_BITS_PER_LONG, ++addr )
+ {
+ if (OSK_ULONG_MAX != *addr)
+ {
+ int result;
+ result = oskp_find_first_zero_bit( *addr );
+ /* non-negative signifies the bit was found */
+ if ( result >= 0 )
+ {
+ total += (unsigned long)result;
+ break;
+ }
+ }
+ }
+
+ /* Now check if we reached maxbit or above */
+ if ( total >= maxbit )
+ {
+ total = maxbit;
+ }
+
+ return total; /* either the found bit nr, or maxbit if not found */
+}
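+
+/*
+ * Informative example (the bitmap below is hypothetical): with
+ *
+ *   unsigned long map[2] = { ~0UL, 0UL };
+ *
+ * every bit of map[0] is set, so
+ * osk_bitarray_find_first_zero_bit(map, 2 * OSK_BITS_PER_LONG) skips the
+ * first word and returns OSK_BITS_PER_LONG; if no zero bit exists below
+ * maxbit, maxbit itself is returned.
+ */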
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file osk/src/common/mali_osk_compile_asserts.h
+ *
+ * Private definitions of compile time asserts.
+ **/
+
+/**
+ * Unreachable function needed to check values at compile-time, in both debug
+ * and release builds
+ */
+void oskp_cmn_compile_time_assertions(void);
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+
+#include <osk/mali_osk.h>
+#include "mali_osk_compile_asserts.h"
+
+/**
+ * @brief Contains the module names (modules in the same order as for the osk_module enumeration)
+ * @sa oskp_module_to_str
+ */
+static const char* CONST oskp_str_modules[] =
+{
+ "UNKNOWN", /**< Unknown module */
+ "OSK", /**< OSK */
+ "UKK", /**< UKK */
+ "BASE_MMU", /**< Base MMU */
+ "BASE_JD", /**< Base Job Dispatch */
+ "BASE_JM", /**< Base Job Manager */
+ "BASE_CORE", /**< Base Core */
+ "BASE_MEM", /**< Base Memory */
+ "BASE_EVENT", /**< Base Event */
+ "BASE_CTX", /**< Base Context */
+ "BASE_PM", /**< Base Power Management */
+ "UMP", /**< UMP */
+};
+
+#define MODULE_STRING_ARRAY_SIZE (sizeof(oskp_str_modules)/sizeof(oskp_str_modules[0]))
+
+INLINE void oskp_cmn_compile_time_assertions(void)
+{
+ /*
+ * If this assert triggers you have forgotten to update oskp_str_modules
+ * when you added a module to the osk_module enum
+ * */
+ CSTD_COMPILE_TIME_ASSERT(OSK_MODULES_ALL == MODULE_STRING_ARRAY_SIZE );
+}
+
+const char* oskp_module_to_str(const osk_module module)
+{
+ if( MODULE_STRING_ARRAY_SIZE <= module)
+ {
+ return "";
+ }
+ return oskp_str_modules[module];
+}
+
+void oskp_validate_format_string(const char *format, ...)
+{
+#if MALI_DEBUG
+ char c;
+ static const char *supported[] =
+ {
+ "d", "ld", "lld",
+ "x", "lx", "llx",
+ "X", "lX", "llX",
+ "u", "lu", "llu",
+ "p",
+ "c",
+ "s",
+ };
+ static const unsigned char sizes[] = { 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 1, 1 };
+
+ unsigned int i;
+
+ /* %[flags][width][.precision][length]specifier */
+
+ while ( (c = *format++) )
+ {
+ if (c == '%')
+ {
+ c = *format;
+
+ if (c == '\0')
+ {
+ /* Unsupported format */
+ OSK_PRINT_WARN(OSK_OSK, "OSK Format specification not complete (%% not followed by anything)\n");
+ return;
+ }
+ else if (c != '%')
+ {
+ /* Skip to the [length]specifier part assuming it starts with
+ * an alphabetic character and flags, width, precision do not
+ * contain alphabetic characters.
+ */
+ do
+ {
+ if ((c >= 'a' && c <= 'z') || c == 'X')
+ {
+ /* Match supported formats with current position in format string */
+ for (i = 0; i < NELEMS(supported); i++)
+ {
+ if (strncmp(format, supported[i], sizes[i]) == 0)
+ {
+ /* Supported format */
+ break;
+ }
+ }
+
+ if (i == NELEMS(supported))
+ {
+ /* Unsupported format */
+ OSK_PRINT_WARN(OSK_OSK, "OSK Format string specifier not supported (starting at '%s')\n", format);
+ return;
+ }
+
+ /* Start looking for next '%' */
+ break;
+ }
+ } while ( (c = *++format) );
+ }
+ }
+ }
+#else
+ CSTD_UNUSED(format);
+#endif
+}
--- /dev/null
+ccflags-$(CONFIG_VITHAR) += -DMALI_DEBUG=0 -DMALI_HW_TYPE=2 \
+ -DMALI_USE_UMP=0 -DMALI_HW_VERSION=r0p0 -DMALI_BASE_TRACK_MEMLEAK=0 \
+ -DMALI_ANDROID=1 -DMALI_ERROR_INJECT_ON=0 -DMALI_NO_MALI=0 -DMALI_BACKEND_KERNEL=1 \
+ -DMALI_FAKE_PLATFORM_DEVICE=1 -DMALI_MOCK_TEST=0 -DMALI_KERNEL_TEST_API=0 \
+ -DMALI_INFINITE_CACHE=0 -DMALI_LICENSE_IS_GPL=1 -DMALI_PLATFORM_CONFIG=exynos5 \
+ -DMALI_UNIT_TEST=0 -DMALI_GATOR_SUPPORT=0 -DUMP_LICENSE_IS_GPL=1 \
+ -DUMP_SVN_REV_STRING="\"dummy\"" -DMALI_RELEASE_NAME="\"dummy\""
+
+ROOTDIR = $(src)/../../..
+
+ccflags-$(CONFIG_VITHAR) += -I$(ROOTDIR)/kbase
+ccflags-$(CONFIG_VITHAR) += -I$(ROOTDIR)
+
+ccflags-y += -I$(ROOTDIR) -I$(ROOTDIR)/include -I$(ROOTDIR)/osk/src/linux/include -I$(ROOTDIR)/uk/platform_dummy
+ccflags-y += -I$(ROOTDIR)/kbase/midg_gpus/r0p0
+
+obj-y += mali_osk_timers.o mali_osk_debug.o
--- /dev/null
+# Copyright:
+# ----------------------------------------------------------------------------
+# This confidential and proprietary software may be used only as authorized
+# by a licensing agreement from ARM Limited.
+# (C) COPYRIGHT 2010 ARM Limited, ALL RIGHTS RESERVED
+# The entire notice above must be reproduced on all authorized copies and
+# copies may only be made to the extent permitted by a licensing agreement
+# from ARM Limited.
+# ----------------------------------------------------------------------------
+#
+
+EXTRA_CFLAGS += -I$(ROOT) -I$(ROOT)/osk/src/linux/include
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_ATOMICS_H_
+#define _OSK_ARCH_ATOMICS_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+OSK_STATIC_INLINE u32 osk_atomic_sub(osk_atomic * atom, u32 value)
+{
+ OSK_ASSERT(NULL != atom);
+ return atomic_sub_return(value, atom);
+}
+
+OSK_STATIC_INLINE u32 osk_atomic_add(osk_atomic * atom, u32 value)
+{
+ OSK_ASSERT(NULL != atom);
+ return atomic_add_return(value, atom);
+}
+
+OSK_STATIC_INLINE u32 osk_atomic_dec(osk_atomic * atom)
+{
+ OSK_ASSERT(NULL != atom);
+ return osk_atomic_sub(atom, 1);
+}
+
+OSK_STATIC_INLINE u32 osk_atomic_inc(osk_atomic * atom)
+{
+ OSK_ASSERT(NULL != atom);
+ return osk_atomic_add(atom, 1);
+}
+
+OSK_STATIC_INLINE void osk_atomic_set(osk_atomic * atom, u32 value)
+{
+ OSK_ASSERT(NULL != atom);
+ atomic_set(atom, value);
+}
+
+OSK_STATIC_INLINE u32 osk_atomic_get(osk_atomic * atom)
+{
+ OSK_ASSERT(NULL != atom);
+ return atomic_read(atom);
+}
+
+OSK_STATIC_INLINE u32 osk_atomic_compare_and_swap(osk_atomic * atom, u32 old_value, u32 new_value)
+{
+ OSK_ASSERT(NULL != atom);
+ return atomic_cmpxchg(atom, old_value, new_value);
+}
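+
+/*
+ * Informative example: atomic_cmpxchg() returns the value observed before the
+ * exchange, so a successful swap is detected by comparing the return value
+ * with the expected old value (the variable names below are hypothetical):
+ *
+ *   u32 seen = osk_atomic_compare_and_swap(&atom, old_value, new_value);
+ *   if (seen == old_value)
+ *   {
+ *       // the swap took effect
+ *   }
+ */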
+
+#endif /* _OSK_ARCH_ATOMICS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_BITOPS_H_
+#define _OSK_ARCH_BITOPS_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#include <linux/bitops.h>
+
+OSK_STATIC_INLINE long osk_clz(unsigned long val)
+{
+ return OSK_BITS_PER_LONG - fls_long(val);
+}
+
+OSK_STATIC_INLINE long osk_clz_64(u64 val)
+{
+ return 64 - fls64(val);
+}
+
+OSK_STATIC_INLINE int osk_count_set_bits(unsigned long val)
+{
+ /* note: __builtin_popcountl() not available in kernel */
+ int count = 0;
+ while (val)
+ {
+ count++;
+		val &= (val-1); /* clear the lowest set bit (Kernighan's method) */
+ }
+ return count;
+}
+
+#endif /* _OSK_ARCH_BITOPS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_CREDENTIALS_H_
+#define _OSK_ARCH_CREDENTIALS_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#include <linux/cred.h>
+
+OSK_STATIC_INLINE mali_bool osk_is_privileged(void)
+{
+ mali_bool is_privileged = MALI_FALSE;
+
+ /* Check if the caller is root */
+ if (current_euid() == 0)
+ {
+ is_privileged = MALI_TRUE;
+ }
+
+ return is_privileged;
+}
+
+OSK_STATIC_INLINE mali_bool osk_is_policy_realtime(void)
+{
+ int policy = current->policy;
+
+ if (policy == SCHED_FIFO || policy == SCHED_RR)
+ {
+ return MALI_TRUE;
+ }
+
+ return MALI_FALSE;
+}
+
+OSK_STATIC_INLINE void osk_get_process_priority(osk_process_priority *prio)
+{
+ /* Note that we return the current process priority.
+ * If called from a kernel thread the priority returned
+ * will be the kernel thread priority and not the user
+ * process that is currently submitting jobs to the scheduler.
+ */
+ OSK_ASSERT(prio);
+
+ if(osk_is_policy_realtime())
+ {
+ prio->is_realtime = MALI_TRUE;
+		/* NOTE: realtime priorities are in the range 0..99 (lowest to highest), so we invert
+ * the priority and scale to -20..0 to normalize the result with the NICE range
+ */
+ prio->priority = (((MAX_RT_PRIO-1) - current->rt_priority) / 5) - 20;
+ /* Realtime range returned:
+ * -20 - highest priority
+ * 0 - lowest priority
+ */
+ }
+ else
+ {
+ prio->is_realtime = MALI_FALSE;
+ prio->priority = (current->static_prio - MAX_RT_PRIO) - 20;
+ /* NICE range returned:
+ * -20 - highest priority
+ * +19 - lowest priority
+ */
+ }
+}
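+/*
+ * Informative worked example, assuming the usual Linux value MAX_RT_PRIO == 100:
+ * a realtime task with rt_priority == 99 (highest) maps to
+ * ((99 - 99) / 5) - 20 == -20, while rt_priority == 0 (lowest) maps to
+ * ((99 - 0) / 5) - 20 == -1 under integer division, i.e. close to the
+ * 0 (lowest-priority) end of the normalized realtime range.
+ */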
+
+#endif /* _OSK_ARCH_CREDENTIALS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_DEBUG_H_
+#define _OSK_ARCH_DEBUG_H_
+
+#include <malisw/mali_stdtypes.h>
+#include "mali_osk_arch_types.h"
+
+#if MALI_UNIT_TEST
+/* Kernel testing helpers */
+void osk_kernel_test_init(void);
+void osk_kernel_test_term(void);
+void osk_kernel_test_wait(void);
+void osk_kernel_test_signal(void);
+mali_bool osk_kernel_test_has_asserted(void);
+void oskp_kernel_test_exit(void);
+#endif
+
+/** Maximum number of bytes (incl. end of string character) supported in the generated debug output string */
+#define OSK_DEBUG_MESSAGE_SIZE 256
+
+/**
+ * All OSKP_ASSERT* and OSKP_PRINT_* macros will eventually call OSKP_PRINT to output messages
+ */
+void oskp_debug_print(const char *fmt, ...);
+#define OSKP_PRINT(...) oskp_debug_print(__VA_ARGS__)
+
+/**
+ * Insert a breakpoint to cause entry in an attached debugger. However, since there is
+ * no API available to trigger entry in a debugger, we dereference a NULL
+ * pointer which should cause an exception and enter a debugger.
+ */
+#define OSKP_BREAKPOINT() *(int *)0 = 0
+
+/**
+ * Quit the driver and halt.
+ */
+#define OSKP_QUIT() BUG()
+
+/**
+ * Print a backtrace
+ */
+#define OSKP_TRACE() WARN_ON(1)
+
+#define OSKP_CHANNEL_INFO ((u32)0x00000001) /**< @brief Informational output*/
+#define OSKP_CHANNEL_WARN ((u32)0x00000002) /**< @brief Warning output*/
+#define OSKP_CHANNEL_ERROR ((u32)0x00000004) /**< @brief Error output*/
+#define OSKP_CHANNEL_RAW ((u32)0x00000008) /**< @brief Raw output*/
+#define OSKP_CHANNEL_ALL ((u32)0xFFFFFFFF) /**< @brief All the channels at the same time*/
+
+/** @brief Disable the asserts tests if set to 1. Default is to enable the asserts. */
+#ifndef OSK_DISABLE_ASSERTS
+#define OSK_DISABLE_ASSERTS 0
+#endif
+
+/** @brief If set to 0, a trace containing the file, line, and function will be displayed before each message. */
+#define OSK_SKIP_TRACE 0
+
+/** @brief If different from 0, the trace will only contain the file and line. */
+#define OSK_SKIP_FUNCTION_NAME 0
+
+/** @brief Variable to set the permissions per module and per channel.
+ */
+#define OSK_MODULES_PERMISSIONS "ALL_ALL"
+
+/** @brief String terminating every message printed by the debug API */
+#define OSK_STOP_MSG "\n"
+
+/** @brief Enables support for runtime configuration if set to 1.
+ */
+#define OSK_USE_RUNTIME_CONFIG 0
+#define OSK_SIMULATE_FAILURES MALI_BASE_TRACK_MEMLEAK /**< @brief Enables simulation of failures (for testing) if non-zero */
+
+#define OSK_ACTION_IGNORE 0 /**< @brief The given message is ignored then the execution continues*/
+#define OSK_ACTION_PRINT_AND_CONTINUE 1 /**< @brief The given message is printed then the execution continues*/
+#define OSK_ACTION_PRINT_AND_BREAK 2 /**< @brief The given message is printed then a break point is triggered*/
+#define OSK_ACTION_PRINT_AND_QUIT 3 /**< @brief The given message is printed then the execution is stopped*/
+#define OSK_ACTION_PRINT_AND_TRACE 4 /**< @brief The given message and a backtrace is printed then the execution continues*/
+
+/**
+ * @def OSK_ON_INFO
+ * @brief Defines the API behavior when @ref OSK_PRINT_INFO() is called
+ * @note Must be set to one of the following values: @see OSK_ACTION_PRINT_AND_CONTINUE,
+ * @note @ref OSK_ACTION_PRINT_AND_BREAK, @see OSK_ACTION_PRINT_AND_QUIT, @see OSK_ACTION_IGNORE
+ *
+ * @def OSK_ON_WARN
+ * @brief Defines the API behavior when @see OSK_PRINT_WARN() is called
+ * @note Must be set to one of the following values: @see OSK_ACTION_PRINT_AND_CONTINUE,
+ * @note @see OSK_ACTION_PRINT_AND_BREAK, @see OSK_ACTION_PRINT_AND_QUIT, @see OSK_ACTION_IGNORE
+ *
+ * @def OSK_ON_ERROR
+ * @brief Defines the API behavior when @see OSK_PRINT_ERROR() is called
+ * @note Must be set to one of the following values: @see OSK_ACTION_PRINT_AND_CONTINUE,
+ * @note @see OSK_ACTION_PRINT_AND_BREAK, @see OSK_ACTION_PRINT_AND_QUIT, @see OSK_ACTION_IGNORE
+ *
+ * @def OSK_ON_ASSERT
+ * @brief Defines the API behavior when @see OSKP_PRINT_ASSERT() is called
+ * @note Must be set to one of the following values: @see OSK_ACTION_PRINT_AND_CONTINUE,
+ * @note @see OSK_ACTION_PRINT_AND_BREAK, @see OSK_ACTION_PRINT_AND_QUIT, @see OSK_ACTION_IGNORE
+ *
+ * @def OSK_ON_RAW
+ * @brief Defines the API behavior when @see OSKP_PRINT_RAW() is called
+ * @note Must be set to one of the following values: @see OSK_ACTION_PRINT_AND_CONTINUE,
+ * @note @see OSK_ACTION_PRINT_AND_BREAK, @see OSK_ACTION_PRINT_AND_QUIT, @see OSK_ACTION_IGNORE
+ *
+ *
+ */
+#if MALI_DEBUG
+ #define OSK_ON_INFO OSK_ACTION_IGNORE
+ #define OSK_ON_WARN OSK_ACTION_PRINT_AND_CONTINUE
+ #define OSK_ON_ASSERT OSK_ACTION_PRINT_AND_QUIT
+ #define OSK_ON_ERROR OSK_ACTION_PRINT_AND_CONTINUE
+ #define OSK_ON_RAW OSK_ACTION_PRINT_AND_CONTINUE
+#else
+ #define OSK_ON_INFO OSK_ACTION_IGNORE
+ #define OSK_ON_WARN OSK_ACTION_IGNORE
+ #define OSK_ON_ASSERT OSK_ACTION_IGNORE
+ #define OSK_ON_ERROR OSK_ACTION_PRINT_AND_CONTINUE
+ #define OSK_ON_RAW OSK_ACTION_PRINT_AND_CONTINUE
+#endif
+
+#if MALI_UNIT_TEST
+#define OSKP_KERNEL_TEST_ASSERT() oskp_kernel_test_exit()
+#else
+#define OSKP_KERNEL_TEST_ASSERT() CSTD_NOP()
+#endif
+
+/**
+ * OSK_ASSERT macros do nothing if the flag @see OSK_DISABLE_ASSERTS is set to 1
+ */
+#if OSK_DISABLE_ASSERTS
+ #define OSKP_ASSERT(expr) CSTD_NOP()
+ #define OSKP_INTERNAL_ASSERT(expr) CSTD_NOP()
+ #define OSKP_ASSERT_MSG(expr, ...) CSTD_NOP()
+#else /* OSK_DISABLE_ASSERTS */
+
+/**
+ * @def OSKP_ASSERT_MSG(expr, ...)
+ * @brief Calls @see OSKP_PRINT_ASSERT and prints the given message if @a expr is false
+ *
+ * @note This macro does nothing if the flag @see OSK_DISABLE_ASSERTS is set to 1
+ *
+ * @param expr Boolean expression
+ * @param ... Message to display when @a expr is false, as a format string followed by format arguments.
+ */
+#define OSKP_ASSERT_MSG(expr, ...)\
+ do\
+ {\
+ if(MALI_FALSE == (expr))\
+ {\
+ OSKP_PRINT_ASSERT(__VA_ARGS__);\
+ }\
+ }while(MALI_FALSE)
+
+/**
+ * @def OSKP_ASSERT(expr)
+ * @brief Calls @see OSKP_PRINT_ASSERT and prints the expression @a expr if @a expr is false
+ *
+ * @note This macro does nothing if the flag @see OSK_DISABLE_ASSERTS is set to 1
+ *
+ * @param expr Boolean expression
+ */
+#define OSKP_ASSERT(expr)\
+ OSKP_ASSERT_MSG(expr, #expr)
+
+/**
+ * @def OSKP_INTERNAL_ASSERT(expr)
+ * @brief Calls @see OSKP_BREAKPOINT if @a expr is false
+ * This assert function is for internal use of OSK functions which themselves are used to implement
+ * the OSK_ASSERT functionality. These functions should use OSK_INTERNAL_ASSERT which does not use
+ * any OSK functions to prevent ending up in a recursive loop.
+ *
+ * @note This macro does nothing if the flag @see OSK_DISABLE_ASSERTS is set to 1
+ *
+ * @param expr Boolean expression
+ */
+#define OSKP_INTERNAL_ASSERT(expr)\
+ do\
+ {\
+ if(MALI_FALSE == (expr))\
+ {\
+ OSKP_BREAKPOINT();\
+ }\
+ }while(MALI_FALSE)
+
+/**
+ * @def OSKP_PRINT_ASSERT(...)
+ * @brief Prints "MALI<ASSERT>" followed by trace, function name and the given message.
+ *
+ * The behavior of this function is defined by the macro @see OSK_ON_ASSERT.
+ *
+ * @note This macro does nothing if the flag @see OSK_DISABLE_ASSERTS is set to 1
+ *
+ * Example: OSKP_PRINT_ASSERT(" %d blocks could not be allocated", mem_allocated) will print:\n
+ * "MALI<ASSERT> In file <path> line: <line number> function:<function name> 10 blocks could not be allocated"
+ *
+ * @note Depending on the values of @see OSK_SKIP_FUNCTION_NAME and @see OSK_SKIP_TRACE the trace will be displayed
+ * before the message.
+ *
+ * @param ... Message to print, passed as a format string followed by format arguments.
+ */
+#define OSKP_PRINT_ASSERT(...)\
+ do\
+ {\
+ OSKP_ASSERT_OUT(OSKP_PRINT_TRACE, OSKP_PRINT_FUNCTION, __VA_ARGS__);\
+ oskp_debug_assert_call_hook();\
+ OSKP_KERNEL_TEST_ASSERT();\
+ OSKP_ASSERT_ACTION();\
+ }while(MALI_FALSE)
+
+#endif
+
+/**
+ * @def OSKP_DEBUG_CODE( X )
+ * @brief Executes the code inside the macro only in debug mode
+ *
+ * @param X Code to compile only in debug mode.
+ */
+#if MALI_DEBUG
+ #define OSKP_DEBUG_CODE( X ) X
+#else
+ #define OSKP_DEBUG_CODE( X ) CSTD_NOP()
+#endif
+
+/**
+ * @def OSKP_ASSERT_ACTION
+ * @brief (Private) Action associated to the @see OSKP_PRINT_ASSERT event.
+ */
+/* Configure the post display action */
+#if OSK_ON_ASSERT == OSK_ACTION_PRINT_AND_BREAK
+ #define OSKP_ASSERT_ACTION OSKP_BREAKPOINT
+#elif OSK_ON_ASSERT == OSK_ACTION_PRINT_AND_QUIT
+ #define OSKP_ASSERT_ACTION OSKP_QUIT
+#elif OSK_ON_ASSERT == OSK_ACTION_PRINT_AND_TRACE
+ #define OSKP_ASSERT_ACTION OSKP_TRACE
+#elif OSK_ON_ASSERT == OSK_ACTION_PRINT_AND_CONTINUE || OSK_ON_ASSERT == OSK_ACTION_IGNORE
+ #define OSKP_ASSERT_ACTION() CSTD_NOP()
+#else
+ #error invalid value for OSK_ON_ASSERT
+#endif
+
+/**
+ * @def OSKP_RAW_ACTION
+ * @brief (Private) Action associated to the @see OSK_PRINT_RAW event.
+ */
+/* Configure the post display action */
+#if OSK_ON_RAW == OSK_ACTION_PRINT_AND_BREAK
+ #define OSKP_RAW_ACTION OSKP_BREAKPOINT
+#elif OSK_ON_RAW == OSK_ACTION_PRINT_AND_QUIT
+ #define OSKP_RAW_ACTION OSKP_QUIT
+#elif OSK_ON_RAW == OSK_ACTION_PRINT_AND_TRACE
+ #define OSKP_RAW_ACTION OSKP_TRACE
+#elif OSK_ON_RAW == OSK_ACTION_PRINT_AND_CONTINUE || OSK_ON_RAW == OSK_ACTION_IGNORE
+ #define OSKP_RAW_ACTION() CSTD_NOP()
+#else
+ #error invalid value for OSK_ON_RAW
+#endif
+
+/**
+ * @def OSKP_INFO_ACTION
+ * @brief (Private) Action associated to the @see OSK_PRINT_INFO event.
+ */
+/* Configure the post display action */
+#if OSK_ON_INFO == OSK_ACTION_PRINT_AND_BREAK
+ #define OSKP_INFO_ACTION OSKP_BREAKPOINT
+#elif OSK_ON_INFO == OSK_ACTION_PRINT_AND_QUIT
+ #define OSKP_INFO_ACTION OSKP_QUIT
+#elif OSK_ON_INFO == OSK_ACTION_PRINT_AND_TRACE
+ #define OSKP_INFO_ACTION OSKP_TRACE
+#elif OSK_ON_INFO == OSK_ACTION_PRINT_AND_CONTINUE || OSK_ON_INFO == OSK_ACTION_IGNORE
+ #define OSKP_INFO_ACTION() CSTD_NOP()
+#else
+ #error invalid value for OSK_ON_INFO
+#endif
+
+/**
+ * @def OSKP_ERROR_ACTION
+ * @brief (Private) Action associated to the @see OSK_PRINT_ERROR event.
+ */
+/* Configure the post display action */
+#if OSK_ON_ERROR == OSK_ACTION_PRINT_AND_BREAK
+ #define OSKP_ERROR_ACTION OSKP_BREAKPOINT
+#elif OSK_ON_ERROR == OSK_ACTION_PRINT_AND_QUIT
+ #define OSKP_ERROR_ACTION OSKP_QUIT
+#elif OSK_ON_ERROR == OSK_ACTION_PRINT_AND_TRACE
+ #define OSKP_ERROR_ACTION OSKP_TRACE
+#elif OSK_ON_ERROR == OSK_ACTION_PRINT_AND_CONTINUE || OSK_ON_ERROR == OSK_ACTION_IGNORE
+ #define OSKP_ERROR_ACTION() CSTD_NOP()
+#else
+ #error invalid value for OSK_ON_ERROR
+#endif
+
+/**
+ * @def OSKP_WARN_ACTION
+ * @brief (Private) Action associated to the @see OSK_PRINT_WARN event.
+ */
+/* Configure the post display action */
+#if OSK_ON_WARN == OSK_ACTION_PRINT_AND_BREAK
+ #define OSKP_WARN_ACTION OSKP_BREAKPOINT
+#elif OSK_ON_WARN == OSK_ACTION_PRINT_AND_QUIT
+ #define OSKP_WARN_ACTION OSKP_QUIT
+#elif OSK_ON_WARN == OSK_ACTION_PRINT_AND_TRACE
+ #define OSKP_WARN_ACTION OSKP_TRACE
+#elif OSK_ON_WARN == OSK_ACTION_PRINT_AND_CONTINUE || OSK_ON_WARN == OSK_ACTION_IGNORE
+ #define OSKP_WARN_ACTION() CSTD_NOP()
+#else
+ #error invalid value for OSK_ON_WARN
+#endif
+
+/**
+ * @def OSKP_PRINT_RAW(module, ...)
+ * @brief Prints given message
+ *
+ * The behavior of this function is defined by macro @see OSK_ON_RAW
+ *
+ * Example:
+ * @code OSKP_PRINT_RAW(OSK_BASE_MEM, " %d blocks could not be allocated", mem_allocated); @endcode will print:
+ * \n
+ * "10 blocks could not be allocated"
+ *
+ * @param module Name of the module which prints the message.
+ * @param ... Format string followed by a varying number of parameters
+ */
+#define OSKP_PRINT_RAW(module, ...)\
+ do\
+ {\
+ if(MALI_FALSE != OSKP_PRINT_IS_ALLOWED( (module), OSK_CHANNEL_RAW))\
+ {\
+ OSKP_RAW_OUT(oskp_module_to_str( (module) ), \
+ OSKP_PRINT_TRACE, OSKP_PRINT_FUNCTION, __VA_ARGS__ );\
+ }\
+ OSKP_RAW_ACTION();\
+ }while(MALI_FALSE)
+
+/**
+ * @def OSKP_PRINT_INFO(module, ...)
+ * @brief Prints "MALI<INFO,module_name>: " followed by the given message.
+ *
+ * The behavior of this function is defined by the macro @see OSK_ON_INFO
+ *
+ * Example:
+ * @code OSKP_PRINT_INFO(OSK_BASE_MEM, " %d blocks could not be allocated", mem_allocated); @endcode will print:
+ * \n
+ * "MALI<INFO,BASE_MEM>: 10 blocks could not be allocated"\n
+ *
+ * @param module Name of the module which prints the message.
+ * @param ... Format string followed by a varying number of parameters
+ */
+#define OSKP_PRINT_INFO(module, ...)\
+ do\
+ {\
+ if(MALI_FALSE != OSKP_PRINT_IS_ALLOWED( (module), OSK_CHANNEL_INFO))\
+ {\
+ OSKP_INFO_OUT(oskp_module_to_str( (module) ), \
+ OSKP_PRINT_TRACE, OSKP_PRINT_FUNCTION, __VA_ARGS__ );\
+ }\
+ OSKP_INFO_ACTION();\
+ }while(MALI_FALSE)
+
+/**
+ * @def OSKP_PRINT_WARN(module, ...)
+ * @brief Prints "MALI<WARN,module_name>: " followed by the given message.
+ *
+ * The behavior of this function is defined by the macro @see OSK_ON_WARN
+ *
+ * Example:
+ * @code OSK_PRINT_WARN(OSK_BASE_MEM, " %d blocks could not be allocated", mem_allocated);@endcode will print: \n
+ * "MALI<WARN,BASE_MEM>: 10 blocks could not be allocated"\n
+ *
+ * @param module Name of the module which prints the message.
+ * @param ... Format string followed by a varying number of parameters
+ */
+#define OSKP_PRINT_WARN(module, ...)\
+ do\
+ {\
+ if(MALI_FALSE != OSKP_PRINT_IS_ALLOWED( (module), OSK_CHANNEL_WARN))\
+ {\
+ OSKP_WARN_OUT(oskp_module_to_str( (module) ), \
+ OSKP_PRINT_TRACE, OSKP_PRINT_FUNCTION, __VA_ARGS__ );\
+ }\
+ OSKP_WARN_ACTION();\
+ }while(MALI_FALSE)
+
+/**
+ * @def OSKP_PRINT_ERROR(module, ...)
+ * @brief Prints "MALI<ERROR,module_name>: " followed by the given message.
+ *
+ * The behavior of this function is defined by the macro @see OSK_ON_ERROR
+ *
+ * Example:
+ * @code OSKP_PRINT_ERROR(OSK_BASE_MEM, " %d blocks could not be allocated", mem_allocated); @endcode will print:
+ * \n
+ * "MALI<ERROR,BASE_MEM>: 10 blocks could not be allocated"\n
+ *
+ * @param module Name of the module which prints the message.
+ * @param ... Format string followed by a varying number of parameters
+ */
+#define OSKP_PRINT_ERROR(module, ...)\
+ do\
+ {\
+ if(MALI_FALSE != OSKP_PRINT_IS_ALLOWED( (module), OSK_CHANNEL_ERROR))\
+ {\
+ OSKP_ERROR_OUT(oskp_module_to_str( (module) ), \
+ OSKP_PRINT_TRACE, OSKP_PRINT_FUNCTION, __VA_ARGS__ );\
+ }\
+ OSKP_ERROR_ACTION();\
+ }while(MALI_FALSE)
+
+/**
+ * @def OSKP_PRINT_TRACE
+ * @brief Private macro containing the format of the trace to display before every message
+ * @sa OSK_SKIP_TRACE, OSK_SKIP_FUNCTION_NAME
+ */
+#if OSK_SKIP_TRACE == 0
+ #define OSKP_PRINT_TRACE \
+ "In file: " __FILE__ " line: " CSTD_STR2(__LINE__)
+ #if OSK_SKIP_FUNCTION_NAME == 0
+ #define OSKP_PRINT_FUNCTION CSTD_FUNC
+ #else
+ #define OSKP_PRINT_FUNCTION ""
+ #endif
+#else
+ #define OSKP_PRINT_TRACE ""
+#endif
+
+/**
+ * @def OSKP_PRINT_ALLOW(module, channel)
+ * @brief Allow the given module to print on the given channel
+ * @note If @see OSK_USE_RUNTIME_CONFIG is disabled then this macro doesn't do anything
+ * @param module is a @see osk_module
+ * @param channel is one of @see OSK_CHANNEL_INFO, @see OSK_CHANNEL_WARN, @see OSK_CHANNEL_ERROR,
+ * @see OSK_CHANNEL_ALL
+ */
+/**
+ * @def OSKP_PRINT_BLOCK(module, channel)
+ * @brief Prevent the given module from printing on the given channel
+ * @note If @see OSK_USE_RUNTIME_CONFIG is disabled then this macro doesn't do anything
+ * @param module is a @see osk_module
+ * @param channel is one of @see OSK_CHANNEL_INFO, @see OSK_CHANNEL_WARN, @see OSK_CHANNEL_ERROR,
+ * @see OSK_CHANNEL_ALL
+ */
+#if OSK_USE_RUNTIME_CONFIG
+ #define OSKP_PRINT_ALLOW(module, channel) oskp_debug_print_allow( (module), (channel) )
+ #define OSKP_PRINT_BLOCK(module, channel) oskp_debug_print_block( (module), (channel) )
+#else
+ #define OSKP_PRINT_ALLOW(module, channel) CSTD_NOP()
+ #define OSKP_PRINT_BLOCK(module, channel) CSTD_NOP()
+#endif
+
+/**
+ * @def OSKP_RAW_OUT(module, trace, ...)
+ * @brief (Private) system printing function associated to the @see OSK_PRINT_RAW event.
+ * @param module module ID
+ * @param trace location in the code from where the message is printed
+ * @param function function from where the message is printed
+ * @param ... Format string followed by format arguments.
+ */
+/* Select the correct system output function*/
+#if OSK_ON_RAW != OSK_ACTION_IGNORE
+ #define OSKP_RAW_OUT(module, trace, function, ...)\
+ do\
+ {\
+ OSKP_PRINT(__VA_ARGS__);\
+ OSKP_PRINT(OSK_STOP_MSG);\
+ }while(MALI_FALSE)
+#else
+ #define OSKP_RAW_OUT(module, trace, function, ...) CSTD_NOP()
+#endif
+
+
+/**
+ * @def OSKP_INFO_OUT(module, trace, ...)
+ * @brief (Private) system printing function associated to the @see OSK_PRINT_INFO event.
+ * @param module module ID
+ * @param trace location in the code from where the message is printed
+ * @param function function from where the message is printed
+ * @param ... Format string followed by format arguments.
+ */
+/* Select the correct system output function*/
+#if OSK_ON_INFO != OSK_ACTION_IGNORE
+ #define OSKP_INFO_OUT(module, trace, function, ...)\
+ do\
+ {\
+ /* Split up in several lines to prevent hitting max 128 chars limit of OSK print function */ \
+ OSKP_PRINT("Mali<INFO,%s>: ", module);\
+ OSKP_PRINT(__VA_ARGS__);\
+ OSKP_PRINT(OSK_STOP_MSG);\
+ }while(MALI_FALSE)
+#else
+ #define OSKP_INFO_OUT(module, trace, function, ...) CSTD_NOP()
+#endif
+
+/**
+ * @def OSKP_ASSERT_OUT(trace, function, ...)
+ * @brief (Private) system printing function associated to the @see OSKP_PRINT_ASSERT event.
+ * @param trace location in the code from where the message is printed
+ * @param function function from where the message is printed
+ * @param ... Format string followed by format arguments.
+ * @note function parameter cannot be concatenated with other strings
+ */
+/* Select the correct system output function*/
+#if OSK_ON_ASSERT != OSK_ACTION_IGNORE
+ #define OSKP_ASSERT_OUT(trace, function, ...)\
+ do\
+ {\
+ /* Split up in several lines to prevent hitting max 128 chars limit of OSK print function */ \
+ OSKP_PRINT("Mali<ASSERT>: %s function:%s ", trace, function);\
+ OSKP_PRINT(__VA_ARGS__);\
+ OSKP_PRINT(OSK_STOP_MSG);\
+ }while(MALI_FALSE)
+#else
+ #define OSKP_ASSERT_OUT(trace, function, ...) CSTD_NOP()
+#endif
+
+/**
+ * @def OSKP_WARN_OUT(module, trace, ...)
+ * @brief (Private) system printing function associated to the @see OSK_PRINT_WARN event.
+ * @param module module ID
+ * @param trace location in the code from where the message is printed
+ * @param function function from where the message is printed
+ * @param ... Format string followed by format arguments.
+ * @note function parameter cannot be concatenated with other strings
+ */
+/* Select the correct system output function*/
+#if OSK_ON_WARN != OSK_ACTION_IGNORE
+ #define OSKP_WARN_OUT(module, trace, function, ...)\
+ do\
+ {\
+ /* Split up in several lines to prevent hitting max 128 chars limit of OSK print function */ \
+ OSKP_PRINT("Mali<WARN,%s>: ", module);\
+ OSKP_PRINT(__VA_ARGS__);\
+ OSKP_PRINT(OSK_STOP_MSG);\
+ }while(MALI_FALSE)
+#else
+ #define OSKP_WARN_OUT(module, trace, function, ...) CSTD_NOP()
+#endif
+
+/**
+ * @def OSKP_ERROR_OUT(module, trace, ...)
+ * @brief (Private) system printing function associated to the @see OSK_PRINT_ERROR event.
+ * @param module module ID
+ * @param trace location in the code from where the message is printed
+ * @param function function from where the message is printed
+ * @param ... Format string followed by format arguments.
+ * @note function parameter cannot be concatenated with other strings
+ */
+/* Select the correct system output function*/
+#if OSK_ON_ERROR != OSK_ACTION_IGNORE
+ #define OSKP_ERROR_OUT(module, trace, function, ...)\
+ do\
+ {\
+ /* Split up in several lines to prevent hitting max 128 chars limit of OSK print function */ \
+ OSKP_PRINT("Mali<ERROR,%s>: ", module);\
+ OSKP_PRINT(__VA_ARGS__);\
+ OSKP_PRINT(OSK_STOP_MSG);\
+ }while(MALI_FALSE)
+#else
+ #define OSKP_ERROR_OUT(module, trace, function, ...) CSTD_NOP()
+#endif
+
+/**
+ * @def OSKP_PRINT_IS_ALLOWED(module, channel)
+ * @brief function or constant indicating if the given module is allowed to print on the given channel
+ * @note If @see OSK_USE_RUNTIME_CONFIG is disabled then this macro is set to MALI_TRUE to avoid any overhead
+ * @param module is a @see osk_module
+ * @param channel is one of @see OSK_CHANNEL_INFO, @see OSK_CHANNEL_WARN, @see OSK_CHANNEL_ERROR,
+ * @see OSK_CHANNEL_ALL
+ * @return MALI_TRUE if the module is allowed to print on the channel.
+ */
+#if OSK_USE_RUNTIME_CONFIG
+ #define OSKP_PRINT_IS_ALLOWED(module, channel) oskp_is_allowed_to_print( (module), (channel) )
+#else
+ #define OSKP_PRINT_IS_ALLOWED(module, channel) MALI_TRUE
+#endif
+
+/**
+ * @def OSKP_SIMULATE_FAILURE_IS_ENABLED(module, feature)
+ * @brief Macro that evaluates as true if the specified feature is enabled for the given module
+ * @note If @ref OSK_USE_RUNTIME_CONFIG is disabled then this macro always evaluates as true.
+ * @param[in] module is a @see osk_module
+ * @param[in] channel is one of @see OSK_CHANNEL_INFO, @see OSK_CHANNEL_WARN, @see OSK_CHANNEL_ERROR
+ * @return MALI_TRUE if the feature is enabled
+ */
+#if OSK_USE_RUNTIME_CONFIG
+#define OSKP_SIMULATE_FAILURE_IS_ENABLED(module, channel) oskp_is_allowed_to_simulate_failure( (module), (channel) )
+#else
+#define OSKP_SIMULATE_FAILURE_IS_ENABLED(module, channel) MALI_TRUE
+#endif
+
+OSK_STATIC_INLINE void osk_debug_get_thread_info( u32 *thread_id, u32 *cpu_nr )
+{
+ OSK_ASSERT( thread_id != NULL );
+ OSK_ASSERT( cpu_nr != NULL );
+
+ /* This implementation uses the PID as shown in ps listings.
+ * On 64-bit systems, this could narrow from signed 64-bit to unsigned 32-bit */
+ *thread_id = (u32)task_pid_nr(current);
+
+ /* On 64-bit systems, this could narrow from unsigned 64-bit to unsigned 32-bit */
+ *cpu_nr = (u32)task_cpu(current);
+}
+
+
+#endif /* _OSK_ARCH_DEBUG_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_LOCKS_H_
+#define _OSK_ARCH_LOCKS_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+/**
+ * Private macro to safely allow asserting on a mutex/rwlock/spinlock/irq
+ * spinlock pointer whilst still allowing its name to appear during
+ * CONFIG_PROVE_LOCKING.
+ *
+ * It's safe because \a lock can safely have side-effects.
+ *
+ * Makes use of a GNU C extension, but this macro is only needed under Linux
+ * anyway.
+ *
+ * NOTE: the local variable must not conflict with an identifier in a wider
+ * scope
+ *
+ * NOTE: Due to the way this is used in this file, this definition must persist
+ * outside of this file
+ */
+#define OSKP_LOCK_PTR_ASSERT( lock ) \
+ ({ \
+ __typeof__( lock ) __oskp_lock__ = ( lock ); \
+ OSK_ASSERT( NULL != __oskp_lock__ ); \
+ __oskp_lock__; })
+
+/*
+ * A definition of each lock init function must be provided to eliminate
+ * warnings. They'll be hidden by the subsequent macro definitions.
+ */
+
+OSK_STATIC_INLINE osk_error osk_rwlock_init(osk_rwlock * const lock, osk_lock_order order)
+{
+ OSK_ASSERT_MSG( MALI_FALSE,
+ "FATAL: this definition of osk_rwlock_init() should've been uncallable - a macro redefines it\n" );
+ CSTD_UNUSED( lock );
+ CSTD_UNUSED( order );
+ return OSK_ERR_FAIL;
+}
+
+OSK_STATIC_INLINE osk_error osk_mutex_init(osk_mutex * const lock, osk_lock_order order)
+{
+ OSK_ASSERT_MSG( MALI_FALSE,
+ "FATAL: this definition of osk_mutex_init() should've been uncallable - a macro redefines it\n" );
+ CSTD_UNUSED( lock );
+ CSTD_UNUSED( order );
+ return OSK_ERR_FAIL;
+}
+
+OSK_STATIC_INLINE osk_error osk_spinlock_init(osk_spinlock * const lock, osk_lock_order order)
+{
+ OSK_ASSERT_MSG( MALI_FALSE,
+ "FATAL: this definition of osk_spinlock_init() should've been uncallable - a macro redefines it\n" );
+ CSTD_UNUSED( lock );
+ CSTD_UNUSED( order );
+ return OSK_ERR_FAIL;
+}
+
+OSK_STATIC_INLINE osk_error osk_spinlock_irq_init(osk_spinlock_irq * const lock, osk_lock_order order)
+{
+ OSK_ASSERT_MSG( MALI_FALSE,
+ "FATAL: this definition of osk_spinlock_irq_init() should've been uncallable - a macro redefines it\n" );
+ CSTD_UNUSED( lock );
+ CSTD_UNUSED( order );
+ return OSK_ERR_FAIL;
+}
+
+/*
+ * End of 'dummy' definitions
+ */
+
+
+/* Note: This uses a GNU C Extension to allow Linux's CONFIG_PROVE_LOCKING to work correctly
+ *
+ * This is not required outside of Linux
+ *
+ * NOTE: the local variable must not conflict with an identifier in a wider scope */
+#define osk_rwlock_init( ARG_LOCK, ARG_ORDER ) \
+ ({ \
+ osk_lock_order __oskp_order__ = (ARG_ORDER); \
+ OSK_ASSERT( OSK_LOCK_ORDER_LAST <= __oskp_order__ && __oskp_order__ <= OSK_LOCK_ORDER_FIRST ); \
+ init_rwsem( OSKP_LOCK_PTR_ASSERT((ARG_LOCK)) ); \
+ OSK_ERR_NONE;})
+
+OSK_STATIC_INLINE void osk_rwlock_term(osk_rwlock * lock)
+{
+ OSK_ASSERT(NULL != lock);
+ /* nop */
+}
+
+OSK_STATIC_INLINE void osk_rwlock_read_lock(osk_rwlock * lock)
+{
+ OSK_ASSERT(NULL != lock);
+ down_read(lock);
+}
+
+OSK_STATIC_INLINE void osk_rwlock_read_unlock(osk_rwlock * lock)
+{
+ OSK_ASSERT(NULL != lock);
+ up_read(lock);
+}
+
+OSK_STATIC_INLINE void osk_rwlock_write_lock(osk_rwlock * lock)
+{
+ OSK_ASSERT(NULL != lock);
+ down_write(lock);
+}
+
+OSK_STATIC_INLINE void osk_rwlock_write_unlock(osk_rwlock * lock)
+{
+ OSK_ASSERT(NULL != lock);
+ up_write(lock);
+}
+
+/* Note: This uses a GNU C Extension to allow Linux's CONFIG_PROVE_LOCKING to work correctly
+ *
+ * This is not required outside of Linux
+ *
+ * NOTE: the local variable must not conflict with an identifier in a wider scope */
+#define osk_mutex_init( ARG_LOCK, ARG_ORDER ) \
+ ({ \
+ osk_lock_order __oskp_order__ = (ARG_ORDER); \
+ OSK_ASSERT( OSK_LOCK_ORDER_LAST <= __oskp_order__ && __oskp_order__ <= OSK_LOCK_ORDER_FIRST ); \
+ mutex_init( OSKP_LOCK_PTR_ASSERT((ARG_LOCK)) ); \
+ OSK_ERR_NONE;})
+
+
+OSK_STATIC_INLINE void osk_mutex_term(osk_mutex * lock)
+{
+ OSK_ASSERT(NULL != lock);
+ return; /* nop */
+}
+
+OSK_STATIC_INLINE void osk_mutex_lock(osk_mutex * lock)
+{
+ OSK_ASSERT(NULL != lock);
+ mutex_lock(lock);
+}
+
+OSK_STATIC_INLINE void osk_mutex_unlock(osk_mutex * lock)
+{
+ OSK_ASSERT(NULL != lock);
+ mutex_unlock(lock);
+}
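+
+/*
+ * Informative example: typical mutex usage under lock-order checking. The
+ * order constant used here is only illustrative; valid values come from
+ * mali_osk_lock_order.h.
+ *
+ *   osk_mutex m;
+ *   if (OSK_ERR_NONE == osk_mutex_init(&m, OSK_LOCK_ORDER_FIRST))
+ *   {
+ *       osk_mutex_lock(&m);
+ *       // ... critical section ...
+ *       osk_mutex_unlock(&m);
+ *       osk_mutex_term(&m);
+ *   }
+ */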
+
+/* Note: This uses a GNU C Extension to allow Linux's CONFIG_PROVE_LOCKING to work correctly
+ *
+ * This is not required outside of Linux
+ *
+ * NOTE: the local variable must not conflict with an identifier in a wider scope */
+#define osk_spinlock_init( ARG_LOCK, ARG_ORDER ) \
+ ({ \
+ osk_lock_order __oskp_order__ = (ARG_ORDER); \
+ OSK_ASSERT( OSK_LOCK_ORDER_LAST <= __oskp_order__ && __oskp_order__ <= OSK_LOCK_ORDER_FIRST ); \
+ spin_lock_init( OSKP_LOCK_PTR_ASSERT((ARG_LOCK)) ); \
+ OSK_ERR_NONE;})
+
+OSK_STATIC_INLINE void osk_spinlock_term(osk_spinlock * lock)
+{
+ OSK_ASSERT(NULL != lock);
+ /* nop */
+}
+
+OSK_STATIC_INLINE void osk_spinlock_lock(osk_spinlock * lock)
+{
+ OSK_ASSERT(NULL != lock);
+ spin_lock(lock);
+}
+
+OSK_STATIC_INLINE void osk_spinlock_unlock(osk_spinlock * lock)
+{
+ OSK_ASSERT(NULL != lock);
+ spin_unlock(lock);
+}
+
+/* Note: This uses a GNU C Extension to allow Linux's CONFIG_PROVE_LOCKING to work correctly
+ *
+ * This is not required outside of Linux
+ *
+ * NOTE: the local variable must not conflict with an identifier in a wider scope */
+#define osk_spinlock_irq_init( ARG_LOCK, ARG_ORDER ) \
+ ({ \
+ osk_lock_order __oskp_order__ = (ARG_ORDER); \
+ OSK_ASSERT( OSK_LOCK_ORDER_LAST <= __oskp_order__ && __oskp_order__ <= OSK_LOCK_ORDER_FIRST ); \
+ spin_lock_init( &(OSKP_LOCK_PTR_ASSERT((ARG_LOCK))->lock) ); \
+ OSK_ERR_NONE;})
+
+OSK_STATIC_INLINE void osk_spinlock_irq_term(osk_spinlock_irq * lock)
+{
+ OSK_ASSERT(NULL != lock);
+}
+
+OSK_STATIC_INLINE void osk_spinlock_irq_lock(osk_spinlock_irq * lock)
+{
+ OSK_ASSERT(NULL != lock);
+ spin_lock_irqsave(&lock->lock, lock->flags);
+}
+
+OSK_STATIC_INLINE void osk_spinlock_irq_unlock(osk_spinlock_irq * lock)
+{
+ OSK_ASSERT(NULL != lock);
+ spin_unlock_irqrestore(&lock->lock, lock->flags);
+}
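+
+/* The saved IRQ state lives in the lock object itself (lock->flags), so the
+ * flags are only meaningful while the lock is held, and an osk_spinlock_irq
+ * must be released by osk_spinlock_irq_unlock() before it can be taken again. */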
+
+#endif /* _OSK_ARCH_LOCKS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_LOW_LEVEL_MEM_H_
+#define _OSK_ARCH_LOW_LEVEL_MEM_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/dma-mapping.h>
+
+/* Forward declaration: osk_sync_to_memory() is defined later in this header
+ * but is already needed by oskp_phy_os_pages_alloc() below */
+static inline void osk_sync_to_memory(osk_phy_addr paddr, osk_virt_addr vaddr, size_t sz);
+
+struct oskp_phy_os_allocator
+{
+};
+
+OSK_STATIC_INLINE osk_error oskp_phy_os_allocator_init(oskp_phy_os_allocator * const allocator,
+ osk_phy_addr mem, u32 nr_pages)
+{
+ OSK_ASSERT(NULL != allocator);
+
+ return OSK_ERR_NONE;
+}
+
+OSK_STATIC_INLINE void oskp_phy_os_allocator_term(oskp_phy_os_allocator *allocator)
+{
+ OSK_ASSERT(NULL != allocator);
+ /* Nothing needed */
+}
+
+OSK_STATIC_INLINE u32 oskp_phy_os_pages_alloc(oskp_phy_os_allocator *allocator,
+                                              u32 nr_pages, osk_phy_addr *pages)
+{
+	u32 i;
+
+ OSK_ASSERT(NULL != allocator);
+
+ if(OSK_SIMULATE_FAILURE(OSK_OSK))
+ {
+ return 0;
+ }
+
+ for (i = 0; i < nr_pages; i++)
+ {
+ struct page *p;
+ void * mp;
+
+ p = alloc_page(__GFP_IO |
+ __GFP_FS |
+ __GFP_COLD |
+ __GFP_NOWARN |
+ __GFP_NORETRY |
+ __GFP_NOMEMALLOC |
+ __GFP_HIGHMEM |
+ __GFP_HARDWALL);
+ if (NULL == p)
+ {
+ break;
+ }
+
+ mp = kmap(p);
+ if (NULL == mp)
+ {
+ __free_page(p);
+ break;
+ }
+
+ memset(mp, 0x00, PAGE_SIZE); /* instead of __GFP_ZERO, so we can do cache maintenance */
+ osk_sync_to_memory(PFN_PHYS(page_to_pfn(p)), mp, PAGE_SIZE);
+ kunmap(p);
+
+ pages[i] = PFN_PHYS(page_to_pfn(p));
+ }
+
+ return i;
+}
+
+OSK_STATIC_INLINE void oskp_phy_os_pages_free(oskp_phy_os_allocator *allocator,
+                                              u32 nr_pages, osk_phy_addr *pages)
+{
+	u32 i;
+
+ OSK_ASSERT(NULL != allocator);
+
+ for (i = 0; i < nr_pages; i++)
+ {
+ if (0 != pages[i])
+ {
+ __free_page(pfn_to_page(PFN_DOWN(pages[i])));
+ pages[i] = (osk_phy_addr)0;
+ }
+ }
+}
+
+
+OSK_STATIC_INLINE osk_error oskp_phy_dedicated_allocator_request_memory(osk_phy_addr mem, u32 nr_pages, const char* name)
+{
+	if(OSK_SIMULATE_FAILURE(OSK_OSK))
+	{
+		return OSK_ERR_FAIL;
+	}
+
+	if (NULL != request_mem_region(mem, nr_pages << OSK_PAGE_SHIFT, name))
+	{
+		return OSK_ERR_NONE;
+	}
+	return OSK_ERR_FAIL;
+}
+
+OSK_STATIC_INLINE void oskp_phy_dedicated_allocator_release_memory(osk_phy_addr mem, u32 nr_pages)
+{
+ release_mem_region(mem, nr_pages << OSK_PAGE_SHIFT);
+}
+
+
+static inline void *osk_kmap(osk_phy_addr page)
+{
+ if(OSK_SIMULATE_FAILURE(OSK_OSK))
+ {
+ return NULL;
+ }
+
+ return kmap(pfn_to_page(PFN_DOWN(page)));
+}
+
+static inline void osk_kunmap(osk_phy_addr page, void * mapping)
+{
+ kunmap(pfn_to_page(PFN_DOWN(page)));
+}
+
+static inline void *osk_kmap_atomic(osk_phy_addr page)
+{
+ /**
+ * Note: kmap_atomic should never fail and so OSK_SIMULATE_FAILURE is not
+ * included for this function call.
+ */
+ return kmap_atomic(pfn_to_page(PFN_DOWN(page)));
+}
+
+static inline void osk_kunmap_atomic(osk_phy_addr page, void *mapping)
+{
+ kunmap_atomic(mapping);
+}
+
+static inline void osk_sync_to_memory(osk_phy_addr paddr, osk_virt_addr vaddr, size_t sz)
+{
+#ifdef CONFIG_ARM
+	/* Both range-flush helpers take an exclusive end address */
+	dmac_flush_range(vaddr, vaddr + sz);
+	outer_flush_range(paddr, paddr + sz);
+#elif defined(CONFIG_X86)
+	struct scatterlist scl = {0, };
+	sg_set_page(&scl, pfn_to_page(PFN_DOWN(paddr)), sz,
+		    paddr & (PAGE_SIZE - 1));
+	dma_sync_sg_for_cpu(NULL, &scl, 1, DMA_TO_DEVICE);
+	mb(); /* for outer_sync (if needed) */
+#else
+#error Implement cache maintenance for your architecture here
+#endif
+}
+
+static inline void osk_sync_to_cpu(osk_phy_addr paddr, osk_virt_addr vaddr, size_t sz)
+{
+#ifdef CONFIG_ARM
+	/* A flush (rather than an invalidate) is always safe here */
+	dmac_flush_range(vaddr, vaddr + sz);
+	outer_flush_range(paddr, paddr + sz);
+#elif defined(CONFIG_X86)
+	struct scatterlist scl = {0, };
+	sg_set_page(&scl, pfn_to_page(PFN_DOWN(paddr)), sz,
+		    paddr & (PAGE_SIZE - 1));
+	dma_sync_sg_for_cpu(NULL, &scl, 1, DMA_FROM_DEVICE);
+#else
+#error Implement cache maintenance for your architecture here
+#endif
+}
+
+#endif /* _OSK_ARCH_LOW_LEVEL_MEM_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_MATH_H
+#define _OSK_ARCH_MATH_H
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#include <asm/div64.h>
+
+OSK_STATIC_INLINE u32 osk_divmod6432(u64 *value, u32 divisor)
+{
+ u64 v;
+ u32 r;
+
+ OSK_ASSERT(NULL != value);
+
+ v = *value;
+ r = do_div(v, divisor);
+ *value = v;
+ return r;
+}
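+
+/*
+ * Illustrative usage (a sketch; the variable names are invented): converting
+ * a nanosecond count to milliseconds while capturing the remainder:
+ *
+ * @code
+ * u64 duration_ns = 2500000ULL;
+ * u32 remainder_ns = osk_divmod6432(&duration_ns, 1000000);
+ * // duration_ns is now 2 (ms), remainder_ns is 500000
+ * @endcode
+ */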
+
+#endif /* _OSK_ARCH_MATH_H */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2008-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_MEM_H_
+#define _OSK_ARCH_MEM_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+OSK_STATIC_INLINE void * osk_malloc(size_t size)
+{
+ OSK_ASSERT(0 != size);
+
+ if(OSK_SIMULATE_FAILURE(OSK_OSK))
+ {
+ return NULL;
+ }
+
+ return kmalloc(size, GFP_KERNEL);
+}
+
+OSK_STATIC_INLINE void * osk_calloc(size_t size)
+{
+ OSK_ASSERT(0 != size);
+
+ if(OSK_SIMULATE_FAILURE(OSK_OSK))
+ {
+ return NULL;
+ }
+
+ return kzalloc(size, GFP_KERNEL);
+}
+
+OSK_STATIC_INLINE void osk_free(void * ptr)
+{
+ kfree(ptr);
+}
+
+OSK_STATIC_INLINE void * osk_vmalloc(size_t size)
+{
+ OSK_ASSERT(0 != size);
+
+ if(OSK_SIMULATE_FAILURE(OSK_OSK))
+ {
+ return NULL;
+ }
+
+ return vmalloc_user(size);
+}
+
+OSK_STATIC_INLINE void osk_vfree(void * ptr)
+{
+ vfree(ptr);
+}
+
+#define OSK_MEMCPY( dst, src, len ) memcpy(dst, src, len)
+
+#define OSK_MEMCMP( s1, s2, len ) memcmp(s1, s2, len)
+
+#define OSK_MEMSET( ptr, chr, size ) memset(ptr, chr, size)
+
+
+#endif /* _OSK_ARCH_MEM_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_POWER_H_
+#define _OSK_ARCH_POWER_H_
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#include <linux/pm_runtime.h>
+
+OSK_STATIC_INLINE osk_power_request_result osk_power_request(osk_power_info *info, osk_power_state state)
+{
+ osk_power_request_result request_result = OSK_POWER_REQUEST_FINISHED;
+
+ OSK_ASSERT(NULL != info);
+
+ switch(state)
+ {
+ case OSK_POWER_STATE_OFF:
+		/* request OS to suspend device */
+ break;
+ case OSK_POWER_STATE_IDLE:
+ /* request OS to idle device */
+ break;
+ case OSK_POWER_STATE_ACTIVE:
+ /* request OS to resume device */
+ break;
+ }
+ return request_result;
+}
+
+#endif /* _OSK_ARCH_POWER_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_TIME_H
+#define _OSK_ARCH_TIME_H
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+OSK_STATIC_INLINE osk_ticks osk_time_now(void)
+{
+ return jiffies;
+}
+
+OSK_STATIC_INLINE u32 osk_time_mstoticks(u32 ms)
+{
+ return msecs_to_jiffies(ms);
+}
+
+OSK_STATIC_INLINE u32 osk_time_elapsed(osk_ticks ticka, osk_ticks tickb)
+{
+ return jiffies_to_msecs((long)tickb - (long)ticka);
+}
+
+OSK_STATIC_INLINE mali_bool osk_time_after(osk_ticks ticka, osk_ticks tickb)
+{
+ return time_after(ticka, tickb);
+}
+
+OSK_STATIC_INLINE void osk_gettimeofday(osk_timeval *tv)
+{
+ struct timespec ts;
+ getnstimeofday(&ts);
+
+ tv->tv_sec = ts.tv_sec;
+ tv->tv_usec = ts.tv_nsec/1000;
+}
+
+#endif /* _OSK_ARCH_TIME_H */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_arch_timers.h
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_TIMERS_H
+#define _OSK_ARCH_TIMERS_H
+
+#if MALI_LICENSE_IS_GPL
+ #include "mali_osk_arch_timers_gpl.h"
+#else
+ #include "mali_osk_arch_timers_commercial.h"
+#endif
+
+#endif /* _OSK_ARCH_TIMERS_H */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_arch_timers_commercial.h
+ * Implementation of the OS abstraction layer for the kernel device driver
+ * For Commercial builds - does not use the hrtimers functionality.
+ */
+
+#ifndef _OSK_ARCH_TIMERS_COMMERCIAL_H
+#define _OSK_ARCH_TIMERS_COMMERCIAL_H
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+OSK_STATIC_INLINE osk_error osk_timer_init(osk_timer * const tim)
+{
+ OSK_ASSERT(NULL != tim);
+ init_timer(&tim->timer);
+ OSK_DEBUG_CODE( tim->active = MALI_FALSE );
+ OSK_ASSERT(0 == object_is_on_stack(tim));
+ return OSK_ERR_NONE;
+}
+
+OSK_STATIC_INLINE osk_error osk_timer_on_stack_init(osk_timer * const tim)
+{
+ OSK_ASSERT(NULL != tim);
+ init_timer_on_stack(&tim->timer);
+ OSK_DEBUG_CODE( tim->active = MALI_FALSE );
+ OSK_ASSERT(0 != object_is_on_stack(tim));
+ return OSK_ERR_NONE;
+}
+
+OSK_STATIC_INLINE osk_error osk_timer_start(osk_timer *tim, u32 delay)
+{
+ OSK_ASSERT(NULL != tim);
+ OSK_ASSERT(NULL != tim->timer.function);
+ OSK_ASSERT(0 != delay);
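+	/* Convert the millisecond delay to jiffies, rounding up so that a short
+	 * delay cannot truncate to an immediate expiry */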
+ tim->timer.expires = jiffies + ((delay * HZ + 999) / 1000);
+ add_timer(&tim->timer);
+ OSK_DEBUG_CODE( tim->active = MALI_TRUE );
+ return OSK_ERR_NONE;
+}
+
+OSK_STATIC_INLINE osk_error osk_timer_start_ns(osk_timer *tim, u64 delay)
+{
+ osk_error err;
+ osk_divmod6432(&delay, 1000000);
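+	/* The ns delay is rounded down to whole ms here; a delay below 1 ms
+	 * becomes 0 and will trip the assert in osk_timer_start() */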
+
+ err = osk_timer_start( tim, delay );
+
+ return err;
+}
+
+OSK_STATIC_INLINE osk_error osk_timer_modify(osk_timer *tim, u32 delay)
+{
+ OSK_ASSERT(NULL != tim);
+ OSK_ASSERT(NULL != tim->timer.function);
+ OSK_ASSERT(0 != delay);
+ mod_timer(&tim->timer, jiffies + ((delay * HZ + 999) / 1000));
+ OSK_DEBUG_CODE( tim->active = MALI_TRUE );
+ return OSK_ERR_NONE;
+}
+
+OSK_STATIC_INLINE osk_error osk_timer_modify_ns(osk_timer *tim, u64 new_delay)
+{
+ osk_error err;
+ osk_divmod6432(&new_delay, 1000000);
+
+ err = osk_timer_modify( tim, new_delay );
+ return err;
+}
+
+OSK_STATIC_INLINE void osk_timer_stop(osk_timer *tim)
+{
+ OSK_ASSERT(NULL != tim);
+ OSK_ASSERT(NULL != tim->timer.function);
+ del_timer_sync(&tim->timer);
+ OSK_DEBUG_CODE( tim->active = MALI_FALSE );
+}
+
+OSK_STATIC_INLINE void osk_timer_callback_set(osk_timer *tim, osk_timer_callback callback, void *data)
+{
+	OSK_ASSERT(NULL != tim);
+	OSK_ASSERT(NULL != callback);
+	/* The callback must not be changed while the timer is active */
+	OSK_DEBUG_CODE( OSK_ASSERT(MALI_FALSE == tim->active) );
+ /* osk_timer_callback uses void * for the callback parameter instead of unsigned long in Linux */
+ tim->timer.function = (void (*)(unsigned long))callback;
+ tim->timer.data = (unsigned long)data;
+}
+
+OSK_STATIC_INLINE void osk_timer_term(osk_timer *tim)
+{
+	OSK_ASSERT(NULL != tim);
+	OSK_ASSERT(0 == object_is_on_stack(tim));
+	/* The timer must be stopped before it is terminated */
+	OSK_DEBUG_CODE( OSK_ASSERT(MALI_FALSE == tim->active) );
+ /* Nothing to do */
+}
+
+OSK_STATIC_INLINE void osk_timer_on_stack_term(osk_timer *tim)
+{
+	OSK_ASSERT(NULL != tim);
+	OSK_ASSERT(0 != object_is_on_stack(tim));
+	/* The timer must be stopped before it is terminated */
+	OSK_DEBUG_CODE( OSK_ASSERT(MALI_FALSE == tim->active) );
+ destroy_timer_on_stack(&tim->timer);
+}
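+
+/*
+ * Illustrative usage (a sketch; the callback and its argument are invented
+ * for the example, and error returns are not checked):
+ *
+ * @code
+ * static void my_expiry_cb(void *data)
+ * {
+ *     // runs in timer context when the delay elapses
+ * }
+ *
+ * osk_timer tim;
+ * osk_timer_init(&tim);
+ * osk_timer_callback_set(&tim, &my_expiry_cb, NULL);
+ * osk_timer_start(&tim, 100); // expire in ~100 ms
+ * // ...
+ * osk_timer_stop(&tim);
+ * osk_timer_term(&tim);
+ * @endcode
+ */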
+
+#endif /* _OSK_ARCH_TIMERS_COMMERCIAL_H */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_osk_arch_timers_gpl.h
+ * Implementation of the OS abstraction layer for the kernel device driver
+ * For GPL builds - uses the hrtimers functionality.
+ */
+
+#ifndef _OSK_ARCH_TIMERS_GPL_H
+#define _OSK_ARCH_TIMERS_GPL_H
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#if MALI_DEBUG != 0
+void oskp_debug_test_timer_stats( void );
+#endif
+
+enum hrtimer_restart oskp_timer_callback_wrapper( struct hrtimer * hr_timer );
+
+OSK_STATIC_INLINE osk_error osk_timer_init(osk_timer * const tim)
+{
+ OSK_ASSERT(NULL != tim);
+
+ OSK_DEBUG_CODE( oskp_debug_test_timer_stats() );
+
+ hrtimer_init(&tim->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ tim->timer.function = NULL;
+
+ OSK_DEBUG_CODE( tim->active = MALI_FALSE );
+ OSK_ASSERT(0 == object_is_on_stack(tim));
+ return OSK_ERR_NONE;
+}
+
+OSK_STATIC_INLINE osk_error osk_timer_on_stack_init(osk_timer * const tim)
+{
+ OSK_ASSERT(NULL != tim);
+ hrtimer_init_on_stack(&tim->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ tim->timer.function = NULL;
+
+ OSK_DEBUG_CODE( tim->active = MALI_FALSE );
+ OSK_ASSERT(0 != object_is_on_stack(tim));
+ return OSK_ERR_NONE;
+}
+
+OSK_STATIC_INLINE osk_error osk_timer_start(osk_timer *tim, u32 delay)
+{
+ osk_error err;
+ u64 delay_ns = delay * (u64)1000000U;
+
+ err = osk_timer_start_ns( tim, delay_ns );
+
+ return err;
+}
+
+OSK_STATIC_INLINE osk_error osk_timer_start_ns(osk_timer *tim, u64 delay_ns)
+{
+ ktime_t kdelay;
+ int was_active;
+ OSK_ASSERT(NULL != tim);
+ OSK_ASSERT(NULL != tim->timer.function);
+ OSK_ASSERT(delay_ns != 0);
+
+ kdelay = ns_to_ktime( delay_ns );
+
+ was_active = hrtimer_start( &tim->timer, kdelay, HRTIMER_MODE_REL );
+
+ OSK_ASSERT( was_active == 0 ); /* You cannot start a timer that has already been started */
+
+ CSTD_UNUSED( was_active );
+ OSK_DEBUG_CODE( tim->active = MALI_TRUE );
+ return OSK_ERR_NONE;
+}
+
+OSK_STATIC_INLINE osk_error osk_timer_modify(osk_timer *tim, u32 new_delay)
+{
+ osk_error err;
+ u64 delay_ns = new_delay * (u64)1000000U;
+
+ err = osk_timer_modify_ns( tim, delay_ns );
+ return err;
+}
+
+OSK_STATIC_INLINE osk_error osk_timer_modify_ns(osk_timer *tim, u64 new_delay_ns)
+{
+ ktime_t kdelay;
+ int was_active;
+ OSK_ASSERT(NULL != tim);
+ OSK_ASSERT(NULL != tim->timer.function);
+ OSK_ASSERT(0 != new_delay_ns);
+
+ kdelay = ns_to_ktime( new_delay_ns );
+
+ /* hrtimers will stop the existing timer if it's running on any cpu, so
+ * it's safe just to start the timer again: */
+ was_active = hrtimer_start( &tim->timer, kdelay, HRTIMER_MODE_REL );
+
+ CSTD_UNUSED( was_active );
+ OSK_DEBUG_CODE( tim->active = MALI_TRUE );
+ return OSK_ERR_NONE;
+}
+
+OSK_STATIC_INLINE void osk_timer_stop(osk_timer *tim)
+{
+ int was_active;
+ OSK_ASSERT(NULL != tim);
+ OSK_ASSERT(NULL != tim->timer.function);
+
+ was_active = hrtimer_cancel(&tim->timer);
+
+ CSTD_UNUSED( was_active );
+ OSK_DEBUG_CODE( tim->active = MALI_FALSE );
+}
+
+OSK_STATIC_INLINE void osk_timer_callback_set(osk_timer *tim, osk_timer_callback callback, void *data)
+{
+	OSK_ASSERT(NULL != tim);
+	OSK_ASSERT(NULL != callback);
+	/* The callback must not be changed while the timer is active */
+	OSK_DEBUG_CODE( OSK_ASSERT(MALI_FALSE == tim->active) );
+
+ tim->timer.function = &oskp_timer_callback_wrapper;
+
+ /* osk_timer_callback uses void * for the callback parameter instead of unsigned long in Linux */
+ tim->callback = callback;
+ tim->data = data;
+}
+
+OSK_STATIC_INLINE void osk_timer_term(osk_timer *tim)
+{
+	OSK_ASSERT(NULL != tim);
+	OSK_ASSERT(0 == object_is_on_stack(tim));
+	/* The timer must be stopped before it is terminated */
+	OSK_DEBUG_CODE( OSK_ASSERT(MALI_FALSE == tim->active) );
+ /* Nothing to do */
+}
+
+OSK_STATIC_INLINE void osk_timer_on_stack_term(osk_timer *tim)
+{
+	OSK_ASSERT(NULL != tim);
+	OSK_ASSERT(0 != object_is_on_stack(tim));
+	/* The timer must be stopped before it is terminated */
+	OSK_DEBUG_CODE( OSK_ASSERT(MALI_FALSE == tim->active) );
+ destroy_hrtimer_on_stack(&tim->timer);
+}
+
+#endif /* _OSK_ARCH_TIMERS_GPL_H */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_TYPES_H_
+#define _OSK_ARCH_TYPES_H_
+
+#include <linux/version.h> /* version detection */
+#include <linux/spinlock.h>
+#include <linux/mutex.h>
+#include <linux/rwsem.h>
+#include <linux/wait.h>
+#if MALI_LICENSE_IS_GPL
+#include <linux/hrtimer.h>
+#endif
+#include <linux/workqueue.h>
+#include <linux/mm_types.h>
+#include <asm/atomic.h>
+#include <linux/sched.h>
+
+#include <malisw/mali_malisw.h>
+
+/* This will have to agree with the OSU definition of the CPU page size: CONFIG_CPU_PAGE_SIZE_LOG2 */
+#define OSK_PAGE_SHIFT PAGE_SHIFT
+#define OSK_PAGE_SIZE PAGE_SIZE
+#define OSK_PAGE_MASK PAGE_MASK
+
+/** Number of CPU Cores */
+#define OSK_NUM_CPUS NR_CPUS
+
+/** Total amount of memory, in pages */
+#define OSK_MEM_PAGES totalram_pages
+
+/**
+ * @def OSK_L1_DCACHE_LINE_SIZE_LOG2
+ * @brief CPU L1 Data Cache Line size, in the form of a Logarithm to base 2.
+ *
+ * Must agree with the OSU definition: CONFIG_CPU_L1_DCACHE_LINE_SIZE_LOG2.
+ */
+#define OSK_L1_DCACHE_LINE_SIZE_LOG2 6
+
+/**
+ * @def OSK_L1_DCACHE_SIZE
+ * @brief CPU L1 Data Cache size, in bytes.
+ *
+ * Must agree with the OSU definition: CONFIG_CPU_L1_DCACHE_SIZE.
+ */
+#define OSK_L1_DCACHE_SIZE ((u32)0x00008000)
+
+
+#define OSK_MIN(x,y) min((x), (y))
+
+typedef spinlock_t osk_spinlock;
+typedef struct osk_spinlock_irq {
+ spinlock_t lock;
+ unsigned long flags;
+} osk_spinlock_irq;
+
+typedef struct mutex osk_mutex;
+typedef struct rw_semaphore osk_rwlock;
+
+typedef atomic_t osk_atomic;
+
+typedef struct osk_waitq
+{
+ /**
+ * set to MALI_TRUE when the waitq is signaled; set to MALI_FALSE when
+ * not signaled.
+ *
+ * This does not require locking for the setter, clearer and waiter.
+ * Here's why:
+ * - The only sensible use of a waitq is for operations to occur in
+ * strict order, without possibility of race between the callers of
+ * osk_waitq_set() and osk_waitq_clear() (the setter and clearer).
+ * Effectively, the clear must cause a later set to occur.
+ * - When the clear/set operations occur in different threads, some
+ * form of communication needs to happen for the clear to cause the
+ * signal to occur later.
+	 * - This itself \b must involve a memory barrier, and so the clear is
+	 *   guaranteed to be observed by the waiter as happening before the set
+	 *   (and the set is observed after the clear).
+ *
+ * For example, running a GPU job might clear a "there are jobs in
+	 * flight" waitq. Running the job must issue a register write (and
+ * likely a post to a workqueue due to IRQ handling). Those operations
+ * must cause a data barrier to occur. During IRQ handling/workqueue
+ * processing, we might then set the waitq, and this happens after the
+ * barrier. Hence, the set and clear are observed in strict order.
+ */
+ mali_bool signaled;
+	wait_queue_head_t wq; /**< threads waiting for flag to be signaled */
+} osk_waitq;
+
+typedef struct osk_timer {
+#if MALI_LICENSE_IS_GPL
+ struct hrtimer timer;
+ osk_timer_callback callback;
+ void *data;
+#else /* MALI_LICENSE_IS_GPL */
+ struct timer_list timer;
+#endif /* MALI_LICENSE_IS_GPL */
+#ifdef MALI_DEBUG
+ mali_bool active;
+#endif
+} osk_timer;
+
+typedef struct page osk_page;
+typedef struct vm_area_struct osk_vma;
+
+typedef unsigned long osk_ticks; /* 32-bit resolution deemed to be sufficient */
+
+/* Separate definitions for the following, to avoid wrapper functions for GPL drivers */
+#if MALI_LICENSE_IS_GPL
+typedef work_func_t osk_workq_fn;
+
+typedef struct work_struct osk_workq_work;
+
+typedef struct osk_workq
+{
+ struct workqueue_struct *wqs;
+} osk_workq;
+
+#else /* MALI_LICENSE_IS_GPL */
+
+/* Forward decls */
+typedef struct osk_workq_work osk_workq_work;
+typedef struct osk_workq osk_workq;
+
+typedef void (*osk_workq_fn)(osk_workq_work *);
+
+struct osk_workq_work
+{
+ struct work_struct os_work;
+ /* Non-GPL driver must manually track work */
+ osk_workq_fn actual_fn;
+ osk_workq *parent_wq;
+};
+
+struct osk_workq
+{
+ /* Non-GPL driver shouldn't flush the global workqueue, so we do a manual form of flushing */
+ spinlock_t active_items_lock;
+ u32 nr_active_items;
+ wait_queue_head_t waitq_zero_active_items;
+};
+
+#endif /* MALI_LICENSE_IS_GPL */
+
+typedef struct device osk_power_info;
+
+typedef struct timeval osk_timeval;
+
+/**
+ * Physical address
+ */
+typedef phys_addr_t osk_phy_addr;
+
+#endif /* _OSK_ARCH_TYPES_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_WAITQ_H
+#define _OSK_ARCH_WAITQ_H
+
+#ifndef _OSK_H_
+#error "Include mali_osk.h directly"
+#endif
+
+#include <linux/wait.h>
+#include <linux/sched.h>
+
+/*
+ * Note:
+ *
+ * We do not need locking on the signaled member (see its doxygen description)
+ */
+
+OSK_STATIC_INLINE osk_error osk_waitq_init(osk_waitq * const waitq)
+{
+ OSK_ASSERT(NULL != waitq);
+ waitq->signaled = MALI_FALSE;
+ init_waitqueue_head(&waitq->wq);
+ return OSK_ERR_NONE;
+}
+
+OSK_STATIC_INLINE void osk_waitq_wait(osk_waitq *waitq)
+{
+ OSK_ASSERT(NULL != waitq);
+ wait_event(waitq->wq, waitq->signaled != MALI_FALSE);
+}
+
+OSK_STATIC_INLINE void osk_waitq_set(osk_waitq *waitq)
+{
+ OSK_ASSERT(NULL != waitq);
+ waitq->signaled = MALI_TRUE;
+ wake_up(&waitq->wq);
+}
+
+OSK_STATIC_INLINE void osk_waitq_clear(osk_waitq *waitq)
+{
+ OSK_ASSERT(NULL != waitq);
+ waitq->signaled = MALI_FALSE;
+}
+
+OSK_STATIC_INLINE void osk_waitq_term(osk_waitq *waitq)
+{
+ OSK_ASSERT(NULL != waitq);
+ /* NOP on Linux */
+}
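+
+/*
+ * Illustrative usage (a sketch; the split of work between the two threads
+ * below is invented for the example):
+ *
+ * @code
+ * osk_waitq jobs_waitq;
+ * osk_waitq_init(&jobs_waitq);
+ *
+ * // Submitting thread: mark "jobs in flight" before kicking the hardware
+ * osk_waitq_clear(&jobs_waitq);
+ * // ... submit job; the IRQ/workqueue path later calls osk_waitq_set() ...
+ *
+ * // Waiting thread: blocks until the completion path signals the queue
+ * osk_waitq_wait(&jobs_waitq);
+ *
+ * osk_waitq_term(&jobs_waitq);
+ * @endcode
+ */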
+
+#endif /* _OSK_ARCH_WAITQ_H */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file
+ * Implementation of the OS abstraction layer for the kernel device driver
+ */
+
+#ifndef _OSK_ARCH_WORKQ_H_
+#define _OSK_ARCH_WORKQ_H_
+
+#include <linux/version.h> /* version detection */
+#include "osk/include/mali_osk_failure.h"
+
+#if MALI_LICENSE_IS_GPL == 0
+/* Wrapper function to allow flushing in non-GPL driver */
+void oskp_work_func_wrapper( struct work_struct *work );
+
+/* Forward decl */
+OSK_STATIC_INLINE void osk_workq_flush(osk_workq *wq);
+#endif /* MALI_LICENSE_IS_GPL == 0 */
+
+OSK_STATIC_INLINE osk_error osk_workq_init(osk_workq * const wq, const char *name, u32 flags)
+{
+#if (MALI_LICENSE_IS_GPL) && (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,36))
+ unsigned int wqflags = 0;
+#endif /*(MALI_LICENSE_IS_GPL) && (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,36))*/
+
+ OSK_ASSERT(NULL != wq);
+ OSK_ASSERT(NULL != name);
+ OSK_ASSERT(0 == (flags & ~(OSK_WORKQ_NON_REENTRANT|OSK_WORKQ_HIGH_PRIORITY|OSK_WORKQ_RESCUER)));
+
+#if MALI_LICENSE_IS_GPL
+ if(OSK_SIMULATE_FAILURE(OSK_OSK))
+ {
+ return OSK_ERR_FAIL;
+ }
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,36)
+ if (flags & OSK_WORKQ_NON_REENTRANT)
+ {
+ wqflags |= WQ_NON_REENTRANT;
+ }
+ if (flags & OSK_WORKQ_HIGH_PRIORITY)
+ {
+ wqflags |= WQ_HIGHPRI;
+ }
+ if (flags & OSK_WORKQ_RESCUER)
+ {
+ wqflags |= WQ_RESCUER;
+ }
+
+ wq->wqs = alloc_workqueue(name, wqflags, 1);
+#else
+ if (flags & OSK_WORKQ_NON_REENTRANT)
+ {
+ wq->wqs = create_singlethread_workqueue(name);
+ }
+ else
+ {
+ wq->wqs = create_workqueue(name);
+ }
+#endif
+ if (NULL == wq->wqs)
+ {
+ return OSK_ERR_FAIL;
+ }
+#else
+ /* Non-GPL driver uses global workqueue */
+ spin_lock_init( &wq->active_items_lock );
+ wq->nr_active_items = 0;
+ init_waitqueue_head( &wq->waitq_zero_active_items );
+#endif
+ return OSK_ERR_NONE;
+}
+
+OSK_STATIC_INLINE void osk_workq_term(osk_workq *wq)
+{
+ OSK_ASSERT(NULL != wq);
+#if MALI_LICENSE_IS_GPL
+ destroy_workqueue(wq->wqs);
+#else
+ /* Non-GPL driver uses global workqueue, so flush it manually */
+ osk_workq_flush(wq);
+#endif
+}
+
+OSK_STATIC_INLINE void osk_workq_work_init(osk_workq_work * const wk, osk_workq_fn fn)
+{
+ work_func_t func_to_call_first;
+ struct work_struct *os_work;
+ OSK_ASSERT(NULL != wk);
+ OSK_ASSERT(NULL != fn);
+ OSK_ASSERT(0 == object_is_on_stack(wk));
+
+#if MALI_LICENSE_IS_GPL
+ func_to_call_first = fn;
+ os_work = wk;
+#else /* MALI_LICENSE_IS_GPL */
+ func_to_call_first = &oskp_work_func_wrapper;
+ wk->actual_fn = fn;
+ os_work = &wk->os_work;
+#endif /* MALI_LICENSE_IS_GPL */
+
+ INIT_WORK(os_work, func_to_call_first);
+}
+
+OSK_STATIC_INLINE void osk_workq_work_init_on_stack(osk_workq_work * const wk, osk_workq_fn fn)
+{
+ work_func_t func_to_call_first;
+ struct work_struct *os_work;
+ OSK_ASSERT(NULL != wk);
+ OSK_ASSERT(NULL != fn);
+ OSK_ASSERT(0 != object_is_on_stack(wk));
+
+#if MALI_LICENSE_IS_GPL
+ func_to_call_first = fn;
+ os_work = wk;
+#else /* MALI_LICENSE_IS_GPL */
+ func_to_call_first = &oskp_work_func_wrapper;
+ wk->actual_fn = fn;
+ os_work = &wk->os_work;
+#endif /* MALI_LICENSE_IS_GPL */
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,37)
+ INIT_WORK_ONSTACK(os_work, func_to_call_first);
+#else
+ INIT_WORK_ON_STACK(os_work, func_to_call_first);
+#endif
+}
+
+OSK_STATIC_INLINE void osk_workq_submit(osk_workq *wq, osk_workq_work * const wk)
+{
+ OSK_ASSERT(NULL != wk);
+ OSK_ASSERT(NULL != wq);
+
+#if MALI_LICENSE_IS_GPL
+ queue_work(wq->wqs, wk);
+#else
+ /* Non-GPL driver uses global workqueue */
+ {
+ unsigned long flags;
+
+ wk->parent_wq = wq;
+
+ spin_lock_irqsave( &wq->active_items_lock, flags );
+ ++(wq->nr_active_items);
+ spin_unlock_irqrestore( &wq->active_items_lock, flags );
+ schedule_work(&wk->os_work);
+ }
+#endif
+}
+
+OSK_STATIC_INLINE void osk_workq_flush(osk_workq *wq)
+{
+ OSK_ASSERT(NULL != wq);
+
+#if MALI_LICENSE_IS_GPL
+ flush_workqueue(wq->wqs);
+#else
+ /* Non-GPL driver uses global workqueue, but we shouldn't use flush_scheduled_work() */
+
+ /* Locking is required here, to ensure that the wait_queue object doesn't
+ * disappear from the work item. This is the sequence we must guard
+ * against:
+ * - work item thread (signaller) atomic-sets nr_active_items = 0
+ * - signaller gets scheduled out
+ * - another thread (waiter) calls osk_workq_term()
+ * - ...which calls osk_workq_flush()
+ * - ...which calls wait_event(), and immediately returns without blocking
+ * - osk_workq_term() completes
+ * - caller (waiter) frees the memory backing the osk_workq
+ * - signaller gets scheduled back in
+ * - tries to call wake_up() on a freed address - error!
+ */
+ {
+ unsigned long flags;
+ spin_lock_irqsave( &wq->active_items_lock, flags );
+ while( wq->nr_active_items != 0 )
+ {
+ spin_unlock_irqrestore( &wq->active_items_lock, flags );
+ wait_event( wq->waitq_zero_active_items, (wq->nr_active_items == 0) );
+ spin_lock_irqsave( &wq->active_items_lock, flags );
+ }
+ spin_unlock_irqrestore( &wq->active_items_lock, flags );
+ }
+#endif
+}
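+
+/*
+ * Illustrative usage (a sketch; the names are invented for the example and
+ * error returns are not checked):
+ *
+ * @code
+ * static void my_work_fn(osk_workq_work *wk)
+ * {
+ *     // deferred processing runs here, in process context
+ * }
+ *
+ * static osk_workq wq;
+ * static osk_workq_work wk; // must not live on the stack (see the assert in osk_workq_work_init)
+ *
+ * osk_workq_init(&wq, "my_wq", OSK_WORKQ_NON_REENTRANT);
+ * osk_workq_work_init(&wk, &my_work_fn);
+ * osk_workq_submit(&wq, &wk);
+ * osk_workq_flush(&wq); // wait for my_work_fn to finish
+ * osk_workq_term(&wq);
+ * @endcode
+ */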
+
+
+#endif /* _OSK_ARCH_WORKQ_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#include <osk/mali_osk.h>
+
+oskp_debug_assert_cb oskp_debug_assert_registered_cb =
+{
+ NULL,
+ NULL
+};
+
+void oskp_debug_print(const char *format, ...)
+{
+ va_list args;
+ va_start(args, format);
+ oskp_validate_format_string(format);
+ vprintk(format, args);
+ va_end(args);
+}
+
+s32 osk_snprintf(char *str, size_t size, const char *format, ...)
+{
+ va_list args;
+ s32 ret;
+ va_start(args, format);
+ oskp_validate_format_string(format);
+ ret = vsnprintf(str, size, format, args);
+ va_end(args);
+ return ret;
+}
+
+void osk_debug_assert_register_hook( osk_debug_assert_hook *func, void *param )
+{
+ oskp_debug_assert_registered_cb.func = func;
+ oskp_debug_assert_registered_cb.param = param;
+}
+
+void oskp_debug_assert_call_hook( void )
+{
+ if ( oskp_debug_assert_registered_cb.func != NULL )
+ {
+ oskp_debug_assert_registered_cb.func( oskp_debug_assert_registered_cb.param );
+ }
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#include <osk/mali_osk.h>
+
+#if MALI_DEBUG != 0
+#include <linux/delay.h>
+
+#define TIMER_PERIOD_NS 100
+#define TIMER_TEST_TIME_MS 1000
+typedef struct oskp_time_test
+{
+ osk_timer timer;
+ u32 val;
+ mali_bool should_stop;
+} oskp_time_test;
+
+static mali_bool oskp_timer_has_been_checked = MALI_FALSE;
+#endif
+
+enum hrtimer_restart oskp_timer_callback_wrapper( struct hrtimer * hr_timer )
+{
+ osk_timer *tim;
+
+ tim = CONTAINER_OF( hr_timer, osk_timer, timer );
+ tim->callback( tim->data );
+
+ return HRTIMER_NORESTART;
+}
+
+#if MALI_DEBUG != 0
+static void oskp_check_timer_callback( void *data )
+{
+ oskp_time_test *time_tester = (oskp_time_test*)data;
+
+ (time_tester->val)++;
+
+ if ( time_tester->should_stop == MALI_FALSE )
+ {
+ osk_error err;
+ err = osk_timer_start_ns( &time_tester->timer, TIMER_PERIOD_NS );
+ if ( err != OSK_ERR_NONE )
+ {
+ OSK_PRINT_WARN( OSK_BASE_CORE, "OSK Timer couldn't restart - testing stats will be inaccurate" );
+ }
+ }
+}
+
+void oskp_debug_test_timer_stats( void )
+{
+ oskp_time_test time_tester;
+ osk_ticks start_timestamp;
+ osk_ticks end_timestamp;
+ u32 msec_elapsed;
+ osk_error err;
+
+ if ( oskp_timer_has_been_checked != MALI_FALSE )
+ {
+ return;
+ }
+ oskp_timer_has_been_checked = MALI_TRUE;
+
+ OSK_MEMSET( &time_tester, 0, sizeof(time_tester) );
+
+ err = osk_timer_on_stack_init( &time_tester.timer );
+ if ( err != OSK_ERR_NONE )
+ {
+ goto fail_init;
+ }
+
+ osk_timer_callback_set( &time_tester.timer, &oskp_check_timer_callback, &time_tester );
+
+ start_timestamp = osk_time_now();
+ err = osk_timer_start_ns( &time_tester.timer, TIMER_PERIOD_NS );
+ if ( err != OSK_ERR_NONE )
+ {
+ goto fail_start;
+ }
+
+ msleep( TIMER_TEST_TIME_MS );
+
+ time_tester.should_stop = MALI_TRUE;
+
+ osk_timer_stop( &time_tester.timer );
+ end_timestamp = osk_time_now();
+
+ msec_elapsed = osk_time_elapsed( start_timestamp, end_timestamp );
+
+ OSK_PRINT( OSK_BASE_CORE, "OSK Timer did %d iterations in %dms", time_tester.val, msec_elapsed );
+
+ osk_timer_on_stack_term( &time_tester.timer );
+ return;
+
+ fail_start:
+ osk_timer_on_stack_term( &time_tester.timer );
+ fail_init:
+ OSK_PRINT_WARN( OSK_BASE_CORE, "OSK Timer couldn't init/start for testing stats" );
+ return;
+}
+#endif
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#include <osk/mali_osk.h>
+
+#if MALI_LICENSE_IS_GPL == 0
+/* Wrapper function to allow flushing in Non-GPL driver */
+void oskp_work_func_wrapper( struct work_struct *work )
+{
+ osk_workq_work *osk_work = CONTAINER_OF( work, osk_workq_work, os_work );
+ osk_workq *parent_wq;
+ u32 val;
+ unsigned long flags;
+
+ parent_wq = osk_work->parent_wq;
+ osk_work->actual_fn( osk_work );
+ /* work and osk_work could disappear from this point on */
+
+	/* parent_wq itself can't disappear *yet*: osk_workq_term() must flush this work item before the queue is freed */
+
+ spin_lock_irqsave( &parent_wq->active_items_lock, flags );
+ val = --(parent_wq->nr_active_items);
+ /* The operations above and below must form an atomic operation themselves,
+ * hence the lock. See osk_workq_flush() for why */
+ if ( val == 0 )
+ {
+ wake_up( &parent_wq->waitq_zero_active_items );
+ }
+ spin_unlock_irqrestore( &parent_wq->active_items_lock, flags );
+
+	/* parent_wq may have disappeared by now */
+}
+
+#endif /* MALI_LICENSE_IS_GPL == 0 */
--- /dev/null
+# Copyright:
+# ----------------------------------------------------------------------------
+# This confidential and proprietary software may be used only as authorized
+# by a licensing agreement from ARM Limited.
+# (C) COPYRIGHT 2010-2012 ARM Limited, ALL RIGHTS RESERVED
+# The entire notice above must be reproduced on all authorized copies and
+# copies may only be made to the extent permitted by a licensing agreement
+# from ARM Limited.
+# ----------------------------------------------------------------------------
+#
+from os.path import basename
+
+Import('env')
+
+# Clone the environment so changes don't affect other build files
+env_osk = env.Clone()
+
+# basenames of files to exclude
+osk_excludes = []
+
+# Do not build mali_osk_timers in commercial build
+if env['mali_license_is_gpl'] == '0':
+ osk_excludes.append('mali_osk_timers.c')
+
+# Source files required for the OSK. We include a "#" in the Glob expression
+# to cause SCons to look in the directory relative to that in which the SCons
+# command is executed; otherwise, it starts looking for the C source files in
+# the variant directory and will fail to spot changes as the files are not
+# present there.
+osk_src = [Glob('*.c'), Glob('#osk/src/common/*.c')]
+
+# Remove any excluded files
+osk_src = filter(lambda node: basename(node.path) not in osk_excludes, env.Flatten(osk_src))
+
+env_osk.Append( CPPPATH='#osk/src/linux/include' )
+
+if env_osk['v'] != '1':
+ env_osk['MAKECOMSTR'] = '[MAKE] ${SOURCE.dir}'
+
+# Note: cleaning via the Linux kernel build system does not yet work
+if env_osk.GetOption('clean') :
+ makeAction=Action("cd ${SOURCE.dir} && make clean", '$MAKECOMSTR')
+else:
+ makeAction=Action("cd ${SOURCE.dir} && make MALI_DEBUG=${debug} MALI_HW_VERSION=${hwver} MALI_BASE_TRACK_MEMLEAK=${base_qa} MALI_LICENSE_IS_GPL=${mali_license_is_gpl} MALI_USE_UMP=${ump} MALI_UNIT_TEST=${unit} && cp lib.a $STATIC_LIB_PATH/libosk.a", '$MAKECOMSTR')
+
+# The target is libosk.a, built from the source in osk_src, via the action makeAction
+# libosk.a will be copied to $STATIC_LIB_PATH after being built by the standard Linux
+# kernel build system, after which it can be installed to the directory specified if
+# "libs_install" is set; this is done by LibTarget.
+cmd = env_osk.Command('$STATIC_LIB_PATH/libosk.a', osk_src, [makeAction])
+
+# Until we fathom out how to invoke the Linux build system to clean, we can use Clean
+# to remove generated files.
+
+patterns = ['*.o', '*.a', '.*.cmd', 'modules.order', '.tmp_versions', 'Module.symvers']
+
+for p in patterns:
+ Clean(cmd, Glob('#osk/src/linux/%s' % p))
+ Clean(cmd, Glob('#osk/src/common/%s' % p))
+
+env_osk.LibTarget('osk', cmd)
--- /dev/null
+# This confidential and proprietary software may be used only as
+# authorised by a licensing agreement from ARM Limited
+# (C) COPYRIGHT 2010-2011 ARM Limited
+# ALL RIGHTS RESERVED
+# The entire notice above must be reproduced on all authorised
+# copies and copies may only be made to the extent permitted
+# by a licensing agreement from ARM Limited.
+
+Import( 'env' )
+import os
+
+if env['backend'] == 'kernel':
+ SConscript(os.path.join(env['kernel'], 'sconscript'))
+else:
+ SConscript(os.path.join(env['os'] + '_userspace', 'sconscript'))
+ SConscript(os.path.join('userspace', 'sconscript'))
--- /dev/null
+obj-y += src/
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_uk.h
+ * Types and definitions that are common across OSs for both the user
+ * and kernel side of the User-Kernel interface.
+ */
+
+#ifndef _UK_H_
+#define _UK_H_
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif /* __cplusplus */
+
+#include <malisw/mali_stdtypes.h>
+
+/**
+ * @addtogroup base_api
+ * @{
+ */
+
+/**
+ * @defgroup uk_api User-Kernel Interface API
+ *
+ * The User-Kernel Interface abstracts the communication mechanism between the user and kernel-side code of device
+ * drivers developed as part of the Midgard DDK. Currently that includes the Base driver and the UMP driver.
+ *
+ * It exposes an OS independent API to user-side code (UKU) which routes function calls to an OS-independent
+ * kernel-side API (UKK) via an OS-specific communication mechanism.
+ *
+ * This API is internal to the Midgard DDK and is not exposed to any applications.
+ *
+ * @{
+ */
+
+/**
+ * @brief UK major version
+ */
+#define MALI_MODULE_UK_MAJOR 0
+
+/**
+ * @brief UK minor version
+ */
+#define MALI_MODULE_UK_MINOR 0
+
+/**
+ * These are identifiers for kernel-side drivers implementing a UK interface, aka UKK clients. The
+ * UK module maps this to an OS specific device name, e.g. "gpu_base" -> "GPU0:". Pass this
+ * identifier to the uku_open() function to select a UKK client.
+ *
+ * When a new UKK client driver is created, a new identifier needs to be added to the uk_client_id
+ * enumeration and the uku_open() implementations for the various OS ports need to be updated to
+ * provide a mapping of the identifier to the OS specific device name.
+ *
+ */
+typedef enum uk_client_id
+{
+ /**
+ * Value used to identify the Base driver UK client.
+ */
+ UK_CLIENT_MALI_T600_BASE,
+
+ /** The number of uk clients supported. This must be the last member of the enum */
+ UK_CLIENT_COUNT
+} uk_client_id;
+
+/**
+ * Each function callable through the UK interface has a unique number.
+ * Functions provided by UK clients start from number UK_FUNC_ID.
+ * Numbers below UK_FUNC_ID are used for internal UK functions.
+ */
+typedef enum uk_func
+{
+ UKP_FUNC_ID_CHECK_VERSION, /**< UKK Core internal function */
+ /**
+ * Each UK client numbers the functions they provide starting from
+ * number UK_FUNC_ID. This number is then eventually assigned to the
+ * id field of the uk_header structure when preparing to make a
+ * UK call. See your UK client for a list of their function numbers.
+ */
+ UK_FUNC_ID = 512
+} uk_func;
+
+/**
+ * Arguments for a UK call are stored in a structure. This structure consists
+ * of a fixed size header and a payload. The header carries a 32-bit number
+ * identifying the UK function to be called (see uk_func). When the UKK client
+ * receives this header and has executed the requested UK function, it will use
+ * the same header to store the result of the function in the form of a
+ * mali_error return code. The size of this structure is such that the
+ * first member of the payload following the header can be accessed efficiently
+ * on both 32 and 64-bit kernels, and the structure has the same size whether
+ * the kernel is 32 or 64-bit. The uk_kernel_size_type type should be defined
+ * accordingly in the OS specific mali_uk_os.h header file.
+ */
+typedef union uk_header
+{
+ /**
+ * 32-bit number identifying the UK function to be called.
+ * Also see uk_func.
+ */
+ u32 id;
+ /**
+ * The mali_error return code returned by the called UK function.
+ * See the specification of the particular UK function you are
+ * calling for the meaning of the error codes returned. All
+ * UK functions return MALI_ERROR_NONE on success.
+ */
+ mali_error ret;
+ /*
+ * Used to ensure 64-bit alignment of this union. Do not remove.
+ * This field is used for padding and does not need to be initialized.
+ */
+ u64 sizer;
+} uk_header;
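+
+/*
+ * Illustrative usage (a sketch; the argument structure and function number
+ * below are invented for the example, and the exact uku_call() signature is
+ * defined by the UKU API, not here):
+ *
+ * @code
+ * typedef struct client_foo_args
+ * {
+ *     uk_header header; // must be the first member
+ *     u32 some_input;   // payload follows the header
+ * } client_foo_args;
+ *
+ * client_foo_args args;
+ * args.header.id = UK_FUNC_ID + 0; // function number defined by the UK client
+ * args.some_input = 42;
+ * // ... uku_call(..., &args, sizeof(args)) ...
+ * // on return, args.header.ret holds the mali_error result
+ * @endcode
+ */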
+
+/**
+ * This structure carries a 16-bit major and minor number and is sent along with an internal UK call
+ * used during uku_open to identify the versions of the UK module in use by the user-side and kernel-side.
+ */
+typedef struct uku_version_check_args
+{
+ uk_header header; /**< UK call header */
+ u16 major; /**< This field carries the user-side major version on input and the kernel-side major version on output */
+ u16 minor; /**< This field carries the user-side minor version on input and the kernel-side minor version on output. */
+} uku_version_check_args;
+
+/** @} end group uk_api */
+
+/** @} */ /* end group base_api */
+
+#ifdef __cplusplus
+}
+#endif /* __cplusplus */
+
+#endif /* _UK_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_ukk.h
+ * Types and definitions that are common across OSs for the kernel side of the
+ * User-Kernel interface.
+ */
+
+#ifndef _UKK_H_
+#define _UKK_H_
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif /* __cplusplus */
+
+#include <osk/mali_osk.h>
+#include <malisw/mali_stdtypes.h>
+#include <uk/mali_uk.h>
+
+/**
+ * Incomplete definitions of ukk_session and ukk_call_context to satisfy the header file dependency in plat/mali_ukk_os.h
+ */
+typedef struct ukk_session ukk_session;
+typedef struct ukk_call_context ukk_call_context;
+#include <plat/mali_ukk_os.h> /* needed for ukkp_session definition */
+
+/**
+ * @addtogroup uk_api User-Kernel Interface API
+ * @{
+ */
+
+/**
+ * @addtogroup uk_api_kernel UKK (Kernel side)
+ * @{
+ *
+ * A kernel-side device driver implements the UK interface with the help of the UKK. The UKK
+ * is an OS independent API for kernel-side code to accept requests from the user-side to
+ * execute functions in the kernel-side device driver.
+ *
+ * A few definitions:
+ * - the kernel-side device driver implementing the UK interface is called the UKK client driver
+ * - the user-side library, application, or driver communicating with the UKK client driver is called the UKU client driver
+ * - the UKK API implementation is called the UKK core. The UKK core is linked with your UKK client driver.
+ *
+ * When a UKK client driver starts it needs to initialize the UKK core by calling ukk_start() and
+ * similarly ukk_stop() when the UKK client driver terminates. A UKK client driver is normally
+ * started by an operating system when a device boots.
+ *
+ * A UKU client driver provides services that are implemented either completely in user-space, kernel-space, or
+ * a combination of both. A UKU client driver makes UK calls to its UKK client driver to execute any functionality
+ * of services that is implemented in kernel-space.
+ *
+ * To make a UK call the UKU client driver needs to establish a connection with the UKK client driver. The UKU API
+ * provides the uku_open() call to establish this connection. Normally this results in the OS calling the open
+ * entry point of the UKK client driver. Here, the UKK client driver needs to initialize a UKK session object
+ * with ukk_session_init() to represent this connection, and register a function that will execute the UK calls
+ * requested over this connection (or UKK session). This function is called the UKK client dispatch handler.
+ *
+ * To prevent the UKU client driver executing an incompatible UK call implementation, the UKK session object
+ * stores the version of the UK calls supported by the function registered to execute the UK calls. As soon as the
+ * UKU client driver established a connection with the UKK client driver, uku_open() makes an internal UK call to
+ * request the version of the UK calls supported by the UKK client driver and will fail if the version expected
+ * by the UKU client driver is not compatible with the version supported by the UKK client driver. Internal UK calls
+ * are handled by the UKK core itself and don't reach the UKK client dispatch handler.
+ *
+ * Once the UKU client driver has established a (compatible) connection with the UKK client driver, the UKU
+ * client driver can execute UK calls by using uku_call(). This normally results in the OS calling the ioctl
+ * handler of your UKK client driver and presenting it with the UK call argument structure that was passed
+ * to uku_call(). It is the responsibility of the ioctl handler to copy the UK call argument structure from
+ * user-space to kernel-space and provide it to the UKK dispatch function, ukk_dispatch(), for execution. Depending
+ * on the particular UK call, the UKK dispatch function will either call the UKK client dispatch handler associated
+ * with the UKK session, or the UKK core dispatch handler if it is an UK internal call. When the UKK dispatch
+ * function returns, the return code of the UK call and the output and input/output parameters in the UK call argument
+ * structure will have been updated. Again, it is the responsibility of the ioctl handler to copy the updated
+ * UK call argument structure from kernel-space back to user-space.
+ *
+ * When the UKK client dispatch handler is called it is passed the UK call argument structure (along with a
+ * UK call context which is discussed later). The UKK client dispatch handler uses the uk_header structure in the
+ * UK call argument structure (which is always the first member in this structure) to determine which UK call in
+ * particular needs to be executed. The uk_header structure is a union of a 32-bit number containing the UK call
+ * function number (as defined by the UKK client driver) and a mali_error return value that will store the return
+ * value of the UK call. The 32-bit UK call function number is normally used to select a particular case in a switch
+ * statement that implements the particular UK call which finally stores the result of the UK call in the mali_error
+ * return value of the uk_header structure.
+ *
+ * A UK call implementation is provided with access to a number of objects it may need during the UK call through
+ * a UKK call context. This UKK call context currently only contains
+ * - a pointer to the UKK session for the UK call
+ *
+ * It is the responsibility of the ioctl handler to initialize a UKK call context using ukk_call_prepare() and pass
+ * it on to the UKK dispatch function. The UK call implementation then uses ukk_session_get() to retrieve the stored
+ * objects in the UKK call context. The UK call implementation normally uses the UKK session pointer returned from
+ * ukk_session_get() to access the UKK client driver's context in which the UKK session is embedded. For example:
+ * struct kbase_context {
+ * int some_kbase_context_data;
+ * int more_kbase_context_data;
+ * ukk_session ukk_session_member;
+ * } *kctx;
+ * kctx = CONTAINER_OF(ukk_session_get(ukk_call_ctx), kbase_context, ukk_session_member);
+ *
+ * A UK call may not use an argument structure with embedded pointers.
+ *
+ * All of this can be translated into the following minimal sample code for a UKK client driver:
+@code
+// Sample code for an imaginary UKK client driver 'TESTDRV' implementing the 'TESTDRV_UK_FOO_FUNC' UK call
+//
+#define TESTDRV_VERSION_MAJOR 0
+#define TESTDRV_VERSION_MINOR 1
+
+typedef enum testdrv_uk_function
+{
+ TESTDRV_UK_FOO_FUNC = (UK_FUNC_ID + 0)
+} testdrv_uk_function;
+
+typedef struct testdrv_uk_foo_args
+{
+ uk_header header;
+ int counters[10]; // input
+ int prev_counters[10]; // output
+} testdrv_uk_foo_args;
+
+typedef struct testdrv_session
+{
+ int counters[10];
+ ukk_session ukk_session_obj;
+} testdrv_session;
+
+void testdrv_open(os_driver_context *osctx)
+{
+ testdrv_session *ts;
+ ts = osk_malloc(sizeof(*ts));
+ ukk_session_init(&ts->ukk_session_obj, testdrv_ukk_dispatch, TESTDRV_VERSION_MAJOR, TESTDRV_VERSION_MINOR);
+ osctx->some_field = ts;
+}
+void testdrv_close(os_driver_context *osctx)
+{
+	testdrv_session *ts = osctx->some_field;
+	ukk_session_term(&ts->ukk_session_obj);
+ osk_free(ts);
+ osctx->some_field = NULL;
+}
+void testdrv_ioctl(os_driver_context *osctx, void *user_arg, u32 args_size)
+{
+ testdrv_session *ts = osctx->some_field;
+ ukk_call_context call_ctx;
+ void *kernel_arg;
+
+ kernel_arg = os_copy_to_kernel_space(user_arg, args_size);
+
+ ukk_call_prepare(&call_ctx, &ts->ukk_session_obj);
+
+ ukk_dispatch(&call_ctx, kernel_arg, args_size);
+
+ os_copy_to_user_space(user_arg, kernel_arg, args_size);
+}
+mali_error testdrv_ukk_dispatch(ukk_call_context *call_ctx, void *arg, u32 args_size)
+{
+ uk_header *header = arg;
+ mali_error ret = MALI_ERROR_FUNCTION_FAILED;
+
+ switch(header->id)
+ {
+ case TESTDRV_UK_FOO_FUNC:
+ {
+ testdrv_uk_foo_args *foo_args = arg;
+ if (sizeof(*foo_args) == args_size)
+ {
+ mali_error result;
+ result = foo(call_ctx, foo_args);
+ header->ret = result;
+ ret = MALI_ERROR_NONE;
+ }
+ break;
+ }
+ }
+ return ret;
+}
+mali_error foo(ukk_call_context *call_ctx, testdrv_uk_foo_args *args) {
+ // foo updates the counters in the testdrv_session object and returns the old counters
+ testdrv_session *session = CONTAINER_OF(ukk_session_get(call_ctx), testdrv_session, ukk_session_obj);
+ memcpy(&args->prev_counters, session->counters, 10 * sizeof(int));
+ memcpy(&session->counters, &args->counters, 10 * sizeof(int));
+ return MALI_ERROR_NONE;
+}
+@endcode
+*/
+
+/**
+ * Maximum size of UK call argument structure supported by UKK clients
+ */
+#define UKK_CALL_MAX_SIZE 512
+
+/**
+ * @brief Dispatch callback of UKK client
+ *
+ * The UKK client's dispatch function is called by UKK core ukk_dispatch()
+ *
+ * The UKK client's dispatch function should return MALI_ERROR_NONE when it
+ * has accepted and executed the UK call. If the UK call is not recognized it
+ * should return MALI_ERROR_FUNCTION_FAILED.
+ *
+ * An example of a piece of code from a UKK client dispatch function:
+ * @code
+ * uk_header *header = (uk_header *)arg;
+ * switch(header->id) {
+ * case MYCLIENT_FUNCTION: {
+ * if (args_size != sizeof(myclient_function_args)) {
+ * return MALI_ERROR_FUNCTION_FAILED; // argument structure size mismatch
+ * } else {
+ * // execute UK call and store result back in header
+ * header->ret = do_my_client_function(ukk_ctx, args);
+ * return MALI_ERROR_NONE;
+ *         }
+ *     }
+ *     default:
+ * return MALI_ERROR_FUNCTION_FAILED; // UK call function number not recognized
+ * }
+ * @endcode
+ *
+ * For details, see ukk_dispatch().
+ *
+ * Debug builds will assert when a NULL pointer is passed for ukk_ctx or args,
+ * or when args_size < sizeof(uk_header).
+ *
+ * @param[in] ukk_ctx Pointer to a call context
+ * @param[in,out] args Pointer to a argument structure of a UK call
+ * @param[in] args_size Size of the argument structure (in bytes)
+ * @return MALI_ERROR_NONE on success. MALI_ERROR_FUNCTION_FAILED when the UK call was not recognized.
+ */
+typedef mali_error (*ukk_dispatch_function)(ukk_call_context * const ukk_ctx, void * const args, u32 args_size);
+
+/**
+ * Driver session related data for the UKK core.
+ */
+struct ukk_session
+{
+ /**
+ * Session data stored by the OS specific implementation of the UKK core
+ */
+ ukkp_session internal_session;
+
+ /**
+ * UKK client version supported by the call backs provided below - major number of version
+ */
+ u16 version_major;
+
+ /**
+ * UKK client version supported by the call backs provided below - minor number of version
+ */
+ u16 version_minor;
+
+ /**
+ * Function in UKK client that executes UK calls for this UKK session, see
+ * ukk_dispatch_function.
+ */
+ ukk_dispatch_function dispatch;
+};
+
+/**
+ * Structure containing context data passed in to each UK call. Before each UK call it is initialized
+ * by the ukk_call_prepare() function. UK calls can retrieve the context data using the function
+ * ukk_session_get().
+ */
+struct ukk_call_context
+{
+ /**
+ * Pointer to UKK core session data.
+ */
+ ukk_session *ukk_session;
+};
+
+/**
+ * @brief UKK core startup
+ *
+ * Must be called during the UKK client driver initialization before accessing any UKK provided functionality.
+ *
+ * @return MALI_ERROR_NONE on success. Any other value indicates failure.
+ */
+mali_error ukk_start(void);
+
+/**
+ * @brief UKK core shutdown
+ *
+ * Must be called during the UKK client driver termination to free any resources UKK might have allocated.
+ *
+ * After this has been called no UKK functionality may be accessed.
+ */
+void ukk_stop(void);
+
+/**
+ * @brief Initialize a UKK session
+ *
+ * When a UKK client driver is opened, a UKK session object needs to be initialized to
+ * store information specific to that session with the UKK client driver.
+ *
+ * This UKK session object is normally contained in a session specific data structure created
+ * by the OS specific open entry point of the UKK client driver. The entry point of the
+ * UKK client driver that receives requests from user space to execute UK calls will
+ * need to pass on a pointer to this UKK session object to the ukk_dispatch() function to
+ * execute a UK call for the active session.
+ *
+ * A UKK session supports executing UK calls for a particular version of the UKK client driver
+ * interface. A pointer to the dispatch function that will execute the UK calls needs to
+ * be passed to ukk_session_init(), along with the version (major and minor) of the UKK client
+ * driver interface that this dispatch function supports.
+ *
+ * When the UKK client driver is closed, the initialized UKK session object needs to be
+ * terminated. See ukk_session_term().
+ *
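+ * A minimal sketch of a UKK client driver's open entry point (the myclient_*
+ * names are illustrative, not part of the UKK API):
+ * @code
+ * typedef struct myclient_session
+ * {
+ *     ukk_session ukk;  // UKK session object embedded in the driver's session data
+ *     // ... other per-session driver state ...
+ * } myclient_session;
+ *
+ * mali_error myclient_open(myclient_session *session)
+ * {
+ *     // register the dispatch function and the interface version it supports
+ *     return ukk_session_init(&session->ukk, myclient_dispatch,
+ *                             MYCLIENT_UK_MAJOR, MYCLIENT_UK_MINOR);
+ * }
+ * @endcode
+ *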
+ * Debug builds will assert when a NULL pointer is passed for ukk_ctx or dispatch.
+ *
+ * @param[out] ukk_session Pointer to UKK session to initialize
+ * @param[in] dispatch Pointer to dispatch function to associate with the UKK session
+ * @param[in] version_major Dispatch function will handle UK calls for this major version
+ * @param[in] version_minor Dispatch function will handle UK calls for this minor version
+ * @return MALI_ERROR_NONE on success. Any other value indicates failure.
+ */
+mali_error ukk_session_init(ukk_session *ukk_session, ukk_dispatch_function dispatch, u16 version_major, u16 version_minor);
+
+/**
+ * @brief Terminates a UKK session
+ *
+ * Frees any resources allocated for the UKK session object. No UK calls for this session
+ * may be executing when calling this function. This function invalidates the UKK session
+ * object and must not be used anymore until it is initialized again with ukk_session_init().
+ *
+ * Debug builds will assert when a NULL pointer is passed for ukk_session.
+ *
+ * @param[in,out] ukk_session Pointer to UKK session to terminate
+ */
+void ukk_session_term(ukk_session *ukk_session);
+
+/**
+ * @brief Prepare a context in which to execute a UK call
+ *
+ * UK calls are passed a call context that allows them to get access to the UKK session data.
+ * Given a call context, UK calls use ukk_session_get() to get access to the UKK session data.
+ *
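+ * A minimal sketch of preparing a call context in the driver entry point that
+ * receives UK call requests (session is the driver's per-open data holding the
+ * UKK session; names are illustrative):
+ * @code
+ * ukk_call_context ukk_ctx;
+ * mali_error err;
+ *
+ * ukk_call_prepare(&ukk_ctx, &session->ukk);
+ * err = ukk_dispatch(&ukk_ctx, args, args_size);
+ * @endcode
+ *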
+ * Debug builds will assert when a NULL pointer is passed for ukk_ctx, ukk_session.
+ *
+ * @param[out] ukk_ctx Pointer to call context to initialize.
+ * @param[in] ukk_session Pointer to UKK session to associate with the call context
+ */
+void ukk_call_prepare(ukk_call_context * const ukk_ctx, ukk_session * const ukk_session);
+
+/**
+ * @brief Get the UKK session of a call context
+ *
+ * Returns the UKK session associated with a call context. See ukk_call_prepare.
+ *
+ * Debug builds will assert when a NULL pointer is passed for ukk_ctx.
+ *
+ * @param[in] ukk_ctx Pointer to call context
+ * @return Pointer to UKK session associated with the call context
+ */
+void *ukk_session_get(ukk_call_context * const ukk_ctx);
+
+/**
+ * @brief Copy data from user space to kernel space
+ *
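+ * Typically used by a UK call handler to pull a buffer referenced from the
+ * argument structure. A minimal sketch (user_ptr is an illustrative pointer
+ * received from user space):
+ * @code
+ * u32 local_copy[4];
+ * if (MALI_ERROR_NONE != ukk_copy_from_user(sizeof(local_copy), local_copy, user_ptr))
+ * {
+ *     return MALI_ERROR_FUNCTION_FAILED;
+ * }
+ * @endcode
+ *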
+ * @param[in] bytes Number of bytes to copy from @ref user_buffer to @ref kernel_buffer
+ * @param[out] kernel_buffer Pointer to data buffer in kernel space.
+ * @param[in] user_buffer Pointer to data buffer in user space.
+ *
+ * @return Returns MALI_ERROR_NONE on success.
+ */
+mali_error ukk_copy_from_user( size_t bytes, void * kernel_buffer, const void * const user_buffer );
+
+/**
+ * @brief Copy data from kernel space to user space
+ *
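+ * The mirror of ukk_copy_from_user(), typically used by a UK call handler to
+ * return a buffer to user space. A minimal sketch (user_ptr is an illustrative
+ * pointer received from user space):
+ * @code
+ * u32 results[4];
+ * // ... fill results ...
+ * if (MALI_ERROR_NONE != ukk_copy_to_user(sizeof(results), user_ptr, results))
+ * {
+ *     return MALI_ERROR_FUNCTION_FAILED;
+ * }
+ * @endcode
+ *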
+ * @param[in] bytes Number of bytes to copy from @ref kernel_buffer to @ref user_buffer
+ * @param[out] user_buffer Pointer to data buffer in user space.
+ * @param[in] kernel_buffer Pointer to data buffer in kernel space.
+ *
+ * @return Returns MALI_ERROR_NONE on success.
+ */
+mali_error ukk_copy_to_user( size_t bytes, void * user_buffer, const void * const kernel_buffer );
+
+/**
+ * @brief Dispatch a UK call
+ *
+ * Dispatches the UK call to the UKK client, or to the UKK core in the case of an internal UK call.
+ * The id field in the header of the argument structure identifies which UK call needs to be
+ * executed. Any UK call with an id equal to or larger than UK_FUNC_ID is dispatched to the UKK client.
+ *
+ * If the UK call was accepted by the dispatch handler of the UKK client or UKK core, this function returns
+ * with MALI_ERROR_NONE and the result of executing the UK call is stored in the header.ret field
+ * of the argument structure. This function returns MALI_ERROR_FUNCTION_FAILED when the UK call is not
+ * accepted by the dispatch handler.
+ *
+ * If a UK call fails while executing in the dispatch handler of the UKK client or UKK core
+ * the UK call is responsible for cleaning up any resources it allocated up to the point a failure
+ * occurred.
+ *
+ * Before accepting a UK call, the dispatch handler of the UKK client or UKK core should compare
+ * the size of the argument structure based on the function id in header.id with the args_size
+ * parameter. Only if they match should the UK call be attempted; otherwise
+ * MALI_ERROR_FUNCTION_FAILED should be returned.
+ *
+ * An example of a piece of code from a UKK client dispatch handler:
+ * @code
+ * uk_header *header = (uk_header *)args;
+ * switch(header->id) {
+ * case MYCLIENT_FUNCTION:
+ *     if (args_size != sizeof(myclient_function_args)) {
+ *         return MALI_ERROR_FUNCTION_FAILED; // argument structure size mismatch
+ *     }
+ *     // execute UK call and store result back in header
+ *     header->ret = do_my_client_function(ukk_ctx, args);
+ *     return MALI_ERROR_NONE;
+ * default:
+ *     return MALI_ERROR_FUNCTION_FAILED; // UK call function number not recognized
+ * }
+ * @endcode
+ *
+ * Debug builds will assert when a NULL pointer is passed for ukk_ctx, args or args_size
+ * is < sizeof(uk_header).
+ *
+ * @param[in] ukk_ctx Pointer to a call context
+ * @param[in,out] args     Pointer to an argument structure of a UK call
+ * @param[in] args_size Size of the argument structure (in bytes)
+ * @return MALI_ERROR_NONE on success. MALI_ERROR_FUNCTION_FAILED when the UK call was not accepted
+ * by the dispatch handler of the UKK client or UKK core, or the passed in argument structure
+ * is not large enough to store the required uk_header structure.
+ */
+mali_error ukk_dispatch(ukk_call_context * const ukk_ctx, void * const args, u32 args_size);
+
+/** @} end group uk_api_kernel */
+
+/** @} end group uk_api */
+
+#ifdef __cplusplus
+}
+#endif /* __cplusplus */
+
+#endif /* _UKK_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_uku.h
+ * Types and definitions that are common across OSs for the user side of the
+ * User-Kernel interface.
+ */
+
+#ifndef _UKU_H_
+#define _UKU_H_
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif /* __cplusplus */
+
+#include <malisw/mali_stdtypes.h>
+#include <uk/mali_uk.h>
+#include <plat/mali_uk_os.h>
+
+/**
+ * @addtogroup uk_api User-Kernel Interface API
+ * @{
+ */
+
+/**
+ * @defgroup uk_api_user UKU (User side)
+ *
+ * The UKU is an OS independent API for user-side code which provides functions to
+ * - open and close a UKK client driver, a kernel-side device driver implementing the UK interface
+ * - call functions inside a UKK client driver
+ *
+ * The code snippets below show an example using the UKU API:
+ *
+ * Start with opening the imaginary Midgard Base UKK client driver
+ *
+@code
+ mali_error ret;
+ uku_context uku_ctx;
+ uku_client_version client_version;
+ uku_open_status open_status;
+
+ // open a user-kernel context
+ client_version.major = TESTDRV_UK_MAJOR;
+ client_version.minor = TESTDRV_UK_MINOR;
+	open_status = uku_open(UK_CLIENT_MALI_T600_BASE, 0, &client_version, &uku_ctx);
+ if (UKU_OPEN_OK != open_status)
+ {
+ mali_tpi_printf("failed to open a user-kernel context\n");
+ goto cleanup;
+ }
+@endcode
+ *
+ * We are going to call a function foo in the Midgard Base UKK client driver. For sample purposes this
+ * function foo will simply double the provided input value.
+ *
+ * First we set up the header of the argument structure to identify that we are calling
+ * the function foo in the Midgard Base UKK client driver, identified by the id BASE_UK_FOO_FUNC.
+ *
+@code
+ base_uk_foo_args foo_args;
+ foo_args.header.id = BASE_UK_FOO_FUNC;
+@endcode
+ *
+ * Followed by the setup of the arguments for the function foo.
+ *
+@code
+ foo_args.input_value = 48;
+@endcode
+ *
+ * Then we use UKU to actually call the function in the Midgard Base UKK client driver.
+ *
+@code
+ // call kernel-side foo function
+	ret = uku_call(&uku_ctx, &foo_args, sizeof(foo_args));
+ if (MALI_ERROR_NONE == ret && MALI_ERROR_NONE == foo_args.header.ret)
+ {
+@endcode
+ *
+ * If the uku_call() function succeeded, we can check the return code of the foo function. The return
+ * value is returned in the ret field of the header. If it succeeded, we verify here that the
+ * foo function indeed doubled the input value.
+ *
+@code
+ // retrieve data returned by kernel-side foo function
+ mali_tpi_printf("foo returned value %d\n", foo_args.output_value);
+
+ // output_value should match input_value * 2
+ if (foo_args.output_value != foo_args.input_value * 2)
+ {
+ // data didn't match: test fails
+ ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ }
+@endcode
+ *
+ * When we are done, we close the Midgard Base UKK client.
+ *
+@code
+cleanup:
+ // close an opened user-kernel context
+ if (UKU_OPEN_OK == open_status)
+ {
+		uku_close(&uku_ctx);
+ }
+@endcode
+ * @{
+ */
+
+/**
+ * User-side representation of a connection with a UKK client.
+ * See uku_open for opening a connection, and uku_close for closing
+ * a connection.
+ */
+/* Some compilers require a forward declaration of the structure */
+struct uku_context;
+typedef struct uku_context uku_context;
+
+/**
+ * Status returned from uku_open() as a result of trying to open a connection to a UKK client
+ */
+typedef enum uku_open_status
+{
+	UKU_OPEN_OK,           /**< UKK client opened successfully and versions are compatible */
+	UKU_OPEN_INCOMPATIBLE, /**< UKK client opened successfully but versions are not compatible and the UKK client
+ connection was closed. */
+ UKU_OPEN_FAILED /**< Could not open UKK client or UKK client failed to perform version check */
+} uku_open_status;
+
+/**
+ * This structure carries a 16-bit major and minor number and is provided to a uku_open call.
+ * On input it identifies the version of the user-side UK client; on output it carries the
+ * version of the kernel-side UK client. See uku_open.
+ */
+typedef struct uku_client_version
+{
+ /**
+ * 16-bit number identifying the major version. Interfaces with different major version numbers
+ * are incompatible. This field carries the user-side major version on input and the kernel-side
+ * major version on output.
+ */
+ u16 major;
+ /**
+ * 16-bit number identifying the minor version. A user-side interface minor version that is equal
+ * to or less than the kernel-side interface minor version is compatible. A user-side interface
+ * minor version that is greater than the kernel-side interface minor version is incompatible
+ * (as it is requesting more functionality than exists). This field carries the user-side minor
+ * version on input and the kernel-side minor version on output.
+ */
+ u16 minor;
+} uku_client_version;
+
+/**
+ * @brief Open a connection to a UKK client
+ *
+ * The User-Kernel interface communicates with a kernel-side driver over
+ * an OS specific communication channel. A UKU context object stores the
+ * necessary OS specific objects and state information to represent this
+ * OS specific communication channel. A UKU context, defined by the
+ * uku_context type is passed in as the first argument to nearly all
+ * UKU functions. These UKU functions expect the UKU context to be valid;
+ * an invalid UKU context will trigger a debug assert (in debug builds).
+ *
+ * The function uku_open() opens a connection to a kernel-side driver with
+ * a User-Kernel interface, aka UKK client, and returns an initialized
+ * UKU context. The function uku_close() closes the connection to the UKK
+ * client.
+ *
+ * The kernel-side driver may support multiple instances and the particular
+ * instance that needs to be opened is selected by the instance argument.
+ * An instance of a kernel-side driver is normally associated with a particular
+ * instance of a physical hardware block, e.g. each instance corresponds
+ * to one of the ports of a UART controller. See the specifics of the
+ * kernel-side driver to find out which instances are supported.
+ *
+ * As the user and kernel-side of the UK interface are physically two
+ * different entities, they might end up using different versions of the
+ * UK interface and therefore part of the opening process makes an
+ * internal UK call to verify if the versions are compatible. A version
+ * has a major and minor part. Interfaces with different major version
+ * numbers are incompatible. A user-side interface minor version that is equal
+ * to or less than the kernel-side interface minor version is compatible.
+ * A user-side interface minor version that is greater than the kernel-side
+ * interface minor version is incompatible (as it is requesting more
+ * functionality than exists).
+ *
+ * Each UKK client has a unique id as defined by the uk_client_id
+ * enumeration. These IDs are mapped to OS specific device file names
+ * that refer to their respective kernel device drivers. Any new UKK client
+ * needs to be added to the uk_client_id enumeration.
+ *
+ * A UKU context must be shareable between threads. It may be shareable
+ * between processes. This attribute is mostly dependent on the OS specific
+ * communication channel used to communicate with the UKK client. When
+ * multiple threads use the same UKU context only one should be responsible
+ * for closing it and ensuring the other threads are not using it anymore.
+ *
+ * Opening a UKU context is considered to be an expensive operation, most
+ * likely resulting in loading a kernel device driver when opened for
+ * the first time in a system. The specific kernel device driver to be
+ * opened is defined by the OS specific implementation of this function and
+ * is not configurable.
+ *
+ * Once a UKU context is opened and in use by the user-kernel interface it
+ * is expected to operate without error. Any communication error will not
+ * result in an attempt by the user-kernel interface implementation to
+ * re-establish the OS specific communication channel and is considered
+ * to be a non-recoverable fault.
+ *
+ * Notes on Base driver context and UKU context
+ *
+ * A UKU context is opened each time a base driver context is created.
+ * A UKU context and base driver context therefore have a 1:1 relationship.
+ * A base driver context represents an isolated GPU address space and because
+ * of the 1:1 relationship with a UKU context, a UKU context can also be seen
+ * to present an isolated GPU address space. Each process is currently
+ * expected to create one base driver context (and therefore open one UKU
+ * context per process), but this might change, having multiple base driver
+ * contexts open per process, in case we need to support WebGL, where each
+ * GLES context must use a separate GPU address space inside the web browser
+ * to prevent seperate brower tabs from interfering with each other.
+ *
+ * Debug builds will assert when a NULL pointer is passed for the version or
+ * uku_ctx parameters, or when an unknown enumerated value for the id parameter
+ * is used.
+ *
+ * @param[in] id UKK client identifier, see uk_client_id.
+ * @param[in] instance instance number (0..) of the UKK client driver
+ * @param[in,out] version Version of the user-side UK client on input. On output it contains
+ * the version of the kernel-side UKK client when uku_open returns UKU_OPEN_OK or
+ * UKU_OPEN_INCOMPATIBLE; otherwise the output value is undefined.
+ * @param[out] uku_ctx Pointer to User-Kernel context to initialize
+ * @return UKU_OPEN_OK when the connection to the UKK client was successful.
+ * @return UKU_OPEN_FAILED when the connection to the UKK client could not be established,
+ * or the UKK client failed to perform version verification.
+ * @return UKU_OPEN_INCOMPATIBLE when the version of the UKK and UKU clients are incompatible.
+ */
+uku_open_status uku_open(uk_client_id id, u32 instance, uku_client_version *version, uku_context *uku_ctx) CHECK_RESULT;
+
+/**
+ * @brief Returns OS specific driver context from UKU context
+ *
+ * The UKU context abstracts the connection to a kernel-side driver. If OS specific code
+ * needs to communicate with this kernel-side driver directly, it can use this function
+ * to retrieve the OS specific object hidden by the UKU context. This object must only
+ * be used while the UKU context is open and must only be used by OS specific code.
+ *
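+ * A minimal sketch for Linux, where the returned object is a pointer to the
+ * underlying file descriptor (see the OS specific implementation):
+ * @code
+ * int fd = *(int *)uku_driver_context(uku_ctx);
+ * @endcode
+ *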
+ * Debug builds will assert when a NULL pointer is passed for uku_ctx.
+ *
+ * @param[in] uku_ctx Pointer to a valid User-Kernel context. See uku_open.
+ * @return OS specific driver context, e.g. for Linux this is a pointer to the
+ * integer file descriptor.
+ *
+ */
+void *uku_driver_context(uku_context *uku_ctx) CHECK_RESULT;
+
+/**
+ * @brief Closes a connection to a UKK client
+ *
+ * Closes a previously opened connection to a kernel-side driver with a
+ * User-Kernel interface, aka UKK client. The UKU context uku_ctx
+ * has now become invalid.
+ *
+ * Before calling this function, any UKU function using the UKU context
+ * uku_ctx must have finished. The UKU context uku_ctx must not be in
+ * use.
+ *
+ * Debug builds will assert when a NULL pointer is passed for uku_ctx.
+ *
+ * @param[in] uku_ctx Pointer to a valid User-Kernel context. See uku_open.
+ */
+void uku_close(uku_context * const uku_ctx);
+
+/**
+ * @brief Calls a function in a UKK client
+ *
+ * A UKK client defines a structure for each function callable over the UK
+ * interface. The structure starts with a header field of type uk_header,
+ * followed by the arguments for the function, e.g.
+ *
+ * @code
+ * typedef struct base_uk_foo_args
+ * {
+ * uk_header header; // first member is the header
+ * int n; // followed by function arguments
+ * int doubled_n;
+ * ...
+ * } base_uk_foo_args;
+ * @endcode
+ *
+ * The header.id field identifies the function to be called. See the UKK
+ * client documentation for a list of available functions and the structure
+ * definitions associated with them.
+ *
+ * The arguments in the structure can be of type input, input/output or
+ * output. All input and input/output arguments must be initialized in the
+ * structure before calling the uku_call() function. Memory pointed to by
+ * pointers in the structure should at least remain allocated until uku_call
+ * returns.
+ *
+ * When uku_call has successfully executed the function in the UKK client,
+ * it stores the return code of the function in the header.ret field, and
+ * only in this case the output and input/output members are considered to
+ * be valid in the structure.
+ *
+ * For example, to call function 'foo' which simply doubles the supplied
+ * argument 'n':
+ * @code
+ * base_uk_foo_args args;
+ * mali_error ret;
+ *
+ * args.header.id = BASE_UK_FOO_FUNC;
+ * args.n = 10;
+ *
+ * ret = uku_call(uku_ctx, &args, sizeof(args));
+ * if (MALI_ERROR_NONE == ret)
+ * {
+ * if (MALI_ERROR_NONE == args.header.ret)
+ * {
+ *         printf("%d*2=%d\n", args.n, args.doubled_n);
+ * }
+ * }
+ * @endcode
+ *
+ * Debug builds will assert when a NULL pointer is passed for uku_ctx or
+ * args, or args_size < sizeof(uk_header).
+ *
+ * @param[in] uku_ctx Pointer to a valid User-Kernel context. See uku_open.
+ * @param[in,out] args Pointer to an argument structure associated with
+ * the function to be called in the UKK client.
+ * @param[in] args_size Size of the argument structure in bytes
+ * @return MALI_ERROR_NONE on success. Any other value indicates failure,
+ * and the structure pointed to by args may contain invalid information.
+ */
+mali_error uku_call(uku_context *uku_ctx, void *args, u32 args_size) CHECK_RESULT;
+
+
+/** @} end group uk_api_user */
+
+/** @} end group uk_api */
+
+#ifdef __cplusplus
+}
+#endif /* __cplusplus */
+
+#endif /* _UKU_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/ioctl.h>
+#include <pthread.h>
+#include <errno.h>
+#include <linux/version.h>
+
+#include <uk/mali_uku.h>
+#include <cdbg/mali_cdbg.h>
+#include <cutils/cstr/mali_cutils_cstr.h>
+#include <cutils/linked_list/mali_cutils_slist.h>
+#include <stdlib/mali_stdlib.h>
+
+#define LINUX_MALI_DEVICE_NAME "/dev/mali"
+
+#if CSTD_OS_ANDROID == 0 /* Android 2.3.x doesn't support pthread_atfork */
+
+/** Datastructures to keep track of all open file descriptors to UK clients for this process */
+struct _fd_admin
+{
+	pthread_mutex_t fd_admin_mutex; /**< protects access to this data structure */
+	mali_bool atfork_registered;    /**< MALI_TRUE when an atfork handler has been installed for this process */
+	cutils_slist fd_list;           /**< list tracking all open file descriptors to UK clients for this process */
+};
+STATIC struct _fd_admin fd_admin =
+{
+ PTHREAD_MUTEX_INITIALIZER,
+ MALI_FALSE,
+ {{NULL,NULL}}
+};
+typedef struct fd_list_item
+{
+ cutils_slist_item link;
+ int fd;
+} fd_list_item;
+
+
+/** atfork handler called in child's context to close all open file descriptors to UK clients in the child */
+STATIC void ukup_fd_child_atfork_handler(void)
+{
+ fd_list_item *item;
+
+ /* close all file descriptors registered with ukup_add_file_descriptor() */
+ CUTILS_SLIST_FOREACH(&fd_admin.fd_list, fd_list_item, link, item)
+ {
+ close(item->fd);
+ }
+}
+
+/** removes and closes a file descriptor added to the list of open file descriptors to UK clients
+ * by ukup_fd_add() earlier
+ */
+STATIC void ukup_fd_remove_and_close(int fd)
+{
+ int rc;
+ fd_list_item *item = NULL;
+
+ rc = pthread_mutex_lock(&fd_admin.fd_admin_mutex);
+ if (rc != 0)
+ {
+ CDBG_PRINT_INFO(CDBG_BASE, "can't lock file descriptor list, error %d\n", errno);
+ goto exit_mutex_lock;
+ }
+
+ CUTILS_SLIST_FOREACH(&fd_admin.fd_list, fd_list_item, link, item)
+ {
+ if (item->fd == fd)
+ {
+ CUTILS_SLIST_REMOVE(&fd_admin.fd_list, item, link);
+ stdlib_free(item);
+ close(fd);
+ break;
+ }
+ }
+
+ pthread_mutex_unlock(&fd_admin.fd_admin_mutex);
+
+ CDBG_ASSERT_MSG(CUTILS_SLIST_IS_VALID(item, link), "file descriptor %d not found on list!\n", fd);
+
+exit_mutex_lock:
+ return;
+}
+
+
+/** add a file descriptor to the list of open file descriptors to UK clients */
+STATIC mali_bool ukup_fd_add(int fd)
+{
+ int rc;
+ fd_list_item *item = NULL;
+
+ rc = pthread_mutex_lock(&fd_admin.fd_admin_mutex);
+ if (rc != 0)
+ {
+ CDBG_PRINT_INFO(CDBG_BASE, "can't lock file descriptor list, error %d\n", errno);
+ goto exit_mutex_lock;
+ }
+
+ if (MALI_FALSE == fd_admin.atfork_registered)
+ {
+ CUTILS_SLIST_INIT(&fd_admin.fd_list);
+
+ rc = pthread_atfork(NULL, NULL, ukup_fd_child_atfork_handler);
+ if (rc != 0)
+ {
+ CDBG_PRINT_INFO(CDBG_BASE, "pthread_atfork failed, error %d\n", errno);
+ goto exit;
+ }
+
+ fd_admin.atfork_registered = MALI_TRUE;
+ }
+
+ item = stdlib_malloc(sizeof(fd_list_item));
+ if (NULL == item)
+ {
+ CDBG_PRINT_INFO(CDBG_BASE, "allocating file descriptor list item failed\n");
+ goto exit;
+ }
+
+ item->fd = fd;
+
+ CUTILS_SLIST_PUSH_FRONT(&fd_admin.fd_list, item, fd_list_item, link);
+
+ pthread_mutex_unlock(&fd_admin.fd_admin_mutex);
+
+ return MALI_TRUE;
+
+exit:
+ if (NULL != item)
+ {
+ stdlib_free(item);
+ }
+
+ pthread_mutex_unlock(&fd_admin.fd_admin_mutex);
+exit_mutex_lock:
+ return MALI_FALSE;
+}
+
+#endif /* CSTD_OS_ANDROID == 0 */
+
+
+uku_open_status uku_open(uk_client_id id, u32 instance, uku_client_version *version, uku_context *uku_ctx)
+{
+ const char *linux_device_name;
+ char format_device_name[16];
+ struct stat filestat;
+ int fd;
+ uku_version_check_args version_check_args;
+ mali_error err;
+
+ CDBG_ASSERT_POINTER(version);
+ CDBG_ASSERT_POINTER(uku_ctx);
+
+ if(CDBG_SIMULATE_FAILURE(CDBG_BASE))
+ {
+ return UKU_OPEN_FAILED;
+ }
+
+ switch(id)
+ {
+ case UK_CLIENT_MALI_T600_BASE:
+ cutils_cstr_snprintf(format_device_name, sizeof(format_device_name), "%s%d", LINUX_MALI_DEVICE_NAME, instance);
+ linux_device_name = format_device_name;
+ break;
+ default:
+ CDBG_ASSERT_MSG(MALI_FALSE, "invalid uk_client_id value (%d)\n", id);
+ return UKU_OPEN_FAILED;
+ }
+
+ /* open the kernel device driver */
+ fd = open(linux_device_name, O_RDWR|O_CLOEXEC);
+
+ if (-1 == fd)
+ {
+ CDBG_PRINT_INFO(CDBG_BASE, "failed to open device file %s\n", linux_device_name);
+ return UKU_OPEN_FAILED;
+ }
+
+ /* query the file for information */
+ if (0 != fstat(fd, &filestat))
+ {
+ close(fd);
+ CDBG_PRINT_INFO(CDBG_BASE, "failed to query device file %s for type information\n", linux_device_name);
+ return UKU_OPEN_FAILED;
+ }
+
+ /* verify that it is a character special file */
+ if (0 == S_ISCHR(filestat.st_mode))
+ {
+ close(fd);
+		CDBG_PRINT_INFO(CDBG_BASE, "file %s is not a character device file\n", linux_device_name);
+ return UKU_OPEN_FAILED;
+ }
+
+ /* use the internal UK call UKP_FUNC_ID_CHECK_VERSION to verify versions */
+ version_check_args.header.id = UKP_FUNC_ID_CHECK_VERSION;
+ version_check_args.major = version->major;
+ version_check_args.minor = version->minor;
+
+ uku_ctx->ukup_internal_struct.fd = fd;
+ err = uku_call(uku_ctx, &version_check_args, sizeof(version_check_args));
+ if (MALI_ERROR_NONE == err && MALI_ERROR_NONE == version_check_args.header.ret)
+ {
+ mali_bool incompatible =
+ ( (version->major != version_check_args.major) || (version->minor > version_check_args.minor) );
+
+ if (incompatible)
+ {
+ CDBG_PRINT_INFO(CDBG_BASE, "file %s is not of a compatible version (user %d.%d, kernel %d.%d)\n",
+ linux_device_name, version->major, version->minor, version_check_args.major, version_check_args.minor);
+ }
+
+ /* output kernel-side version */
+ version->major = version_check_args.major;
+ version->minor = version_check_args.minor;
+
+ if (incompatible)
+ {
+ uku_ctx->ukup_internal_struct.fd = -1;
+ close(fd);
+ return UKU_OPEN_INCOMPATIBLE;
+ }
+ }
+ else
+ {
+ close(fd);
+ return UKU_OPEN_FAILED;
+ }
+
+#ifdef MALI_DEBUG
+ uku_ctx->ukup_internal_struct.canary = MALI_UK_CANARY_VALUE;
+#endif
+
+#if CSTD_OS_ANDROID == 0
+ /* track all open file descriptors to UK clients */
+ if (!ukup_fd_add(fd))
+ {
+ close(fd);
+ return UKU_OPEN_FAILED;
+ }
+#endif
+
+ return UKU_OPEN_OK;
+}
+
+
+void *uku_driver_context(uku_context *uku_ctx)
+{
+ CDBG_ASSERT_POINTER(uku_ctx);
+ return (void *)&uku_ctx->ukup_internal_struct.fd;
+}
+
+void uku_close(uku_context *uku_ctx)
+{
+ CDBG_ASSERT_POINTER(uku_ctx);
+#ifdef MALI_DEBUG
+ CDBG_ASSERT(uku_ctx->ukup_internal_struct.canary == MALI_UK_CANARY_VALUE);
+ uku_ctx->ukup_internal_struct.canary = 0;
+#endif
+#if CSTD_OS_ANDROID == 0
+ ukup_fd_remove_and_close(uku_ctx->ukup_internal_struct.fd);
+#else
+ close(uku_ctx->ukup_internal_struct.fd);
+#endif
+}
+
+mali_error uku_call(uku_context *uku_ctx, void *args, u32 args_size)
+{
+ uk_header *header = (uk_header *)args;
+ u32 cmd;
+
+ CDBG_ASSERT_POINTER(uku_ctx);
+ CDBG_ASSERT_POINTER(args);
+ CDBG_ASSERT_MSG(args_size >= sizeof(uk_header), "argument structure not large enough to contain required uk_header\n");
+
+ if(CDBG_SIMULATE_FAILURE(CDBG_BASE))
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+
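+	/* Encode the ioctl command number from the UK function id: read/write
+	 * direction, the BASE UK magic number, and the argument structure size. */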
+ cmd = _IOC(_IOC_READ|_IOC_WRITE, LINUX_UK_BASE_MAGIC, header->id, args_size);
+
+ /* call ioctl handler of driver */
+ if (0 != ioctl(uku_ctx->ukup_internal_struct.fd, cmd, args))
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+ else
+ {
+ return MALI_ERROR_NONE;
+ }
+}
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2011 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_uk_os.h
+ * User-Kernel Interface (kernel and user-side) dependent APIs (Linux).
+ */
+
+#ifndef _UK_OS_H_ /* Linux */
+#define _UK_OS_H_
+
+#ifdef MALI_DEBUG
+#define MALI_UK_CANARY_VALUE 0xb2bdbdf6
+#endif
+
+#if MALI_BACKEND_KERNEL
+
+#define LINUX_UK_BASE_MAGIC 0x80 /* BASE UK ioctl */
+
+struct uku_context
+{
+ struct
+ {
+#ifdef MALI_DEBUG
+ u32 canary;
+#endif
+ int fd;
+ } ukup_internal_struct;
+};
+
+#else /* MALI_BACKEND_KERNEL */
+
+typedef struct ukk_userspace
+{
+ void * ctx;
+ mali_error (*dispatch)(void * /*ctx*/, void* /*msg*/, u32 /*size*/);
+ void (*close)(struct ukk_userspace * /*self*/);
+} ukk_userspace;
+
+typedef ukk_userspace * (*kctx_open)(void);
+
+struct uku_context
+{
+ struct
+ {
+#ifdef MALI_DEBUG
+ u32 canary;
+#endif
+ ukk_userspace * ukku;
+ } ukup_internal_struct;
+};
+
+#endif /* MALI_BACKEND_KERNEL */
+
+#endif /* _UK_OS_H_ */
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+/**
+ * @file mali_ukk_os.h
+ * Types and definitions for the Linux implementation of the kernel side of the
+ * User-Kernel interface.
+ */
+
+#ifndef _UKK_OS_H_ /* Linux version */
+#define _UKK_OS_H_
+
+#include <linux/fs.h>
+
+/**
+ * @addtogroup uk_api User-Kernel Interface API
+ * @{
+ */
+
+/**
+ * @addtogroup uk_api_kernel UKK (Kernel side)
+ * @{
+ */
+
+/**
+ * Internal OS specific data structure associated with each UKK session. Part
+ * of a ukk_session object.
+ */
+typedef struct ukkp_session
+{
+ int dummy; /**< No internal OS specific data at this time */
+} ukkp_session;
+
+/** @} end group uk_api_kernel */
+
+/** @} end group uk_api */
+
+#endif /* _UKK_OS_H_ */
--- /dev/null
+# This confidential and proprietary software may be used only as
+# authorised by a licensing agreement from ARM Limited
+# (C) COPYRIGHT 2010 ARM Limited
+# ALL RIGHTS RESERVED
+# The entire notice above must be reproduced on all authorised
+# copies and copies may only be made to the extent permitted
+# by a licensing agreement from ARM Limited.
+
+Import( 'env' )
+
+libs=env.StaticLibrary( '$STATIC_LIB_PATH/uku', ['mali_uku_linux.c'] )
+
+env.LibTarget('uku', libs)
+
+if env.has_key('libs_install'):
+ env.Install (env['libs_install'], libs)
+ env.Alias ('libs', env['libs_install'])
+else:
+ env.Alias ('libs', libs)
--- /dev/null
+obj-y += ukk/
--- /dev/null
+# Copyright:
+# ----------------------------------------------------------------------------
+# This confidential and proprietary software may be used only as authorized
+# by a licensing agreement from ARM Limited.
+# (C) COPYRIGHT 2010 ARM Limited, ALL RIGHTS RESERVED
+# The entire notice above must be reproduced on all authorized copies and
+# copies may only be made to the extent permitted by a licensing agreement
+# from ARM Limited.
+# ----------------------------------------------------------------------------
+#
+
+SConscript('ukk/sconscript')
--- /dev/null
+ccflags-$(CONFIG_VITHAR) += -DMALI_DEBUG=0 -DMALI_HW_TYPE=2 \
+ -DMALI_USE_UMP=0 -DMALI_HW_VERSION=r0p0 -DMALI_BASE_TRACK_MEMLEAK=0 \
+ -DMALI_ANDROID=1 -DMALI_ERROR_INJECT_ON=0 -DMALI_NO_MALI=0 -DMALI_BACKEND_KERNEL=1 \
+ -DMALI_FAKE_PLATFORM_DEVICE=1 -DMALI_MOCK_TEST=0 -DMALI_KERNEL_TEST_API=0 \
+ -DMALI_INFINITE_CACHE=0 -DMALI_LICENSE_IS_GPL=1 -DMALI_PLATFORM_CONFIG=exynos5 \
+ -DMALI_UNIT_TEST=0 -DMALI_GATOR_SUPPORT=0 -DUMP_LICENSE_IS_GPL=1 \
+ -DUMP_SVN_REV_STRING="\"dummy\"" -DMALI_RELEASE_NAME="\"dummy\""
+
+ROOTDIR = $(src)/../../..
+
+ccflags-$(CONFIG_VITHAR) += -I$(ROOTDIR)/kbase
+ccflags-$(CONFIG_VITHAR) += -I$(ROOTDIR)
+
+ccflags-y += -I$(ROOTDIR) -I$(ROOTDIR)/include -I$(ROOTDIR)/osk/src/linux/include -I$(ROOTDIR)/uk/platform_dummy
+ccflags-y += -I$(ROOTDIR)/kbase/midg_gpus/r0p0
+
+obj-y += mali_ukk.o
+
+obj-y += linux/
--- /dev/null
+ccflags-$(CONFIG_VITHAR) += -DMALI_DEBUG=0 -DMALI_HW_TYPE=2 \
+ -DMALI_USE_UMP=0 -DMALI_HW_VERSION=r0p0 -DMALI_BASE_TRACK_MEMLEAK=0 \
+ -DMALI_ANDROID=1 -DMALI_ERROR_INJECT_ON=0 -DMALI_NO_MALI=0 -DMALI_BACKEND_KERNEL=1 \
+ -DMALI_FAKE_PLATFORM_DEVICE=1 -DMALI_MOCK_TEST=0 -DMALI_KERNEL_TEST_API=0 \
+ -DMALI_INFINITE_CACHE=0 -DMALI_LICENSE_IS_GPL=1 -DMALI_PLATFORM_CONFIG=exynos5 \
+ -DMALI_UNIT_TEST=0 -DMALI_GATOR_SUPPORT=0 -DUMP_LICENSE_IS_GPL=1 \
+ -DUMP_SVN_REV_STRING="\"dummy\"" -DMALI_RELEASE_NAME="\"dummy\""
+
+ROOTDIR = $(src)/../../../..
+
+ccflags-$(CONFIG_VITHAR) += -I$(ROOTDIR)/kbase
+ccflags-$(CONFIG_VITHAR) += -I$(ROOTDIR)
+
+ccflags-y += -I$(ROOTDIR) -I$(ROOTDIR)/include -I$(ROOTDIR)/osk/src/linux/include -I$(ROOTDIR)/uk/platform_dummy
+ccflags-y += -I$(ROOTDIR)/kbase/midg_gpus/r0p0
+
+obj-y += mali_ukk_os.o
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#include <linux/module.h> /* Needed by all modules */
+#include <linux/kernel.h> /* Needed for KERN_INFO */
+#include <linux/init.h> /* Needed for the macros */
+
+#include <osk/mali_osk.h>
+#include <uk/mali_ukk.h>
+
+mali_error ukk_session_init(ukk_session *ukk_session, ukk_dispatch_function dispatch, u16 version_major, u16 version_minor)
+{
+ OSK_ASSERT(NULL != ukk_session);
+ OSK_ASSERT(NULL != dispatch);
+
+ /* OS independent initialization of UKK context */
+ ukk_session->dispatch = dispatch;
+ ukk_session->version_major = version_major;
+ ukk_session->version_minor = version_minor;
+
+ /* OS specific initialization of UKK context */
+ ukk_session->internal_session.dummy = 0;
+ return MALI_ERROR_NONE;
+}
+
+void ukk_session_term(ukk_session *ukk_session)
+{
+ OSK_ASSERT(NULL != ukk_session);
+}
+
+mali_error ukk_copy_from_user( size_t bytes, void * kernel_buffer, const void * const user_buffer )
+{
+ if ( copy_from_user( kernel_buffer, user_buffer, bytes ) )
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+ return MALI_ERROR_NONE;
+}
+
+mali_error ukk_copy_to_user( size_t bytes, void * user_buffer, const void * const kernel_buffer )
+{
+ if ( copy_to_user( user_buffer, kernel_buffer, bytes ) )
+ {
+ return MALI_ERROR_FUNCTION_FAILED;
+ }
+ return MALI_ERROR_NONE;
+}
+
+static int __init ukk_module_init(void)
+{
+ if (MALI_ERROR_NONE != ukk_start())
+ {
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static void __exit ukk_module_exit(void)
+{
+ ukk_stop();
+}
+
+EXPORT_SYMBOL(ukk_copy_from_user);
+EXPORT_SYMBOL(ukk_copy_to_user);
+EXPORT_SYMBOL(ukk_session_init);
+EXPORT_SYMBOL(ukk_session_term);
+EXPORT_SYMBOL(ukk_session_get);
+EXPORT_SYMBOL(ukk_call_prepare);
+EXPORT_SYMBOL(ukk_dispatch);
+
+module_init(ukk_module_init);
+module_exit(ukk_module_exit);
+
+#if MALI_LICENSE_IS_GPL || MALI_UNIT_TEST /* See MIDBASE-1204 */
+MODULE_LICENSE("GPL");
+#else
+MODULE_LICENSE("Proprietary");
+#endif
+MODULE_AUTHOR("ARM Ltd.");
+MODULE_VERSION("0.0");
--- /dev/null
+# Copyright:
+# ----------------------------------------------------------------------------
+# This confidential and proprietary software may be used only as authorized
+# by a licensing agreement from ARM Limited.
+# (C) COPYRIGHT 2010-2011 ARM Limited, ALL RIGHTS RESERVED
+# The entire notice above must be reproduced on all authorized copies and
+# copies may only be made to the extent permitted by a licensing agreement
+# from ARM Limited.
+# ----------------------------------------------------------------------------
+#
+
+import os
+Import('env')
+
+# Clone the environment so changes don't affect other build files
+env_ukk = env.Clone()
+
+# Source files required for the UKK.
+ukk_src = [Glob('*.c'), Glob('#uk/src/ukk/*.c')]
+
+if env_ukk['backend'] == 'kernel':
+ if env_ukk['v'] != '1':
+ env_ukk['MAKECOMSTR'] = '[MAKE] ${SOURCE.dir}'
+
+ # Note: cleaning via the Linux kernel build system does not yet work
+ if env_ukk.GetOption('clean') :
+ makeAction=Action("cd ${SOURCE.dir} && make clean", '$MAKECOMSTR')
+ else:
+ makeAction=Action("cd ${SOURCE.dir} && make PLATFORM=${platform} MALI_DEBUG=${debug} MALI_BACKEND_KERNEL=1 MALI_HW_VERSION=${hwver} MALI_BASE_TRACK_MEMLEAK=${base_qa} MALI_UNIT_TEST=${unit} MALI_LICENSE_IS_GPL=${mali_license_is_gpl} && cp ukk.ko $STATIC_LIB_PATH/ukk.ko", '$MAKECOMSTR')
+
+ # The target is ukk.ko, built from the source in ukk_src, via the action makeAction
+ # ukk.ko will be copied to $STATIC_LIB_PATH after being built by the standard Linux
+ # kernel build system, after which it can be installed to the directory specified if
+ # "libs_install" is set; this is done by LibTarget.
+ cmd = env_ukk.Command('$STATIC_LIB_PATH/ukk.ko', ukk_src, [makeAction])
+
+ env.Depends('$STATIC_LIB_PATH/ukk.ko', '$STATIC_LIB_PATH/libosk.a')
+
+	# Until we fathom out how to invoke the Linux build system to clean, we can use Clean
+ # to remove generated files.
+
+ patterns = ['*.mod.c', '*.o', '*.ko', '*.a', '.*.cmd', 'modules.order', '.tmp_versions', 'Module.symvers']
+
+ for p in patterns:
+ Clean(cmd, Glob('#uk/src/ukk/linux/%s' % p))
+ Clean(cmd, Glob('#uk/src/ukk/%s' % p))
+
+ env_ukk.ProgTarget('uk', cmd)
--- /dev/null
+/*
+ * This confidential and proprietary software may be used only as
+ * authorised by a licensing agreement from ARM Limited
+ * (C) COPYRIGHT 2010-2012 ARM Limited
+ * ALL RIGHTS RESERVED
+ * The entire notice above must be reproduced on all authorised
+ * copies and copies may only be made to the extent permitted
+ * by a licensing agreement from ARM Limited.
+ */
+
+#include <osk/mali_osk.h>
+#include <uk/mali_ukk.h>
+#include <plat/mali_ukk_os.h>
+
+mali_error ukk_start(void)
+{
+ return MALI_ERROR_NONE;
+}
+
+void ukk_stop(void)
+{
+}
+
+void ukk_call_prepare(ukk_call_context * const ukk_ctx, ukk_session * const session)
+{
+ OSK_ASSERT(NULL != ukk_ctx);
+ OSK_ASSERT(NULL != session);
+
+ ukk_ctx->ukk_session = session;
+}
+
+void *ukk_session_get(ukk_call_context * const ukk_ctx)
+{
+ OSK_ASSERT(NULL != ukk_ctx);
+ return ukk_ctx->ukk_session;
+}
+
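+/* Handles UK calls internal to the UKK core (function ids below UK_FUNC_ID);
+ * currently only the version check that uku_open() performs. */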
+static mali_error ukkp_dispatch_call(ukk_call_context *ukk_ctx, void *args, u32 args_size)
+{
+ uk_header *header = (uk_header *)args;
+ mali_error ret = MALI_ERROR_NONE;
+
+ if(UKP_FUNC_ID_CHECK_VERSION == header->id)
+ {
+ if (args_size == sizeof(uku_version_check_args))
+ {
+ ukk_session *ukk_session = ukk_session_get(ukk_ctx);
+ uku_version_check_args *version_check = (uku_version_check_args *)args;
+
+ version_check->major = ukk_session->version_major;
+ version_check->minor = ukk_session->version_minor;
+ header->ret = MALI_ERROR_NONE;
+ }
+ else
+ {
+ header->ret = MALI_ERROR_FUNCTION_FAILED;
+ }
+ }
+ else
+ {
+ ret = MALI_ERROR_FUNCTION_FAILED; /* not handled */
+ }
+ return ret;
+}
+
+mali_error ukk_dispatch(ukk_call_context * const ukk_ctx, void * const args, u32 args_size)
+{
+ mali_error ret;
+ uk_header *header = (uk_header *)args;
+
+ OSK_ASSERT(NULL != ukk_ctx);
+ OSK_ASSERT(NULL != args);
+
+ /* Verify args_size both in debug and release builds */
+ OSK_ASSERT(args_size >= sizeof(uk_header));
+ if (args_size < sizeof(uk_header)) return MALI_ERROR_FUNCTION_FAILED;
+
+ if (header->id >= UK_FUNC_ID)
+ {
+ ret = ukk_ctx->ukk_session->dispatch(ukk_ctx, args, args_size);
+ }
+ else
+ {
+ ret = ukkp_dispatch_call(ukk_ctx, args, args_size);
+ }
+ return ret;
+}
--- /dev/null
+# This confidential and proprietary software may be used only as
+# authorised by a licensing agreement from ARM Limited
+# (C) COPYRIGHT 2010-2011 ARM Limited
+# ALL RIGHTS RESERVED
+# The entire notice above must be reproduced on all authorised
+# copies and copies may only be made to the extent permitted
+# by a licensing agreement from ARM Limited.
+
+Import( 'env' )
+
+
+if env['backend'] == 'kernel':
+ SConscript(env['kernel'] + '/sconscript')
+else:
+ SConscript('userspace/sconscript')
This driver can also be built as a module. If so, the module
will be called ds1621.
-config SENSORS_EXYNOS4_TMU
- tristate "Temperature sensor on Samsung EXYNOS4"
- depends on ARCH_EXYNOS4
- help
- If you say yes here you get support for TMU (Thermal Managment
- Unit) on SAMSUNG EXYNOS4 series of SoC.
-
- This driver can also be built as a module. If so, the module
- will be called exynos4-tmu.
-
config SENSORS_I5K_AMB
tristate "FB-DIMM AMB temperature sensor on Intel 5000 series chipsets"
depends on PCI && EXPERIMENTAL
obj-$(CONFIG_SENSORS_EMC1403) += emc1403.o
obj-$(CONFIG_SENSORS_EMC2103) += emc2103.o
obj-$(CONFIG_SENSORS_EMC6W201) += emc6w201.o
-obj-$(CONFIG_SENSORS_EXYNOS4_TMU) += exynos4_tmu.o
obj-$(CONFIG_SENSORS_F71805F) += f71805f.o
obj-$(CONFIG_SENSORS_F71882FG) += f71882fg.o
obj-$(CONFIG_SENSORS_F75375S) += f75375s.o
+++ /dev/null
-/*
- * exynos4_tmu.c - Samsung EXYNOS4 TMU (Thermal Management Unit)
- *
- * Copyright (C) 2011 Samsung Electronics
- * Donggeun Kim <dg77.kim@samsung.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
- */
-
-#include <linux/module.h>
-#include <linux/err.h>
-#include <linux/kernel.h>
-#include <linux/slab.h>
-#include <linux/platform_device.h>
-#include <linux/interrupt.h>
-#include <linux/clk.h>
-#include <linux/workqueue.h>
-#include <linux/sysfs.h>
-#include <linux/kobject.h>
-#include <linux/io.h>
-#include <linux/mutex.h>
-
-#include <linux/hwmon.h>
-#include <linux/hwmon-sysfs.h>
-
-#include <linux/platform_data/exynos4_tmu.h>
-
-#define EXYNOS4_TMU_REG_TRIMINFO 0x0
-#define EXYNOS4_TMU_REG_CONTROL 0x20
-#define EXYNOS4_TMU_REG_STATUS 0x28
-#define EXYNOS4_TMU_REG_CURRENT_TEMP 0x40
-#define EXYNOS4_TMU_REG_THRESHOLD_TEMP 0x44
-#define EXYNOS4_TMU_REG_TRIG_LEVEL0 0x50
-#define EXYNOS4_TMU_REG_TRIG_LEVEL1 0x54
-#define EXYNOS4_TMU_REG_TRIG_LEVEL2 0x58
-#define EXYNOS4_TMU_REG_TRIG_LEVEL3 0x5C
-#define EXYNOS4_TMU_REG_PAST_TEMP0 0x60
-#define EXYNOS4_TMU_REG_PAST_TEMP1 0x64
-#define EXYNOS4_TMU_REG_PAST_TEMP2 0x68
-#define EXYNOS4_TMU_REG_PAST_TEMP3 0x6C
-#define EXYNOS4_TMU_REG_INTEN 0x70
-#define EXYNOS4_TMU_REG_INTSTAT 0x74
-#define EXYNOS4_TMU_REG_INTCLEAR 0x78
-
-#define EXYNOS4_TMU_GAIN_SHIFT 8
-#define EXYNOS4_TMU_REF_VOLTAGE_SHIFT 24
-
-#define EXYNOS4_TMU_TRIM_TEMP_MASK 0xff
-#define EXYNOS4_TMU_CORE_ON 3
-#define EXYNOS4_TMU_CORE_OFF 2
-#define EXYNOS4_TMU_DEF_CODE_TO_TEMP_OFFSET 50
-#define EXYNOS4_TMU_TRIG_LEVEL0_MASK 0x1
-#define EXYNOS4_TMU_TRIG_LEVEL1_MASK 0x10
-#define EXYNOS4_TMU_TRIG_LEVEL2_MASK 0x100
-#define EXYNOS4_TMU_TRIG_LEVEL3_MASK 0x1000
-#define EXYNOS4_TMU_INTCLEAR_VAL 0x1111
-
-struct exynos4_tmu_data {
- struct exynos4_tmu_platform_data *pdata;
- struct device *hwmon_dev;
- struct resource *mem;
- void __iomem *base;
- int irq;
- struct work_struct irq_work;
- struct mutex lock;
- struct clk *clk;
- u8 temp_error1, temp_error2;
-};
-
-/*
- * TMU treats temperature as a mapped temperature code.
- * The temperature is converted differently depending on the calibration type.
- */
-static int temp_to_code(struct exynos4_tmu_data *data, u8 temp)
-{
- struct exynos4_tmu_platform_data *pdata = data->pdata;
- int temp_code;
-
- /* temp should range between 25 and 125 */
- if (temp < 25 || temp > 125) {
- temp_code = -EINVAL;
- goto out;
- }
-
- switch (pdata->cal_type) {
- case TYPE_TWO_POINT_TRIMMING:
- temp_code = (temp - 25) *
- (data->temp_error2 - data->temp_error1) /
- (85 - 25) + data->temp_error1;
- break;
- case TYPE_ONE_POINT_TRIMMING:
- temp_code = temp + data->temp_error1 - 25;
- break;
- default:
- temp_code = temp + EXYNOS4_TMU_DEF_CODE_TO_TEMP_OFFSET;
- break;
- }
-out:
- return temp_code;
-}
-
-/*
- * Calculate a temperature value from a temperature code.
- * The unit of the temperature is degree Celsius.
- */
-static int code_to_temp(struct exynos4_tmu_data *data, u8 temp_code)
-{
- struct exynos4_tmu_platform_data *pdata = data->pdata;
- int temp;
-
- /* temp_code should range between 75 and 175 */
- if (temp_code < 75 || temp_code > 175) {
- temp = -ENODATA;
- goto out;
- }
-
- switch (pdata->cal_type) {
- case TYPE_TWO_POINT_TRIMMING:
- temp = (temp_code - data->temp_error1) * (85 - 25) /
- (data->temp_error2 - data->temp_error1) + 25;
- break;
- case TYPE_ONE_POINT_TRIMMING:
- temp = temp_code - data->temp_error1 + 25;
- break;
- default:
- temp = temp_code - EXYNOS4_TMU_DEF_CODE_TO_TEMP_OFFSET;
- break;
- }
-out:
- return temp;
-}
-
-static int exynos4_tmu_initialize(struct platform_device *pdev)
-{
- struct exynos4_tmu_data *data = platform_get_drvdata(pdev);
- struct exynos4_tmu_platform_data *pdata = data->pdata;
- unsigned int status, trim_info;
- int ret = 0, threshold_code;
-
- mutex_lock(&data->lock);
- clk_enable(data->clk);
-
- status = readb(data->base + EXYNOS4_TMU_REG_STATUS);
- if (!status) {
- ret = -EBUSY;
- goto out;
- }
-
- /* Save trimming info in order to perform calibration */
- trim_info = readl(data->base + EXYNOS4_TMU_REG_TRIMINFO);
- data->temp_error1 = trim_info & EXYNOS4_TMU_TRIM_TEMP_MASK;
- data->temp_error2 = ((trim_info >> 8) & EXYNOS4_TMU_TRIM_TEMP_MASK);
-
- /* Write temperature code for threshold */
- threshold_code = temp_to_code(data, pdata->threshold);
- if (threshold_code < 0) {
- ret = threshold_code;
- goto out;
- }
- writeb(threshold_code,
- data->base + EXYNOS4_TMU_REG_THRESHOLD_TEMP);
-
- writeb(pdata->trigger_levels[0],
- data->base + EXYNOS4_TMU_REG_TRIG_LEVEL0);
- writeb(pdata->trigger_levels[1],
- data->base + EXYNOS4_TMU_REG_TRIG_LEVEL1);
- writeb(pdata->trigger_levels[2],
- data->base + EXYNOS4_TMU_REG_TRIG_LEVEL2);
- writeb(pdata->trigger_levels[3],
- data->base + EXYNOS4_TMU_REG_TRIG_LEVEL3);
-
- writel(EXYNOS4_TMU_INTCLEAR_VAL,
- data->base + EXYNOS4_TMU_REG_INTCLEAR);
-out:
- clk_disable(data->clk);
- mutex_unlock(&data->lock);
-
- return ret;
-}
-
-static void exynos4_tmu_control(struct platform_device *pdev, bool on)
-{
- struct exynos4_tmu_data *data = platform_get_drvdata(pdev);
- struct exynos4_tmu_platform_data *pdata = data->pdata;
- unsigned int con, interrupt_en;
-
- mutex_lock(&data->lock);
- clk_enable(data->clk);
-
- con = pdata->reference_voltage << EXYNOS4_TMU_REF_VOLTAGE_SHIFT |
- pdata->gain << EXYNOS4_TMU_GAIN_SHIFT;
- if (on) {
- con |= EXYNOS4_TMU_CORE_ON;
- interrupt_en = pdata->trigger_level3_en << 12 |
- pdata->trigger_level2_en << 8 |
- pdata->trigger_level1_en << 4 |
- pdata->trigger_level0_en;
- } else {
- con |= EXYNOS4_TMU_CORE_OFF;
- interrupt_en = 0; /* Disable all interrupts */
- }
- writel(interrupt_en, data->base + EXYNOS4_TMU_REG_INTEN);
- writel(con, data->base + EXYNOS4_TMU_REG_CONTROL);
-
- clk_disable(data->clk);
- mutex_unlock(&data->lock);
-}
-
-static int exynos4_tmu_read(struct exynos4_tmu_data *data)
-{
- u8 temp_code;
- int temp;
-
- mutex_lock(&data->lock);
- clk_enable(data->clk);
-
- temp_code = readb(data->base + EXYNOS4_TMU_REG_CURRENT_TEMP);
- temp = code_to_temp(data, temp_code);
-
- clk_disable(data->clk);
- mutex_unlock(&data->lock);
-
- return temp;
-}
-
-static void exynos4_tmu_work(struct work_struct *work)
-{
- struct exynos4_tmu_data *data = container_of(work,
- struct exynos4_tmu_data, irq_work);
-
- mutex_lock(&data->lock);
- clk_enable(data->clk);
-
- writel(EXYNOS4_TMU_INTCLEAR_VAL, data->base + EXYNOS4_TMU_REG_INTCLEAR);
-
- kobject_uevent(&data->hwmon_dev->kobj, KOBJ_CHANGE);
-
- enable_irq(data->irq);
-
- clk_disable(data->clk);
- mutex_unlock(&data->lock);
-}
-
-static irqreturn_t exynos4_tmu_irq(int irq, void *id)
-{
- struct exynos4_tmu_data *data = id;
-
- disable_irq_nosync(irq);
- schedule_work(&data->irq_work);
-
- return IRQ_HANDLED;
-}
-
-static ssize_t exynos4_tmu_show_name(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- return sprintf(buf, "exynos4-tmu\n");
-}
-
-static ssize_t exynos4_tmu_show_temp(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- struct exynos4_tmu_data *data = dev_get_drvdata(dev);
- int ret;
-
- ret = exynos4_tmu_read(data);
- if (ret < 0)
- return ret;
-
- /* convert from degree Celsius to millidegree Celsius */
- return sprintf(buf, "%d\n", ret * 1000);
-}
-
-static ssize_t exynos4_tmu_show_alarm(struct device *dev,
- struct device_attribute *devattr, char *buf)
-{
- struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
- struct exynos4_tmu_data *data = dev_get_drvdata(dev);
- struct exynos4_tmu_platform_data *pdata = data->pdata;
- int temp;
- unsigned int trigger_level;
-
- temp = exynos4_tmu_read(data);
- if (temp < 0)
- return temp;
-
- trigger_level = pdata->threshold + pdata->trigger_levels[attr->index];
-
- return sprintf(buf, "%d\n", !!(temp > trigger_level));
-}
-
-static ssize_t exynos4_tmu_show_level(struct device *dev,
- struct device_attribute *devattr, char *buf)
-{
- struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
- struct exynos4_tmu_data *data = dev_get_drvdata(dev);
- struct exynos4_tmu_platform_data *pdata = data->pdata;
- unsigned int temp = pdata->threshold +
- pdata->trigger_levels[attr->index];
-
- return sprintf(buf, "%u\n", temp * 1000);
-}
-
-static DEVICE_ATTR(name, S_IRUGO, exynos4_tmu_show_name, NULL);
-static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, exynos4_tmu_show_temp, NULL, 0);
-
-static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO,
- exynos4_tmu_show_alarm, NULL, 1);
-static SENSOR_DEVICE_ATTR(temp1_crit_alarm, S_IRUGO,
- exynos4_tmu_show_alarm, NULL, 2);
-static SENSOR_DEVICE_ATTR(temp1_emergency_alarm, S_IRUGO,
- exynos4_tmu_show_alarm, NULL, 3);
-
-static SENSOR_DEVICE_ATTR(temp1_max, S_IRUGO, exynos4_tmu_show_level, NULL, 1);
-static SENSOR_DEVICE_ATTR(temp1_crit, S_IRUGO, exynos4_tmu_show_level, NULL, 2);
-static SENSOR_DEVICE_ATTR(temp1_emergency, S_IRUGO,
- exynos4_tmu_show_level, NULL, 3);
-
-static struct attribute *exynos4_tmu_attributes[] = {
- &dev_attr_name.attr,
- &sensor_dev_attr_temp1_input.dev_attr.attr,
- &sensor_dev_attr_temp1_max_alarm.dev_attr.attr,
- &sensor_dev_attr_temp1_crit_alarm.dev_attr.attr,
- &sensor_dev_attr_temp1_emergency_alarm.dev_attr.attr,
- &sensor_dev_attr_temp1_max.dev_attr.attr,
- &sensor_dev_attr_temp1_crit.dev_attr.attr,
- &sensor_dev_attr_temp1_emergency.dev_attr.attr,
- NULL,
-};
-
-static const struct attribute_group exynos4_tmu_attr_group = {
- .attrs = exynos4_tmu_attributes,
-};
-
-static int __devinit exynos4_tmu_probe(struct platform_device *pdev)
-{
- struct exynos4_tmu_data *data;
- struct exynos4_tmu_platform_data *pdata = pdev->dev.platform_data;
- int ret;
-
- if (!pdata) {
- dev_err(&pdev->dev, "No platform init data supplied.\n");
- return -ENODEV;
- }
-
- data = kzalloc(sizeof(struct exynos4_tmu_data), GFP_KERNEL);
- if (!data) {
- dev_err(&pdev->dev, "Failed to allocate driver structure\n");
- return -ENOMEM;
- }
-
- data->irq = platform_get_irq(pdev, 0);
- if (data->irq < 0) {
- ret = data->irq;
- dev_err(&pdev->dev, "Failed to get platform irq\n");
- goto err_free;
- }
-
- INIT_WORK(&data->irq_work, exynos4_tmu_work);
-
- data->mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (!data->mem) {
- ret = -ENOENT;
- dev_err(&pdev->dev, "Failed to get platform resource\n");
- goto err_free;
- }
-
- data->mem = request_mem_region(data->mem->start,
- resource_size(data->mem), pdev->name);
- if (!data->mem) {
- ret = -ENODEV;
- dev_err(&pdev->dev, "Failed to request memory region\n");
- goto err_free;
- }
-
- data->base = ioremap(data->mem->start, resource_size(data->mem));
- if (!data->base) {
- ret = -ENODEV;
- dev_err(&pdev->dev, "Failed to ioremap memory\n");
- goto err_mem_region;
- }
-
- ret = request_irq(data->irq, exynos4_tmu_irq,
- IRQF_TRIGGER_RISING,
- "exynos4-tmu", data);
- if (ret) {
- dev_err(&pdev->dev, "Failed to request irq: %d\n", data->irq);
- goto err_io_remap;
- }
-
- data->clk = clk_get(NULL, "tmu_apbif");
- if (IS_ERR(data->clk)) {
- ret = PTR_ERR(data->clk);
- dev_err(&pdev->dev, "Failed to get clock\n");
- goto err_irq;
- }
-
- data->pdata = pdata;
- platform_set_drvdata(pdev, data);
- mutex_init(&data->lock);
-
- ret = exynos4_tmu_initialize(pdev);
- if (ret) {
- dev_err(&pdev->dev, "Failed to initialize TMU\n");
- goto err_clk;
- }
-
- ret = sysfs_create_group(&pdev->dev.kobj, &exynos4_tmu_attr_group);
- if (ret) {
- dev_err(&pdev->dev, "Failed to create sysfs group\n");
- goto err_clk;
- }
-
- data->hwmon_dev = hwmon_device_register(&pdev->dev);
- if (IS_ERR(data->hwmon_dev)) {
- ret = PTR_ERR(data->hwmon_dev);
- dev_err(&pdev->dev, "Failed to register hwmon device\n");
- goto err_create_group;
- }
-
- exynos4_tmu_control(pdev, true);
-
- return 0;
-
-err_create_group:
- sysfs_remove_group(&pdev->dev.kobj, &exynos4_tmu_attr_group);
-err_clk:
- platform_set_drvdata(pdev, NULL);
- clk_put(data->clk);
-err_irq:
- free_irq(data->irq, data);
-err_io_remap:
- iounmap(data->base);
-err_mem_region:
- release_mem_region(data->mem->start, resource_size(data->mem));
-err_free:
- kfree(data);
-
- return ret;
-}
-
-static int __devexit exynos4_tmu_remove(struct platform_device *pdev)
-{
- struct exynos4_tmu_data *data = platform_get_drvdata(pdev);
-
- exynos4_tmu_control(pdev, false);
-
- hwmon_device_unregister(data->hwmon_dev);
- sysfs_remove_group(&pdev->dev.kobj, &exynos4_tmu_attr_group);
-
- clk_put(data->clk);
-
- free_irq(data->irq, data);
-
- iounmap(data->base);
- release_mem_region(data->mem->start, resource_size(data->mem));
-
- platform_set_drvdata(pdev, NULL);
-
- kfree(data);
-
- return 0;
-}
-
-#ifdef CONFIG_PM
-static int exynos4_tmu_suspend(struct platform_device *pdev, pm_message_t state)
-{
- exynos4_tmu_control(pdev, false);
-
- return 0;
-}
-
-static int exynos4_tmu_resume(struct platform_device *pdev)
-{
- exynos4_tmu_initialize(pdev);
- exynos4_tmu_control(pdev, true);
-
- return 0;
-}
-#else
-#define exynos4_tmu_suspend NULL
-#define exynos4_tmu_resume NULL
-#endif
-
-static struct platform_driver exynos4_tmu_driver = {
- .driver = {
- .name = "exynos4-tmu",
- .owner = THIS_MODULE,
- },
- .probe = exynos4_tmu_probe,
- .remove = __devexit_p(exynos4_tmu_remove),
- .suspend = exynos4_tmu_suspend,
- .resume = exynos4_tmu_resume,
-};
-
-module_platform_driver(exynos4_tmu_driver);
-
-MODULE_DESCRIPTION("EXYNOS4 TMU Driver");
-MODULE_AUTHOR("Donggeun Kim <dg77.kim@samsung.com>");
-MODULE_LICENSE("GPL");
-MODULE_ALIAS("platform:exynos4-tmu");
If you don't know what to do here, definitely say N.
+config I2C_CHROMEOS_EC
+ tristate "ChromeOS EC pass-through I2C bus"
+ depends on MFD_CHROMEOS_EC
+ help
+ If you say yes here you get an I2C pass-through to use services of the
+ ChromeOS Embedded Controller and the chips behind it.
+
config SCx200_I2C
tristate "NatSemi SCx200 I2C using GPIO pins (DEPRECATED)"
depends on SCx200_GPIO
obj-$(CONFIG_I2C_PCA_ISA) += i2c-pca-isa.o
obj-$(CONFIG_I2C_SIBYTE) += i2c-sibyte.o
obj-$(CONFIG_I2C_STUB) += i2c-stub.o
+obj-$(CONFIG_I2C_CHROMEOS_EC) += i2c-chromeos_ec.o
obj-$(CONFIG_SCx200_ACB) += scx200_acb.o
obj-$(CONFIG_SCx200_I2C) += scx200_i2c.o
--- /dev/null
+/*
+ * Copyright (C) 2012 Google, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ *
+ * Expose an I2C passthrough to the ChromeOS EC.
+ */
+
+#include <linux/module.h>
+#include <linux/i2c.h>
+#include <linux/mfd/chromeos_ec.h>
+#include <linux/mfd/chromeos_ec_commands.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+struct ec_i2c_device {
+ struct device *dev;
+ struct i2c_adapter adap;
+ struct chromeos_ec_device *ec;
+};
+
+static int ec_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg i2c_msgs[],
+ int num)
+{
+ struct ec_i2c_device *bus = adap->algo_data;
+
+ return bus->ec->command_raw(bus->ec, i2c_msgs, num);
+}
+
+static u32 ec_i2c_functionality(struct i2c_adapter *adap)
+{
+ return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL;
+}
+
+static const struct i2c_algorithm ec_i2c_algorithm = {
+ .master_xfer = ec_i2c_xfer,
+ .functionality = ec_i2c_functionality,
+};
+
+static int __devinit ec_i2c_probe(struct platform_device *pdev)
+{
+ struct chromeos_ec_device *ec = dev_get_drvdata(pdev->dev.parent);
+ struct device *dev = ec->dev;
+ struct ec_i2c_device *bus = NULL;
+ int err;
+
+ dev_dbg(dev, "EC I2C pass-through probing\n");
+
+ bus = kzalloc(sizeof(*bus), GFP_KERNEL);
+ if (bus == NULL) {
+ err = -ENOMEM;
+ dev_err(dev, "cannot allocate bus device\n");
+ goto fail;
+ }
+
+ bus->ec = ec;
+ bus->dev = dev;
+
+ bus->adap.owner = THIS_MODULE;
+ bus->adap.retries = 3;
+ bus->adap.nr = 0;
+ strlcpy(bus->adap.name, "cros_ec_i2c", sizeof(bus->adap.name));
+ bus->adap.algo = &ec_i2c_algorithm;
+ bus->adap.algo_data = bus;
+ bus->adap.dev.parent = &ec->client->dev;
+ err = i2c_add_adapter(&bus->adap);
+ if (err) {
+ dev_err(dev, "cannot register i2c adapter\n");
+ goto fail;
+ }
+ platform_set_drvdata(pdev, bus);
+
+ return 0;
+fail:
+ kfree(bus);
+ return err;
+}
+
+static int __exit ec_i2c_remove(struct platform_device *dev)
+{
+ struct ec_i2c_device *bus = platform_get_drvdata(dev);
+
+ platform_set_drvdata(dev, NULL);
+
+ i2c_del_adapter(&bus->adap);
+ kfree(bus);
+
+ return 0;
+}
+
+static struct platform_driver ec_i2c_driver = {
+ .probe = ec_i2c_probe,
+ .remove = __exit_p(ec_i2c_remove),
+ .driver = {
+ .name = "cros_ec-i2c",
+ },
+};
+
+module_platform_driver(ec_i2c_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("EC I2C pass-through driver");
+MODULE_ALIAS("platform:cros_ec-i2c");
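As context for how this adapter is meant to be consumed, here is a minimal sketch (not part of the patch) of board code declaring a slave behind the pass-through bus. The chip name "example-sensor" and address 0x1e are placeholders, and bus number 0 matches the fixed bus->adap.nr assigned in ec_i2c_probe():

    #include <linux/i2c.h>
    #include <linux/init.h>

    /* Hypothetical slave behind the EC pass-through; the name and
     * address are illustrative only. */
    static struct i2c_board_info ec_slave_info __initdata = {
    	I2C_BOARD_INFO("example-sensor", 0x1e),
    };

    static int __init example_declare_ec_slave(void)
    {
    	/* Bus 0 matches bus->adap.nr set in ec_i2c_probe(). */
    	return i2c_register_board_info(0, &ec_slave_info, 1);
    }
    arch_initcall(example_declare_ec_slave);

Board info must be registered before the adapter itself appears, which is why a sketch like this would live in early board init code rather than in a module.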
adap->dev.parent = &pdev->dev;
adap->dev.of_node = pdev->dev.of_node;
- /*
- * If "dev->id" is negative we consider it as zero.
- * The reason to do so is to avoid sysfs names that only make
- * sense when there are multiple adapters.
- */
- adap->nr = (pdev->id != -1) ? pdev->id : 0;
+ adap->nr = pdev->id;
ret = i2c_bit_add_numbered_bus(adap);
if (ret)
goto err_add_bus;
i2c->adap = octeon_i2c_ops;
i2c->adap.dev.parent = &pdev->dev;
- i2c->adap.nr = pdev->id >= 0 ? pdev->id : 0;
+ i2c->adap.nr = pdev->id;
i2c_set_adapdata(&i2c->adap, i2c);
platform_set_drvdata(pdev, i2c);
i2c->io_size = resource_size(res);
i2c->irq = irq;
- i2c->adap.nr = pdev->id >= 0 ? pdev->id : 0;
+ i2c->adap.nr = pdev->id;
i2c->adap.owner = THIS_MODULE;
snprintf(i2c->adap.name, sizeof(i2c->adap.name),
"PCA9564/PCA9665 at 0x%08lx",
#include <plat/regs-iic.h>
#include <plat/iic.h>
-/* i2c controller state */
+/* Treat S3C2410 as baseline hardware, anything else is supported via quirks */
+#define QUIRK_S3C2440 (1 << 0)
+#define QUIRK_HDMIPHY (1 << 1)
+#define QUIRK_NO_GPIO (1 << 2)
+/* i2c controller state */
enum s3c24xx_i2c_state {
STATE_IDLE,
STATE_START,
STATE_STOP
};
-enum s3c24xx_i2c_type {
- TYPE_S3C2410,
- TYPE_S3C2440,
-};
-
struct s3c24xx_i2c {
spinlock_t lock;
wait_queue_head_t wait;
+ unsigned int quirks;
unsigned int suspended:1;
struct i2c_msg *msg;
#endif
};
-/* default platform data removed, dev should always carry data. */
+static struct platform_device_id s3c24xx_driver_ids[] = {
+ {
+ .name = "s3c2410-i2c",
+ .driver_data = 0,
+ }, {
+ .name = "s3c2440-i2c",
+ .driver_data = QUIRK_S3C2440,
+ }, {
+ .name = "s3c2440-hdmiphy-i2c",
+ .driver_data = QUIRK_S3C2440 | QUIRK_HDMIPHY | QUIRK_NO_GPIO,
+ }, { },
+};
+MODULE_DEVICE_TABLE(platform, s3c24xx_driver_ids);
-/* s3c24xx_i2c_is2440()
+#ifdef CONFIG_OF
+static const struct of_device_id s3c24xx_i2c_match[] = {
+ { .compatible = "samsung,s3c2410-i2c", .data = (void *)0 },
+ { .compatible = "samsung,s3c2440-i2c", .data = (void *)QUIRK_S3C2440 },
+ { .compatible = "samsung,s3c2440-hdmiphy-i2c",
+ .data = (void *)(QUIRK_S3C2440 | QUIRK_HDMIPHY | QUIRK_NO_GPIO) },
+ {},
+};
+MODULE_DEVICE_TABLE(of, s3c24xx_i2c_match);
+#endif
+
+/* s3c24xx_get_device_quirks
*
- * return true is this is an s3c2440
+ * Get controller type either from device tree or platform device variant.
*/
-static inline int s3c24xx_i2c_is2440(struct s3c24xx_i2c *i2c)
+static inline unsigned int s3c24xx_get_device_quirks(struct platform_device *pdev)
{
- struct platform_device *pdev = to_platform_device(i2c->dev);
- enum s3c24xx_i2c_type type;
-
-#ifdef CONFIG_OF
- if (i2c->dev->of_node)
- return of_device_is_compatible(i2c->dev->of_node,
- "samsung,s3c2440-i2c");
-#endif
+ if (pdev->dev.of_node) {
+ const struct of_device_id *match;
+ match = of_match_node(of_match_ptr(s3c24xx_i2c_match),
+ pdev->dev.of_node);
+ return (unsigned int)match->data;
+ }
- type = platform_get_device_id(pdev)->driver_data;
- return type == TYPE_S3C2440;
+ return platform_get_device_id(pdev)->driver_data;
}
/* s3c24xx_i2c_master_complete
unsigned long iicstat;
int timeout = 400;
+	/* The bus dedicated to the HDMIPHY is expected to hang up, so its
+	 * timeout is reduced to 10 ms; waiting the full 400 ms would only
+	 * stall the system needlessly.
+	 */
+ if (i2c->quirks & QUIRK_HDMIPHY)
+ timeout = 10;
+
while (timeout-- > 0) {
iicstat = readl(i2c->regs + S3C2410_IICSTAT);
msleep(1);
}
+	/* the bus dedicated to the HDMIPHY has hung up, reset the controller */
+ if (i2c->quirks & QUIRK_HDMIPHY) {
+ writel(0, i2c->regs + S3C2410_IICCON);
+ writel(0, i2c->regs + S3C2410_IICSTAT);
+ writel(0, i2c->regs + S3C2410_IICDS);
+
+ return 0;
+ }
+
return -ETIMEDOUT;
}
writel(iiccon, i2c->regs + S3C2410_IICCON);
- if (s3c24xx_i2c_is2440(i2c)) {
+ if (i2c->quirks & QUIRK_S3C2440) {
unsigned long sda_delay;
if (pdata->sda_delay) {
{
int idx, gpio, ret;
+ if (i2c->quirks & QUIRK_NO_GPIO)
+ return 0;
+
for (idx = 0; idx < 2; idx++) {
gpio = of_get_gpio(i2c->dev->of_node, idx);
if (!gpio_is_valid(gpio)) {
dev_err(i2c->dev, "invalid gpio[%d]: %d\n", idx, gpio);
goto free_gpio;
}
+ i2c->gpios[idx] = gpio;
ret = gpio_request(gpio, "i2c-bus");
if (ret) {
static void s3c24xx_i2c_dt_gpio_free(struct s3c24xx_i2c *i2c)
{
unsigned int idx;
+
+ if (i2c->quirks & QUIRK_NO_GPIO)
+ return;
+
for (idx = 0; idx < 2; idx++)
gpio_free(i2c->gpios[idx]);
}
s3c24xx_i2c_parse_dt(struct device_node *np, struct s3c24xx_i2c *i2c)
{
struct s3c2410_platform_i2c *pdata = i2c->pdata;
+ int id;
if (!np)
return;
- pdata->bus_num = -1; /* i2c bus number is dynamically assigned */
+ id = of_alias_get_id(np, "i2c");
+ if (id < 0) {
+ dev_warn(i2c->dev, "failed to get alias id:%d\n", id);
+ pdata->bus_num = -1;
+	} else {
+		/* i2c bus number is statically assigned from the alias */
+		pdata->bus_num = id;
+	}
of_property_read_u32(np, "samsung,i2c-sda-delay", &pdata->sda_delay);
of_property_read_u32(np, "samsung,i2c-slave-addr", &pdata->slave_addr);
of_property_read_u32(np, "samsung,i2c-max-bus-freq",
goto err_noclk;
}
+ i2c->quirks = s3c24xx_get_device_quirks(pdev);
if (pdata)
memcpy(i2c->pdata, pdata, sizeof(*pdata));
else
struct platform_device *pdev = to_platform_device(dev);
struct s3c24xx_i2c *i2c = platform_get_drvdata(pdev);
+ s3c24xx_i2c_dt_gpio_free(i2c);
i2c->suspended = 1;
return 0;
/* device driver for platform bus bits */
-static struct platform_device_id s3c24xx_driver_ids[] = {
- {
- .name = "s3c2410-i2c",
- .driver_data = TYPE_S3C2410,
- }, {
- .name = "s3c2440-i2c",
- .driver_data = TYPE_S3C2440,
- }, { },
-};
-MODULE_DEVICE_TABLE(platform, s3c24xx_driver_ids);
-
-#ifdef CONFIG_OF
-static const struct of_device_id s3c24xx_i2c_match[] = {
- { .compatible = "samsung,s3c2410-i2c" },
- { .compatible = "samsung,s3c2440-i2c" },
- {},
-};
-MODULE_DEVICE_TABLE(of, s3c24xx_i2c_match);
-#else
-#define s3c24xx_i2c_match NULL
-#endif
-
static struct platform_driver s3c24xx_i2c_driver = {
.probe = s3c24xx_i2c_probe,
.remove = s3c24xx_i2c_remove,
.owner = THIS_MODULE,
.name = "s3c-i2c",
.pm = S3C24XX_DEV_PM_OPS,
- .of_match_table = s3c24xx_i2c_match,
+ .of_match_table = of_match_ptr(s3c24xx_i2c_match),
},
};
i2c->algo = i2c_versatile_algo;
i2c->algo.data = i2c;
- if (dev->id >= 0) {
- /* static bus numbering */
- i2c->adap.nr = dev->id;
- ret = i2c_bit_add_numbered_bus(&i2c->adap);
- } else
- /* dynamic bus numbering */
- ret = i2c_bit_add_bus(&i2c->adap);
+ i2c->adap.nr = dev->id;
+ ret = i2c_bit_add_numbered_bus(&i2c->adap);
if (ret >= 0) {
platform_set_drvdata(dev, i2c);
of_i2c_register_devices(&i2c->adap);
To compile this driver as a module, choose M here: the
module will be called w90p910_keypad.
+config KEYBOARD_MKBP
+ tristate "Matrix Keyboard Protocol keyboard"
+ help
+ Say Y here to enable the Matrix Keyboard Protocol keyboard
+ used by Chrome devices.
+
+ To compile this driver as a module, choose M here: the
+ module will be called mkbp.
+
endif
obj-$(CONFIG_KEYBOARD_MATRIX) += matrix_keypad.o
obj-$(CONFIG_KEYBOARD_MAX7359) += max7359_keypad.o
obj-$(CONFIG_KEYBOARD_MCS) += mcs_touchkey.o
+obj-$(CONFIG_KEYBOARD_MKBP) += mkbp.o
obj-$(CONFIG_KEYBOARD_MPR121) += mpr121_touchkey.o
obj-$(CONFIG_KEYBOARD_NEWTON) += newtonkbd.o
obj-$(CONFIG_KEYBOARD_NOMADIK) += nomadik-ske-keypad.o
--- /dev/null
+/*
+ * mkbp.c - keyboard driver for Matrix KeyBoard Protocol keyboards.
+ *
+ * Copyright (C) 2012 Google, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ *
+ * The MKBP (matrix keyboard protocol) is a message-based protocol for
+ * communicating the keyboard state (which keys are pressed) from a keyboard EC
+ * to the AP over some bus (such as i2c, lpc, spi). The EC does debouncing,
+ * but everything else (including deghosting) is done here. The main
+ * motivation for this is to keep the EC firmware as simple as possible, since
+ * it cannot be easily upgraded.
+ */
+
+#include <linux/module.h>
+#include <linux/i2c.h>
+#include <linux/input.h>
+#include <linux/kernel.h>
+#include <linux/mfd/chromeos_ec.h>
+#include <linux/mfd/chromeos_ec_commands.h>
+#include <linux/notifier.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+struct mkbp_device {
+ struct device *dev;
+ struct input_dev *idev;
+ struct chromeos_ec_device *ec;
+ struct notifier_block notifier;
+};
+
+/*
+ * The standard MKBP keyboard matrix table.
+ *
+ * These may become variables when we switch to the Device Tree. However, the
+ * code and the protocol assume that NUM_ROWS = 8 (one byte per column).
+ */
+#define MKBP_NUM_ROWS 8
+#define MKBP_NUM_COLS 13
+
+/* We will read this table from the Device Tree when we have one. */
+static uint16_t mkbp_keycodes[MKBP_NUM_ROWS][MKBP_NUM_COLS] = {
+ { 0x0, KEY_LEFTMETA, KEY_F1, KEY_B,
+ KEY_F10, 0x0, KEY_N, 0x0,
+ KEY_EQUAL, 0x0, KEY_RIGHTALT, 0x0,
+ 0x0 },
+ { 0x0, KEY_ESC, KEY_F4, KEY_G,
+ KEY_F7, 0x0, KEY_H, 0x0,
+	  KEY_APOSTROPHE, KEY_F9,	0x0,	KEY_BACKSPACE,
+ 0x0 },
+ { KEY_LEFTCTRL, KEY_TAB, KEY_F3, KEY_T,
+ KEY_F6, KEY_RIGHTBRACE, KEY_Y, KEY_102ND,
+	  KEY_LEFTBRACE, KEY_F8,	0x0,	0x0,
+ 0x0 },
+ { 0x0, KEY_GRAVE, KEY_F2, KEY_5,
+ KEY_F5, 0x0, KEY_6, 0x0,
+ KEY_MINUS, 0x0, 0x0, KEY_BACKSLASH,
+ 0x0 },
+	{ KEY_RIGHTCTRL, KEY_A,	KEY_D,	KEY_F,
+ KEY_S, KEY_K, KEY_J, 0x0,
+	  KEY_SEMICOLON, KEY_L,	KEY_BACKSLASH,	KEY_ENTER,
+ 0x0 },
+ { 0x0, KEY_Z, KEY_C, KEY_V,
+ KEY_X, KEY_COMMA, KEY_M, KEY_LEFTSHIFT,
+ KEY_SLASH, KEY_DOT, 0x0, KEY_SPACE,
+ 0x0 },
+ { 0x0, KEY_1, KEY_3, KEY_4,
+ KEY_2, KEY_8, KEY_7, 0x0,
+ KEY_0, KEY_9, KEY_LEFTALT, KEY_DOWN,
+ KEY_RIGHT },
+ { 0x0, KEY_Q, KEY_E, KEY_R,
+ KEY_W, KEY_I, KEY_U, KEY_RIGHTSHIFT,
+ KEY_P, KEY_O, 0x0, KEY_UP,
+ KEY_LEFT }
+};
+
+static uint8_t identity_keycodes[256];
+
+/*
+ * Sends a single key event to the input layer.
+ */
+static inline void mkbp_send_key_event(struct mkbp_device *mkbp_dev,
+ int row, int col, int pressed)
+{
+ struct input_dev *idev = mkbp_dev->idev;
+ int code = mkbp_keycodes[row][col];
+
+ input_report_key(idev, code, pressed);
+}
+
+/*
+ * Returns true when there is at least one combination of pressed keys that
+ * results in ghosting.
+ */
+static bool mkbp_has_ghosting(struct device *dev, uint8_t *buf)
+{
+ int col, row;
+ int mask;
+ int pressed_in_row[MKBP_NUM_ROWS];
+ int row_has_teeth[MKBP_NUM_ROWS];
+
+ memset(pressed_in_row, 0, sizeof(pressed_in_row));
+ memset(row_has_teeth, 0, sizeof(row_has_teeth));
+ /*
+ * Ghosting happens if for any pressed key X there are other keys
+ * pressed both in the same row and column of X as, for instance,
+ * in the following diagram:
+ *
+ * . . Y . g .
+ * . . . . . .
+ * . . . . . .
+ * . . X . Z .
+ *
+ * In this case only X, Y, and Z are pressed, but g appears to be
+ * pressed too (see Wikipedia).
+ *
+ * We can detect ghosting in a single pass (*) over the keyboard state
+ * by maintaining two arrays. pressed_in_row counts how many pressed
+ * keys we have found in a row. row_has_teeth is true if any of the
+ * pressed keys for this row has other pressed keys in its column. If
+ * at any point of the scan we find that a row has multiple pressed
+ * keys, and at least one of them is at the intersection with a column
+ * with multiple pressed keys, we're sure there is ghosting.
+ * Conversely, if there is ghosting, we will detect such situation for
+ * at least one key during the pass.
+ *
+ * (*) This looks linear in the number of keys, but it's not. We can
+ * cheat because the number of rows is small.
+ */
+ for (row = 0; row < MKBP_NUM_ROWS; row++) {
+ mask = 1 << row;
+ for (col = 0; col < MKBP_NUM_COLS; col++) {
+ if (buf[col] & mask) {
+ pressed_in_row[row] += 1;
+ row_has_teeth[row] |= buf[col] & ~mask;
+ if (pressed_in_row[row] > 1 &&
+ row_has_teeth[row]) {
+ /* ghosting */
+ dev_dbg(dev, "ghost found at: r%d c%d,"
+ " pressed %d, teeth 0x%x\n",
+ row, col, pressed_in_row[row],
+ row_has_teeth[row]);
+ return true;
+ }
+ }
+ }
+ }
+ return false;
+}
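To make the scan concrete, here is a standalone sketch (ordinary userspace C, not part of the driver) that feeds the X/Y/Z pattern from the diagram above through the same row/column bookkeeping:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_ROWS 8
    #define NUM_COLS 13

    /* Same scan as mkbp_has_ghosting(), minus the driver plumbing. */
    static bool has_ghosting(const uint8_t *buf)
    {
    	int pressed_in_row[NUM_ROWS] = { 0 };
    	int row_has_teeth[NUM_ROWS] = { 0 };
    	int row, col;

    	for (row = 0; row < NUM_ROWS; row++) {
    		int mask = 1 << row;

    		for (col = 0; col < NUM_COLS; col++) {
    			if (buf[col] & mask) {
    				pressed_in_row[row]++;
    				row_has_teeth[row] |= buf[col] & ~mask;
    				if (pressed_in_row[row] > 1 &&
    				    row_has_teeth[row])
    					return true;
    			}
    		}
    	}
    	return false;
    }

    int main(void)
    {
    	uint8_t state[NUM_COLS] = { 0 };

    	/* Y at (row 0, col 2), X at (row 3, col 2), Z at (row 3, col 4):
    	 * the ghost 'g' would appear at (row 0, col 4). */
    	state[2] = (1 << 0) | (1 << 3);
    	state[4] = (1 << 3);

    	printf("ghosting: %d\n", has_ghosting(state));	/* prints 1 */
    	return 0;
    }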
+
+/*
+ * mkbp_old_state[row][col] is 1 when the most recent (valid) communication
+ * with the keyboard indicated that the key at row/col was in the pressed
+ * state.
+ */
+static uint8_t mkbp_old_state[MKBP_NUM_ROWS][MKBP_NUM_COLS];
+
+/*
+ * Compares the new keyboard state to the old one and produces key
+ * press/release events accordingly. The keyboard state is 13 bytes (one byte
+ * per column)
+ */
+static void mkbp_process(struct mkbp_device *mkbp_dev,
+ uint8_t *kb_state, int len)
+{
+ int col, row;
+ int new_state;
+ int num_cols;
+
+ num_cols = len;
+
+ if (mkbp_has_ghosting(mkbp_dev->dev, kb_state)) {
+ /*
+ * Simple-minded solution: ignore this state. The obvious
+ * improvement is to only ignore changes to keys involved in
+ * the ghosting, but process the other changes.
+ */
+ dev_dbg(mkbp_dev->dev, "ghosting found\n");
+ return;
+ }
+
+ for (col = 0; col < MKBP_NUM_COLS; col++) {
+ for (row = 0; row < MKBP_NUM_ROWS; row++) {
+ new_state = kb_state[col] & (1 << row);
+ if (!!new_state != mkbp_old_state[row][col]) {
+ dev_dbg(mkbp_dev->dev,
+ "changed: [r%d c%d]: byte %02x\n",
+ row, col, new_state);
+ }
+ if (new_state && !mkbp_old_state[row][col]) {
+ /* key press */
+ mkbp_send_key_event(mkbp_dev, row, col, 1);
+ mkbp_old_state[row][col] = 1;
+ } else if (!new_state && mkbp_old_state[row][col]) {
+ /* key release */
+ mkbp_send_key_event(mkbp_dev, row, col, 0);
+ mkbp_old_state[row][col] = 0;
+ }
+ }
+ }
+ input_sync(mkbp_dev->idev);
+}
+
+static int mkbp_open(struct input_dev *dev)
+{
+ struct mkbp_device *mkbp_dev = input_get_drvdata(dev);
+
+ return blocking_notifier_chain_register(&mkbp_dev->ec->event_notifier,
+ &mkbp_dev->notifier);
+}
+
+static void mkbp_close(struct input_dev *dev)
+{
+ struct mkbp_device *mkbp_dev = input_get_drvdata(dev);
+
+ blocking_notifier_chain_unregister(&mkbp_dev->ec->event_notifier,
+ &mkbp_dev->notifier);
+}
+
+static int mkbp_work(struct notifier_block *nb,
+ unsigned long state, void *_notify)
+{
+ int ret;
+ struct mkbp_device *mkbp_dev = container_of(nb, struct mkbp_device,
+ notifier);
+ uint8_t kb_state[MKBP_NUM_COLS];
+
+ ret = mkbp_dev->ec->command_recv(mkbp_dev->ec, EC_CMD_MKBP_STATE,
+ kb_state, MKBP_NUM_COLS);
+ if (ret >= 0)
+ mkbp_process(mkbp_dev, kb_state, ret);
+
+ return NOTIFY_DONE;
+}
+
+static int __devinit mkbp_probe(struct platform_device *pdev)
+{
+ struct chromeos_ec_device *ec = dev_get_drvdata(pdev->dev.parent);
+ struct device *dev = ec->dev;
+ struct mkbp_device *mkbp_dev = NULL;
+ struct input_dev *idev = NULL;
+ int i, err;
+ bool input_device_registered = false;
+
+ dev_dbg(dev, "probing\n");
+
+ mkbp_dev = kzalloc(sizeof(*mkbp_dev), GFP_KERNEL);
+ idev = input_allocate_device();
+ if (idev == NULL || mkbp_dev == NULL) {
+ err = -ENOMEM;
+ dev_err(dev, "cannot allocate\n");
+ goto fail;
+ }
+
+ mkbp_dev->ec = ec;
+ mkbp_dev->notifier.notifier_call = mkbp_work;
+ mkbp_dev->dev = dev;
+
+ idev->name = ec->client->name;
+ idev->phys = ec->client->adapter->name;
+ idev->evbit[0] = BIT_MASK(EV_KEY) | BIT_MASK(EV_REP);
+ idev->keycode = identity_keycodes;
+ idev->keycodesize = sizeof(identity_keycodes[0]);
+ idev->keycodemax =
+ sizeof(identity_keycodes) / sizeof(identity_keycodes[0]);
+ for (i = 0; i < idev->keycodemax; i++) {
+ identity_keycodes[i] = i;
+ input_set_capability(idev, EV_KEY, i);
+ }
+ idev->id.bustype = BUS_I2C;
+ idev->id.version = 1;
+ idev->id.product = 0;
+ idev->dev.parent = &ec->client->dev;
+ idev->open = mkbp_open;
+ idev->close = mkbp_close;
+
+ input_set_drvdata(idev, mkbp_dev);
+ mkbp_dev->idev = idev;
+ err = input_register_device(mkbp_dev->idev);
+ if (err) {
+ dev_err(dev, "cannot register input device\n");
+ goto fail;
+ }
+ /* We have seen the mkbp work function scheduled as much as 300ms after
+ * the interrupt service routine is called. The default autorepeat
+ * delay is 250ms. This can lead to spurious autorepeat. A better fix
+ * would be to collect time stamps in the ISR, but for the moment a
+ * longer delay helps.
+ *
+ * Also note that we must change the delay after device registration,
+ * or else the input layer assumes that the driver does its own
+ * autorepeat. (Which we will probably have to do.)
+ */
+ mkbp_dev->idev->rep[REP_DELAY] = 600;
+ input_device_registered = true;
+
+ return err;
+fail:
+ if (input_device_registered)
+ input_unregister_device(idev);
+ kfree(mkbp_dev);
+ input_free_device(idev);
+ return err;
+}
+
+static struct platform_driver mkbp_driver = {
+ .probe = mkbp_probe,
+ .driver = {
+ .name = "mkbp",
+ },
+};
+
+module_platform_driver(mkbp_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Matrix keyboard protocol driver");
+MODULE_ALIAS("platform:mkbp");
space through the SMMU (System Memory Management Unit)
hardware included on Tegra SoCs.
+config EXYNOS_IOMMU
+ bool "Exynos IOMMU Support"
+ depends on EXYNOS_DEV_SYSMMU
+ select IOMMU_API
+ select ARM_DMA_USE_IOMMU
+ help
+	  Support for the IOMMU (System MMU) of the Samsung Exynos application
+	  processor family. This enables H/W multimedia accelerators to see
+	  non-linear physical memory chunks as linear memory in their
+	  address spaces.
+
+ If unsure, say N here.
+
+config EXYNOS_IOMMU_DEBUG
+ bool "Debugging log for Exynos IOMMU"
+ depends on EXYNOS_IOMMU
+ help
+	  Select this to see detailed log messages that show what
+	  happens in the IOMMU driver.
+
+	  Say N unless you need kernel log messages for IOMMU debugging.
+
endif # IOMMU_SUPPORT
obj-$(CONFIG_OMAP_IOMMU_DEBUG) += omap-iommu-debug.o
obj-$(CONFIG_TEGRA_IOMMU_GART) += tegra-gart.o
obj-$(CONFIG_TEGRA_IOMMU_SMMU) += tegra-smmu.o
+obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
--- /dev/null
+/* linux/drivers/iommu/exynos_iommu.c
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifdef CONFIG_EXYNOS_IOMMU_DEBUG
+#define DEBUG
+#endif
+
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/pm_runtime.h>
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/mm.h>
+#include <linux/iommu.h>
+#include <linux/errno.h>
+#include <linux/list.h>
+#include <linux/memblock.h>
+#include <linux/export.h>
+
+#include <asm/cacheflush.h>
+#include <asm/pgtable.h>
+
+#include <mach/sysmmu.h>
+
+/* We do not consider super-section mapping (16MB) */
+#define SECT_ORDER 20
+#define LPAGE_ORDER 16
+#define SPAGE_ORDER 12
+
+#define SECT_SIZE (1 << SECT_ORDER)
+#define LPAGE_SIZE (1 << LPAGE_ORDER)
+#define SPAGE_SIZE (1 << SPAGE_ORDER)
+
+#define SECT_MASK (~(SECT_SIZE - 1))
+#define LPAGE_MASK (~(LPAGE_SIZE - 1))
+#define SPAGE_MASK (~(SPAGE_SIZE - 1))
+
+#define lv1ent_fault(sent) (((*(sent) & 3) == 0) || ((*(sent) & 3) == 3))
+#define lv1ent_page(sent) ((*(sent) & 3) == 1)
+#define lv1ent_section(sent) ((*(sent) & 3) == 2)
+
+#define lv2ent_fault(pent) ((*(pent) & 3) == 0)
+#define lv2ent_small(pent) ((*(pent) & 2) == 2)
+#define lv2ent_large(pent) ((*(pent) & 3) == 1)
+
+#define section_phys(sent) (*(sent) & SECT_MASK)
+#define section_offs(iova) ((iova) & 0xFFFFF)
+#define lpage_phys(pent) (*(pent) & LPAGE_MASK)
+#define lpage_offs(iova) ((iova) & 0xFFFF)
+#define spage_phys(pent) (*(pent) & SPAGE_MASK)
+#define spage_offs(iova) ((iova) & 0xFFF)
+
+#define lv1ent_offset(iova) ((iova) >> SECT_ORDER)
+#define lv2ent_offset(iova) (((iova) & 0xFF000) >> SPAGE_ORDER)
+
+#define NUM_LV1ENTRIES 4096
+#define NUM_LV2ENTRIES 256
+
+#define LV2TABLE_SIZE (NUM_LV2ENTRIES * sizeof(long))
+
+#define SPAGES_PER_LPAGE (LPAGE_SIZE / SPAGE_SIZE)
+
+#define lv2table_base(sent) (*(sent) & 0xFFFFFC00)
+
+#define mk_lv1ent_sect(pa) ((pa) | 2)
+#define mk_lv1ent_page(pa) ((pa) | 1)
+#define mk_lv2ent_lpage(pa) ((pa) | 1)
+#define mk_lv2ent_spage(pa) ((pa) | 2)
+
+#define CTRL_ENABLE 0x5
+#define CTRL_BLOCK 0x7
+#define CTRL_DISABLE 0x0
+
+#define REG_MMU_CTRL 0x000
+#define REG_MMU_CFG 0x004
+#define REG_MMU_STATUS 0x008
+#define REG_MMU_FLUSH 0x00C
+#define REG_MMU_FLUSH_ENTRY 0x010
+#define REG_PT_BASE_ADDR 0x014
+#define REG_INT_STATUS 0x018
+#define REG_INT_CLEAR 0x01C
+
+#define REG_PAGE_FAULT_ADDR 0x024
+#define REG_AW_FAULT_ADDR 0x028
+#define REG_AR_FAULT_ADDR 0x02C
+#define REG_DEFAULT_SLAVE_ADDR 0x030
+
+#define REG_MMU_VERSION 0x034
+
+#define REG_PB0_SADDR 0x04C
+#define REG_PB0_EADDR 0x050
+#define REG_PB1_SADDR 0x054
+#define REG_PB1_EADDR 0x058
+
+static unsigned long *section_entry(unsigned long *pgtable, unsigned long iova)
+{
+ return pgtable + lv1ent_offset(iova);
+}
+
+static unsigned long *page_entry(unsigned long *sent, unsigned long iova)
+{
+ return (unsigned long *)__va(lv2table_base(sent)) + lv2ent_offset(iova);
+}
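As a quick illustration of the two-level walk these helpers and macros encode, the following standalone sketch splits an arbitrary example IOVA with the same shifts and masks:

    #include <stdio.h>

    /* Inlined from lv1ent_offset(), lv2ent_offset() and spage_offs(). */
    int main(void)
    {
    	unsigned long iova = 0x12345678UL;	/* arbitrary example */

    	printf("lv1 index : 0x%lx\n", iova >> 20);		/* 0x123 */
    	printf("lv2 index : 0x%lx\n", (iova & 0xFF000) >> 12);	/* 0x45  */
    	printf("page offs : 0x%lx\n", iova & 0xFFF);		/* 0x678 */
    	return 0;
    }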
+
+enum exynos_sysmmu_inttype {
+ SYSMMU_PAGEFAULT,
+ SYSMMU_AR_MULTIHIT,
+ SYSMMU_AW_MULTIHIT,
+ SYSMMU_BUSERROR,
+ SYSMMU_AR_SECURITY,
+ SYSMMU_AR_ACCESS,
+ SYSMMU_AW_SECURITY,
+ SYSMMU_AW_PROTECTION, /* 7 */
+ SYSMMU_FAULT_UNKNOWN,
+ SYSMMU_FAULTS_NUM
+};
+
+typedef int (*sysmmu_fault_handler_t)(enum exynos_sysmmu_inttype itype,
+ unsigned long pgtable_base, unsigned long fault_addr);
+
+static unsigned short fault_reg_offset[SYSMMU_FAULTS_NUM] = {
+ REG_PAGE_FAULT_ADDR,
+ REG_AR_FAULT_ADDR,
+ REG_AW_FAULT_ADDR,
+ REG_DEFAULT_SLAVE_ADDR,
+ REG_AR_FAULT_ADDR,
+ REG_AR_FAULT_ADDR,
+ REG_AW_FAULT_ADDR,
+ REG_AW_FAULT_ADDR
+};
+
+static char *sysmmu_fault_name[SYSMMU_FAULTS_NUM] = {
+ "PAGE FAULT",
+ "AR MULTI-HIT FAULT",
+ "AW MULTI-HIT FAULT",
+ "BUS ERROR",
+ "AR SECURITY PROTECTION FAULT",
+ "AR ACCESS PROTECTION FAULT",
+ "AW SECURITY PROTECTION FAULT",
+ "AW ACCESS PROTECTION FAULT",
+ "UNKNOWN FAULT"
+};
+
+struct exynos_iommu_domain {
+ struct list_head clients; /* list of sysmmu_drvdata.node */
+ unsigned long *pgtable; /* lv1 page table, 16KB */
+ short *lv2entcnt; /* free lv2 entry counter for each section */
+ spinlock_t lock; /* lock for this structure */
+ spinlock_t pgtablelock; /* lock for modifying page table @ pgtable */
+};
+
+struct sysmmu_drvdata {
+ struct list_head node; /* entry of exynos_iommu_domain.clients */
+ struct device *sysmmu; /* System MMU's device descriptor */
+ struct device *dev; /* Owner of system MMU */
+ char *dbgname;
+ int nsfrs;
+ void __iomem **sfrbases;
+ struct clk *clk[2];
+ int activations;
+ rwlock_t lock;
+ struct iommu_domain *domain;
+ sysmmu_fault_handler_t fault_handler;
+ unsigned long pgtable;
+};
+
+static bool set_sysmmu_active(struct sysmmu_drvdata *data)
+{
+ /* return true if the System MMU was not active previously
+ and it needs to be initialized */
+ return ++data->activations == 1;
+}
+
+static bool set_sysmmu_inactive(struct sysmmu_drvdata *data)
+{
+	/* return true if the System MMU needs to be disabled */
+ BUG_ON(data->activations < 1);
+ return --data->activations == 0;
+}
+
+static bool is_sysmmu_active(struct sysmmu_drvdata *data)
+{
+ return data->activations > 0;
+}
+
+static void sysmmu_block(void __iomem *sfrbase)
+{
+ __raw_writel(CTRL_BLOCK, sfrbase + REG_MMU_CTRL);
+}
+
+static void sysmmu_unblock(void __iomem *sfrbase)
+{
+ __raw_writel(CTRL_ENABLE, sfrbase + REG_MMU_CTRL);
+}
+
+static void __sysmmu_tlb_invalidate(void __iomem *sfrbase)
+{
+ __raw_writel(0x1, sfrbase + REG_MMU_FLUSH);
+}
+
+static void __sysmmu_tlb_invalidate_entry(void __iomem *sfrbase,
+ unsigned long iova)
+{
+ __raw_writel((iova & SPAGE_MASK) | 1, sfrbase + REG_MMU_FLUSH_ENTRY);
+}
+
+static void __sysmmu_set_ptbase(void __iomem *sfrbase,
+ unsigned long pgd)
+{
+ __raw_writel(0x1, sfrbase + REG_MMU_CFG); /* 16KB LV1, LRU */
+ __raw_writel(pgd, sfrbase + REG_PT_BASE_ADDR);
+
+ __sysmmu_tlb_invalidate(sfrbase);
+}
+
+static void __sysmmu_set_prefbuf(void __iomem *sfrbase, unsigned long base,
+ unsigned long size, int idx)
+{
+ __raw_writel(base, sfrbase + REG_PB0_SADDR + idx * 8);
+ __raw_writel(size - 1 + base, sfrbase + REG_PB0_EADDR + idx * 8);
+}
+
+void exynos_sysmmu_set_prefbuf(struct device *dev,
+ unsigned long base0, unsigned long size0,
+ unsigned long base1, unsigned long size1)
+{
+ struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
+ unsigned long flags;
+ int i;
+
+ BUG_ON((base0 + size0) <= base0);
+ BUG_ON((size1 > 0) && ((base1 + size1) <= base1));
+
+ read_lock_irqsave(&data->lock, flags);
+ if (!is_sysmmu_active(data))
+ goto finish;
+
+ for (i = 0; i < data->nsfrs; i++) {
+ if ((readl(data->sfrbases[i] + REG_MMU_VERSION) >> 28) == 3) {
+ sysmmu_block(data->sfrbases[i]);
+
+ if (size1 == 0) {
+ if (size0 <= SZ_128K) {
+ base1 = base0;
+ size1 = size0;
+ } else {
+ size1 = size0 -
+ ALIGN(size0 / 2, SZ_64K);
+ size0 = size0 - size1;
+ base1 = base0 + size0;
+ }
+ }
+
+ __sysmmu_set_prefbuf(
+ data->sfrbases[i], base0, size0, 0);
+ __sysmmu_set_prefbuf(
+ data->sfrbases[i], base1, size1, 1);
+
+ sysmmu_unblock(data->sfrbases[i]);
+ }
+ }
+finish:
+ read_unlock_irqrestore(&data->lock, flags);
+}
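For illustration, the split that the code above applies to a single prefetch window on SysMMU v3 (when size1 == 0 and size0 exceeds 128 KiB) can be reproduced in isolation; the base address and size below are arbitrary examples:

    #include <stdio.h>

    #define SZ_64K 0x10000UL
    #define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

    /* Reproduces the single-window split done for SysMMU v3 above. */
    int main(void)
    {
    	unsigned long base0 = 0x40000000UL, size0 = 300 * 1024;
    	unsigned long base1, size1;

    	size1 = size0 - ALIGN(size0 / 2, SZ_64K);
    	size0 = size0 - size1;
    	base1 = base0 + size0;

    	/* pb0: 0x40000000+0x30000, pb1: 0x40030000+0x1b000 */
    	printf("pb0: %#lx+%#lx, pb1: %#lx+%#lx\n",
    	       base0, size0, base1, size1);
    	return 0;
    }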
+
+static void __set_fault_handler(struct sysmmu_drvdata *data,
+ sysmmu_fault_handler_t handler)
+{
+ unsigned long flags;
+
+ write_lock_irqsave(&data->lock, flags);
+ data->fault_handler = handler;
+ write_unlock_irqrestore(&data->lock, flags);
+}
+
+void exynos_sysmmu_set_fault_handler(struct device *dev,
+ sysmmu_fault_handler_t handler)
+{
+ struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
+
+ __set_fault_handler(data, handler);
+}
+
+static int default_fault_handler(enum exynos_sysmmu_inttype itype,
+ unsigned long pgtable_base, unsigned long fault_addr)
+{
+ unsigned long *ent;
+
+ if ((itype >= SYSMMU_FAULTS_NUM) || (itype < SYSMMU_PAGEFAULT))
+ itype = SYSMMU_FAULT_UNKNOWN;
+
+	pr_err("%s occurred at 0x%lx (page table base: 0x%lx)\n",
+ sysmmu_fault_name[itype], fault_addr, pgtable_base);
+
+ ent = section_entry(__va(pgtable_base), fault_addr);
+ pr_err("\tLv1 entry: 0x%lx\n", *ent);
+
+ if (lv1ent_page(ent)) {
+ ent = page_entry(ent, fault_addr);
+ pr_err("\t Lv2 entry: 0x%lx\n", *ent);
+ }
+
+	pr_err("Generating kernel oops because the fault is unrecoverable\n");
+
+ BUG();
+
+ return 0;
+}
+
+static irqreturn_t exynos_sysmmu_irq(int irq, void *dev_id)
+{
+	/* The System MMU is blocked when an interrupt occurs. */
+ struct sysmmu_drvdata *data = dev_id;
+ struct resource *irqres;
+ struct platform_device *pdev;
+ enum exynos_sysmmu_inttype itype;
+ unsigned long addr = -1;
+
+ int i, ret = -ENOSYS;
+
+ read_lock(&data->lock);
+
+ WARN_ON(!is_sysmmu_active(data));
+
+ pdev = to_platform_device(data->sysmmu);
+ for (i = 0; i < (pdev->num_resources / 2); i++) {
+ irqres = platform_get_resource(pdev, IORESOURCE_IRQ, i);
+ if (irqres && ((int)irqres->start == irq))
+ break;
+ }
+
+	if (i == pdev->num_resources / 2) {
+ itype = SYSMMU_FAULT_UNKNOWN;
+ } else {
+ itype = (enum exynos_sysmmu_inttype)
+ __ffs(__raw_readl(data->sfrbases[i] + REG_INT_STATUS));
+ if (WARN_ON(!((itype >= 0) && (itype < SYSMMU_FAULT_UNKNOWN))))
+ itype = SYSMMU_FAULT_UNKNOWN;
+ else
+ addr = __raw_readl(
+ data->sfrbases[i] + fault_reg_offset[itype]);
+ }
+
+ if (data->domain)
+ ret = report_iommu_fault(data->domain, data->dev,
+ addr, itype);
+
+ if ((ret == -ENOSYS) && data->fault_handler) {
+ unsigned long base = data->pgtable;
+ if (itype != SYSMMU_FAULT_UNKNOWN)
+ base = __raw_readl(
+ data->sfrbases[i] + REG_PT_BASE_ADDR);
+ ret = data->fault_handler(itype, base, addr);
+ }
+
+ if (!ret && (itype != SYSMMU_FAULT_UNKNOWN))
+ __raw_writel(1 << itype, data->sfrbases[i] + REG_INT_CLEAR);
+ else
+ dev_dbg(data->sysmmu, "(%s) %s is not handled.\n",
+ data->dbgname, sysmmu_fault_name[itype]);
+
+ if (itype != SYSMMU_FAULT_UNKNOWN)
+ sysmmu_unblock(data->sfrbases[i]);
+
+ read_unlock(&data->lock);
+
+ return IRQ_HANDLED;
+}
+
+static bool __exynos_sysmmu_disable(struct sysmmu_drvdata *data)
+{
+ unsigned long flags;
+ bool disabled = false;
+ int i;
+
+ write_lock_irqsave(&data->lock, flags);
+
+ if (!set_sysmmu_inactive(data))
+ goto finish;
+
+ for (i = 0; i < data->nsfrs; i++)
+ __raw_writel(CTRL_DISABLE, data->sfrbases[i] + REG_MMU_CTRL);
+
+ if (data->clk[1])
+ clk_disable(data->clk[1]);
+ if (data->clk[0])
+ clk_disable(data->clk[0]);
+
+ disabled = true;
+ data->pgtable = 0;
+ data->domain = NULL;
+finish:
+ write_unlock_irqrestore(&data->lock, flags);
+
+ if (disabled)
+ dev_dbg(data->sysmmu, "(%s) Disabled\n", data->dbgname);
+ else
+ dev_dbg(data->sysmmu, "(%s) %d times left to be disabled\n",
+ data->dbgname, data->activations);
+
+ return disabled;
+}
+
+/* __exynos_sysmmu_enable: Enables System MMU
+ *
+ * Returns a negative error value if an error occurred and the System MMU was
+ * not enabled, 0 if the System MMU has just been enabled, and 1 if it was
+ * already enabled.
+ */
+static int __exynos_sysmmu_enable(struct sysmmu_drvdata *data,
+ unsigned long pgtable, struct iommu_domain *domain)
+{
+ int i, ret;
+ unsigned long flags;
+
+ write_lock_irqsave(&data->lock, flags);
+
+ if (!set_sysmmu_active(data)) {
+ if (WARN_ON(pgtable != data->pgtable)) {
+ ret = -EBUSY;
+ set_sysmmu_inactive(data);
+ } else {
+ ret = 1;
+ }
+
+ dev_dbg(data->sysmmu, "(%s) Already enabled\n", data->dbgname);
+ goto finish;
+ }
+
+ ret = 0;
+
+ if (data->clk[0])
+ clk_enable(data->clk[0]);
+ if (data->clk[1])
+ clk_enable(data->clk[1]);
+
+ data->pgtable = pgtable;
+
+ for (i = 0; i < data->nsfrs; i++) {
+ __sysmmu_set_ptbase(data->sfrbases[i], pgtable);
+
+ if ((readl(data->sfrbases[i] + REG_MMU_VERSION) >> 28) == 3) {
+ /* System MMU version is 3.x */
+ __raw_writel((1 << 12) | (2 << 28),
+ data->sfrbases[i] + REG_MMU_CFG);
+ __sysmmu_set_prefbuf(data->sfrbases[i], 0, -1, 0);
+ __sysmmu_set_prefbuf(data->sfrbases[i], 0, -1, 1);
+ }
+
+ __raw_writel(CTRL_ENABLE, data->sfrbases[i] + REG_MMU_CTRL);
+ }
+
+ data->domain = domain;
+
+ dev_dbg(data->sysmmu, "(%s) Enabled\n", data->dbgname);
+finish:
+ write_unlock_irqrestore(&data->lock, flags);
+
+ if ((ret < 0) && (ret != -EBUSY)) {
+ __exynos_sysmmu_disable(data);
+ dev_dbg(data->sysmmu, "(%s) Failed to enable\n", data->dbgname);
+ }
+
+ return ret;
+}
+
+int exynos_sysmmu_enable(struct device *dev, unsigned long pgtable)
+{
+ struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
+ int ret;
+
+ BUG_ON(!memblock_is_memory(pgtable));
+
+ ret = pm_runtime_get_sync(data->sysmmu);
+ if (ret < 0)
+ return ret;
+
+ ret = __exynos_sysmmu_enable(data, pgtable, NULL);
+ if (ret < 0)
+ pm_runtime_put(data->sysmmu);
+ else
+ data->dev = dev;
+
+ return ret;
+}
+
+bool exynos_sysmmu_disable(struct device *dev)
+{
+ struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
+ bool disabled;
+
+ disabled = __exynos_sysmmu_disable(data);
+ pm_runtime_put(data->sysmmu);
+
+ return disabled;
+}
+
+static void sysmmu_tlb_invalidate_entry(struct device *dev, unsigned long iova)
+{
+ unsigned long flags;
+ struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
+
+ read_lock_irqsave(&data->lock, flags);
+
+ if (is_sysmmu_active(data)) {
+ int i;
+ for (i = 0; i < data->nsfrs; i++) {
+ sysmmu_block(data->sfrbases[i]);
+ __sysmmu_tlb_invalidate_entry(data->sfrbases[i], iova);
+ sysmmu_unblock(data->sfrbases[i]);
+ }
+ } else {
+ dev_dbg(data->sysmmu,
+ "(%s) Disabled. Skipping invalidating TLB.\n",
+ data->dbgname);
+ }
+
+ read_unlock_irqrestore(&data->lock, flags);
+}
+
+void exynos_sysmmu_tlb_invalidate(struct device *dev)
+{
+ unsigned long flags;
+ struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
+
+ read_lock_irqsave(&data->lock, flags);
+
+ if (is_sysmmu_active(data)) {
+ int i;
+ for (i = 0; i < data->nsfrs; i++) {
+ sysmmu_block(data->sfrbases[i]);
+ __sysmmu_tlb_invalidate(data->sfrbases[i]);
+ sysmmu_unblock(data->sfrbases[i]);
+ }
+ } else {
+ dev_dbg(data->sysmmu,
+ "(%s) Disabled. Skipping invalidating TLB.\n",
+ data->dbgname);
+ }
+
+ read_unlock_irqrestore(&data->lock, flags);
+}
+
+static int exynos_sysmmu_probe(struct platform_device *pdev)
+{
+ int i, ret;
+ struct device *dev;
+ struct sysmmu_drvdata *data;
+
+ dev = &pdev->dev;
+
+ data = kzalloc(sizeof(*data), GFP_KERNEL);
+ if (!data) {
+ dev_dbg(dev, "Not enough memory\n");
+ ret = -ENOMEM;
+ goto err_alloc;
+ }
+
+ ret = dev_set_drvdata(dev, data);
+ if (ret) {
+		dev_dbg(dev, "Unable to initialize driver data\n");
+ goto err_init;
+ }
+
+ data->nsfrs = pdev->num_resources / 2;
+ data->sfrbases = kmalloc(sizeof(*data->sfrbases) * data->nsfrs,
+ GFP_KERNEL);
+ if (data->sfrbases == NULL) {
+ dev_dbg(dev, "Not enough memory\n");
+ ret = -ENOMEM;
+ goto err_init;
+ }
+
+ for (i = 0; i < data->nsfrs; i++) {
+ struct resource *res;
+ res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+ if (!res) {
+ dev_dbg(dev, "Unable to find IOMEM region\n");
+ ret = -ENOENT;
+ goto err_res;
+ }
+
+ data->sfrbases[i] = ioremap(res->start, resource_size(res));
+ if (!data->sfrbases[i]) {
+ dev_dbg(dev, "Unable to map IOMEM @ PA:%#x\n",
+ res->start);
+ ret = -ENOENT;
+ goto err_res;
+ }
+ }
+
+ for (i = 0; i < data->nsfrs; i++) {
+ ret = platform_get_irq(pdev, i);
+ if (ret <= 0) {
+ dev_dbg(dev, "Unable to find IRQ resource\n");
+ goto err_irq;
+ }
+
+ ret = request_irq(ret, exynos_sysmmu_irq, 0,
+ dev_name(dev), data);
+ if (ret) {
+			dev_dbg(dev, "Unable to register interrupt handler\n");
+ goto err_irq;
+ }
+ }
+
+ if (dev_get_platdata(dev)) {
+ char *deli, *beg;
+ struct sysmmu_platform_data *platdata = dev_get_platdata(dev);
+
+ beg = platdata->clockname;
+
+ for (deli = beg; (*deli != '\0') && (*deli != ','); deli++)
+ /* NOTHING */;
+
+ if (*deli == '\0')
+ deli = NULL;
+ else
+ *deli = '\0';
+
+ data->clk[0] = clk_get(dev, beg);
+ if (IS_ERR(data->clk[0])) {
+ data->clk[0] = NULL;
+ dev_dbg(dev, "No clock descriptor registered\n");
+ }
+
+ if (data->clk[0] && deli) {
+ *deli = ',';
+ data->clk[1] = clk_get(dev, deli + 1);
+ if (IS_ERR(data->clk[1]))
+ data->clk[1] = NULL;
+ }
+
+ data->dbgname = platdata->dbgname;
+ }
+
+ data->sysmmu = dev;
+ rwlock_init(&data->lock);
+ INIT_LIST_HEAD(&data->node);
+
+ __set_fault_handler(data, &default_fault_handler);
+
+ pm_runtime_enable(dev);
+
+ dev_dbg(dev, "(%s) Initialized\n", data->dbgname);
+ return 0;
+err_irq:
+ while (i-- > 0) {
+ int irq;
+
+ irq = platform_get_irq(pdev, i);
+ free_irq(irq, data);
+ }
+err_res:
+ while (data->nsfrs-- > 0)
+ iounmap(data->sfrbases[data->nsfrs]);
+ kfree(data->sfrbases);
+err_init:
+ kfree(data);
+err_alloc:
+ dev_err(dev, "Failed to initialize\n");
+ return ret;
+}
+
+static int exynos_pm_resume(struct device *dev)
+{
+ struct sysmmu_drvdata *data;
+
+ data = dev_get_drvdata(dev);
+
+ if (is_sysmmu_active(data))
+ __exynos_sysmmu_enable(data, data->pgtable, NULL);
+
+ return 0;
+}
+
+const struct dev_pm_ops exynos_pm_ops = {
+ .resume = &exynos_pm_resume,
+};
+
+#ifdef CONFIG_OF
+static const struct of_device_id sysmmu_of_match[] = {
+ { .compatible = "samsung,s5p-sysmmu" },
+ {},
+};
+MODULE_DEVICE_TABLE(of, sysmmu_of_match);
+#else
+#define sysmmu_of_match NULL
+#endif
+
+static struct platform_driver exynos_sysmmu_driver = {
+ .probe = exynos_sysmmu_probe,
+ .driver = {
+ .owner = THIS_MODULE,
+ .name = "s5p-sysmmu",
+ .pm = &exynos_pm_ops,
+ .of_match_table = sysmmu_of_match,
+ }
+};
+
+static inline void pgtable_flush(void *vastart, void *vaend)
+{
+ dmac_flush_range(vastart, vaend);
+ outer_flush_range(virt_to_phys(vastart),
+ virt_to_phys(vaend));
+}
+
+static int exynos_iommu_domain_init(struct iommu_domain *domain)
+{
+ struct exynos_iommu_domain *priv;
+
+ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ priv->pgtable = (unsigned long *)__get_free_pages(
+ GFP_KERNEL | __GFP_ZERO, 2);
+ if (!priv->pgtable)
+ goto err_pgtable;
+
+ priv->lv2entcnt = (short *)__get_free_pages(
+ GFP_KERNEL | __GFP_ZERO, 1);
+ if (!priv->lv2entcnt)
+ goto err_counter;
+
+ pgtable_flush(priv->pgtable, priv->pgtable + NUM_LV1ENTRIES);
+
+ spin_lock_init(&priv->lock);
+ spin_lock_init(&priv->pgtablelock);
+ INIT_LIST_HEAD(&priv->clients);
+
+ domain->priv = priv;
+ return 0;
+
+err_counter:
+ free_pages((unsigned long)priv->pgtable, 2);
+err_pgtable:
+ kfree(priv);
+ return -ENOMEM;
+}
+
+static void exynos_iommu_domain_destroy(struct iommu_domain *domain)
+{
+ struct exynos_iommu_domain *priv = domain->priv;
+ struct sysmmu_drvdata *data;
+ unsigned long flags;
+ int i;
+
+ WARN_ON(!list_empty(&priv->clients));
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ list_for_each_entry(data, &priv->clients, node) {
+ while (!exynos_sysmmu_disable(data->dev))
+ ; /* until System MMU is actually disabled */
+ }
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ for (i = 0; i < NUM_LV1ENTRIES; i++)
+ if (lv1ent_page(priv->pgtable + i))
+ kfree(__va(lv2table_base(priv->pgtable + i)));
+
+ free_pages((unsigned long)priv->pgtable, 2);
+ free_pages((unsigned long)priv->lv2entcnt, 1);
+ kfree(domain->priv);
+ domain->priv = NULL;
+}
+
+static int exynos_iommu_attach_device(struct iommu_domain *domain,
+ struct device *dev)
+{
+ struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
+ struct exynos_iommu_domain *priv = domain->priv;
+ unsigned long flags;
+ int ret;
+
+	/*
+	 * If there is no System MMU, this could be a virtual device with a
+	 * common IOMMU mapping shared with another device, e.g. DRM with
+	 * DRM-FIMD.
+	 */
+	if (data == NULL) {
+ dev_err(dev, "No SYSMMU found\n");
+ return 0;
+ }
+
+ ret = pm_runtime_get_sync(data->sysmmu);
+ if (ret < 0)
+ return ret;
+
+ ret = 0;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ ret = __exynos_sysmmu_enable(data, __pa(priv->pgtable), domain);
+
+ if (ret == 0) {
+ /* 'data->node' must not appear in priv->clients */
+ BUG_ON(!list_empty(&data->node));
+ data->dev = dev;
+ list_add_tail(&data->node, &priv->clients);
+ }
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ if (ret < 0) {
+ dev_err(dev, "%s: Failed to attach IOMMU with pgtable %#lx\n",
+ __func__, __pa(priv->pgtable));
+ pm_runtime_put(data->sysmmu);
+ } else if (ret > 0) {
+ dev_dbg(dev, "%s: IOMMU with pgtable 0x%lx already attached\n",
+ __func__, __pa(priv->pgtable));
+ } else {
+ dev_dbg(dev, "%s: Attached new IOMMU with pgtable 0x%lx\n",
+ __func__, __pa(priv->pgtable));
+ }
+
+ return ret;
+}
+
+static void exynos_iommu_detach_device(struct iommu_domain *domain,
+ struct device *dev)
+{
+ struct sysmmu_drvdata *data = dev_get_drvdata(dev->archdata.iommu);
+ struct exynos_iommu_domain *priv = domain->priv;
+ struct list_head *pos;
+ unsigned long flags;
+ bool found = false;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ list_for_each(pos, &priv->clients) {
+ if (list_entry(pos, struct sysmmu_drvdata, node) == data) {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found)
+ goto finish;
+
+ if (__exynos_sysmmu_disable(data)) {
+ dev_dbg(dev, "%s: Detached IOMMU with pgtable %#lx\n",
+ __func__, __pa(priv->pgtable));
+ list_del(&data->node);
+ INIT_LIST_HEAD(&data->node);
+
+ } else {
+ dev_dbg(dev, "%s: Detaching IOMMU with pgtable %#lx delayed",
+ __func__, __pa(priv->pgtable));
+ }
+
+finish:
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ if (found)
+ pm_runtime_put(data->sysmmu);
+}
+
+static unsigned long *alloc_lv2entry(unsigned long *sent, unsigned long iova,
+ short *pgcounter)
+{
+ if (lv1ent_fault(sent)) {
+ unsigned long *pent;
+
+ pent = kzalloc(LV2TABLE_SIZE, GFP_ATOMIC);
+ BUG_ON((unsigned long)pent & (LV2TABLE_SIZE - 1));
+ if (!pent)
+ return NULL;
+
+ *sent = mk_lv1ent_page(__pa(pent));
+ *pgcounter = NUM_LV2ENTRIES;
+ pgtable_flush(pent, pent + NUM_LV2ENTRIES);
+ pgtable_flush(sent, sent + 1);
+ }
+
+ return page_entry(sent, iova);
+}
+
+static int lv1set_section(unsigned long *sent, phys_addr_t paddr, short *pgcnt)
+{
+ if (lv1ent_section(sent))
+ return -EADDRINUSE;
+
+ if (lv1ent_page(sent)) {
+ if (*pgcnt != NUM_LV2ENTRIES)
+ return -EADDRINUSE;
+
+ kfree(page_entry(sent, 0));
+
+ *pgcnt = 0;
+ }
+
+ *sent = mk_lv1ent_sect(paddr);
+
+ pgtable_flush(sent, sent + 1);
+
+ return 0;
+}
+
+static int lv2set_page(unsigned long *pent, phys_addr_t paddr, size_t size,
+ short *pgcnt)
+{
+ if (size == SPAGE_SIZE) {
+ if (!lv2ent_fault(pent))
+ return -EADDRINUSE;
+
+ *pent = mk_lv2ent_spage(paddr);
+ pgtable_flush(pent, pent + 1);
+ *pgcnt -= 1;
+ } else { /* size == LPAGE_SIZE */
+ int i;
+ for (i = 0; i < SPAGES_PER_LPAGE; i++, pent++) {
+ if (!lv2ent_fault(pent)) {
+ memset(pent, 0, sizeof(*pent) * i);
+ return -EADDRINUSE;
+ }
+
+ *pent = mk_lv2ent_lpage(paddr);
+ }
+ pgtable_flush(pent - SPAGES_PER_LPAGE, pent);
+ *pgcnt -= SPAGES_PER_LPAGE;
+ }
+
+ return 0;
+}
+
+static int exynos_iommu_map(struct iommu_domain *domain, unsigned long iova,
+ phys_addr_t paddr, size_t size, int prot)
+{
+ struct exynos_iommu_domain *priv = domain->priv;
+ unsigned long *entry;
+ unsigned long flags;
+ int ret = -ENOMEM;
+
+ BUG_ON(priv->pgtable == NULL);
+
+ spin_lock_irqsave(&priv->pgtablelock, flags);
+
+ entry = section_entry(priv->pgtable, iova);
+
+ if (size == SECT_SIZE) {
+ ret = lv1set_section(entry, paddr,
+ &priv->lv2entcnt[lv1ent_offset(iova)]);
+ } else {
+ unsigned long *pent;
+
+ pent = alloc_lv2entry(entry, iova,
+ &priv->lv2entcnt[lv1ent_offset(iova)]);
+
+ if (!pent)
+ ret = -ENOMEM;
+ else
+ ret = lv2set_page(pent, paddr, size,
+ &priv->lv2entcnt[lv1ent_offset(iova)]);
+ }
+
+ if (ret) {
+		pr_debug("%s: Failed to map iova %#lx/%#zx bytes\n",
+ __func__, iova, size);
+ }
+
+ spin_unlock_irqrestore(&priv->pgtablelock, flags);
+
+ return ret;
+}
+
+static size_t exynos_iommu_unmap(struct iommu_domain *domain,
+ unsigned long iova, size_t size)
+{
+ struct exynos_iommu_domain *priv = domain->priv;
+ struct sysmmu_drvdata *data;
+ unsigned long flags;
+ unsigned long *ent;
+
+ BUG_ON(priv->pgtable == NULL);
+
+ spin_lock_irqsave(&priv->pgtablelock, flags);
+
+ ent = section_entry(priv->pgtable, iova);
+
+ if (lv1ent_section(ent)) {
+ BUG_ON(size < SECT_SIZE);
+
+ *ent = 0;
+ pgtable_flush(ent, ent + 1);
+ size = SECT_SIZE;
+ goto done;
+ }
+
+ if (unlikely(lv1ent_fault(ent))) {
+ if (size > SECT_SIZE)
+ size = SECT_SIZE;
+ goto done;
+ }
+
+ /* lv1ent_page(sent) == true here */
+
+ ent = page_entry(ent, iova);
+
+ if (unlikely(lv2ent_fault(ent))) {
+ size = SPAGE_SIZE;
+ goto done;
+ }
+
+ if (lv2ent_small(ent)) {
+ *ent = 0;
+ size = SPAGE_SIZE;
+ priv->lv2entcnt[lv1ent_offset(iova)] += 1;
+ goto done;
+ }
+
+ /* lv1ent_large(ent) == true here */
+ BUG_ON(size < LPAGE_SIZE);
+
+ memset(ent, 0, sizeof(*ent) * SPAGES_PER_LPAGE);
+
+ size = LPAGE_SIZE;
+ priv->lv2entcnt[lv1ent_offset(iova)] += SPAGES_PER_LPAGE;
+done:
+ spin_unlock_irqrestore(&priv->pgtablelock, flags);
+
+ spin_lock_irqsave(&priv->lock, flags);
+ list_for_each_entry(data, &priv->clients, node)
+ sysmmu_tlb_invalidate_entry(data->dev, iova);
+ spin_unlock_irqrestore(&priv->lock, flags);
+
+ return size;
+}
+
+static phys_addr_t exynos_iommu_iova_to_phys(struct iommu_domain *domain,
+ unsigned long iova)
+{
+ struct exynos_iommu_domain *priv = domain->priv;
+ unsigned long *entry;
+ unsigned long flags;
+ phys_addr_t phys = 0;
+
+ spin_lock_irqsave(&priv->pgtablelock, flags);
+
+ entry = section_entry(priv->pgtable, iova);
+
+ if (lv1ent_section(entry)) {
+ phys = section_phys(entry) + section_offs(iova);
+ } else if (lv1ent_page(entry)) {
+ entry = page_entry(entry, iova);
+
+ if (lv2ent_large(entry))
+ phys = lpage_phys(entry) + lpage_offs(iova);
+ else if (lv2ent_small(entry))
+ phys = spage_phys(entry) + spage_offs(iova);
+ }
+
+ spin_unlock_irqrestore(&priv->pgtablelock, flags);
+
+ return phys;
+}
+
+static struct iommu_ops exynos_iommu_ops = {
+ .domain_init = &exynos_iommu_domain_init,
+ .domain_destroy = &exynos_iommu_domain_destroy,
+ .attach_dev = &exynos_iommu_attach_device,
+ .detach_dev = &exynos_iommu_detach_device,
+ .map = &exynos_iommu_map,
+ .unmap = &exynos_iommu_unmap,
+ .iova_to_phys = &exynos_iommu_iova_to_phys,
+ .pgsize_bitmap = SECT_SIZE | LPAGE_SIZE | SPAGE_SIZE,
+};
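The pgsize_bitmap advertises the three mapping granularities implemented above. As a rough sketch of the consequence (a simplification of the IOMMU core's behavior that ignores IOVA alignment), an aligned region is chopped into the largest advertised sizes first before ->map() is called for each chunk:

    #include <stdio.h>

    #define SECT_SIZE  (1 << 20)
    #define LPAGE_SIZE (1 << 16)
    #define SPAGE_SIZE (1 << 12)

    /* Emulates, approximately, how the core splits an aligned region
     * according to the advertised pgsize_bitmap. */
    int main(void)
    {
    	unsigned long left = 0x111000;	/* 1 MiB + 64 KiB + 4 KiB */

    	while (left) {
    		unsigned long step;

    		if (left >= SECT_SIZE)
    			step = SECT_SIZE;
    		else if (left >= LPAGE_SIZE)
    			step = LPAGE_SIZE;
    		else
    			step = SPAGE_SIZE;
    		printf("map chunk of %#lx\n", step);
    		left -= step;
    	}
    	return 0;
    }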
+
+static int __init exynos_iommu_init(void)
+{
+ int ret;
+
+ ret = platform_driver_register(&exynos_sysmmu_driver);
+
+ if (ret == 0)
+ bus_set_iommu(&platform_bus_type, &exynos_iommu_ops);
+
+ return ret;
+}
+arch_initcall(exynos_iommu_init);
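For reference, a client of the registered ops would go through the generic IOMMU API of this kernel generation. The following sketch is illustrative only: the device, IOVA and size are placeholders and error handling is minimal:

    #include <linux/iommu.h>
    #include <linux/platform_device.h>

    /* Hypothetical client: attach a device and map one 4 KiB page. */
    static int example_map_one_page(struct device *dev, phys_addr_t pa)
    {
    	struct iommu_domain *domain;
    	int ret;

    	domain = iommu_domain_alloc(&platform_bus_type);
    	if (!domain)
    		return -ENOMEM;

    	ret = iommu_attach_device(domain, dev);
    	if (ret)
    		goto err_free;

    	/* 0x20000000 is an arbitrary, SPAGE-aligned example IOVA. */
    	ret = iommu_map(domain, 0x20000000, pa, 4096,
    			IOMMU_READ | IOMMU_WRITE);
    	if (ret)
    		iommu_detach_device(domain, dev);
    err_free:
    	if (ret)
    		iommu_domain_free(domain);
    	return ret;
    }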
config VIDEOBUF2_MEMOPS
tristate
+config VIDEOBUF2_FB
+ depends on VIDEOBUF2_CORE
+ select FB_CFB_FILLRECT
+ select FB_CFB_COPYAREA
+ select FB_CFB_IMAGEBLIT
+ tristate
config VIDEOBUF2_DMA_CONTIG
select VIDEOBUF2_CORE
select VIDEOBUF2_MEMOPS
select VIDEOBUF2_MEMOPS
tristate
-
config VIDEOBUF2_DMA_SG
#depends on HAS_DMA
select VIDEOBUF2_CORE
depends on FRAMEBUFFER_CONSOLE || STI_CONSOLE
select FONT_8x16
select VIDEOBUF2_VMALLOC
+ select DMA_SHARED_BUFFER
default n
---help---
Enables a virtual video driver. This device shows a color bar
module will be called s5p-csis.
source "drivers/media/video/s5p-tv/Kconfig"
+source "drivers/media/video/exynos/Kconfig"
endif # V4L_PLATFORM_DRIVERS
endif # VIDEO_CAPTURE_DRIVERS
This is a v4l2 driver for Samsung S5P and EXYNOS4 JPEG codec
config VIDEO_SAMSUNG_S5P_MFC
+ bool
+
+config VIDEO_SAMSUNG_S5P_MFC_V5
tristate "Samsung S5P MFC 5.1 Video Codec"
- depends on VIDEO_DEV && VIDEO_V4L2 && PLAT_S5P
- select VIDEOBUF2_DMA_CONTIG
+ depends on VIDEO_DEV && VIDEO_V4L2 && ARCH_EXYNOS4
+ select VIDEO_SAMSUNG_S5P_MFC
default n
help
MFC 5.1 driver for V4L2.
+config VIDEO_SAMSUNG_S5P_MFC_V6
+ tristate "Samsung S5P MFC 6.x Video Codec"
+ depends on VIDEO_DEV && VIDEO_V4L2 && ARCH_EXYNOS5
+ select VIDEO_SAMSUNG_S5P_MFC
+ select VIDEOBUF2_DMA_CONTIG
+ select DMA_SHARED_BUFFER
+ default n
+ help
+ MFC 6.x driver for V4L2.
+
+config VIDEO_SAMSUNG_GSCALER
+ tristate "Samsung Exynos GSC driver"
+ depends on VIDEO_DEV && VIDEO_V4L2 && PLAT_S5P
+ select VIDEOBUF2_DMA_CONTIG
+ select V4L2_MEM2MEM_DEV
+ select DMA_SHARED_BUFFER
+ help
+	  This is a V4L2-based G-Scaler driver for EXYNOS5.
+
config VIDEO_MX2_EMMAPRP
tristate "MX2 eMMa-PrP support"
depends on VIDEO_DEV && VIDEO_V4L2 && SOC_IMX27
obj-$(CONFIG_VIDEOBUF2_CORE) += videobuf2-core.o
obj-$(CONFIG_VIDEOBUF2_MEMOPS) += videobuf2-memops.o
+obj-$(CONFIG_VIDEOBUF2_FB) += videobuf2-fb.o
obj-$(CONFIG_VIDEOBUF2_VMALLOC) += videobuf2-vmalloc.o
obj-$(CONFIG_VIDEOBUF2_DMA_CONTIG) += videobuf2-dma-contig.o
obj-$(CONFIG_VIDEOBUF2_DMA_SG) += videobuf2-dma-sg.o
obj-$(CONFIG_VIDEO_SAMSUNG_S5P_JPEG) += s5p-jpeg/
obj-$(CONFIG_VIDEO_SAMSUNG_S5P_MFC) += s5p-mfc/
obj-$(CONFIG_VIDEO_SAMSUNG_S5P_TV) += s5p-tv/
+obj-$(CONFIG_VIDEO_EXYNOS) += exynos/
obj-$(CONFIG_VIDEO_SAMSUNG_S5P_G2D) += s5p-g2d/
--- /dev/null
+config VIDEO_EXYNOS
+ bool "Exynos Multimedia Devices"
+ depends on ARCH_EXYNOS5
+ default n
+ select VIDEO_FIXED_MINOR_RANGES
+ select VIDEOBUF2_DMA_CONTIG
+ help
+	  This is the umbrella option for the Exynos multimedia device drivers.
+
+if VIDEO_EXYNOS
+ source "drivers/media/video/exynos/tv/Kconfig"
+ source "drivers/media/video/exynos/mdev/Kconfig"
+ source "drivers/media/video/exynos/gsc/Kconfig"
+endif
+
+config MEDIA_EXYNOS
+ bool
+ help
+	  Enable this to compile the Exynos5 media device (mdev) driver.
--- /dev/null
+obj-$(CONFIG_VIDEO_EXYNOS_GSCALER) += gsc/
+obj-$(CONFIG_EXYNOS_MEDIA_DEVICE) += mdev/
+obj-$(CONFIG_VIDEO_EXYNOS_TV) += tv/
+
+EXTRA_CFLAGS += -Idrivers/media/video
--- /dev/null
+config VIDEO_EXYNOS_GSCALER
+ bool "Exynos G-Scaler driver"
+ depends on VIDEO_EXYNOS
+ select MEDIA_EXYNOS
+ select V4L2_MEM2MEM_DEV
+ default n
+ help
+	  This is a V4L2 driver for the Exynos G-Scaler device.
+
+if VIDEO_EXYNOS_GSCALER && VIDEOBUF2_CMA_PHYS
+comment "Reserved memory configurations"
+config VIDEO_SAMSUNG_MEMSIZE_GSC0
+ int "Memory size in kbytes for GSC0"
+ default "5120"
+
+config VIDEO_SAMSUNG_MEMSIZE_GSC1
+ int "Memory size in kbytes for GSC1"
+ default "5120"
+
+config VIDEO_SAMSUNG_MEMSIZE_GSC2
+ int "Memory size in kbytes for GSC2"
+ default "5120"
+
+config VIDEO_SAMSUNG_MEMSIZE_GSC3
+ int "Memory size in kbytes for GSC3"
+ default "5120"
+endif
--- /dev/null
+gsc-objs := gsc-core.o gsc-vb2.o gsc-m2m.o gsc-output.o gsc-capture.o \
+ gsc-regs.o
+obj-$(CONFIG_VIDEO_SAMSUNG_GSCALER) += gsc.o
--- /dev/null
+/* linux/drivers/media/video/exynos/gsc/gsc-capture.c
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * Samsung EXYNOS5 SoC series G-scaler driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation, either version 2 of the License,
+ * or (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/version.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/bug.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+#include <linux/device.h>
+#include <linux/platform_device.h>
+#include <linux/list.h>
+#include <linux/io.h>
+#include <linux/slab.h>
+#include <linux/clk.h>
+#include <linux/string.h>
+#include <linux/i2c.h>
+#include <media/v4l2-ioctl.h>
+#include <media/exynos_gscaler.h>
+
+#include "gsc-core.h"
+
+static int gsc_capture_queue_setup(struct vb2_queue *vq,
+ const struct v4l2_format *fmt, unsigned int *num_buffers,
+ unsigned int *num_planes, unsigned int sizes[],
+ void *allocators[])
+{
+ struct gsc_ctx *ctx = vq->drv_priv;
+ struct gsc_fmt *ffmt = ctx->d_frame.fmt;
+ int i;
+
+ if (!ffmt)
+ return -EINVAL;
+
+ *num_planes = ffmt->num_planes;
+
+ for (i = 0; i < ffmt->num_planes; i++) {
+ sizes[i] = get_plane_size(&ctx->d_frame, i);
+ allocators[i] = ctx->gsc_dev->alloc_ctx;
+ }
+
+ return 0;
+}
+static int gsc_capture_buf_prepare(struct vb2_buffer *vb)
+{
+ struct vb2_queue *vq = vb->vb2_queue;
+ struct gsc_ctx *ctx = vq->drv_priv;
+ struct gsc_frame *frame = &ctx->d_frame;
+ int i;
+
+ if (frame->fmt == NULL)
+ return -EINVAL;
+
+ for (i = 0; i < frame->fmt->num_planes; i++) {
+ unsigned long size = frame->payload[i];
+
+ if (vb2_plane_size(vb, i) < size) {
+ v4l2_err(ctx->gsc_dev->cap.vfd,
+				"User buffer too small (%lu < %lu)\n",
+ vb2_plane_size(vb, i), size);
+ return -EINVAL;
+ }
+ vb2_set_plane_payload(vb, i, size);
+ }
+
+ return 0;
+}
+
+int gsc_cap_pipeline_s_stream(struct gsc_dev *gsc, int on)
+{
+ struct gsc_pipeline *p = &gsc->pipeline;
+ int ret = 0;
+
+ if ((!p->sensor || !p->flite) && (!p->disp))
+ return -ENODEV;
+
+ if (on) {
+ ret = v4l2_subdev_call(p->sd_gsc, video, s_stream, 1);
+ if (ret < 0 && ret != -ENOIOCTLCMD)
+ return ret;
+ if (p->disp) {
+ ret = v4l2_subdev_call(p->disp, video, s_stream, 1);
+ if (ret < 0 && ret != -ENOIOCTLCMD)
+ return ret;
+ } else {
+ ret = v4l2_subdev_call(p->flite, video, s_stream, 1);
+ if (ret < 0 && ret != -ENOIOCTLCMD)
+ return ret;
+ ret = v4l2_subdev_call(p->csis, video, s_stream, 1);
+ if (ret < 0 && ret != -ENOIOCTLCMD)
+ return ret;
+ ret = v4l2_subdev_call(p->sensor, video, s_stream, 1);
+ }
+ } else {
+ ret = v4l2_subdev_call(p->sd_gsc, video, s_stream, 0);
+ if (ret < 0 && ret != -ENOIOCTLCMD)
+ return ret;
+ if (p->disp) {
+ ret = v4l2_subdev_call(p->disp, video, s_stream, 0);
+ if (ret < 0 && ret != -ENOIOCTLCMD)
+ return ret;
+ } else {
+ ret = v4l2_subdev_call(p->sensor, video, s_stream, 0);
+ if (ret < 0 && ret != -ENOIOCTLCMD)
+ return ret;
+ ret = v4l2_subdev_call(p->csis, video, s_stream, 0);
+ if (ret < 0 && ret != -ENOIOCTLCMD)
+ return ret;
+ ret = v4l2_subdev_call(p->flite, video, s_stream, 0);
+ }
+ }
+
+ return ret == -ENOIOCTLCMD ? 0 : ret;
+}
+
+static int gsc_capture_set_addr(struct vb2_buffer *vb)
+{
+ struct gsc_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+ struct gsc_dev *gsc = ctx->gsc_dev;
+ int ret;
+
+ ret = gsc_prepare_addr(ctx, vb, &ctx->d_frame, &ctx->d_frame.addr);
+ if (ret) {
+ gsc_err("Prepare G-Scaler address failed\n");
+ return -EINVAL;
+ }
+
+ gsc_hw_set_output_addr(gsc, &ctx->d_frame.addr, vb->v4l2_buf.index);
+
+ return 0;
+}
+
+static void gsc_capture_buf_queue(struct vb2_buffer *vb)
+{
+ struct gsc_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+ struct gsc_dev *gsc = ctx->gsc_dev;
+ struct gsc_capture_device *cap = &gsc->cap;
+ int min_bufs, ret;
+ unsigned long flags;
+
+ spin_lock_irqsave(&gsc->slock, flags);
+ ret = gsc_capture_set_addr(vb);
+ if (ret)
+ gsc_err("Failed to prepare output addr");
+
+ if (!test_bit(ST_CAPT_SUSPENDED, &gsc->state)) {
+ gsc_info("buf_index : %d", vb->v4l2_buf.index);
+ gsc_hw_set_output_buf_masking(gsc, vb->v4l2_buf.index, 0);
+ }
+
+ min_bufs = cap->reqbufs_cnt > 1 ? 2 : 1;
+
+ if (vb2_is_streaming(&cap->vbq) &&
+ (gsc_hw_get_nr_unmask_bits(gsc) >= min_bufs) &&
+ !test_bit(ST_CAPT_STREAM, &gsc->state)) {
+ if (!test_and_set_bit(ST_CAPT_PIPE_STREAM, &gsc->state)) {
+ spin_unlock_irqrestore(&gsc->slock, flags);
+ gsc_cap_pipeline_s_stream(gsc, 1);
+ return;
+ }
+
+ if (!test_bit(ST_CAPT_STREAM, &gsc->state)) {
+ gsc_info("G-Scaler h/w enable control");
+ gsc_hw_enable_control(gsc, true);
+ set_bit(ST_CAPT_STREAM, &gsc->state);
+ }
+ }
+ spin_unlock_irqrestore(&gsc->slock, flags);
+}
+
+static int gsc_capture_get_scaler_factor(u32 src, u32 tar, u32 *ratio)
+{
+ u32 sh = 3;
+ tar *= 4;
+ if (tar >= src) {
+ *ratio = 1;
+ return 0;
+ }
+
+ while (--sh) {
+ u32 tmp = 1 << sh;
+ if (src >= tar * tmp)
+ *ratio = sh;
+ }
+ return 0;
+}
+
+static int gsc_capture_scaler_info(struct gsc_ctx *ctx)
+{
+ struct gsc_frame *s_frame = &ctx->s_frame;
+ struct gsc_frame *d_frame = &ctx->d_frame;
+ struct gsc_scaler *sc = &ctx->scaler;
+
+ gsc_capture_get_scaler_factor(s_frame->crop.width, d_frame->crop.width,
+ &sc->pre_hratio);
+	gsc_capture_get_scaler_factor(s_frame->crop.height, d_frame->crop.height,
+ &sc->pre_vratio);
+
+ sc->main_hratio = (s_frame->crop.width << 16) / d_frame->crop.width;
+ sc->main_vratio = (s_frame->crop.height << 16) / d_frame->crop.height;
+
+	gsc_info("src width : %d, src height : %d, dst width : %d, "
+		 "dst height : %d", s_frame->crop.width, s_frame->crop.height,
+		 d_frame->crop.width, d_frame->crop.height);
+	gsc_info("pre_hratio : 0x%x, pre_vratio : 0x%x, main_hratio : 0x%lx, "
+		 "main_vratio : 0x%lx", sc->pre_hratio, sc->pre_vratio,
+		 sc->main_hratio, sc->main_vratio);
+
+ return 0;
+}
+
+static int gsc_capture_subdev_s_stream(struct v4l2_subdev *sd, int enable)
+{
+ struct gsc_dev *gsc = v4l2_get_subdevdata(sd);
+ struct gsc_capture_device *cap = &gsc->cap;
+ struct gsc_ctx *ctx = cap->ctx;
+
+ gsc_info("");
+
+ gsc_hw_set_frm_done_irq_mask(gsc, false);
+ gsc_hw_set_overflow_irq_mask(gsc, false);
+ gsc_hw_set_one_frm_mode(gsc, false);
+ gsc_hw_set_gsc_irq_enable(gsc, true);
+
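+	/*
+	 * Route the input: FIMD writeback when a display subdev is part of
+	 * the pipeline, the camera interface otherwise.
+	 */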
+ if (gsc->pipeline.disp)
+ gsc_hw_set_sysreg_writeback(ctx);
+ else
+ gsc_hw_set_sysreg_camif(true);
+
+ gsc_hw_set_input_path(ctx);
+ gsc_hw_set_in_size(ctx);
+ gsc_hw_set_in_image_format(ctx);
+ gsc_hw_set_output_path(ctx);
+ gsc_hw_set_out_size(ctx);
+ gsc_hw_set_out_image_format(ctx);
+ gsc_hw_set_global_alpha(ctx);
+
+ gsc_capture_scaler_info(ctx);
+ gsc_hw_set_prescaler(ctx);
+ gsc_hw_set_mainscaler(ctx);
+
+ set_bit(ST_CAPT_PEND, &gsc->state);
+
+ gsc_hw_enable_control(gsc, true);
+ set_bit(ST_CAPT_STREAM, &gsc->state);
+
+	return 0;
+}
+
+static int gsc_capture_start_streaming(struct vb2_queue *q, unsigned int count)
+{
+ struct gsc_ctx *ctx = q->drv_priv;
+ struct gsc_dev *gsc = ctx->gsc_dev;
+ struct gsc_capture_device *cap = &gsc->cap;
+ int min_bufs;
+
+ gsc_hw_set_sw_reset(gsc);
+ gsc_wait_reset(gsc);
+ gsc_hw_set_output_buf_mask_all(gsc);
+
+ min_bufs = cap->reqbufs_cnt > 1 ? 2 : 1;
+ if ((gsc_hw_get_nr_unmask_bits(gsc) >= min_bufs) &&
+ !test_bit(ST_CAPT_STREAM, &gsc->state)) {
+ if (!test_and_set_bit(ST_CAPT_PIPE_STREAM, &gsc->state)) {
+ gsc_info("");
+ gsc_cap_pipeline_s_stream(gsc, 1);
+ }
+ }
+
+ return 0;
+}
+
+static int gsc_capture_state_cleanup(struct gsc_dev *gsc)
+{
+ unsigned long flags;
+ bool streaming;
+
+ spin_lock_irqsave(&gsc->slock, flags);
+ streaming = gsc->state & (1 << ST_CAPT_PIPE_STREAM);
+
+ gsc->state &= ~(1 << ST_CAPT_RUN | 1 << ST_CAPT_STREAM |
+ 1 << ST_CAPT_PIPE_STREAM | 1 << ST_CAPT_PEND);
+
+ set_bit(ST_CAPT_SUSPENDED, &gsc->state);
+ spin_unlock_irqrestore(&gsc->slock, flags);
+
+ if (streaming)
+ return gsc_cap_pipeline_s_stream(gsc, 0);
+ else
+ return 0;
+}
+
+static int gsc_cap_stop_capture(struct gsc_dev *gsc)
+{
+ int ret;
+ if (!gsc_cap_active(gsc)) {
+ gsc_warn("already stopped\n");
+ return 0;
+ }
+ gsc_info("G-Scaler h/w disable control");
+ gsc_hw_enable_control(gsc, false);
+ clear_bit(ST_CAPT_STREAM, &gsc->state);
+ ret = gsc_wait_operating(gsc);
+ if (ret) {
+ gsc_err("GSCALER_OP_STATUS is operating\n");
+ return ret;
+ }
+
+ return gsc_capture_state_cleanup(gsc);
+}
+
+static int gsc_capture_stop_streaming(struct vb2_queue *q)
+{
+ struct gsc_ctx *ctx = q->drv_priv;
+ struct gsc_dev *gsc = ctx->gsc_dev;
+
+ if (!gsc_cap_active(gsc))
+ return -EINVAL;
+
+ return gsc_cap_stop_capture(gsc);
+}
+
+static struct vb2_ops gsc_capture_qops = {
+ .queue_setup = gsc_capture_queue_setup,
+ .buf_prepare = gsc_capture_buf_prepare,
+ .buf_queue = gsc_capture_buf_queue,
+ .wait_prepare = gsc_unlock,
+ .wait_finish = gsc_lock,
+ .start_streaming = gsc_capture_start_streaming,
+ .stop_streaming = gsc_capture_stop_streaming,
+};
+
+/*
+ * The video node ioctl operations
+ */
+static int gsc_vidioc_querycap_capture(struct file *file, void *priv,
+ struct v4l2_capability *cap)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+
+	strlcpy(cap->driver, gsc->pdev->name, sizeof(cap->driver));
+	strlcpy(cap->card, gsc->pdev->name, sizeof(cap->card));
+ cap->bus_info[0] = 0;
+ cap->capabilities = V4L2_CAP_STREAMING | V4L2_CAP_VIDEO_CAPTURE_MPLANE;
+
+ return 0;
+}
+
+static int gsc_capture_enum_fmt_mplane(struct file *file, void *priv,
+ struct v4l2_fmtdesc *f)
+{
+ return gsc_enum_fmt_mplane(f);
+}
+
+static int gsc_capture_try_fmt_mplane(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+
+ if (f->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
+ return -EINVAL;
+
+ return gsc_try_fmt_mplane(gsc->cap.ctx, f);
+}
+
+static int gsc_capture_s_fmt_mplane(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_ctx *ctx = gsc->cap.ctx;
+ struct gsc_frame *frame;
+ struct v4l2_pix_format_mplane *pix;
+ int i, ret = 0;
+
+ ret = gsc_capture_try_fmt_mplane(file, fh, f);
+ if (ret)
+ return ret;
+
+ if (vb2_is_streaming(&gsc->cap.vbq)) {
+ gsc_err("queue (%d) busy", f->type);
+ return -EBUSY;
+ }
+
+ frame = &ctx->d_frame;
+
+ pix = &f->fmt.pix_mp;
+ frame->fmt = find_fmt(&pix->pixelformat, NULL, 0);
+ if (!frame->fmt)
+ return -EINVAL;
+
+ for (i = 0; i < frame->fmt->nr_comp; i++)
+ frame->payload[i] =
+ pix->plane_fmt[i].bytesperline * pix->height;
+
+ gsc_set_frame_size(frame, pix->width, pix->height);
+
+ gsc_info("f_w: %d, f_h: %d", frame->f_width, frame->f_height);
+
+ return 0;
+}
+
+static int gsc_capture_g_fmt_mplane(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_ctx *ctx = gsc->cap.ctx;
+
+ if (f->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
+ return -EINVAL;
+
+ return gsc_g_fmt_mplane(ctx, f);
+}
+
+static int gsc_capture_reqbufs(struct file *file, void *priv,
+ struct v4l2_requestbuffers *reqbufs)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_capture_device *cap = &gsc->cap;
+	struct gsc_frame *frame;
+	int ret;
+
+	frame = ctx_get_frame(cap->ctx, reqbufs->type);
+	if (IS_ERR(frame))
+		return PTR_ERR(frame);
+
+	ret = vb2_reqbufs(&cap->vbq, reqbufs);
+	if (!ret)
+		cap->reqbufs_cnt = reqbufs->count;
+
+	return ret;
+}
+
+static int gsc_capture_querybuf(struct file *file, void *priv,
+ struct v4l2_buffer *buf)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_capture_device *cap = &gsc->cap;
+
+ return vb2_querybuf(&cap->vbq, buf);
+}
+
+static int gsc_capture_qbuf(struct file *file, void *priv,
+ struct v4l2_buffer *buf)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_capture_device *cap = &gsc->cap;
+
+ return vb2_qbuf(&cap->vbq, buf);
+}
+
+static int gsc_capture_dqbuf(struct file *file, void *priv,
+ struct v4l2_buffer *buf)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ return vb2_dqbuf(&gsc->cap.vbq, buf,
+ file->f_flags & O_NONBLOCK);
+}
+
+static int gsc_capture_cropcap(struct file *file, void *fh,
+ struct v4l2_cropcap *cr)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_ctx *ctx = gsc->cap.ctx;
+
+ if (cr->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
+ return -EINVAL;
+
+ cr->bounds.left = 0;
+ cr->bounds.top = 0;
+ cr->bounds.width = ctx->d_frame.f_width;
+ cr->bounds.height = ctx->d_frame.f_height;
+ cr->defrect = cr->bounds;
+
+ return 0;
+}
+
+static int gsc_capture_enum_input(struct file *file, void *priv,
+ struct v4l2_input *i)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct exynos_platform_gscaler *pdata = gsc->pdata;
+ struct exynos_isp_info *isp_info;
+
+ if (i->index >= MAX_CAMIF_CLIENTS)
+ return -EINVAL;
+
+ isp_info = pdata->isp_info[i->index];
+ if (isp_info == NULL)
+ return -EINVAL;
+
+ i->type = V4L2_INPUT_TYPE_CAMERA;
+
+	strlcpy(i->name, isp_info->board_info->type, sizeof(i->name));
+
+ return 0;
+}
+
+static int gsc_capture_s_input(struct file *file, void *priv, unsigned int i)
+{
+ return i == 0 ? 0 : -EINVAL;
+}
+
+static int gsc_capture_g_input(struct file *file, void *priv, unsigned int *i)
+{
+ *i = 0;
+ return 0;
+}
+
+int gsc_capture_ctrls_create(struct gsc_dev *gsc)
+{
+ int ret;
+
+ if (WARN_ON(gsc->cap.ctx == NULL))
+ return -ENXIO;
+ if (gsc->cap.ctx->ctrls_rdy)
+ return 0;
+ ret = gsc_ctrls_create(gsc->cap.ctx);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+void gsc_cap_pipeline_prepare(struct gsc_dev *gsc, struct media_entity *me)
+{
+ struct media_entity_graph graph;
+ struct v4l2_subdev *sd;
+
+ media_entity_graph_walk_start(&graph, me);
+
+ while ((me = media_entity_graph_walk_next(&graph))) {
+ gsc_info("me->name : %s", me->name);
+ if (media_entity_type(me) != MEDIA_ENT_T_V4L2_SUBDEV)
+ continue;
+ sd = media_entity_to_v4l2_subdev(me);
+
+ switch (sd->grp_id) {
+ case GSC_CAP_GRP_ID:
+ gsc->pipeline.sd_gsc = sd;
+ break;
+ case FLITE_GRP_ID:
+ gsc->pipeline.flite = sd;
+ break;
+ case SENSOR_GRP_ID:
+ gsc->pipeline.sensor = sd;
+ break;
+ case CSIS_GRP_ID:
+ gsc->pipeline.csis = sd;
+ break;
+ case FIMD_GRP_ID:
+ gsc->pipeline.disp = sd;
+ break;
+ default:
+ gsc_err("Unsupported group id");
+ break;
+ }
+ }
+
+ gsc_info("gsc->pipeline.sd_gsc : 0x%p", gsc->pipeline.sd_gsc);
+ gsc_info("gsc->pipeline.flite : 0x%p", gsc->pipeline.flite);
+ gsc_info("gsc->pipeline.sensor : 0x%p", gsc->pipeline.sensor);
+ gsc_info("gsc->pipeline.csis : 0x%p", gsc->pipeline.csis);
+ gsc_info("gsc->pipeline.disp : 0x%p", gsc->pipeline.disp);
+}
+
+static int __subdev_set_power(struct v4l2_subdev *sd, int on)
+{
+ int *use_count;
+ int ret;
+
+ if (sd == NULL)
+ return -ENXIO;
+
+ use_count = &sd->entity.use_count;
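+	/*
+	 * Only the first power-on and the last power-off are forwarded to
+	 * the subdev; intermediate calls just adjust the use count.
+	 */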
+ if (on && (*use_count)++ > 0)
+ return 0;
+ else if (!on && (*use_count == 0 || --(*use_count) > 0))
+ return 0;
+ ret = v4l2_subdev_call(sd, core, s_power, on);
+
+ return ret != -ENOIOCTLCMD ? ret : 0;
+}
+
+int gsc_cap_pipeline_s_power(struct gsc_dev *gsc, int state)
+{
+ int ret = 0;
+
+ if (!gsc->pipeline.sensor || !gsc->pipeline.flite)
+ return -ENXIO;
+
+ if (state) {
+ ret = __subdev_set_power(gsc->pipeline.flite, 1);
+ if (ret && ret != -ENXIO)
+ return ret;
+ ret = __subdev_set_power(gsc->pipeline.csis, 1);
+ if (ret && ret != -ENXIO)
+ return ret;
+ ret = __subdev_set_power(gsc->pipeline.sensor, 1);
+ } else {
+ ret = __subdev_set_power(gsc->pipeline.flite, 0);
+ if (ret && ret != -ENXIO)
+ return ret;
+ ret = __subdev_set_power(gsc->pipeline.sensor, 0);
+ if (ret && ret != -ENXIO)
+ return ret;
+ ret = __subdev_set_power(gsc->pipeline.csis, 0);
+ }
+ return ret == -ENXIO ? 0 : ret;
+}
+
+static void gsc_set_cam_clock(struct gsc_dev *gsc, bool on)
+{
+ struct v4l2_subdev *sd = NULL;
+ struct gsc_sensor_info *s_info = NULL;
+
+ if (gsc->pipeline.sensor) {
+ sd = gsc->pipeline.sensor;
+ s_info = v4l2_get_subdev_hostdata(sd);
+ }
+ if (on) {
+ clk_enable(gsc->clock);
+ if (gsc->pipeline.sensor)
+ clk_enable(s_info->camclk);
+ } else {
+ clk_disable(gsc->clock);
+ if (gsc->pipeline.sensor)
+ clk_disable(s_info->camclk);
+ }
+}
+
+static int __gsc_cap_pipeline_initialize(struct gsc_dev *gsc,
+ struct media_entity *me, bool prep)
+{
+ int ret = 0;
+
+ if (prep)
+ gsc_cap_pipeline_prepare(gsc, me);
+ if ((!gsc->pipeline.sensor || !gsc->pipeline.flite) &&
+ !gsc->pipeline.disp)
+ return -EINVAL;
+
+ gsc_set_cam_clock(gsc, true);
+
+ if (gsc->pipeline.sensor && gsc->pipeline.flite)
+ ret = gsc_cap_pipeline_s_power(gsc, 1);
+
+ return ret;
+}
+
+int gsc_cap_pipeline_initialize(struct gsc_dev *gsc, struct media_entity *me,
+ bool prep)
+{
+ int ret;
+
+ mutex_lock(&me->parent->graph_mutex);
+ ret = __gsc_cap_pipeline_initialize(gsc, me, prep);
+ mutex_unlock(&me->parent->graph_mutex);
+
+ return ret;
+}
+
+static int gsc_capture_open(struct file *file)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ int ret = v4l2_fh_open(file);
+
+ if (ret)
+ return ret;
+
+ if (gsc_m2m_opened(gsc) || gsc_out_opened(gsc) || gsc_cap_opened(gsc)) {
+ v4l2_fh_release(file);
+ return -EBUSY;
+ }
+
+ set_bit(ST_CAPT_OPEN, &gsc->state);
+ pm_runtime_get_sync(&gsc->pdev->dev);
+
+ if (++gsc->cap.refcnt == 1) {
+ ret = gsc_cap_pipeline_initialize(gsc, &gsc->cap.vfd->entity, true);
+ if (ret < 0) {
+ gsc_err("gsc pipeline initialization failed\n");
+ goto err;
+ }
+
+ ret = gsc_capture_ctrls_create(gsc);
+ if (ret) {
+ gsc_err("failed to create controls\n");
+ goto err;
+ }
+ }
+
+ gsc_info("pid: %d, state: 0x%lx", task_pid_nr(current), gsc->state);
+
+ return 0;
+
+err:
+ pm_runtime_put_sync(&gsc->pdev->dev);
+ v4l2_fh_release(file);
+ clear_bit(ST_CAPT_OPEN, &gsc->state);
+ return ret;
+}
+
+int __gsc_cap_pipeline_shutdown(struct gsc_dev *gsc)
+{
+ int ret = 0;
+
+ if (gsc->pipeline.sensor && gsc->pipeline.flite)
+ ret = gsc_cap_pipeline_s_power(gsc, 0);
+
+	/* Release the camera clocks even if powering the subdevs off failed. */
+	gsc_set_cam_clock(gsc, false);
+
+ return ret == -ENXIO ? 0 : ret;
+}
+
+int gsc_cap_pipeline_shutdown(struct gsc_dev *gsc)
+{
+ struct media_entity *me = &gsc->cap.vfd->entity;
+ int ret;
+
+ mutex_lock(&me->parent->graph_mutex);
+ ret = __gsc_cap_pipeline_shutdown(gsc);
+ mutex_unlock(&me->parent->graph_mutex);
+
+ return ret;
+}
+
+static int gsc_capture_close(struct file *file)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+
+ gsc_info("pid: %d, state: 0x%lx", task_pid_nr(current), gsc->state);
+
+ if (--gsc->cap.refcnt == 0) {
+ clear_bit(ST_CAPT_OPEN, &gsc->state);
+ gsc_info("G-Scaler h/w disable control");
+ gsc_hw_enable_control(gsc, false);
+ clear_bit(ST_CAPT_STREAM, &gsc->state);
+ gsc_cap_pipeline_shutdown(gsc);
+ clear_bit(ST_CAPT_SUSPENDED, &gsc->state);
+ }
+
+ pm_runtime_put(&gsc->pdev->dev);
+
+ if (gsc->cap.refcnt == 0) {
+ vb2_queue_release(&gsc->cap.vbq);
+ gsc_ctrls_delete(gsc->cap.ctx);
+ }
+
+ return v4l2_fh_release(file);
+}
+
+static unsigned int gsc_capture_poll(struct file *file,
+ struct poll_table_struct *wait)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+
+ return vb2_poll(&gsc->cap.vbq, file, wait);
+}
+
+static int gsc_capture_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+
+ return vb2_mmap(&gsc->cap.vbq, vma);
+}
+
+static int gsc_cap_link_validate(struct gsc_dev *gsc)
+{
+ struct gsc_capture_device *cap = &gsc->cap;
+ struct v4l2_subdev_format sink_fmt, src_fmt;
+ struct v4l2_subdev *sd;
+ struct media_pad *pad;
+ int ret;
+
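+	/*
+	 * Walk upstream from the video node, comparing each remote source
+	 * pad format with the sink pad format of the next entity; any
+	 * mismatch invalidates the pipeline.
+	 */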
+ /* Get the source pad connected with gsc-video */
+ pad = media_entity_remote_source(&cap->vd_pad);
+ if (pad == NULL)
+ return -EPIPE;
+ /* Get the subdev of source pad */
+ sd = media_entity_to_v4l2_subdev(pad->entity);
+
+ while (1) {
+		/* Find the sink pad of the subdev */
+ pad = &sd->entity.pads[0];
+ if (!(pad->flags & MEDIA_PAD_FL_SINK))
+ break;
+ if (sd == cap->sd_cap) {
+ struct gsc_frame *gf = &cap->ctx->s_frame;
+ sink_fmt.format.width = gf->crop.width;
+ sink_fmt.format.height = gf->crop.height;
+ sink_fmt.format.code = gf->fmt ? gf->fmt->mbus_code : 0;
+ } else {
+ sink_fmt.pad = pad->index;
+ sink_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+ ret = v4l2_subdev_call(sd, pad, get_fmt, NULL, &sink_fmt);
+ if (ret < 0 && ret != -ENOIOCTLCMD) {
+ gsc_err("failed %s subdev get_fmt", sd->name);
+ return -EPIPE;
+ }
+ }
+ gsc_info("sink sd name : %s", sd->name);
+ /* Get the source pad connected with remote sink pad */
+ pad = media_entity_remote_source(pad);
+ if (pad == NULL ||
+ media_entity_type(pad->entity) != MEDIA_ENT_T_V4L2_SUBDEV)
+ break;
+
+ /* Get the subdev of source pad */
+ sd = media_entity_to_v4l2_subdev(pad->entity);
+ gsc_info("source sd name : %s", sd->name);
+
+ src_fmt.pad = pad->index;
+ src_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+ ret = v4l2_subdev_call(sd, pad, get_fmt, NULL, &src_fmt);
+ if (ret < 0 && ret != -ENOIOCTLCMD) {
+ gsc_err("failed %s subdev get_fmt", sd->name);
+ return -EPIPE;
+ }
+
+ gsc_info("src_width : %d, src_height : %d, src_code : %d",
+ src_fmt.format.width, src_fmt.format.height,
+ src_fmt.format.code);
+ gsc_info("sink_width : %d, sink_height : %d, sink_code : %d",
+ sink_fmt.format.width, sink_fmt.format.height,
+ sink_fmt.format.code);
+
+ if (src_fmt.format.width != sink_fmt.format.width ||
+ src_fmt.format.height != sink_fmt.format.height ||
+ src_fmt.format.code != sink_fmt.format.code) {
+ gsc_err("mismatch sink and source");
+ return -EPIPE;
+ }
+ }
+
+ return 0;
+}
+
+static int gsc_capture_streamon(struct file *file, void *priv,
+ enum v4l2_buf_type type)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_pipeline *p = &gsc->pipeline;
+ int ret;
+
+ if (gsc_cap_active(gsc))
+ return -EBUSY;
+
+ if (p->disp) {
+ media_entity_pipeline_start(&p->disp->entity, p->pipe);
+ } else if (p->sensor) {
+ media_entity_pipeline_start(&p->sensor->entity, p->pipe);
+ } else {
+ gsc_err("Error pipeline");
+ return -EPIPE;
+ }
+
+	ret = gsc_cap_link_validate(gsc);
+	if (ret) {
+		media_entity_pipeline_stop(p->disp ? &p->disp->entity :
+					   &p->sensor->entity);
+		return ret;
+	}
+
+	return vb2_streamon(&gsc->cap.vbq, type);
+}
+
+static int gsc_capture_streamoff(struct file *file, void *priv,
+			    enum v4l2_buf_type type)
+{
+	struct gsc_dev *gsc = video_drvdata(file);
+	struct gsc_pipeline *p = &gsc->pipeline;
+	struct v4l2_subdev *sd = p->disp ? p->disp : p->sensor;
+	int ret;
+
+	ret = vb2_streamoff(&gsc->cap.vbq, type);
+	if (ret == 0 && sd)
+		media_entity_pipeline_stop(&sd->entity);
+	return ret;
+}
+
+static struct v4l2_subdev *gsc_cap_remote_subdev(struct gsc_dev *gsc, u32 *pad)
+{
+ struct media_pad *remote;
+
+ remote = media_entity_remote_source(&gsc->cap.vd_pad);
+
+ if (remote == NULL ||
+ media_entity_type(remote->entity) != MEDIA_ENT_T_V4L2_SUBDEV)
+ return NULL;
+
+ if (pad)
+ *pad = remote->index;
+
+ return media_entity_to_v4l2_subdev(remote->entity);
+}
+
+static int gsc_capture_g_crop(struct file *file, void *fh, struct v4l2_crop *crop)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct v4l2_subdev_format format;
+ struct v4l2_subdev *subdev;
+ u32 pad;
+ int ret;
+
+ subdev = gsc_cap_remote_subdev(gsc, &pad);
+ if (subdev == NULL)
+ return -EINVAL;
+
+ /* Try the get crop operation first and fallback to get format if not
+ * implemented.
+ */
+ ret = v4l2_subdev_call(subdev, video, g_crop, crop);
+ if (ret != -ENOIOCTLCMD)
+ return ret;
+
+ format.pad = pad;
+ format.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+ ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &format);
+ if (ret < 0)
+ return ret == -ENOIOCTLCMD ? -EINVAL : ret;
+
+ crop->c.left = 0;
+ crop->c.top = 0;
+ crop->c.width = format.format.width;
+ crop->c.height = format.format.height;
+
+ return 0;
+}
+
+static int gsc_capture_s_crop(struct file *file, void *fh, struct v4l2_crop *crop)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct v4l2_subdev *subdev;
+ int ret;
+
+ subdev = gsc_cap_remote_subdev(gsc, NULL);
+ if (subdev == NULL)
+ return -EINVAL;
+
+ ret = v4l2_subdev_call(subdev, video, s_crop, crop);
+
+ return ret == -ENOIOCTLCMD ? -EINVAL : ret;
+}
+
+static const struct v4l2_ioctl_ops gsc_capture_ioctl_ops = {
+ .vidioc_querycap = gsc_vidioc_querycap_capture,
+
+ .vidioc_enum_fmt_vid_cap_mplane = gsc_capture_enum_fmt_mplane,
+ .vidioc_try_fmt_vid_cap_mplane = gsc_capture_try_fmt_mplane,
+ .vidioc_s_fmt_vid_cap_mplane = gsc_capture_s_fmt_mplane,
+ .vidioc_g_fmt_vid_cap_mplane = gsc_capture_g_fmt_mplane,
+
+ .vidioc_reqbufs = gsc_capture_reqbufs,
+ .vidioc_querybuf = gsc_capture_querybuf,
+
+ .vidioc_qbuf = gsc_capture_qbuf,
+ .vidioc_dqbuf = gsc_capture_dqbuf,
+
+ .vidioc_streamon = gsc_capture_streamon,
+ .vidioc_streamoff = gsc_capture_streamoff,
+
+ .vidioc_g_crop = gsc_capture_g_crop,
+ .vidioc_s_crop = gsc_capture_s_crop,
+ .vidioc_cropcap = gsc_capture_cropcap,
+
+ .vidioc_enum_input = gsc_capture_enum_input,
+ .vidioc_s_input = gsc_capture_s_input,
+ .vidioc_g_input = gsc_capture_g_input,
+};
+
+static const struct v4l2_file_operations gsc_capture_fops = {
+ .owner = THIS_MODULE,
+ .open = gsc_capture_open,
+ .release = gsc_capture_close,
+ .poll = gsc_capture_poll,
+ .unlocked_ioctl = video_ioctl2,
+ .mmap = gsc_capture_mmap,
+};
+
+/*
+ * __gsc_cap_get_format - helper for getting the G-Scaler capture format
+ * @gsc   : pointer to the G-Scaler device structure
+ * @fh    : V4L2 subdev file handle
+ * @pad   : pad number
+ * @which : wanted subdev format (TRY or ACTIVE)
+ *
+ * Returns the try format stored in the file handle for TRY requests,
+ * or the active media bus format of the given pad otherwise.
+ */
+static struct v4l2_mbus_framefmt *__gsc_cap_get_format(struct gsc_dev *gsc,
+ struct v4l2_subdev_fh *fh, unsigned int pad,
+ enum v4l2_subdev_format_whence which)
+{
+ if (which == V4L2_SUBDEV_FORMAT_TRY)
+ return v4l2_subdev_get_try_format(fh, pad);
+ else
+ return &gsc->cap.mbus_fmt[pad];
+}
+
+static void gsc_cap_check_limit_size(struct gsc_dev *gsc, unsigned int pad,
+ struct v4l2_mbus_framefmt *fmt)
+{
+ struct gsc_variant *variant = gsc->variant;
+ struct gsc_ctx *ctx = gsc->cap.ctx;
+ u32 min_w, min_h, max_w, max_h;
+
+ switch (pad) {
+ case GSC_PAD_SINK:
+ if (gsc_cap_opened(gsc) &&
+ (ctx->gsc_ctrls.rotate->val == 90 ||
+ ctx->gsc_ctrls.rotate->val == 270)) {
+ min_w = variant->pix_min->real_w;
+ min_h = variant->pix_min->real_h;
+ max_w = variant->pix_max->real_rot_en_w;
+ max_h = variant->pix_max->real_rot_en_h;
+ } else {
+ min_w = variant->pix_min->real_w;
+ min_h = variant->pix_min->real_h;
+ max_w = variant->pix_max->real_rot_dis_w;
+ max_h = variant->pix_max->real_rot_dis_h;
+ }
+ break;
+
+ case GSC_PAD_SOURCE:
+ min_w = variant->pix_min->target_rot_dis_w;
+ min_h = variant->pix_min->target_rot_dis_h;
+ max_w = variant->pix_max->target_rot_dis_w;
+ max_h = variant->pix_max->target_rot_dis_h;
+ break;
+
+ default:
+ BUG();
+ return;
+ }
+
+ fmt->width = clamp_t(u32, fmt->width, min_w, max_w);
+	fmt->height = clamp_t(u32, fmt->height, min_h, max_h);
+}
+
+static void gsc_cap_try_format(struct gsc_dev *gsc,
+ struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_mbus_framefmt *fmt,
+ enum v4l2_subdev_format_whence which)
+{
+ struct gsc_fmt *gfmt;
+
+ gfmt = find_fmt(NULL, &fmt->code, 0);
+ WARN_ON(!gfmt);
+
+ if (pad == GSC_PAD_SINK) {
+ struct gsc_ctx *ctx = gsc->cap.ctx;
+ struct gsc_frame *frame = &ctx->s_frame;
+
+ frame->fmt = gfmt;
+ }
+
+ gsc_cap_check_limit_size(gsc, pad, fmt);
+
+ fmt->colorspace = V4L2_COLORSPACE_JPEG;
+ fmt->field = V4L2_FIELD_NONE;
+}
+
+static int gsc_capture_subdev_set_fmt(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_format *fmt)
+{
+ struct gsc_dev *gsc = v4l2_get_subdevdata(sd);
+ struct v4l2_mbus_framefmt *mf;
+ struct gsc_ctx *ctx = gsc->cap.ctx;
+ struct gsc_frame *frame;
+
+ mf = __gsc_cap_get_format(gsc, fh, fmt->pad, fmt->which);
+ if (mf == NULL)
+ return -EINVAL;
+
+ gsc_cap_try_format(gsc, fh, fmt->pad, &fmt->format, fmt->which);
+ *mf = fmt->format;
+
+ if (fmt->which == V4L2_SUBDEV_FORMAT_TRY)
+ return 0;
+
+ frame = gsc_capture_get_frame(ctx, fmt->pad);
+
+ if (fmt->which == V4L2_SUBDEV_FORMAT_ACTIVE) {
+ frame->crop.left = 0;
+ frame->crop.top = 0;
+ frame->f_width = mf->width;
+ frame->f_height = mf->height;
+ frame->crop.width = mf->width;
+ frame->crop.height = mf->height;
+ }
+ gsc_dbg("offs_h : %d, offs_v : %d, f_width : %d, f_height :%d,\
+ width : %d, height : %d", frame->crop.left,\
+ frame->crop.top, frame->f_width,
+ frame->f_height,\
+ frame->crop.width, frame->crop.height);
+
+ return 0;
+}
+
+static int gsc_capture_subdev_get_fmt(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_format *fmt)
+{
+ struct gsc_dev *gsc = v4l2_get_subdevdata(sd);
+ struct v4l2_mbus_framefmt *mf;
+
+ mf = __gsc_cap_get_format(gsc, fh, fmt->pad, fmt->which);
+ if (mf == NULL)
+ return -EINVAL;
+
+ fmt->format = *mf;
+
+ return 0;
+}
+
+static int __gsc_cap_get_crop(struct gsc_dev *gsc, struct v4l2_subdev_fh *fh,
+ unsigned int pad, enum v4l2_subdev_format_whence which,
+ struct v4l2_rect *crop)
+{
+ struct gsc_ctx *ctx = gsc->cap.ctx;
+ struct gsc_frame *frame = gsc_capture_get_frame(ctx, pad);
+
+ if (which == V4L2_SUBDEV_FORMAT_TRY) {
+ crop = v4l2_subdev_get_try_crop(fh, pad);
+ } else {
+ crop->left = frame->crop.left;
+ crop->top = frame->crop.top;
+ crop->width = frame->crop.width;
+ crop->height = frame->crop.height;
+ }
+
+ return 0;
+}
+
+static void gsc_cap_try_crop(struct gsc_dev *gsc, struct v4l2_rect *crop,
+ u32 pad)
+{
+ struct gsc_variant *variant = gsc->variant;
+ struct gsc_ctx *ctx = gsc->cap.ctx;
+ struct gsc_frame *frame = gsc_capture_get_frame(ctx, pad);
+
+ u32 crop_min_w = variant->pix_min->target_rot_dis_w;
+ u32 crop_min_h = variant->pix_min->target_rot_dis_h;
+ u32 crop_max_w = frame->f_width;
+ u32 crop_max_h = frame->f_height;
+
+ crop->left = clamp_t(u32, crop->left, 0, crop_max_w - crop_min_w);
+ crop->top = clamp_t(u32, crop->top, 0, crop_max_h - crop_min_h);
+ crop->width = clamp_t(u32, crop->width, crop_min_w, crop_max_w);
+ crop->height = clamp_t(u32, crop->height, crop_min_h, crop_max_h);
+}
+
+static int gsc_capture_subdev_set_crop(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_crop *crop)
+{
+ struct gsc_dev *gsc = v4l2_get_subdevdata(sd);
+ struct gsc_ctx *ctx = gsc->cap.ctx;
+ struct gsc_frame *frame = gsc_capture_get_frame(ctx, crop->pad);
+
+ gsc_cap_try_crop(gsc, &crop->rect, crop->pad);
+
+ if (crop->which == V4L2_SUBDEV_FORMAT_ACTIVE)
+ frame->crop = crop->rect;
+
+ return 0;
+}
+
+static int gsc_capture_subdev_get_crop(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_crop *crop)
+{
+ struct gsc_dev *gsc = v4l2_get_subdevdata(sd);
+ struct v4l2_rect gcrop = {0, };
+
+ __gsc_cap_get_crop(gsc, fh, crop->pad, crop->which, &gcrop);
+ crop->rect = gcrop;
+
+ return 0;
+}
+
+static struct v4l2_subdev_pad_ops gsc_cap_subdev_pad_ops = {
+ .get_fmt = gsc_capture_subdev_get_fmt,
+ .set_fmt = gsc_capture_subdev_set_fmt,
+ .get_crop = gsc_capture_subdev_get_crop,
+ .set_crop = gsc_capture_subdev_set_crop,
+};
+
+static struct v4l2_subdev_video_ops gsc_cap_subdev_video_ops = {
+ .s_stream = gsc_capture_subdev_s_stream,
+};
+
+static struct v4l2_subdev_ops gsc_cap_subdev_ops = {
+ .pad = &gsc_cap_subdev_pad_ops,
+ .video = &gsc_cap_subdev_video_ops,
+};
+
+static int gsc_capture_init_formats(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh)
+{
+ struct v4l2_subdev_format format;
+ struct gsc_dev *gsc = v4l2_get_subdevdata(sd);
+ struct gsc_ctx *ctx = gsc->cap.ctx;
+
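+	/* Index 2 in gsc_formats is the packed YUYV 4:2:2 format. */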
+ ctx->s_frame.fmt = get_format(2);
+ memset(&format, 0, sizeof(format));
+ format.pad = GSC_PAD_SINK;
+ format.which = fh ? V4L2_SUBDEV_FORMAT_TRY : V4L2_SUBDEV_FORMAT_ACTIVE;
+ format.format.code = ctx->s_frame.fmt->mbus_code;
+ format.format.width = DEFAULT_GSC_SINK_WIDTH;
+ format.format.height = DEFAULT_GSC_SINK_HEIGHT;
+ gsc_capture_subdev_set_fmt(sd, fh, &format);
+
+	/*
+	 * Do not propagate the format to the source pad: the sink and
+	 * source formats may differ, and no source pad operation is
+	 * required here.
+	 */
+ ctx->d_frame.fmt = get_format(2);
+
+ return 0;
+}
+
+static int gsc_capture_subdev_close(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh)
+{
+ gsc_dbg("");
+
+ return 0;
+}
+
+static int gsc_capture_subdev_registered(struct v4l2_subdev *sd)
+{
+ gsc_dbg("");
+
+ return 0;
+}
+
+static void gsc_capture_subdev_unregistered(struct v4l2_subdev *sd)
+{
+ gsc_dbg("");
+}
+
+static const struct v4l2_subdev_internal_ops gsc_cap_v4l2_internal_ops = {
+ .open = gsc_capture_init_formats,
+ .close = gsc_capture_subdev_close,
+ .registered = gsc_capture_subdev_registered,
+ .unregistered = gsc_capture_subdev_unregistered,
+};
+
+static int gsc_capture_link_setup(struct media_entity *entity,
+ const struct media_pad *local,
+ const struct media_pad *remote, u32 flags)
+{
+ struct v4l2_subdev *sd = media_entity_to_v4l2_subdev(entity);
+ struct gsc_dev *gsc = v4l2_get_subdevdata(sd);
+ struct gsc_capture_device *cap = &gsc->cap;
+
+ gsc_info("");
+ switch (local->index | media_entity_type(remote->entity)) {
+ case GSC_PAD_SINK | MEDIA_ENT_T_V4L2_SUBDEV:
+ if (flags & MEDIA_LNK_FL_ENABLED) {
+ if (cap->input != 0)
+ return -EBUSY;
+ /* Write-Back link enabled */
+ if (!strcmp(remote->entity->name, FIMD_MODULE_NAME)) {
+ gsc->cap.sd_disp =
+ media_entity_to_v4l2_subdev(remote->entity);
+ gsc->cap.sd_disp->grp_id = FIMD_GRP_ID;
+ cap->ctx->in_path = GSC_WRITEBACK;
+ cap->input |= GSC_IN_FIMD_WRITEBACK;
+ } else if (remote->index == FLITE_PAD_SOURCE_PREV) {
+ cap->input |= GSC_IN_FLITE_PREVIEW;
+ } else {
+ cap->input |= GSC_IN_FLITE_CAMCORDING;
+ }
+ } else {
+ cap->input = GSC_IN_NONE;
+ }
+ break;
+ case GSC_PAD_SOURCE | MEDIA_ENT_T_DEVNODE:
+ /* gsc-cap always write to memory */
+ break;
+ }
+
+ return 0;
+}
+
+static const struct media_entity_operations gsc_cap_media_ops = {
+ .link_setup = gsc_capture_link_setup,
+};
+
+static int gsc_capture_create_subdev(struct gsc_dev *gsc)
+{
+ struct v4l2_device *v4l2_dev;
+ struct v4l2_subdev *sd;
+ int ret;
+
+ sd = kzalloc(sizeof(*sd), GFP_KERNEL);
+ if (!sd)
+ return -ENOMEM;
+
+ v4l2_subdev_init(sd, &gsc_cap_subdev_ops);
+ sd->flags = V4L2_SUBDEV_FL_HAS_DEVNODE;
+ snprintf(sd->name, sizeof(sd->name), "gsc-cap-subdev.%d", gsc->id);
+
+ gsc->cap.sd_pads[GSC_PAD_SINK].flags = MEDIA_PAD_FL_SINK;
+ gsc->cap.sd_pads[GSC_PAD_SOURCE].flags = MEDIA_PAD_FL_SOURCE;
+ ret = media_entity_init(&sd->entity, GSC_PADS_NUM,
+ gsc->cap.sd_pads, 0);
+ if (ret)
+ goto err_ent;
+
+ sd->internal_ops = &gsc_cap_v4l2_internal_ops;
+ sd->entity.ops = &gsc_cap_media_ops;
+ sd->grp_id = GSC_CAP_GRP_ID;
+ v4l2_dev = &gsc->mdev[MDEV_CAPTURE]->v4l2_dev;
+
+ ret = v4l2_device_register_subdev(v4l2_dev, sd);
+ if (ret)
+ goto err_sub;
+
+ gsc->mdev[MDEV_CAPTURE]->gsc_cap_sd[gsc->id] = sd;
+ gsc->cap.sd_cap = sd;
+ v4l2_set_subdevdata(sd, gsc);
+ gsc_capture_init_formats(sd, NULL);
+
+ return 0;
+
+err_sub:
+ media_entity_cleanup(&sd->entity);
+err_ent:
+ kfree(sd);
+ return ret;
+}
+
+static int gsc_capture_create_link(struct gsc_dev *gsc)
+{
+ struct media_entity *source, *sink;
+ struct exynos_platform_gscaler *pdata = gsc->pdata;
+ struct exynos_isp_info *isp_info;
+ u32 num_clients = pdata->num_clients;
+ int ret, i;
+ enum cam_port id;
+
+ /* GSC-SUBDEV ------> GSC-VIDEO (Always link enable) */
+ source = &gsc->cap.sd_cap->entity;
+ sink = &gsc->cap.vfd->entity;
+ if (source && sink) {
+ ret = media_entity_create_link(source, GSC_PAD_SOURCE, sink,
+ 0, MEDIA_LNK_FL_IMMUTABLE |
+ MEDIA_LNK_FL_ENABLED);
+ if (ret) {
+ gsc_err("failed link flite to gsc\n");
+ return ret;
+ }
+ }
+	for (i = 0; i < num_clients; i++) {
+		ret = 0;
+		isp_info = pdata->isp_info[i];
+		id = isp_info->cam_port;
+ /* FIMC-LITE ------> GSC-SUBDEV (ITU & MIPI common) */
+ source = &gsc->cap.sd_flite[id]->entity;
+ sink = &gsc->cap.sd_cap->entity;
+ if (source && sink) {
+ if (pdata->cam_preview)
+ ret = media_entity_create_link(source,
+ FLITE_PAD_SOURCE_PREV,
+ sink, GSC_PAD_SINK, 0);
+ if (!ret && pdata->cam_camcording)
+ ret = media_entity_create_link(source,
+ FLITE_PAD_SOURCE_CAMCORD,
+ sink, GSC_PAD_SINK, 0);
+ if (ret) {
+ gsc_err("failed link flite to gsc\n");
+ return ret;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static struct v4l2_subdev *gsc_cap_register_sensor(struct gsc_dev *gsc, int i)
+{
+ struct exynos_md *mdev = gsc->mdev[MDEV_CAPTURE];
+ struct v4l2_subdev *sd = NULL;
+
+ sd = mdev->sensor_sd[i];
+ if (!sd)
+ return NULL;
+
+ v4l2_set_subdev_hostdata(sd, &gsc->cap.sensor[i]);
+
+ return sd;
+}
+
+static int gsc_cap_register_sensor_entities(struct gsc_dev *gsc)
+{
+ struct exynos_platform_gscaler *pdata = gsc->pdata;
+ u32 num_clients = pdata->num_clients;
+ int i;
+
+ for (i = 0; i < num_clients; i++) {
+ gsc->cap.sensor[i].pdata = pdata->isp_info[i];
+ gsc->cap.sensor[i].sd = gsc_cap_register_sensor(gsc, i);
+ if (IS_ERR_OR_NULL(gsc->cap.sensor[i].sd)) {
+ gsc_err("failed to get register sensor");
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+static int gsc_cap_config_camclk(struct gsc_dev *gsc,
+ struct exynos_isp_info *isp_info, int i)
+{
+ struct gsc_capture_device *gsc_cap = &gsc->cap;
+ struct clk *camclk;
+ struct clk *srclk;
+
+ camclk = clk_get(&gsc->pdev->dev, isp_info->cam_clk_name);
+ if (IS_ERR_OR_NULL(camclk)) {
+ gsc_err("failed to get cam clk");
+ return -ENXIO;
+ }
+ gsc_cap->sensor[i].camclk = camclk;
+
+ srclk = clk_get(&gsc->pdev->dev, isp_info->cam_srclk_name);
+	if (IS_ERR_OR_NULL(srclk)) {
+		clk_put(camclk);
+		gsc_cap->sensor[i].camclk = NULL;
+		gsc_err("failed to get cam source clk\n");
+		return -ENXIO;
+	}
+ clk_set_parent(camclk, srclk);
+ clk_set_rate(camclk, isp_info->clk_frequency);
+ clk_put(srclk);
+
+ return 0;
+}
+
+int gsc_register_capture_device(struct gsc_dev *gsc)
+{
+ struct video_device *vfd;
+ struct gsc_capture_device *gsc_cap;
+ struct gsc_ctx *ctx;
+ struct vb2_queue *q;
+ struct exynos_platform_gscaler *pdata = gsc->pdata;
+ struct exynos_isp_info *isp_info;
+ int ret = -ENOMEM;
+ int i;
+
+ ctx = kzalloc(sizeof *ctx, GFP_KERNEL);
+ if (!ctx)
+ return -ENOMEM;
+
+ ctx->gsc_dev = gsc;
+ ctx->in_path = GSC_CAMERA;
+ ctx->out_path = GSC_DMA;
+ ctx->state = GSC_CTX_CAP;
+
+ vfd = video_device_alloc();
+ if (!vfd) {
+ printk("Failed to allocate video device\n");
+ goto err_ctx_alloc;
+ }
+
+ snprintf(vfd->name, sizeof(vfd->name), "%s.capture",
+ dev_name(&gsc->pdev->dev));
+
+ vfd->fops = &gsc_capture_fops;
+ vfd->ioctl_ops = &gsc_capture_ioctl_ops;
+ vfd->v4l2_dev = &gsc->mdev[MDEV_CAPTURE]->v4l2_dev;
+ vfd->minor = -1;
+ vfd->release = video_device_release;
+ vfd->lock = &gsc->lock;
+ video_set_drvdata(vfd, gsc);
+
+ gsc_cap = &gsc->cap;
+ gsc_cap->vfd = vfd;
+ gsc_cap->refcnt = 0;
+ gsc_cap->active_buf_cnt = 0;
+ gsc_cap->reqbufs_cnt = 0;
+
+ spin_lock_init(&ctx->slock);
+ gsc_cap->ctx = ctx;
+
+ q = &gsc->cap.vbq;
+ memset(q, 0, sizeof(*q));
+ q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
+ q->io_modes = VB2_MMAP | VB2_USERPTR;
+ q->drv_priv = gsc->cap.ctx;
+ q->ops = &gsc_capture_qops;
+ q->mem_ops = &vb2_dma_contig_memops;
+
+ vb2_queue_init(q);
+
+ /* Get mipi-csis and fimc-lite subdev ptr using mdev */
+ for (i = 0; i < FLITE_MAX_ENTITIES; i++)
+ gsc->cap.sd_flite[i] = gsc->mdev[MDEV_CAPTURE]->flite_sd[i];
+
+ for (i = 0; i < CSIS_MAX_ENTITIES; i++)
+ gsc->cap.sd_csis[i] = gsc->mdev[MDEV_CAPTURE]->csis_sd[i];
+
+ for (i = 0; i < pdata->num_clients; i++) {
+ isp_info = pdata->isp_info[i];
+ ret = gsc_cap_config_camclk(gsc, isp_info, i);
+ if (ret) {
+ gsc_err("failed setup cam clk");
+			goto err_clk;
+ }
+ }
+
+ ret = gsc_cap_register_sensor_entities(gsc);
+ if (ret) {
+ gsc_err("failed register sensor entities");
+ goto err_clk;
+ }
+
+ ret = video_register_device(vfd, VFL_TYPE_GRABBER, -1);
+ if (ret) {
+ gsc_err("failed to register video device");
+ goto err_clk;
+ }
+
+ gsc->cap.vd_pad.flags = MEDIA_PAD_FL_SOURCE;
+ ret = media_entity_init(&vfd->entity, 1, &gsc->cap.vd_pad, 0);
+ if (ret) {
+ gsc_err("failed to initialize entity");
+ goto err_ent;
+ }
+
+ ret = gsc_capture_create_subdev(gsc);
+ if (ret) {
+ gsc_err("failed create subdev");
+ goto err_sd_reg;
+ }
+
+ ret = gsc_capture_create_link(gsc);
+ if (ret) {
+ gsc_err("failed create link");
+ goto err_sd_reg;
+ }
+
+ vfd->ctrl_handler = &ctx->ctrl_handler;
+ gsc_dbg("gsc capture driver registered as /dev/video%d", vfd->num);
+
+ return 0;
+
+err_sd_reg:
+ media_entity_cleanup(&vfd->entity);
+err_ent:
+ video_device_release(vfd);
+err_clk:
+	for (i = 0; i < pdata->num_clients; i++)
+		if (gsc_cap->sensor[i].camclk)
+			clk_put(gsc_cap->sensor[i].camclk);
+err_ctx_alloc:
+ kfree(ctx);
+
+ return ret;
+}
+
+static void gsc_capture_destroy_subdev(struct gsc_dev *gsc)
+{
+ struct v4l2_subdev *sd = gsc->cap.sd_cap;
+
+ if (!sd)
+ return;
+	media_entity_cleanup(&sd->entity);
+	v4l2_device_unregister_subdev(sd);
+	kfree(sd);
+	gsc->cap.sd_cap = NULL;
+}
+
+void gsc_unregister_capture_device(struct gsc_dev *gsc)
+{
+ struct video_device *vfd = gsc->cap.vfd;
+
+ if (vfd) {
+ media_entity_cleanup(&vfd->entity);
+		/* Can also be called if the video device was not registered */
+ video_unregister_device(vfd);
+ }
+ gsc_capture_destroy_subdev(gsc);
+ kfree(gsc->cap.ctx);
+ gsc->cap.ctx = NULL;
+}
--- /dev/null
+/* linux/drivers/media/video/exynos/gsc/gsc-core.c
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * Samsung EXYNOS5 SoC series G-scaler driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation, either version 2 of the License,
+ * or (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/version.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/bug.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+#include <linux/device.h>
+#include <linux/platform_device.h>
+#include <linux/list.h>
+#include <linux/io.h>
+#include <linux/slab.h>
+#include <linux/clk.h>
+#include <media/v4l2-ioctl.h>
+#include <linux/of.h>
+#include "gsc-core.h"
+#ifdef CONFIG_EXYNOS_IOMMU
+#include <mach/sysmmu.h>
+#include <linux/of_platform.h>
+#endif
+#define GSC_CLOCK_GATE_NAME "gscl"
+
+int gsc_dbg = 6;
+module_param(gsc_dbg, int, 0644);
+
+static struct gsc_fmt gsc_formats[] = {
+ {
+ .name = "RGB565",
+ .pixelformat = V4L2_PIX_FMT_RGB565X,
+ .depth = { 16 },
+ .color = GSC_RGB,
+ .num_planes = 1,
+ .nr_comp = 1,
+ }, {
+ .name = "XRGB-8-8-8-8, 32 bpp",
+ .pixelformat = V4L2_PIX_FMT_RGB32,
+ .depth = { 32 },
+ .color = GSC_RGB,
+ .num_planes = 1,
+ .nr_comp = 1,
+ }, {
+ .name = "YUV 4:2:2 packed, YCbYCr",
+ .pixelformat = V4L2_PIX_FMT_YUYV,
+ .depth = { 16 },
+ .color = GSC_YUV422,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CBCR,
+ .num_planes = 1,
+ .nr_comp = 1,
+ .mbus_code = V4L2_MBUS_FMT_YUYV8_2X8,
+ }, {
+ .name = "YUV 4:2:2 packed, CbYCrY",
+ .pixelformat = V4L2_PIX_FMT_UYVY,
+ .depth = { 16 },
+ .color = GSC_YUV422,
+ .yorder = GSC_LSB_C,
+ .corder = GSC_CBCR,
+ .num_planes = 1,
+ .nr_comp = 1,
+ .mbus_code = V4L2_MBUS_FMT_UYVY8_2X8,
+ }, {
+ .name = "YUV 4:2:2 packed, CrYCbY",
+ .pixelformat = V4L2_PIX_FMT_VYUY,
+ .depth = { 16 },
+ .color = GSC_YUV422,
+ .yorder = GSC_LSB_C,
+ .corder = GSC_CRCB,
+ .num_planes = 1,
+ .nr_comp = 1,
+ .mbus_code = V4L2_MBUS_FMT_VYUY8_2X8,
+ }, {
+ .name = "YUV 4:2:2 packed, YCrYCb",
+ .pixelformat = V4L2_PIX_FMT_YVYU,
+ .depth = { 16 },
+ .color = GSC_YUV422,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CRCB,
+ .num_planes = 1,
+ .nr_comp = 1,
+ .mbus_code = V4L2_MBUS_FMT_YVYU8_2X8,
+ }, {
+ .name = "YUV 4:4:4 planar, YCbYCr",
+ .pixelformat = V4L2_PIX_FMT_YUV32,
+ .depth = { 32 },
+ .color = GSC_YUV444,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CBCR,
+ .num_planes = 1,
+ .nr_comp = 1,
+ }, {
+ .name = "YUV 4:2:2 planar, Y/Cb/Cr",
+ .pixelformat = V4L2_PIX_FMT_YUV422P,
+ .depth = { 16 },
+ .color = GSC_YUV422,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CBCR,
+ .num_planes = 1,
+ .nr_comp = 3,
+ }, {
+ .name = "YUV 4:2:2 planar, Y/CbCr",
+ .pixelformat = V4L2_PIX_FMT_NV16,
+ .depth = { 16 },
+ .color = GSC_YUV422,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CBCR,
+ .num_planes = 1,
+ .nr_comp = 2,
+ }, {
+ .name = "YUV 4:2:2 planar, Y/CrCb",
+ .pixelformat = V4L2_PIX_FMT_NV61,
+ .depth = { 16 },
+ .color = GSC_YUV422,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CRCB,
+ .num_planes = 1,
+ .nr_comp = 2,
+ }, {
+ .name = "YUV 4:2:0 planar, YCbCr",
+ .pixelformat = V4L2_PIX_FMT_YUV420,
+ .depth = { 12 },
+ .color = GSC_YUV420,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CBCR,
+ .num_planes = 1,
+ .nr_comp = 3,
+ }, {
+ .name = "YUV 4:2:0 planar, YCbCr",
+ .pixelformat = V4L2_PIX_FMT_YVU420,
+ .depth = { 12 },
+ .color = GSC_YUV420,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CRCB,
+ .num_planes = 1,
+ .nr_comp = 3,
+
+ }, {
+ .name = "YUV 4:2:0 planar, Y/CbCr",
+ .pixelformat = V4L2_PIX_FMT_NV12,
+ .depth = { 12 },
+ .color = GSC_YUV420,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CBCR,
+ .num_planes = 1,
+ .nr_comp = 2,
+ }, {
+ .name = "YUV 4:2:0 planar, Y/CrCb",
+ .pixelformat = V4L2_PIX_FMT_NV21,
+ .depth = { 12 },
+ .color = GSC_YUV420,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CRCB,
+ .num_planes = 1,
+ .nr_comp = 2,
+ }, {
+ .name = "YUV 4:2:0 non-contiguous 2-planar, Y/CbCr",
+ .pixelformat = V4L2_PIX_FMT_NV12M,
+ .depth = { 8, 4 },
+ .color = GSC_YUV420,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CBCR,
+ .num_planes = 2,
+ .nr_comp = 2,
+ }, {
+ .name = "YUV 4:2:0 non-contiguous 3-planar, Y/Cb/Cr",
+ .pixelformat = V4L2_PIX_FMT_YUV420M,
+ .depth = { 8, 2, 2 },
+ .color = GSC_YUV420,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CBCR,
+ .num_planes = 3,
+ .nr_comp = 3,
+ }, {
+ .name = "YUV 4:2:0 non-contiguous 3-planar, Y/Cr/Cb",
+ .pixelformat = V4L2_PIX_FMT_YVU420M,
+ .depth = { 8, 2, 2 },
+ .color = GSC_YUV420,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CRCB,
+ .num_planes = 3,
+ .nr_comp = 3,
+ }, {
+ .name =
+ "YUV 4:2:0 non-contiguous 2-planar, Y/CbCr, tiled",
+ .pixelformat = V4L2_PIX_FMT_NV12MT_16X16,
+ .depth = { 8, 4 },
+ .color = GSC_YUV420,
+ .yorder = GSC_LSB_Y,
+ .corder = GSC_CBCR,
+ .num_planes = 2,
+ .nr_comp = 2,
+ },
+};
+
+struct gsc_fmt *get_format(int index)
+{
+ return &gsc_formats[index];
+}
+
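+/*
+ * Look a format up by fourcc first, then by media bus code; when neither
+ * matches, fall back to the entry at the given index.
+ */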
+struct gsc_fmt *find_fmt(u32 *pixelformat, u32 *mbus_code, int index)
+{
+ struct gsc_fmt *fmt, *def_fmt = NULL;
+ unsigned int i;
+
+ if (index >= ARRAY_SIZE(gsc_formats))
+ return NULL;
+
+ for (i = 0; i < ARRAY_SIZE(gsc_formats); ++i) {
+ fmt = get_format(i);
+ if (pixelformat && fmt->pixelformat == *pixelformat)
+ return fmt;
+ if (mbus_code && fmt->mbus_code == *mbus_code)
+ return fmt;
+ if (index == i)
+ def_fmt = fmt;
+ }
+	return def_fmt;
+}
+
+void gsc_set_frame_size(struct gsc_frame *frame, int width, int height)
+{
+ frame->f_width = width;
+ frame->f_height = height;
+ frame->crop.width = width;
+ frame->crop.height = height;
+ frame->crop.left = 0;
+ frame->crop.top = 0;
+}
+
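+/*
+ * The polyphase scaler alone covers down-scaling up to
+ * 1/poly_sc_down_max; beyond that the prescaler contributes an extra
+ * factor of 2 or 4, giving a combined hardware limit of
+ * 1/(poly_sc_down_max * pre_sc_down_max).
+ */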
+int gsc_cal_prescaler_ratio(struct gsc_variant *var, u32 src, u32 dst, u32 *ratio)
+{
+ if ((dst > src) || (dst >= src / var->poly_sc_down_max)) {
+ *ratio = 1;
+ return 0;
+ }
+
+ if ((src / var->poly_sc_down_max / var->pre_sc_down_max) > dst) {
+ gsc_err("scale ratio exceeded maximun scale down ratio(1/16)");
+ return -EINVAL;
+ }
+
+ *ratio = (dst > (src / 8)) ? 2 : 4;
+
+ return 0;
+}
+
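+/* The shift factor is log2 of the combined prescaler ratio (h * v). */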
+void gsc_get_prescaler_shfactor(u32 hratio, u32 vratio, u32 *sh)
+{
+ if (hratio == 4 && vratio == 4)
+ *sh = 4;
+ else if ((hratio == 4 && vratio == 2) ||
+ (hratio == 2 && vratio == 4))
+ *sh = 3;
+ else if ((hratio == 4 && vratio == 1) ||
+ (hratio == 1 && vratio == 4) ||
+ (hratio == 2 && vratio == 2))
+ *sh = 2;
+ else if (hratio == 1 && vratio == 1)
+ *sh = 0;
+ else
+ *sh = 1;
+}
+
+void gsc_check_src_scale_info(struct gsc_variant *var, struct gsc_frame *s_frame,
+ u32 *wratio, u32 tx, u32 ty, u32 *hratio)
+{
+ int remainder = 0, walign, halign;
+
+ if (is_yuv420(s_frame->fmt->color)) {
+ walign = GSC_SC_ALIGN_4;
+ halign = GSC_SC_ALIGN_4;
+ } else if (is_yuv422(s_frame->fmt->color)) {
+ walign = GSC_SC_ALIGN_4;
+ halign = GSC_SC_ALIGN_2;
+ } else {
+ walign = GSC_SC_ALIGN_2;
+ halign = GSC_SC_ALIGN_2;
+ }
+
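+	/*
+	 * Shrink the crop so it is a multiple of the prescaler ratio times
+	 * the alignment, then recompute the ratio for the adjusted size.
+	 */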
+ remainder = s_frame->crop.width % (*wratio * walign);
+ if (remainder) {
+ s_frame->crop.width -= remainder;
+ gsc_cal_prescaler_ratio(var, s_frame->crop.width, tx, wratio);
+ gsc_info("cropped src width size is recalculated from %d to %d",
+ s_frame->crop.width + remainder, s_frame->crop.width);
+ }
+
+ remainder = s_frame->crop.height % (*hratio * halign);
+ if (remainder) {
+ s_frame->crop.height -= remainder;
+ gsc_cal_prescaler_ratio(var, s_frame->crop.height, ty, hratio);
+ gsc_info("cropped src height size is recalculated from %d to %d",
+ s_frame->crop.height + remainder, s_frame->crop.height);
+ }
+}
+
+int gsc_enum_fmt_mplane(struct v4l2_fmtdesc *f)
+{
+ struct gsc_fmt *fmt;
+
+ fmt = find_fmt(NULL, NULL, f->index);
+ if (!fmt)
+ return -EINVAL;
+
+ strncpy(f->description, fmt->name, sizeof(f->description) - 1);
+ f->pixelformat = fmt->pixelformat;
+
+ return 0;
+}
+
+u32 get_plane_size(struct gsc_frame *frame, unsigned int plane)
+{
+ if (!frame || plane >= frame->fmt->num_planes) {
+ gsc_err("Invalid argument");
+ return 0;
+ }
+
+ return frame->payload[plane];
+}
+
+u32 get_plane_info(struct gsc_frame frm, u32 addr, u32 *index)
+{
+ if (frm.addr.y == addr) {
+ *index = 0;
+ return frm.addr.y;
+ } else if (frm.addr.cb == addr) {
+ *index = 1;
+ return frm.addr.cb;
+ } else if (frm.addr.cr == addr) {
+ *index = 2;
+ return frm.addr.cr;
+ } else {
+ gsc_err("Plane address is wrong");
+ return -EINVAL;
+ }
+}
+
+void gsc_set_prefbuf(struct gsc_dev *gsc, struct gsc_frame frm)
+{
+ u32 f_chk_addr, f_chk_len, s_chk_addr, s_chk_len;
+ f_chk_addr = f_chk_len = s_chk_addr = s_chk_len = 0;
+
+ f_chk_addr = frm.addr.y;
+ f_chk_len = frm.payload[0];
+ if (frm.fmt->num_planes == 2) {
+ s_chk_addr = frm.addr.cb;
+ s_chk_len = frm.payload[1];
+ } else if (frm.fmt->num_planes == 3) {
+ u32 low_addr, low_plane, mid_addr, mid_plane, high_addr, high_plane;
+ u32 t_min, t_max;
+
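+		/*
+		 * Sort the three plane addresses and pick the split that
+		 * yields the two most compact address/length windows.
+		 */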
+ t_min = min3(frm.addr.y, frm.addr.cb, frm.addr.cr);
+ low_addr = get_plane_info(frm, t_min, &low_plane);
+ t_max = max3(frm.addr.y, frm.addr.cb, frm.addr.cr);
+ high_addr = get_plane_info(frm, t_max, &high_plane);
+
+ mid_plane = 3 - (low_plane + high_plane);
+ if (mid_plane == 0)
+ mid_addr = frm.addr.y;
+ else if (mid_plane == 1)
+ mid_addr = frm.addr.cb;
+ else if (mid_plane == 2)
+ mid_addr = frm.addr.cr;
+ else
+ return;
+
+ f_chk_addr = low_addr;
+ if (mid_addr + frm.payload[mid_plane] - low_addr >
+ high_addr + frm.payload[high_plane] - mid_addr) {
+ f_chk_len = frm.payload[low_plane];
+ s_chk_addr = mid_addr;
+ s_chk_len = high_addr + frm.payload[high_plane] - mid_addr;
+ } else {
+ f_chk_len = mid_addr + frm.payload[mid_plane] - low_addr;
+ s_chk_addr = high_addr;
+ s_chk_len = frm.payload[high_plane];
+ }
+ }
+ gsc_dbg("f_addr = 0x%08x, f_len = %d, s_addr = 0x%08x, s_len = %d\n",
+ f_chk_addr, f_chk_len, s_chk_addr, s_chk_len);
+}
+
+int gsc_try_fmt_mplane(struct gsc_ctx *ctx, struct v4l2_format *f)
+{
+ struct gsc_dev *gsc = ctx->gsc_dev;
+ struct gsc_variant *variant = gsc->variant;
+ struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
+ struct gsc_fmt *fmt;
+ u32 max_w, max_h, mod_x, mod_y;
+ u32 min_w, min_h, tmp_w, tmp_h;
+ int i;
+
+ gsc_dbg("user put w: %d, h: %d", pix_mp->width, pix_mp->height);
+
+ fmt = find_fmt(&pix_mp->pixelformat, NULL, 0);
+ if (!fmt) {
+ gsc_err("pixelformat format (0x%X) invalid\n", pix_mp->pixelformat);
+ return -EINVAL;
+ }
+
+ if (pix_mp->field == V4L2_FIELD_ANY)
+ pix_mp->field = V4L2_FIELD_NONE;
+ else if (pix_mp->field != V4L2_FIELD_NONE) {
+ gsc_err("Not supported field order(%d)\n", pix_mp->field);
+ return -EINVAL;
+ }
+
+	max_w = variant->pix_max->target_rot_dis_w;
+	max_h = variant->pix_max->target_rot_dis_h;
+	mod_x = ffs(variant->pix_align->org_w) - 1;
+	if (is_yuv420(fmt->color))
+		mod_y = ffs(variant->pix_align->org_h) - 1;
+	else
+		mod_y = ffs(variant->pix_align->org_h) - 2;
+	if (V4L2_TYPE_IS_OUTPUT(f->type)) {
+		min_w = variant->pix_min->org_w;
+		min_h = variant->pix_min->org_h;
+	} else {
+		min_w = variant->pix_min->target_rot_dis_w;
+		min_h = variant->pix_min->target_rot_dis_h;
+	}
+ gsc_dbg("mod_x: %d, mod_y: %d, max_w: %d, max_h = %d",
+ mod_x, mod_y, max_w, max_h);
+	/* Check whether the requested image size had to be adjusted to fit
+	   the hardware limits */
+ tmp_w = pix_mp->width;
+ tmp_h = pix_mp->height;
+
+ v4l_bound_align_image(&pix_mp->width, min_w, max_w, mod_x,
+ &pix_mp->height, min_h, max_h, mod_y, 0);
+ if (tmp_w != pix_mp->width || tmp_h != pix_mp->height)
+ gsc_info("Image size has been modified from %dx%d to %dx%d",
+ tmp_w, tmp_h, pix_mp->width, pix_mp->height);
+
+ pix_mp->num_planes = fmt->num_planes;
+
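+	/*
+	 * In automatic CSC mode derive the equation from the resolution:
+	 * widths of 1280 and above use Rec.709 (HD), smaller SMPTE 170M (SD).
+	 */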
+ if (ctx->gsc_ctrls.csc_eq_mode->val)
+ ctx->gsc_ctrls.csc_eq->val =
+ (pix_mp->width >= 1280) ? 1 : 0;
+ if (ctx->gsc_ctrls.csc_eq->val) /* HD */
+ pix_mp->colorspace = V4L2_COLORSPACE_REC709;
+ else /* SD */
+ pix_mp->colorspace = V4L2_COLORSPACE_SMPTE170M;
+
+ for (i = 0; i < pix_mp->num_planes; ++i) {
+ int bpl = (pix_mp->width * fmt->depth[i]) >> 3;
+ pix_mp->plane_fmt[i].bytesperline = bpl;
+ pix_mp->plane_fmt[i].sizeimage = bpl * pix_mp->height;
+
+ gsc_dbg("[%d]: bpl: %d, sizeimage: %d",
+ i, bpl, pix_mp->plane_fmt[i].sizeimage);
+ }
+
+ return 0;
+}
+
+int gsc_g_fmt_mplane(struct gsc_ctx *ctx, struct v4l2_format *f)
+{
+ struct gsc_frame *frame;
+ struct v4l2_pix_format_mplane *pix_mp;
+ int i;
+
+ frame = ctx_get_frame(ctx, f->type);
+ if (IS_ERR(frame))
+ return PTR_ERR(frame);
+
+ pix_mp = &f->fmt.pix_mp;
+
+ pix_mp->width = frame->f_width;
+ pix_mp->height = frame->f_height;
+ pix_mp->field = V4L2_FIELD_NONE;
+ pix_mp->pixelformat = frame->fmt->pixelformat;
+ pix_mp->colorspace = V4L2_COLORSPACE_JPEG;
+ pix_mp->num_planes = frame->fmt->num_planes;
+
+ for (i = 0; i < pix_mp->num_planes; ++i) {
+ pix_mp->plane_fmt[i].bytesperline = (frame->f_width *
+ frame->fmt->depth[i]) / 8;
+ pix_mp->plane_fmt[i].sizeimage = pix_mp->plane_fmt[i].bytesperline *
+ frame->f_height;
+ }
+
+ return 0;
+}
+
+void gsc_check_crop_change(u32 tmp_w, u32 tmp_h, u32 *w, u32 *h)
+{
+ if (tmp_w != *w || tmp_h != *h) {
+ gsc_info("Image cropped size has been modified from %dx%d to %dx%d",
+ *w, *h, tmp_w, tmp_h);
+ *w = tmp_w;
+ *h = tmp_h;
+ }
+}
+
+int gsc_g_crop(struct gsc_ctx *ctx, struct v4l2_crop *cr)
+{
+ struct gsc_frame *frame;
+
+ frame = ctx_get_frame(ctx, cr->type);
+ if (IS_ERR(frame))
+ return PTR_ERR(frame);
+
+ memcpy(&cr->c, &frame->crop, sizeof(struct v4l2_rect));
+
+ return 0;
+}
+
+int gsc_try_crop(struct gsc_ctx *ctx, struct v4l2_crop *cr)
+{
+ struct gsc_frame *f;
+ struct gsc_dev *gsc = ctx->gsc_dev;
+ struct gsc_variant *variant = gsc->variant;
+ u32 mod_x = 0, mod_y = 0, tmp_w, tmp_h;
+ u32 min_w, min_h, max_w, max_h;
+
+ if (cr->c.top < 0 || cr->c.left < 0) {
+ gsc_err("doesn't support negative values for top & left\n");
+ return -EINVAL;
+ }
+ gsc_dbg("user put w: %d, h: %d", cr->c.width, cr->c.height);
+
+ if (cr->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
+ f = &ctx->d_frame;
+ else if (cr->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE)
+ f = &ctx->s_frame;
+ else
+ return -EINVAL;
+
+ max_w = f->f_width;
+ max_h = f->f_height;
+ tmp_w = cr->c.width;
+ tmp_h = cr->c.height;
+
+ if (V4L2_TYPE_IS_OUTPUT(cr->type)) {
+ if ((is_yuv422(f->fmt->color) && f->fmt->nr_comp == 1) ||
+ is_rgb(f->fmt->color))
+ min_w = 32;
+ else
+ min_w = 64;
+ if ((is_yuv422(f->fmt->color) && f->fmt->nr_comp == 3) ||
+ is_yuv420(f->fmt->color))
+ min_h = 32;
+ else
+ min_h = 16;
+ } else {
+ if (is_yuv420(f->fmt->color) || is_yuv422(f->fmt->color))
+ mod_x = ffs(variant->pix_align->target_w) - 1;
+ if (is_yuv420(f->fmt->color))
+ mod_y = ffs(variant->pix_align->target_h) - 1;
+ if (ctx->gsc_ctrls.rotate->val == 90 ||
+ ctx->gsc_ctrls.rotate->val == 270) {
+ max_w = f->f_height;
+ max_h = f->f_width;
+ min_w = variant->pix_min->target_rot_en_w;
+ min_h = variant->pix_min->target_rot_en_h;
+ tmp_w = cr->c.height;
+ tmp_h = cr->c.width;
+ } else {
+ min_w = variant->pix_min->target_rot_dis_w;
+ min_h = variant->pix_min->target_rot_dis_h;
+ }
+ }
+ gsc_dbg("mod_x: %d, mod_y: %d, min_w: %d, min_h = %d,\
+ tmp_w : %d, tmp_h : %d",
+ mod_x, mod_y, min_w, min_h, tmp_w, tmp_h);
+
+ v4l_bound_align_image(&tmp_w, min_w, max_w, mod_x,
+ &tmp_h, min_h, max_h, mod_y, 0);
+
+ if (!V4L2_TYPE_IS_OUTPUT(cr->type) &&
+ (ctx->gsc_ctrls.rotate->val == 90 ||
+ ctx->gsc_ctrls.rotate->val == 270)) {
+ gsc_check_crop_change(tmp_h, tmp_w, &cr->c.width, &cr->c.height);
+ } else {
+ gsc_check_crop_change(tmp_w, tmp_h, &cr->c.width, &cr->c.height);
+ }
+
+	/* Adjust left/top if the cropping rectangle is out of bounds */
+	/* TODO: align the left value to a multiple of 2 */
+ if (cr->c.left + tmp_w > max_w)
+ cr->c.left = max_w - tmp_w;
+ if (cr->c.top + tmp_h > max_h)
+ cr->c.top = max_h - tmp_h;
+
+ if (is_yuv420(f->fmt->color) || is_yuv422(f->fmt->color))
+ if (cr->c.left % 2)
+ cr->c.left -= 1;
+
+ gsc_dbg("Aligned l:%d, t:%d, w:%d, h:%d, f_w: %d, f_h: %d",
+ cr->c.left, cr->c.top, cr->c.width, cr->c.height, max_w, max_h);
+
+ return 0;
+}
+
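+/*
+ * Validate the total scaling ratio: with 90/270 degree rotation the
+ * destination axes are swapped before checking against the hardware
+ * up/down scaling limits.
+ */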
+int gsc_check_scaler_ratio(struct gsc_variant *var, int sw, int sh, int dw,
+ int dh, int rot, int out_path)
+{
+ int tmp_w, tmp_h, sc_down_max;
+ sc_down_max =
+ (out_path == GSC_DMA) ? var->sc_down_max : var->local_sc_down;
+
+ if (rot == 90 || rot == 270) {
+ tmp_w = dh;
+ tmp_h = dw;
+ } else {
+ tmp_w = dw;
+ tmp_h = dh;
+ }
+
+ if ((sw / tmp_w) > sc_down_max ||
+ (sh / tmp_h) > sc_down_max ||
+ (tmp_w / sw) > var->sc_up_max ||
+ (tmp_h / sh) > var->sc_up_max)
+ return -EINVAL;
+
+ return 0;
+}
+
+int gsc_set_scaler_info(struct gsc_ctx *ctx)
+{
+ struct gsc_scaler *sc = &ctx->scaler;
+ struct gsc_frame *s_frame = &ctx->s_frame;
+ struct gsc_frame *d_frame = &ctx->d_frame;
+ struct gsc_variant *variant = ctx->gsc_dev->variant;
+ int tx, ty;
+ int ret;
+
+ ret = gsc_check_scaler_ratio(variant, s_frame->crop.width,
+ s_frame->crop.height, d_frame->crop.width, d_frame->crop.height,
+ ctx->gsc_ctrls.rotate->val, ctx->out_path);
+ if (ret) {
+ gsc_err("out of scaler range");
+ return ret;
+ }
+
+ if (ctx->gsc_ctrls.rotate->val == 90 ||
+ ctx->gsc_ctrls.rotate->val == 270) {
+ ty = d_frame->crop.width;
+ tx = d_frame->crop.height;
+ } else {
+ tx = d_frame->crop.width;
+ ty = d_frame->crop.height;
+ }
+
+ ret = gsc_cal_prescaler_ratio(variant, s_frame->crop.width,
+ tx, &sc->pre_hratio);
+ if (ret) {
+ gsc_err("Horizontal scale ratio is out of range");
+ return ret;
+ }
+
+ ret = gsc_cal_prescaler_ratio(variant, s_frame->crop.height,
+ ty, &sc->pre_vratio);
+ if (ret) {
+ gsc_err("Vertical scale ratio is out of range");
+ return ret;
+ }
+
+ gsc_check_src_scale_info(variant, s_frame, &sc->pre_hratio,
+ tx, ty, &sc->pre_vratio);
+
+ gsc_get_prescaler_shfactor(sc->pre_hratio, sc->pre_vratio,
+ &sc->pre_shfactor);
+
+ sc->main_hratio = (s_frame->crop.width << 16) / tx;
+ sc->main_vratio = (s_frame->crop.height << 16) / ty;
+
+ gsc_dbg("scaler input/output size : sx = %d, sy = %d, tx = %d, ty = %d",
+ s_frame->crop.width, s_frame->crop.height, tx, ty);
+ gsc_dbg("scaler ratio info : pre_shfactor : %d, pre_h : %d, pre_v :%d,\
+ main_h : %ld, main_v : %ld", sc->pre_shfactor, sc->pre_hratio,
+ sc->pre_vratio, sc->main_hratio, sc->main_vratio);
+
+ return 0;
+}
+
+int gsc_pipeline_s_stream(struct gsc_dev *gsc, bool on)
+{
+ struct gsc_pipeline *p = &gsc->pipeline;
+ struct exynos_entity_data md_data;
+ int ret = 0;
+
+	/*
+	 * When the G-Scaler subdev calls the mixer's s_stream, it must tell
+	 * the mixer subdev that the pipeline starts from the G-Scaler.
+	 */
+ if (!strncmp(p->disp->name, MXR_SUBDEV_NAME,
+ sizeof(MXR_SUBDEV_NAME) - 1)) {
+ md_data.mxr_data_from = FROM_GSC_SD;
+ v4l2_set_subdevdata(p->disp, &md_data);
+ }
+
+ ret = v4l2_subdev_call(p->disp, video, s_stream, on);
+	if (ret)
+		gsc_err("Display s_stream failed\n");
+
+ return ret;
+}
+
+int gsc_out_link_validate(const struct media_pad *source,
+ const struct media_pad *sink)
+{
+ struct v4l2_subdev_format src_fmt;
+ struct v4l2_subdev_crop dst_crop;
+ struct v4l2_subdev *sd;
+ struct gsc_dev *gsc;
+ struct gsc_frame *f;
+ int ret;
+
+ if (media_entity_type(source->entity) != MEDIA_ENT_T_V4L2_SUBDEV ||
+ media_entity_type(sink->entity) != MEDIA_ENT_T_V4L2_SUBDEV) {
+ gsc_err("media entity type isn't subdev\n");
+ return 0;
+ }
+
+ sd = media_entity_to_v4l2_subdev(source->entity);
+ gsc = entity_data_to_gsc(v4l2_get_subdevdata(sd));
+ f = &gsc->out.ctx->d_frame;
+
+ src_fmt.format.width = f->crop.width;
+ src_fmt.format.height = f->crop.height;
+ src_fmt.format.code = f->fmt->mbus_code;
+
+ sd = media_entity_to_v4l2_subdev(sink->entity);
+	/* Check that the G-Scaler destination size and the mixer
+	   destination size are the same */
+ dst_crop.pad = sink->index;
+ dst_crop.which = V4L2_SUBDEV_FORMAT_ACTIVE;
+ ret = v4l2_subdev_call(sd, pad, get_crop, NULL, &dst_crop);
+ if (ret < 0 && ret != -ENOIOCTLCMD) {
+ gsc_err("subdev get_fmt is failed\n");
+ return -EPIPE;
+ }
+
+	if (src_fmt.format.width != dst_crop.rect.width ||
+	    src_fmt.format.height != dst_crop.rect.height) {
+		gsc_err("sink and source formats differ: "
+			"src_fmt.w = %d, src_fmt.h = %d, "
+			"dst_crop.w = %d, dst_crop.h = %d, rotation = %d",
+			src_fmt.format.width, src_fmt.format.height,
+			dst_crop.rect.width, dst_crop.rect.height,
+			gsc->out.ctx->gsc_ctrls.rotate->val);
+		return -EINVAL;
+	}
+
+ return 0;
+}
+
+/*
+ * V4L2 controls handling
+ */
+static int gsc_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+ struct gsc_ctx *ctx = ctrl_to_ctx(ctrl);
+
+ switch (ctrl->id) {
+ case V4L2_CID_HFLIP:
+ user_to_drv(ctx->gsc_ctrls.hflip, ctrl->val);
+ break;
+
+ case V4L2_CID_VFLIP:
+ user_to_drv(ctx->gsc_ctrls.vflip, ctrl->val);
+ break;
+
+ case V4L2_CID_ROTATE:
+ user_to_drv(ctx->gsc_ctrls.rotate, ctrl->val);
+ break;
+
+ default:
+ break;
+ }
+
+ if (gsc_m2m_opened(ctx->gsc_dev))
+ gsc_ctx_state_lock_set(GSC_PARAMS, ctx);
+
+ return 0;
+}
+
+const struct v4l2_ctrl_ops gsc_ctrl_ops = {
+ .s_ctrl = gsc_s_ctrl,
+};
+
+static const struct v4l2_ctrl_config gsc_custom_ctrl[] = {
+ {
+ .ops = &gsc_ctrl_ops,
+ .id = V4L2_CID_GLOBAL_ALPHA,
+ .name = "Set RGB alpha",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ .max = 255,
+ .step = 1,
+ .def = 0,
+ }, {
+ .ops = &gsc_ctrl_ops,
+ .id = V4L2_CID_CACHEABLE,
+ .name = "Set cacheable",
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ .max = 1,
+ .def = true,
+ }, {
+ .ops = &gsc_ctrl_ops,
+ .id = V4L2_CID_TV_LAYER_BLEND_ENABLE,
+ .name = "Enable layer alpha blending",
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ }, {
+ .ops = &gsc_ctrl_ops,
+ .id = V4L2_CID_TV_LAYER_BLEND_ALPHA,
+ .name = "Set alpha for layer blending",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ .min = 0,
+ .max = 255,
+ .step = 1,
+ }, {
+ .ops = &gsc_ctrl_ops,
+ .id = V4L2_CID_TV_PIXEL_BLEND_ENABLE,
+ .name = "Enable pixel alpha blending",
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ }, {
+ .ops = &gsc_ctrl_ops,
+ .id = V4L2_CID_TV_CHROMA_ENABLE,
+ .name = "Enable chromakey",
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ }, {
+ .ops = &gsc_ctrl_ops,
+ .id = V4L2_CID_TV_CHROMA_VALUE,
+ .name = "Set chromakey value",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ .min = 0,
+ .max = 255,
+ .step = 1,
+ }, {
+ .ops = &gsc_ctrl_ops,
+ .id = V4L2_CID_CSC_EQ_MODE,
+ .name = "Set CSC equation mode",
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ .max = DEFAULT_CSC_EQ,
+ .def = DEFAULT_CSC_EQ,
+ }, {
+ .ops = &gsc_ctrl_ops,
+ .id = V4L2_CID_CSC_EQ,
+ .name = "Set CSC equation",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ .step = 1,
+ .max = 8,
+ .def = V4L2_COLORSPACE_REC709,
+ }, {
+ .ops = &gsc_ctrl_ops,
+ .id = V4L2_CID_CSC_RANGE,
+ .name = "Set CSC range",
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .flags = V4L2_CTRL_FLAG_SLIDER,
+ .max = DEFAULT_CSC_RANGE,
+ .def = DEFAULT_CSC_RANGE,
+ },
+};
+
+int gsc_ctrls_create(struct gsc_ctx *ctx)
+{
+ if (ctx->ctrls_rdy) {
+		gsc_err("Control handler of this context has already been created");
+ return 0;
+ }
+
+ v4l2_ctrl_handler_init(&ctx->ctrl_handler, GSC_MAX_CTRL_NUM);
+
+ ctx->gsc_ctrls.rotate = v4l2_ctrl_new_std(&ctx->ctrl_handler,
+ &gsc_ctrl_ops, V4L2_CID_ROTATE, 0, 270, 90, 0);
+ ctx->gsc_ctrls.hflip = v4l2_ctrl_new_std(&ctx->ctrl_handler,
+ &gsc_ctrl_ops, V4L2_CID_HFLIP, 0, 1, 1, 0);
+ ctx->gsc_ctrls.vflip = v4l2_ctrl_new_std(&ctx->ctrl_handler,
+ &gsc_ctrl_ops, V4L2_CID_VFLIP, 0, 1, 1, 0);
+
+ ctx->gsc_ctrls.global_alpha = v4l2_ctrl_new_custom(&ctx->ctrl_handler,
+ &gsc_custom_ctrl[0], NULL);
+ ctx->gsc_ctrls.layer_blend_en = v4l2_ctrl_new_custom(&ctx->ctrl_handler,
+ &gsc_custom_ctrl[2], NULL);
+ ctx->gsc_ctrls.layer_alpha = v4l2_ctrl_new_custom(&ctx->ctrl_handler,
+ &gsc_custom_ctrl[3], NULL);
+ ctx->gsc_ctrls.pixel_blend_en = v4l2_ctrl_new_custom(&ctx->ctrl_handler,
+ &gsc_custom_ctrl[4], NULL);
+ ctx->gsc_ctrls.chroma_en = v4l2_ctrl_new_custom(&ctx->ctrl_handler,
+ &gsc_custom_ctrl[5], NULL);
+ ctx->gsc_ctrls.chroma_val = v4l2_ctrl_new_custom(&ctx->ctrl_handler,
+ &gsc_custom_ctrl[6], NULL);
+
+	/* Controls for the CSC equation */
+ ctx->gsc_ctrls.csc_eq_mode = v4l2_ctrl_new_custom(&ctx->ctrl_handler,
+ &gsc_custom_ctrl[7], NULL);
+ ctx->gsc_ctrls.csc_eq = v4l2_ctrl_new_custom(&ctx->ctrl_handler,
+ &gsc_custom_ctrl[8], NULL);
+ ctx->gsc_ctrls.csc_range = v4l2_ctrl_new_custom(&ctx->ctrl_handler,
+ &gsc_custom_ctrl[9], NULL);
+
+ ctx->ctrls_rdy = ctx->ctrl_handler.error == 0;
+
+ if (ctx->ctrl_handler.error) {
+ int err = ctx->ctrl_handler.error;
+ v4l2_ctrl_handler_free(&ctx->ctrl_handler);
+		gsc_err("Failed to create G-Scaler control handler");
+ return err;
+ }
+
+ return 0;
+}
+
+void gsc_ctrls_delete(struct gsc_ctx *ctx)
+{
+ if (ctx->ctrls_rdy) {
+ v4l2_ctrl_handler_free(&ctx->ctrl_handler);
+ ctx->ctrls_rdy = false;
+ }
+}
+
+/* The color format (nr_comp, num_planes) must be already configured. */
+int gsc_prepare_addr(struct gsc_ctx *ctx, struct vb2_buffer *vb,
+ struct gsc_frame *frame, struct gsc_addr *addr)
+{
+ int ret = 0;
+ u32 pix_size;
+
+ if (IS_ERR(vb) || IS_ERR(frame)) {
+ gsc_err("Invalid argument");
+ return -EINVAL;
+ }
+
+ pix_size = frame->f_width * frame->f_height;
+
+ gsc_dbg("num_planes= %d, nr_comp= %d, pix_size= %d",
+ frame->fmt->num_planes, frame->fmt->nr_comp, pix_size);
+
+ addr->y = vb2_dma_contig_plane_dma_addr(vb, 0);
+
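+	/*
+	 * For single-plane formats all components share one contiguous
+	 * buffer, so the chroma addresses are derived from the luma base
+	 * address.
+	 */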
+ if (frame->fmt->num_planes == 1) {
+ switch (frame->fmt->nr_comp) {
+ case 1:
+ addr->cb = 0;
+ addr->cr = 0;
+ break;
+ case 2:
+ /* decompose Y into Y/Cb */
+ addr->cb = (dma_addr_t)(addr->y + pix_size);
+ addr->cr = 0;
+ break;
+ case 3:
+ addr->cb = (dma_addr_t)(addr->y + pix_size);
+ addr->cr = (dma_addr_t)(addr->cb + (pix_size >> 2));
+ break;
+ default:
+			gsc_err("Invalid number of color planes");
+ return -EINVAL;
+ }
+ } else {
+ if (frame->fmt->num_planes >= 2)
+ addr->cb = vb2_dma_contig_plane_dma_addr(vb, 1);
+
+ if (frame->fmt->num_planes == 3)
+ addr->cr = vb2_dma_contig_plane_dma_addr(vb, 2);
+ }
+
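+	/*
+	 * V4L2_PIX_FMT_YVU420 stores Cr before Cb, so swap the chroma
+	 * addresses computed above.
+	 */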
+ if (frame->fmt->pixelformat == V4L2_PIX_FMT_YVU420) {
+		dma_addr_t t_cb = addr->cb;
+ addr->cb = addr->cr;
+ addr->cr = t_cb;
+ }
+
+ gsc_dbg("ADDR: y= 0x%X cb= 0x%X cr= 0x%X ret= %d",
+ addr->y, addr->cb, addr->cr, ret);
+
+ return ret;
+}
+
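+/*
+ * pm_runtime_put_sync() may sleep, so the runtime PM put is deferred
+ * from the interrupt handler to this work item.
+ */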
+void gsc_wq_suspend(struct work_struct *work)
+{
+ struct gsc_dev *gsc = container_of(work, struct gsc_dev,
+ work_struct);
+ pm_runtime_put_sync(&gsc->pdev->dev);
+}
+
+void gsc_cap_irq_handler(struct gsc_dev *gsc)
+{
+ int done_index;
+
+ done_index = gsc_hw_get_done_output_buf_index(gsc);
+ gsc_info("done_index : %d", done_index);
+	if (done_index < 0) {
+		gsc_err("All buffers are masked");
+		return;
+	}
+
+	if (!test_bit(ST_CAPT_RUN, &gsc->state))
+		set_bit(ST_CAPT_RUN, &gsc->state);
+
+	vb2_buffer_done(gsc->cap.vbq.bufs[done_index], VB2_BUF_STATE_DONE);
+}
+
+static irqreturn_t gsc_irq_handler(int irq, void *priv)
+{
+ struct gsc_dev *gsc = priv;
+ int gsc_irq;
+
+ gsc_irq = gsc_hw_get_irq_status(gsc);
+ gsc_hw_clear_irq(gsc, gsc_irq);
+
+ if (gsc_irq == GSC_OR_IRQ) {
+		gsc_err("Local path input over-run interrupt has occurred!");
+ return IRQ_HANDLED;
+ }
+
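+	/*
+	 * Dispatch on the active mode: memory-to-memory, output (local
+	 * path) or capture.
+	 */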
+ spin_lock(&gsc->slock);
+
+ if (test_and_clear_bit(ST_M2M_RUN, &gsc->state)) {
+ struct vb2_buffer *src_vb, *dst_vb;
+ struct gsc_ctx *ctx =
+ v4l2_m2m_get_curr_priv(gsc->m2m.m2m_dev);
+
+ if (!ctx || !ctx->m2m_ctx)
+ goto isr_unlock;
+
+ src_vb = v4l2_m2m_src_buf_remove(ctx->m2m_ctx);
+ dst_vb = v4l2_m2m_dst_buf_remove(ctx->m2m_ctx);
+ if (src_vb && dst_vb) {
+ v4l2_m2m_buf_done(src_vb, VB2_BUF_STATE_DONE);
+ v4l2_m2m_buf_done(dst_vb, VB2_BUF_STATE_DONE);
+
+ if (test_and_clear_bit(ST_STOP_REQ, &gsc->state))
+ wake_up(&gsc->irq_queue);
+ else
+ v4l2_m2m_job_finish(gsc->m2m.m2m_dev, ctx->m2m_ctx);
+
+ /* wake_up job_abort, stop_streaming */
+ spin_lock(&ctx->slock);
+ if (ctx->state & GSC_CTX_STOP_REQ) {
+ ctx->state &= ~GSC_CTX_STOP_REQ;
+ wake_up(&gsc->irq_queue);
+ }
+ spin_unlock(&ctx->slock);
+ }
+ /* schedule pm_runtime_put_sync */
+ queue_work(gsc->irq_workqueue, &gsc->work_struct);
+ } else if (test_bit(ST_OUTPUT_STREAMON, &gsc->state)) {
+ if (!list_empty(&gsc->out.active_buf_q)) {
+ struct gsc_input_buf *done_buf;
+ done_buf = active_queue_pop(&gsc->out, gsc);
+ gsc_hw_set_input_buf_masking(gsc, done_buf->idx, true);
+ if (!list_is_last(&done_buf->list, &gsc->out.active_buf_q)) {
+ vb2_buffer_done(&done_buf->vb, VB2_BUF_STATE_DONE);
+ list_del(&done_buf->list);
+ }
+ }
+ } else if (test_bit(ST_CAPT_PEND, &gsc->state)) {
+ gsc_cap_irq_handler(gsc);
+ }
+
+isr_unlock:
+ spin_unlock(&gsc->slock);
+ return IRQ_HANDLED;
+}
+
+/*
+ * The code below stays disabled because there is no media device
+ * support for Exynos as of now.
+ */
+#if 0
+static int gsc_get_media_info(struct device *dev, void *p)
+{
+ struct exynos_md **mdev = p;
+ struct platform_device *pdev = to_platform_device(dev);
+
+ mdev[pdev->id] = dev_get_drvdata(dev);
+ if (!mdev[pdev->id])
+ return -ENODEV;
+
+ return 0;
+}
+#endif
+
+static int gsc_runtime_suspend(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct gsc_dev *gsc = (struct gsc_dev *)platform_get_drvdata(pdev);
+
+ if (gsc_m2m_opened(gsc))
+ gsc->m2m.ctx = NULL;
+
+ clk_disable(gsc->clock);
+ clear_bit(ST_PWR_ON, &gsc->state);
+
+ return 0;
+}
+
+static int gsc_runtime_resume(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct gsc_dev *gsc = (struct gsc_dev *)platform_get_drvdata(pdev);
+
+ clk_enable(gsc->clock);
+ set_bit(ST_PWR_ON, &gsc->state);
+ return 0;
+}
+
+static inline void *gsc_get_drv_data(struct platform_device *pdev);
+
+#ifdef CONFIG_EXYNOS_IOMMU
+static int iommu_init(struct platform_device *pdev)
+{
+ struct platform_device *pds;
+
+ pds = find_sysmmu_dt(pdev, "sysmmu");
+	if (pds == NULL) {
+		dev_err(&pdev->dev, "no sysmmu found\n");
+		return -ENODEV;
+	}
+
+ platform_set_sysmmu(&pds->dev, &pdev->dev);
+ if (!s5p_create_iommu_mapping(&pdev->dev, 0x20000000,
+ SZ_128M, 4, NULL)) {
+		dev_err(&pdev->dev, "IOMMU mapping failed\n");
+		return -ENOMEM;
+ }
+
+ return 0;
+}
+#endif
+
+static int gsc_probe(struct platform_device *pdev)
+{
+ struct gsc_dev *gsc;
+ struct resource *res;
+ struct gsc_driverdata *drv_data;
+
+	/*
+	 * The declarations below stay disabled because there is no media
+	 * device support for Exynos as of now.
+	 */
+#if 0
+ struct device_driver *driver;
+ struct exynos_md *mdev[MDEV_MAX_NUM] = {NULL,};
+#endif
+ int ret = 0;
+ char workqueue_name[WORKQUEUE_NAME_SIZE];
+
+ dev_dbg(&pdev->dev, "%s():\n", __func__);
+ drv_data = (struct gsc_driverdata *)
+ gsc_get_drv_data(pdev);
+
+#ifdef CONFIG_EXYNOS_IOMMU
+ if (iommu_init(pdev)) {
+ dev_err(&pdev->dev, "IOMMU Initialization failed\n");
+ return -EINVAL;
+ }
+#endif
+
+ if (pdev->dev.of_node) {
+ pdev->id = of_alias_get_id(pdev->dev.of_node, "gsc");
+ if (pdev->id < 0)
+			dev_err(&pdev->dev,
+				"failed to get alias id, errno %d\n", pdev->id);
+ }
+
+ if (pdev->id >= drv_data->num_entities) {
+ dev_err(&pdev->dev, "Invalid platform device id: %d\n",
+ pdev->id);
+ return -EINVAL;
+ }
+
+ gsc = kzalloc(sizeof(struct gsc_dev), GFP_KERNEL);
+ if (!gsc)
+ return -ENOMEM;
+
+ gsc->id = pdev->id;
+ gsc->variant = drv_data->variant[gsc->id];
+ gsc->pdev = pdev;
+ gsc->pdata = pdev->dev.platform_data;
+
+ init_waitqueue_head(&gsc->irq_queue);
+ spin_lock_init(&gsc->slock);
+ mutex_init(&gsc->lock);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ dev_err(&pdev->dev, "failed to find the registers\n");
+ ret = -ENOENT;
+ goto err_info;
+ }
+
+ gsc->regs_res = request_mem_region(res->start, resource_size(res),
+ dev_name(&pdev->dev));
+ if (!gsc->regs_res) {
+ dev_err(&pdev->dev, "failed to obtain register region\n");
+ ret = -ENOENT;
+ goto err_info;
+ }
+
+ gsc->regs = ioremap(res->start, resource_size(res));
+ if (!gsc->regs) {
+ dev_err(&pdev->dev, "failed to map registers\n");
+ ret = -ENXIO;
+ goto err_req_region;
+ }
+
+	/*
+	 * Get the G-Scaler gate clock; the reference is kept so that the
+	 * runtime PM callbacks can call clk_enable()/clk_disable() on it.
+	 */
+	gsc->clock = clk_get(&gsc->pdev->dev, GSC_CLOCK_GATE_NAME);
+	if (IS_ERR(gsc->clock)) {
+		gsc_err("failed to get gscaler.%d clock", gsc->id);
+		ret = PTR_ERR(gsc->clock);
+		goto err_regs_unmap;
+	}
+
+ res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ if (!res) {
+ dev_err(&pdev->dev, "failed to get IRQ resource\n");
+ ret = -ENXIO;
+ goto err_regs_unmap;
+ }
+ gsc->irq = res->start;
+
+ ret = request_irq(gsc->irq, gsc_irq_handler, 0, pdev->name, gsc);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to install irq (%d)\n", ret);
+ goto err_regs_unmap;
+ }
+
+ platform_set_drvdata(pdev, gsc);
+
+ ret = gsc_register_m2m_device(gsc);
+ if (ret)
+ goto err_irq;
+
+	/*
+	 * The G-Scaler driver is used only as a color space conversion
+	 * and scaling device; the code below stays disabled because there
+	 * is no media device support for Exynos as of now.
+	 */
+#if 0
+ /* find media device */
+ driver = driver_find(MDEV_MODULE_NAME, &platform_bus_type);
+ if (!driver)
+ goto err_irq;
+
+ ret = driver_for_each_device(driver, NULL, &mdev[0],
+ gsc_get_media_info);
+
+ if (ret)
+ goto err_irq;
+
+ gsc->mdev[MDEV_OUTPUT] = mdev[MDEV_OUTPUT];
+ gsc->mdev[MDEV_CAPTURE] = mdev[MDEV_CAPTURE];
+
+ gsc_dbg("mdev->mdev[%d] = 0x%08x, mdev->mdev[%d] = 0x%08x",
+ MDEV_OUTPUT, (u32)gsc->mdev[MDEV_OUTPUT], MDEV_CAPTURE,
+ (u32)gsc->mdev[MDEV_CAPTURE]);
+
+ ret = gsc_register_output_device(gsc);
+ if (ret)
+ goto err_irq;
+#endif
+ sprintf(workqueue_name, "gsc%d_irq_wq_name", gsc->id);
+ gsc->irq_workqueue = create_singlethread_workqueue(workqueue_name);
+	if (gsc->irq_workqueue == NULL) {
+		dev_err(&pdev->dev, "failed to create workqueue for gsc\n");
+		ret = -ENOMEM;
+		goto err_irq;
+	}
+ INIT_WORK(&gsc->work_struct, gsc_wq_suspend);
+
+ gsc->alloc_ctx = vb2_dma_contig_init_ctx(&pdev->dev);
+ if (IS_ERR(gsc->alloc_ctx)) {
+ ret = PTR_ERR(gsc->alloc_ctx);
+ goto err_wq;
+ }
+
+ gsc_runtime_resume(&pdev->dev);
+
+ gsc_info("gsc-%d registered successfully", gsc->id);
+
+ return 0;
+
+err_wq:
+ destroy_workqueue(gsc->irq_workqueue);
+err_irq:
+ free_irq(gsc->irq, gsc);
+err_regs_unmap:
+ iounmap(gsc->regs);
+err_req_region:
+ release_resource(gsc->regs_res);
+ kfree(gsc->regs_res);
+err_info:
+ kfree(gsc);
+
+ return ret;
+}
+
+static int __devexit gsc_remove(struct platform_device *pdev)
+{
+ struct gsc_dev *gsc =
+ (struct gsc_dev *)platform_get_drvdata(pdev);
+
+ free_irq(gsc->irq, gsc);
+
+ gsc_unregister_m2m_device(gsc);
+ gsc_unregister_output_device(gsc);
+ gsc_unregister_capture_device(gsc);
+
+ vb2_dma_contig_cleanup_ctx(gsc->alloc_ctx);
+ pm_runtime_disable(&pdev->dev);
+
+ iounmap(gsc->regs);
+ release_resource(gsc->regs_res);
+ kfree(gsc->regs_res);
+ kfree(gsc);
+
+ dev_info(&pdev->dev, "%s driver unloaded\n", pdev->name);
+ return 0;
+}
+
+static int gsc_suspend(struct device *dev)
+{
+ struct platform_device *pdev;
+ struct gsc_dev *gsc;
+ int ret = 0;
+
+ pdev = to_platform_device(dev);
+ gsc = (struct gsc_dev *)platform_get_drvdata(pdev);
+
+ if (gsc_m2m_run(gsc)) {
+ set_bit(ST_STOP_REQ, &gsc->state);
+ ret = wait_event_timeout(gsc->irq_queue,
+ !test_bit(ST_STOP_REQ, &gsc->state),
+ GSC_SHUTDOWN_TIMEOUT);
+ if (ret == 0)
+ dev_err(&gsc->pdev->dev, "wait timeout : %s\n",
+ __func__);
+ }
+ if (gsc_cap_active(gsc)) {
+		gsc_err("capture device is still running");
+ return -EINVAL;
+ }
+
+ pm_runtime_put_sync(dev);
+
+	/* ret may hold remaining jiffies from wait_event_timeout() */
+	return 0;
+}
+
+static int gsc_resume(struct device *dev)
+{
+ struct platform_device *pdev;
+ struct gsc_driverdata *drv_data;
+ struct gsc_dev *gsc;
+ struct gsc_ctx *ctx;
+
+ pdev = to_platform_device(dev);
+ gsc = (struct gsc_dev *)platform_get_drvdata(pdev);
+ drv_data = (struct gsc_driverdata *)
+ platform_get_device_id(pdev)->driver_data;
+
+ pm_runtime_get_sync(dev);
+ if (gsc_m2m_opened(gsc)) {
+ ctx = v4l2_m2m_get_curr_priv(gsc->m2m.m2m_dev);
+ if (ctx != NULL) {
+ gsc->m2m.ctx = NULL;
+ v4l2_m2m_job_finish(gsc->m2m.m2m_dev, ctx->m2m_ctx);
+ }
+ }
+
+ return 0;
+}
+
+static const struct dev_pm_ops gsc_pm_ops = {
+ .suspend = gsc_suspend,
+ .resume = gsc_resume,
+ .runtime_suspend = gsc_runtime_suspend,
+ .runtime_resume = gsc_runtime_resume,
+};
+
+struct gsc_pix_max gsc_v_100_max = {
+ .org_scaler_bypass_w = 8192,
+ .org_scaler_bypass_h = 8192,
+ .org_scaler_input_w = 4800,
+ .org_scaler_input_h = 3344,
+ .real_rot_dis_w = 4800,
+ .real_rot_dis_h = 3344,
+ .real_rot_en_w = 2047,
+ .real_rot_en_h = 2047,
+ .target_rot_dis_w = 4800,
+ .target_rot_dis_h = 3344,
+ .target_rot_en_w = 2016,
+ .target_rot_en_h = 2016,
+};
+
+struct gsc_pix_min gsc_v_100_min = {
+ .org_w = 64,
+ .org_h = 32,
+ .real_w = 64,
+ .real_h = 32,
+ .target_rot_dis_w = 64,
+ .target_rot_dis_h = 32,
+ .target_rot_en_w = 32,
+ .target_rot_en_h = 16,
+};
+
+struct gsc_pix_align gsc_v_100_align = {
+ .org_h = 16,
+ .org_w = 16, /* yuv420 : 16, others : 8 */
+ .offset_h = 2, /* yuv420/422 : 2, others : 1 */
+ .real_w = 16, /* yuv420/422 : 4~16, others : 2~8 */
+ .real_h = 16, /* yuv420 : 4~16, others : 1 */
+ .target_w = 2, /* yuv420/422 : 2, others : 1 */
+ .target_h = 2, /* yuv420 : 2, others : 1 */
+};
+
+struct gsc_variant gsc_v_100_variant = {
+ .pix_max = &gsc_v_100_max,
+ .pix_min = &gsc_v_100_min,
+ .pix_align = &gsc_v_100_align,
+ .in_buf_cnt = 8,
+ .out_buf_cnt = 16,
+ .sc_up_max = 8,
+ .sc_down_max = 16,
+ .poly_sc_down_max = 4,
+ .pre_sc_down_max = 4,
+ .local_sc_down = 2,
+};
+
+static struct gsc_driverdata gsc_v_100_drvdata = {
+ .variant = {
+ [0] = &gsc_v_100_variant,
+ [1] = &gsc_v_100_variant,
+ [2] = &gsc_v_100_variant,
+ [3] = &gsc_v_100_variant,
+ },
+ .num_entities = 4,
+ .lclk_frequency = 266000000UL,
+};
+
+static struct platform_device_id gsc_driver_ids[] = {
+ {
+ .name = "exynos-gsc",
+ .driver_data = (unsigned long)&gsc_v_100_drvdata,
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(platform, gsc_driver_ids);
+
+#ifdef CONFIG_OF
+static const struct of_device_id exynos_gsc_match[] = {
+ { .compatible = "samsung,exynos-gsc",
+ .data = &gsc_v_100_drvdata, },
+ {},
+};
+MODULE_DEVICE_TABLE(of, exynos_gsc_match);
+#else
+#define exynos_gsc_match NULL
+#endif
+
+static inline void *gsc_get_drv_data(struct platform_device *pdev)
+{
+#ifdef CONFIG_OF
+ if (pdev->dev.of_node) {
+ const struct of_device_id *match;
+ match = of_match_node(exynos_gsc_match, pdev->dev.of_node);
+ return (struct gsc_driverdata *) match->data;
+ }
+#endif
+ return (struct gsc_driverdata *)platform_get_device_id(pdev)->driver_data;
+}
+
+static struct platform_driver gsc_driver = {
+ .probe = gsc_probe,
+ .remove = __devexit_p(gsc_remove),
+ .id_table = gsc_driver_ids,
+ .driver = {
+ .name = GSC_MODULE_NAME,
+ .owner = THIS_MODULE,
+ .pm = &gsc_pm_ops,
+ .of_match_table = exynos_gsc_match,
+ }
+};
+
+static int __init gsc_init(void)
+{
+ int ret = platform_driver_register(&gsc_driver);
+ if (ret)
+ gsc_err("platform_driver_register failed: %d\n", ret);
+ return ret;
+}
+
+static void __exit gsc_exit(void)
+{
+ platform_driver_unregister(&gsc_driver);
+}
+
+module_init(gsc_init);
+module_exit(gsc_exit);
+
+MODULE_AUTHOR("Hyunwong Kim <khw0178.kim@xxxxxxxxxxx>");
+MODULE_DESCRIPTION("Samsung EXYNOS5 Soc series G-Scaler driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+/* linux/drivers/media/video/exynos/gsc/gsc-core.h
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * header file for Samsung EXYNOS5 SoC series G-scaler driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef GSC_CORE_H_
+#define GSC_CORE_H_
+
+#include <linux/delay.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/videodev2.h>
+#include <linux/io.h>
+#include <linux/pm_runtime.h>
+#include <media/videobuf2-core.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-mem2mem.h>
+#include <media/v4l2-mediabus.h>
+#include <media/exynos_mc.h>
+#include <media/exynos_gscaler.h>
+#define CONFIG_VB2_GSC_DMA_CONTIG 1
+#include <media/videobuf2-dma-contig.h>
+#include "regs-gsc.h"
+extern int gsc_dbg;
+
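+/*
+ * gsc_dbg selects the log verbosity threshold; the levels used below
+ * mirror the kernel log levels (3 = error, 4 = warning, 6 = info,
+ * 7 = debug).
+ */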
+#define gsc_info(fmt, args...) \
+ do { \
+ if (gsc_dbg >= 6) \
+ printk(KERN_INFO "[INFO]%s:%d: "fmt "\n", \
+ __func__, __LINE__, ##args); \
+ } while (0)
+
+#define gsc_err(fmt, args...) \
+ do { \
+ if (gsc_dbg >= 3) \
+ printk(KERN_ERR "[ERROR]%s:%d: "fmt "\n", \
+ __func__, __LINE__, ##args); \
+ } while (0)
+
+#define gsc_warn(fmt, args...) \
+ do { \
+ if (gsc_dbg >= 4) \
+ printk(KERN_WARNING "[WARN]%s:%d: "fmt "\n", \
+ __func__, __LINE__, ##args); \
+ } while (0)
+
+#define gsc_dbg(fmt, args...) \
+ do { \
+ if (gsc_dbg >= 7) \
+ printk(KERN_DEBUG "[DEBUG]%s:%d: "fmt "\n", \
+ __func__, __LINE__, ##args); \
+ } while (0)
+
+#define GSC_MAX_CLOCKS 3
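+/* shutdown wait timeout: 100 ms expressed in jiffies */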
+#define GSC_SHUTDOWN_TIMEOUT ((100*HZ)/1000)
+#define GSC_MAX_DEVS 4
+#define WORKQUEUE_NAME_SIZE 32
+#define FIMD_NAME_SIZE 32
+#define GSC_M2M_BUF_NUM 0
+#define GSC_OUT_BUF_MAX 2
+#define GSC_MAX_CTRL_NUM 10
+#define GSC_OUT_MAX_MASK_NUM 7
+#define GSC_SC_ALIGN_4 4
+#define GSC_SC_ALIGN_2 2
+#define GSC_OUT_DEF_SRC 15
+#define GSC_OUT_DEF_DST 7
+#define DEFAULT_GSC_SINK_WIDTH 800
+#define DEFAULT_GSC_SINK_HEIGHT 480
+#define DEFAULT_GSC_SOURCE_WIDTH 800
+#define DEFAULT_GSC_SOURCE_HEIGHT 480
+#define DEFAULT_CSC_EQ 1
+#define DEFAULT_CSC_RANGE 1
+
+#define GSC_LAST_DEV_ID 3
+#define GSC_PAD_SINK 0
+#define GSC_PAD_SOURCE 1
+#define GSC_PADS_NUM 2
+
+#define GSC_PARAMS (1 << 0)
+#define GSC_SRC_FMT (1 << 1)
+#define GSC_DST_FMT (1 << 2)
+#define GSC_CTX_M2M (1 << 3)
+#define GSC_CTX_OUTPUT (1 << 4)
+#define GSC_CTX_START (1 << 5)
+#define GSC_CTX_STOP_REQ (1 << 6)
+#define GSC_CTX_CAP (1 << 10)
+#define MAX_MDEV 2
+
+#define V4L2_CID_CACHEABLE (V4L2_CID_LASTP1 + 1)
+#define V4L2_CID_TV_LAYER_BLEND_ENABLE (V4L2_CID_LASTP1 + 2)
+#define V4L2_CID_TV_LAYER_BLEND_ALPHA (V4L2_CID_LASTP1 + 3)
+#define V4L2_CID_TV_PIXEL_BLEND_ENABLE (V4L2_CID_LASTP1 + 4)
+#define V4L2_CID_TV_CHROMA_ENABLE (V4L2_CID_LASTP1 + 5)
+#define V4L2_CID_TV_CHROMA_VALUE (V4L2_CID_LASTP1 + 6)
+/* for color space conversion equation selection */
+#define V4L2_CID_CSC_EQ_MODE (V4L2_CID_LASTP1 + 8)
+#define V4L2_CID_CSC_EQ (V4L2_CID_LASTP1 + 9)
+#define V4L2_CID_CSC_RANGE (V4L2_CID_LASTP1 + 10)
+#define V4L2_CID_GLOBAL_ALPHA (V4L2_CID_LASTP1 + 11)
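+
+/*
+ * A minimal userspace sketch (assuming fd is an open file descriptor on
+ * a /dev/videoN node exposed by this driver): the custom controls above
+ * are set through the standard control ioctls, e.g.
+ *
+ *	struct v4l2_control ctrl = {
+ *		.id	= V4L2_CID_GLOBAL_ALPHA,
+ *		.value	= 128,
+ *	};
+ *	ioctl(fd, VIDIOC_S_CTRL, &ctrl);
+ */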
+
+enum gsc_dev_flags {
+ /* for global */
+ ST_PWR_ON,
+ ST_STOP_REQ,
+ /* for m2m node */
+ ST_M2M_OPEN,
+ ST_M2M_RUN,
+ /* for output node */
+ ST_OUTPUT_OPEN,
+ ST_OUTPUT_STREAMON,
+ /* for capture node */
+ ST_CAPT_OPEN,
+ ST_CAPT_PEND,
+ ST_CAPT_RUN,
+ ST_CAPT_STREAM,
+ ST_CAPT_PIPE_STREAM,
+ ST_CAPT_SUSPENDED,
+ ST_CAPT_SHUT,
+ ST_CAPT_APPLY_CFG,
+ ST_CAPT_JPEG,
+};
+
+enum gsc_cap_input_entity {
+ GSC_IN_NONE,
+ GSC_IN_FLITE_PREVIEW,
+ GSC_IN_FLITE_CAMCORDING,
+ GSC_IN_FIMD_WRITEBACK,
+};
+
+enum gsc_irq {
+ GSC_OR_IRQ = 17,
+ GSC_DONE_IRQ = 16,
+};
+
+/**
+ * enum gsc_datapath - the path of data used for gscaler
+ * @GSC_CAMERA: from camera
+ * @GSC_DMA: from/to DMA
+ * @GSC_MIXER: to the Mixer via the local path
+ * @GSC_FIMD: to FIMD via the local path
+ * @GSC_WRITEBACK: from FIMD
+ */
+enum gsc_datapath {
+ GSC_CAMERA = 0x1,
+ GSC_DMA,
+ GSC_MIXER,
+ GSC_FIMD,
+ GSC_WRITEBACK,
+};
+
+enum gsc_color_fmt {
+ GSC_RGB = 0x1,
+ GSC_YUV420 = 0x2,
+ GSC_YUV422 = 0x4,
+ GSC_YUV444 = 0x8,
+};
+
+enum gsc_yuv_fmt {
+ GSC_LSB_Y = 0x10,
+ GSC_LSB_C,
+ GSC_CBCR = 0x20,
+ GSC_CRCB,
+};
+
+#define fh_to_ctx(__fh) container_of(__fh, struct gsc_ctx, fh)
+#define is_rgb(x) (!!((x) & 0x1))
+#define is_yuv420(x) (!!((x) & 0x2))
+#define is_yuv422(x) (!!((x) & 0x4))
+#define gsc_m2m_run(dev) test_bit(ST_M2M_RUN, &(dev)->state)
+#define gsc_m2m_opened(dev) test_bit(ST_M2M_OPEN, &(dev)->state)
+#define gsc_out_run(dev) test_bit(ST_OUTPUT_STREAMON, &(dev)->state)
+#define gsc_out_opened(dev) test_bit(ST_OUTPUT_OPEN, &(dev)->state)
+#define gsc_cap_opened(dev) test_bit(ST_CAPT_OPEN, &(dev)->state)
+#define gsc_cap_active(dev) test_bit(ST_CAPT_RUN, &(dev)->state)
+
+#define ctrl_to_ctx(__ctrl) \
+ container_of((__ctrl)->handler, struct gsc_ctx, ctrl_handler)
+#define entity_data_to_gsc(data) \
+ container_of(data, struct gsc_dev, md_data)
+#define gsc_capture_get_frame(ctx, pad) \
+	(((pad) == GSC_PAD_SINK) ? &(ctx)->s_frame : &(ctx)->d_frame)
+
+/**
+ * struct gsc_fmt - the driver's internal color format data
+ * @mbus_code: Media Bus pixel code, -1 if not applicable
+ * @name: format description
+ * @pixelformat: the fourcc code for this format, 0 if not applicable
+ * @color: color format class, see enum gsc_color_fmt
+ * @yorder: Y/C order
+ * @corder: Chrominance order control
+ * @num_planes: number of physically non-contiguous data planes
+ * @nr_comp: number of physically contiguous data planes
+ * @depth: per plane driver's private 'number of bits per pixel'
+ * @flags: flags indicating which operation mode format applies to
+ */
+struct gsc_fmt {
+ enum v4l2_mbus_pixelcode mbus_code;
+ char *name;
+ u32 pixelformat;
+ u32 color;
+ u32 yorder;
+ u32 corder;
+ u16 num_planes;
+ u16 nr_comp;
+ u8 depth[VIDEO_MAX_PLANES];
+ u32 flags;
+};
+
+/**
+ * struct gsc_input_buf - the driver's video buffer
+ * @vb: videobuf2 buffer
+ * @list : linked list structure for buffer queue
+ * @idx : index of G-Scaler input buffer
+ */
+struct gsc_input_buf {
+ struct vb2_buffer vb;
+ struct list_head list;
+ int idx;
+};
+
+/**
+ * struct gsc_addr - the G-Scaler physical address set
+ * @y: luminance plane address
+ * @cb: Cb plane address
+ * @cr: Cr plane address
+ */
+struct gsc_addr {
+ dma_addr_t y;
+ dma_addr_t cb;
+ dma_addr_t cr;
+};
+
+/**
+ * struct gsc_ctrls - the G-Scaler control set
+ * @rotate: rotation degree
+ * @hflip: horizontal flip
+ * @vflip: vertical flip
+ * @global_alpha: the alpha value of current frame
+ * @layer_blend_en: enable mixer layer alpha blending
+ * @layer_alpha: set alpha value for mixer layer
+ * @pixel_blend_en: enable mixer pixel alpha blending
+ * @chroma_en: enable chromakey
+ * @chroma_val: set value for chromakey
+ * @csc_eq_mode: mode to select csc equation of current frame
+ * @csc_eq: csc equation of current frame
+ * @csc_range: csc range of current frame
+ */
+struct gsc_ctrls {
+ struct v4l2_ctrl *rotate;
+ struct v4l2_ctrl *hflip;
+ struct v4l2_ctrl *vflip;
+ struct v4l2_ctrl *global_alpha;
+ struct v4l2_ctrl *layer_blend_en;
+ struct v4l2_ctrl *layer_alpha;
+ struct v4l2_ctrl *pixel_blend_en;
+ struct v4l2_ctrl *chroma_en;
+ struct v4l2_ctrl *chroma_val;
+ struct v4l2_ctrl *csc_eq_mode;
+ struct v4l2_ctrl *csc_eq;
+ struct v4l2_ctrl *csc_range;
+};
+
+/**
+ * struct gsc_scaler - the configuration data for the G-Scaler internal scaler
+ * @pre_shfactor: prescaler shift factor
+ * @pre_hratio: horizontal ratio of the prescaler
+ * @pre_vratio: vertical ratio of the prescaler
+ * @main_hratio: the main scaler's horizontal ratio
+ * @main_vratio: the main scaler's vertical ratio
+ */
+struct gsc_scaler {
+ u32 pre_shfactor;
+ u32 pre_hratio;
+ u32 pre_vratio;
+ unsigned long main_hratio;
+ unsigned long main_vratio;
+};
+
+struct gsc_dev;
+
+struct gsc_ctx;
+
+/**
+ * struct gsc_frame - source/target frame properties
+ * @f_width: SRC : SRCIMG_WIDTH, DST : OUTPUTDMA_WHOLE_IMG_WIDTH
+ * @f_height: SRC : SRCIMG_HEIGHT, DST : OUTPUTDMA_WHOLE_IMG_HEIGHT
+ * @crop: cropped(source)/scaled(destination) size
+ * @payload: image size in bytes (w x h x bpp)
+ * @addr: image frame buffer physical addresses
+ * @fmt: G-scaler color format pointer
+ * @alpha: frame's alpha value
+ */
+struct gsc_frame {
+ u32 f_width;
+ u32 f_height;
+ struct v4l2_rect crop;
+ unsigned long payload[VIDEO_MAX_PLANES];
+ struct gsc_addr addr;
+ struct gsc_fmt *fmt;
+ u8 alpha;
+};
+
+struct gsc_sensor_info {
+ struct exynos_isp_info *pdata;
+ struct v4l2_subdev *sd;
+ struct clk *camclk;
+};
+
+struct gsc_capture_device {
+ struct gsc_ctx *ctx;
+ struct video_device *vfd;
+ struct v4l2_subdev *sd_cap;
+ struct v4l2_subdev *sd_disp;
+ struct v4l2_subdev *sd_flite[FLITE_MAX_ENTITIES];
+ struct v4l2_subdev *sd_csis[CSIS_MAX_ENTITIES];
+ struct gsc_sensor_info sensor[SENSOR_MAX_ENTITIES];
+ struct media_pad vd_pad;
+ struct media_pad sd_pads[GSC_PADS_NUM];
+ struct v4l2_mbus_framefmt mbus_fmt[GSC_PADS_NUM];
+ struct vb2_queue vbq;
+ int active_buf_cnt;
+ int buf_index;
+ int input_index;
+ int refcnt;
+ u32 frame_cnt;
+ u32 reqbufs_cnt;
+ enum gsc_cap_input_entity input;
+ u32 cam_index;
+};
+
+/**
+ * struct gsc_output_device - v4l2 output device data
+ * @vfd: the video device node for v4l2 output mode
+ * @alloc_ctx: videobuf2 memory allocator context
+ * @ctx: hardware context data
+ * @sd: v4l2 subdev pointer of gscaler
+ * @vbq: videobuf2 queue of gscaler output device
+ * @vd_pad: the pad of the gscaler video entity
+ * @sd_pads: pads of gscaler subdev entity
+ * @active_buf_q: linked list structure of input buffer
+ * @req_cnt: the number of requested buffer
+ */
+struct gsc_output_device {
+ struct video_device *vfd;
+ struct vb2_alloc_ctx *alloc_ctx;
+ struct gsc_ctx *ctx;
+ struct v4l2_subdev *sd;
+ struct vb2_queue vbq;
+ struct media_pad vd_pad;
+ struct media_pad sd_pads[GSC_PADS_NUM];
+ struct list_head active_buf_q;
+ int req_cnt;
+};
+
+/**
+ * struct gsc_m2m_device - v4l2 memory-to-memory device data
+ * @vfd: the video device node for v4l2 m2m mode
+ * @m2m_dev: v4l2 memory-to-memory device data
+ * @ctx: hardware context data
+ * @refcnt: the reference counter
+ */
+struct gsc_m2m_device {
+ struct video_device *vfd;
+ struct v4l2_m2m_dev *m2m_dev;
+ struct gsc_ctx *ctx;
+ int refcnt;
+};
+
+/**
+ * struct gsc_pix_max - image pixel size limits in various IP configurations
+ *
+ * @org_scaler_bypass_w: max pixel width when the scaler is disabled
+ * @org_scaler_bypass_h: max pixel height when the scaler is disabled
+ * @org_scaler_input_w: max pixel width when the scaler is enabled
+ * @org_scaler_input_h: max pixel height when the scaler is enabled
+ * @real_rot_dis_w: max pixel src cropped width when the rotator is off
+ * @real_rot_dis_h: max pixel src cropped height when the rotator is off
+ * @real_rot_en_w: max pixel src cropped width when the rotator is on
+ * @real_rot_en_h: max pixel src cropped height when the rotator is on
+ * @target_rot_dis_w: max pixel dst scaled width when the rotator is off
+ * @target_rot_dis_h: max pixel dst scaled height when the rotator is off
+ * @target_rot_en_w: max pixel dst scaled width when the rotator is on
+ * @target_rot_en_h: max pixel dst scaled height when the rotator is on
+ */
+struct gsc_pix_max {
+ u16 org_scaler_bypass_w;
+ u16 org_scaler_bypass_h;
+ u16 org_scaler_input_w;
+ u16 org_scaler_input_h;
+ u16 real_rot_dis_w;
+ u16 real_rot_dis_h;
+ u16 real_rot_en_w;
+ u16 real_rot_en_h;
+ u16 target_rot_dis_w;
+ u16 target_rot_dis_h;
+ u16 target_rot_en_w;
+ u16 target_rot_en_h;
+};
+
+/**
+ * struct gsc_pix_min - image pixel size limits in various IP configurations
+ *
+ * @org_w: minimum source pixel width
+ * @org_h: minimum source pixel height
+ * @real_w: minimum input crop pixel width
+ * @real_h: minimum input crop pixel height
+ * @target_rot_dis_w: minimum output scaled pixel width when rotator is off
+ * @target_rot_dis_h: minimum output scaled pixel height when rotator is off
+ * @target_rot_en_w: minimum output scaled pixel width when rotator is on
+ * @target_rot_en_h: minimum output scaled pixel height when rotator is on
+ */
+struct gsc_pix_min {
+ u16 org_w;
+ u16 org_h;
+ u16 real_w;
+ u16 real_h;
+ u16 target_rot_dis_w;
+ u16 target_rot_dis_h;
+ u16 target_rot_en_w;
+ u16 target_rot_en_h;
+};
+
+struct gsc_pix_align {
+ u16 org_h;
+ u16 org_w;
+ u16 offset_h;
+ u16 real_w;
+ u16 real_h;
+ u16 target_w;
+ u16 target_h;
+};
+
+/**
+ * struct gsc_variant - G-Scaler variant information
+ */
+struct gsc_variant {
+ struct gsc_pix_max *pix_max;
+ struct gsc_pix_min *pix_min;
+ struct gsc_pix_align *pix_align;
+ u16 in_buf_cnt;
+ u16 out_buf_cnt;
+ u16 sc_up_max;
+ u16 sc_down_max;
+ u16 poly_sc_down_max;
+ u16 pre_sc_down_max;
+ u16 local_sc_down;
+};
+
+/**
+ * struct gsc_driverdata - per device type driver data for init time.
+ *
+ * @variant: the variant information for this driver.
+ * @lclk_frequency: g-scaler clock frequency
+ * @num_entities: the number of g-scalers
+ */
+struct gsc_driverdata {
+ struct gsc_variant *variant[GSC_MAX_DEVS];
+ unsigned long lclk_frequency;
+ int num_entities;
+};
+
+struct gsc_vb2 {
+ const struct vb2_mem_ops *ops;
+ void *(*init)(struct gsc_dev *gsc);
+ void (*cleanup)(void *alloc_ctx);
+
+ void (*resume)(void *alloc_ctx);
+ void (*suspend)(void *alloc_ctx);
+
+ int (*cache_flush)(struct vb2_buffer *vb, u32 num_planes);
+ void (*set_cacheable)(void *alloc_ctx, bool cacheable);
+ void (*set_sharable)(void *alloc_ctx, bool sharable);
+};
+
+struct gsc_pipeline {
+ struct media_pipeline *pipe;
+ struct v4l2_subdev *sd_gsc;
+ struct v4l2_subdev *disp;
+ struct v4l2_subdev *flite;
+ struct v4l2_subdev *csis;
+ struct v4l2_subdev *sensor;
+};
+
+/**
+ * struct gsc_dev - abstraction for G-Scaler entity
+ * @slock: the spinlock protecting this data structure
+ * @lock: the mutex protecting this data structure
+ * @pdev: pointer to the G-Scaler platform device
+ * @variant: the IP variant information
+ * @id: g_scaler device index (0..GSC_MAX_DEVS-1)
+ * @clock: G-Scaler gate clock
+ * @regs: the mapped hardware registers
+ * @regs_res: the resource claimed for IO registers
+ * @irq: G-scaler interrupt number
+ * @irq_queue: interrupt handler waitqueue
+ * @work_struct: work item for the deferred runtime PM put
+ * @irq_workqueue: workqueue serving @work_struct
+ * @m2m: memory-to-memory V4L2 device information
+ * @out: memory-to-local V4L2 output device information
+ * @cap: capture V4L2 device information
+ * @pdata: G-Scaler platform data
+ * @state: flags used to synchronize m2m and capture mode operation
+ * @alloc_ctx: videobuf2 memory allocator context
+ * @vb2: videobuf2 memory allocator call-back functions
+ * @mdev: pointer to exynos media device
+ * @pipeline: pointer to subdevs that are connected with gscaler
+ * @md_data: media entity data for the exynos media framework
+ */
+struct gsc_dev {
+ spinlock_t slock;
+ struct mutex lock;
+ struct platform_device *pdev;
+ struct gsc_variant *variant;
+ u16 id;
+ struct clk *clock;
+ void __iomem *regs;
+ struct resource *regs_res;
+ int irq;
+ wait_queue_head_t irq_queue;
+ struct work_struct work_struct;
+ struct workqueue_struct *irq_workqueue;
+ struct gsc_m2m_device m2m;
+ struct gsc_output_device out;
+ struct gsc_capture_device cap;
+ struct exynos_platform_gscaler *pdata;
+ unsigned long state;
+ struct vb2_alloc_ctx *alloc_ctx;
+ const struct gsc_vb2 *vb2;
+ struct exynos_md *mdev[MAX_MDEV];
+ struct gsc_pipeline pipeline;
+ struct exynos_entity_data md_data;
+};
+
+/**
+ * struct gsc_ctx - the device context data
+ * @slock: spinlock protecting this data structure
+ * @s_frame: source frame properties
+ * @d_frame: destination frame properties
+ * @in_path: input mode (DMA or camera)
+ * @out_path: output mode (DMA or FIFO)
+ * @scaler: image scaler properties
+ * @flags: additional flags for image conversion
+ * @state: flags to keep track of user configuration
+ * @gsc_dev: the g-scaler device this context applies to
+ * @m2m_ctx: memory-to-memory device context
+ * @fh: v4l2 file handle
+ * @ctrl_handler: v4l2 controls handler
+ * @ctrls_rdy: true if the control handler is initialized
+ * @gsc_ctrls: G-Scaler control set
+ */
+ */
+struct gsc_ctx {
+ spinlock_t slock;
+ struct gsc_frame s_frame;
+ struct gsc_frame d_frame;
+ enum gsc_datapath in_path;
+ enum gsc_datapath out_path;
+ struct gsc_scaler scaler;
+ u32 flags;
+ u32 state;
+ struct gsc_dev *gsc_dev;
+ struct v4l2_m2m_ctx *m2m_ctx;
+ struct v4l2_fh fh;
+ struct v4l2_ctrl_handler ctrl_handler;
+ struct gsc_ctrls gsc_ctrls;
+ bool ctrls_rdy;
+};
+
+void gsc_set_prefbuf(struct gsc_dev *gsc, struct gsc_frame frm);
+void gsc_clk_release(struct gsc_dev *gsc);
+int gsc_register_m2m_device(struct gsc_dev *gsc);
+void gsc_unregister_m2m_device(struct gsc_dev *gsc);
+int gsc_register_output_device(struct gsc_dev *gsc);
+void gsc_unregister_output_device(struct gsc_dev *gsc);
+int gsc_register_capture_device(struct gsc_dev *gsc);
+void gsc_unregister_capture_device(struct gsc_dev *gsc);
+
+u32 get_plane_size(struct gsc_frame *fr, unsigned int plane);
+char gsc_total_fmts(void);
+struct gsc_fmt *get_format(int index);
+struct gsc_fmt *find_fmt(u32 *pixelformat, u32 *mbus_code, int index);
+int gsc_enum_fmt_mplane(struct v4l2_fmtdesc *f);
+int gsc_try_fmt_mplane(struct gsc_ctx *ctx, struct v4l2_format *f);
+void gsc_set_frame_size(struct gsc_frame *frame, int width, int height);
+int gsc_g_fmt_mplane(struct gsc_ctx *ctx, struct v4l2_format *f);
+void gsc_check_crop_change(u32 tmp_w, u32 tmp_h, u32 *w, u32 *h);
+int gsc_g_crop(struct gsc_ctx *ctx, struct v4l2_crop *cr);
+int gsc_try_crop(struct gsc_ctx *ctx, struct v4l2_crop *cr);
+int gsc_cal_prescaler_ratio(struct gsc_variant *var, u32 src, u32 dst, u32 *ratio);
+void gsc_get_prescaler_shfactor(u32 hratio, u32 vratio, u32 *sh);
+void gsc_check_src_scale_info(struct gsc_variant *var, struct gsc_frame *s_frame,
+ u32 *wratio, u32 tx, u32 ty, u32 *hratio);
+int gsc_check_scaler_ratio(struct gsc_variant *var, int sw, int sh, int dw,
+ int dh, int rot, int out_path);
+int gsc_set_scaler_info(struct gsc_ctx *ctx);
+int gsc_ctrls_create(struct gsc_ctx *ctx);
+void gsc_ctrls_delete(struct gsc_ctx *ctx);
+int gsc_out_hw_set(struct gsc_ctx *ctx);
+int gsc_out_set_in_addr(struct gsc_dev *gsc, struct gsc_ctx *ctx,
+ struct gsc_input_buf *buf, int index);
+int gsc_prepare_addr(struct gsc_ctx *ctx, struct vb2_buffer *vb,
+ struct gsc_frame *frame, struct gsc_addr *addr);
+int gsc_out_link_validate(const struct media_pad *source,
+ const struct media_pad *sink);
+int gsc_pipeline_s_stream(struct gsc_dev *gsc, bool on);
+
+static inline void gsc_ctx_state_lock_set(u32 state, struct gsc_ctx *ctx)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&ctx->slock, flags);
+ ctx->state |= state;
+ spin_unlock_irqrestore(&ctx->slock, flags);
+}
+
+static inline void gsc_ctx_state_lock_clear(u32 state, struct gsc_ctx *ctx)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&ctx->slock, flags);
+ ctx->state &= ~state;
+ spin_unlock_irqrestore(&ctx->slock, flags);
+}
+
+static inline int get_win_num(struct gsc_dev *dev)
+{
+ return (dev->id == 3) ? 2 : dev->id;
+}
+static inline int is_tiled(struct gsc_fmt *fmt)
+{
+ return fmt->pixelformat == V4L2_PIX_FMT_NV12MT_16X16;
+}
+
+static inline int is_output(enum v4l2_buf_type type)
+{
+ return (type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE ||
+ type == V4L2_BUF_TYPE_VIDEO_OUTPUT) ? 1 : 0;
+}
+
+static inline void gsc_hw_enable_control(struct gsc_dev *dev, bool on)
+{
+ u32 cfg = readl(dev->regs + GSC_ENABLE);
+
+ if (on)
+ cfg |= GSC_ENABLE_ON;
+ else
+ cfg &= ~GSC_ENABLE_ON;
+
+ writel(cfg, dev->regs + GSC_ENABLE);
+}
+
+static inline int gsc_hw_get_irq_status(struct gsc_dev *dev)
+{
+ u32 cfg = readl(dev->regs + GSC_IRQ);
+ if (cfg & (1 << GSC_OR_IRQ))
+ return GSC_OR_IRQ;
+ else
+		return GSC_DONE_IRQ;
+}
+
+static inline void gsc_hw_clear_irq(struct gsc_dev *dev, int irq)
+{
+ u32 cfg = readl(dev->regs + GSC_IRQ);
+ if (irq == GSC_OR_IRQ)
+ cfg |= GSC_IRQ_STATUS_OR_IRQ;
+ else if (irq == GSC_DONE_IRQ)
+ cfg |= GSC_IRQ_STATUS_OR_FRM_DONE;
+ writel(cfg, dev->regs + GSC_IRQ);
+}
+
+static inline void gsc_lock(struct vb2_queue *vq)
+{
+ struct gsc_ctx *ctx = vb2_get_drv_priv(vq);
+ mutex_lock(&ctx->gsc_dev->lock);
+}
+
+static inline void gsc_unlock(struct vb2_queue *vq)
+{
+ struct gsc_ctx *ctx = vb2_get_drv_priv(vq);
+ mutex_unlock(&ctx->gsc_dev->lock);
+}
+
+static inline bool gsc_ctx_state_is_set(u32 mask, struct gsc_ctx *ctx)
+{
+ unsigned long flags;
+ bool ret;
+
+ spin_lock_irqsave(&ctx->slock, flags);
+ ret = (ctx->state & mask) == mask;
+ spin_unlock_irqrestore(&ctx->slock, flags);
+ return ret;
+}
+
+static inline struct gsc_frame *ctx_get_frame(struct gsc_ctx *ctx,
+ enum v4l2_buf_type type)
+{
+ struct gsc_frame *frame;
+
+ if (V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE == type) {
+ frame = &ctx->s_frame;
+ } else if (V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE == type) {
+ frame = &ctx->d_frame;
+ } else {
+ gsc_err("Wrong buffer/video queue type (%d)", type);
+ return ERR_PTR(-EINVAL);
+ }
+
+ return frame;
+}
+
+static inline struct gsc_input_buf *
+active_queue_pop(struct gsc_output_device *vid_out, struct gsc_dev *dev)
+{
+ struct gsc_input_buf *buf;
+
+ buf = list_entry(vid_out->active_buf_q.next, struct gsc_input_buf, list);
+ return buf;
+}
+
+static inline void active_queue_push(struct gsc_output_device *vid_out,
+ struct gsc_input_buf *buf, struct gsc_dev *dev)
+{
+ unsigned long flags;
+ spin_lock_irqsave(&dev->slock, flags);
+ list_add_tail(&buf->list, &vid_out->active_buf_q);
+ spin_unlock_irqrestore(&dev->slock, flags);
+}
+
+static inline struct gsc_dev *entity_to_gsc(struct media_entity *me)
+{
+ struct v4l2_subdev *sd;
+
+ sd = container_of(me, struct v4l2_subdev, entity);
+ return entity_data_to_gsc(v4l2_get_subdevdata(sd));
+}
+
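+/*
+ * Write a control value to both the current and the new value directly,
+ * bypassing the control framework; called from the s_ctrl handler.
+ */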
+static inline void user_to_drv(struct v4l2_ctrl *ctrl, s32 value)
+{
+ ctrl->cur.val = ctrl->val = value;
+}
+
+void gsc_hw_set_sw_reset(struct gsc_dev *dev);
+void gsc_hw_set_one_frm_mode(struct gsc_dev *dev, bool mask);
+void gsc_hw_set_frm_done_irq_mask(struct gsc_dev *dev, bool mask);
+void gsc_hw_set_overflow_irq_mask(struct gsc_dev *dev, bool mask);
+void gsc_hw_set_gsc_irq_enable(struct gsc_dev *dev, bool mask);
+void gsc_hw_set_input_buf_mask_all(struct gsc_dev *dev);
+void gsc_hw_set_output_buf_mask_all(struct gsc_dev *dev);
+void gsc_hw_set_input_buf_masking(struct gsc_dev *dev, u32 shift, bool enable);
+void gsc_hw_set_output_buf_masking(struct gsc_dev *dev, u32 shift, bool enable);
+void gsc_hw_set_input_addr(struct gsc_dev *dev, struct gsc_addr *addr, int index);
+void gsc_hw_set_output_addr(struct gsc_dev *dev, struct gsc_addr *addr, int index);
+void gsc_hw_set_input_path(struct gsc_ctx *ctx);
+void gsc_hw_set_in_size(struct gsc_ctx *ctx);
+void gsc_hw_set_in_image_rgb(struct gsc_ctx *ctx);
+void gsc_hw_set_in_image_format(struct gsc_ctx *ctx);
+void gsc_hw_set_output_path(struct gsc_ctx *ctx);
+void gsc_hw_set_out_size(struct gsc_ctx *ctx);
+void gsc_hw_set_out_image_rgb(struct gsc_ctx *ctx);
+void gsc_hw_set_out_image_format(struct gsc_ctx *ctx);
+void gsc_hw_set_prescaler(struct gsc_ctx *ctx);
+void gsc_hw_set_mainscaler(struct gsc_ctx *ctx);
+void gsc_hw_set_rotation(struct gsc_ctx *ctx);
+void gsc_hw_set_global_alpha(struct gsc_ctx *ctx);
+void gsc_hw_set_sfr_update(struct gsc_ctx *ctx);
+void gsc_hw_set_local_dst(int id, bool on);
+void gsc_hw_set_sysreg_writeback(struct gsc_ctx *ctx);
+void gsc_hw_set_sysreg_camif(bool on);
+
+int gsc_hw_get_input_buf_mask_status(struct gsc_dev *dev);
+int gsc_hw_get_done_input_buf_index(struct gsc_dev *dev);
+int gsc_hw_get_done_output_buf_index(struct gsc_dev *dev);
+int gsc_hw_get_nr_unmask_bits(struct gsc_dev *dev);
+int gsc_wait_reset(struct gsc_dev *dev);
+int gsc_wait_operating(struct gsc_dev *dev);
+int gsc_wait_stop(struct gsc_dev *dev);
+
+void gsc_disp_fifo_sw_reset(struct gsc_dev *dev);
+void gsc_pixelasync_sw_reset(struct gsc_dev *dev);
+
+#endif /* GSC_CORE_H_ */
--- /dev/null
+/* linux/drivers/media/video/exynos/gsc/gsc-m2m.c
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * Samsung EXYNOS5 SoC series G-scaler driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation, either version 2 of the License,
+ * or (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/version.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/bug.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+#include <linux/device.h>
+#include <linux/platform_device.h>
+#include <linux/list.h>
+#include <linux/io.h>
+#include <linux/slab.h>
+#include <linux/clk.h>
+#include <media/v4l2-ioctl.h>
+#include <mach/videonode.h>
+
+#include "gsc-core.h"
+
+static int gsc_ctx_stop_req(struct gsc_ctx *ctx)
+{
+ struct gsc_ctx *curr_ctx;
+ struct gsc_dev *gsc = ctx->gsc_dev;
+ int ret = 0;
+
+ curr_ctx = v4l2_m2m_get_curr_priv(gsc->m2m.m2m_dev);
+ if (!gsc_m2m_run(gsc) || (curr_ctx != ctx))
+ return 0;
+ ctx->state |= GSC_CTX_STOP_REQ;
+ ret = wait_event_timeout(gsc->irq_queue,
+ !gsc_ctx_state_is_set(GSC_CTX_STOP_REQ, ctx),
+ GSC_SHUTDOWN_TIMEOUT);
+ if (!ret)
+ ret = -EBUSY;
+
+ return ret;
+}
+
+static int gsc_m2m_stop_streaming(struct vb2_queue *q)
+{
+ struct gsc_ctx *ctx = q->drv_priv;
+ struct gsc_dev *gsc = ctx->gsc_dev;
+ int ret;
+
+ ret = gsc_ctx_stop_req(ctx);
+ /* FIXME: need to add v4l2_m2m_job_finish(fail) if ret is timeout */
+ if (ret < 0)
+ dev_err(&gsc->pdev->dev, "wait timeout : %s\n", __func__);
+
+ return 0;
+}
+
+static void gsc_m2m_job_abort(void *priv)
+{
+ struct gsc_ctx *ctx = priv;
+ struct gsc_dev *gsc = ctx->gsc_dev;
+ int ret;
+
+ ret = gsc_ctx_stop_req(ctx);
+ /* FIXME: need to add v4l2_m2m_job_finish(fail) if ret is timeout */
+ if (ret < 0)
+ dev_err(&gsc->pdev->dev, "wait timeout : %s\n", __func__);
+}
+
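+/*
+ * Fill the DMA address sets for the next queued source and destination
+ * buffers of the mem2mem context.
+ */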
+int gsc_fill_addr(struct gsc_ctx *ctx)
+{
+ struct gsc_frame *s_frame, *d_frame;
+ struct vb2_buffer *vb = NULL;
+ int ret = 0;
+
+ s_frame = &ctx->s_frame;
+ d_frame = &ctx->d_frame;
+
+ vb = v4l2_m2m_next_src_buf(ctx->m2m_ctx);
+ ret = gsc_prepare_addr(ctx, vb, s_frame, &s_frame->addr);
+ if (ret)
+ return ret;
+
+ vb = v4l2_m2m_next_dst_buf(ctx->m2m_ctx);
+ ret = gsc_prepare_addr(ctx, vb, d_frame, &d_frame->addr);
+
+ return ret;
+}
+
+static void gsc_m2m_device_run(void *priv)
+{
+ struct gsc_ctx *ctx = priv;
+ struct gsc_dev *gsc;
+ unsigned long flags;
+	int ret;
+ bool is_set = false;
+
+ if (WARN(!ctx, "null hardware context\n"))
+ return;
+
+ gsc = ctx->gsc_dev;
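+	/*
+	 * Power the device for the duration of this job; the matching
+	 * runtime PM put is deferred to the IRQ workqueue once the job
+	 * completes (see gsc_wq_suspend()).
+	 */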
+ pm_runtime_get_sync(&gsc->pdev->dev);
+
+ spin_lock_irqsave(&ctx->slock, flags);
+ /* Reconfigure hardware if the context has changed. */
+ if (gsc->m2m.ctx != ctx) {
+ gsc_dbg("gsc->m2m.ctx = 0x%p, current_ctx = 0x%p",
+ gsc->m2m.ctx, ctx);
+ ctx->state |= GSC_PARAMS;
+ gsc->m2m.ctx = ctx;
+ }
+
+ is_set = (ctx->state & GSC_CTX_STOP_REQ) ? 1 : 0;
+ ctx->state &= ~GSC_CTX_STOP_REQ;
+ if (is_set) {
+ wake_up(&gsc->irq_queue);
+ goto put_device;
+ }
+
+ ret = gsc_fill_addr(ctx);
+ if (ret) {
+ gsc_err("Wrong address");
+ goto put_device;
+ }
+
+ gsc_set_prefbuf(gsc, ctx->s_frame);
+ gsc_hw_set_input_addr(gsc, &ctx->s_frame.addr, GSC_M2M_BUF_NUM);
+ gsc_hw_set_output_addr(gsc, &ctx->d_frame.addr, GSC_M2M_BUF_NUM);
+
+ if (ctx->state & GSC_PARAMS) {
+ gsc_hw_set_input_buf_masking(gsc, GSC_M2M_BUF_NUM, false);
+ gsc_hw_set_output_buf_masking(gsc, GSC_M2M_BUF_NUM, false);
+ gsc_hw_set_frm_done_irq_mask(gsc, false);
+ gsc_hw_set_gsc_irq_enable(gsc, true);
+
+ if (gsc_set_scaler_info(ctx)) {
+ gsc_err("Scaler setup error");
+ goto put_device;
+ }
+
+ gsc_hw_set_input_path(ctx);
+ gsc_hw_set_in_size(ctx);
+ gsc_hw_set_in_image_format(ctx);
+
+ gsc_hw_set_output_path(ctx);
+ gsc_hw_set_out_size(ctx);
+ gsc_hw_set_out_image_format(ctx);
+
+ gsc_hw_set_prescaler(ctx);
+ gsc_hw_set_mainscaler(ctx);
+ gsc_hw_set_rotation(ctx);
+ gsc_hw_set_global_alpha(ctx);
+ }
+	/*
+	 * Call gsc_hw_set_sfr_update(ctx) here when SFRs must be updated
+	 * in the middle of an operation.
+	 */
+
+ ctx->state &= ~GSC_PARAMS;
+
+ if (!test_and_set_bit(ST_M2M_RUN, &gsc->state)) {
+		/*
+		 * One-frame mode sequence:
+		 * GSCALER_ON on -> GSCALER_OP_STATUS is operating ->
+		 * GSCALER_ON off
+		 */
+ gsc_hw_enable_control(gsc, true);
+ ret = gsc_wait_operating(gsc);
+ if (ret < 0) {
+ gsc_err("gscaler wait operating timeout");
+ goto put_device;
+ }
+ gsc_hw_enable_control(gsc, false);
+ }
+
+ spin_unlock_irqrestore(&ctx->slock, flags);
+ return;
+
+put_device:
+ ctx->state &= ~GSC_PARAMS;
+ spin_unlock_irqrestore(&ctx->slock, flags);
+ pm_runtime_put_sync(&gsc->pdev->dev);
+}
+
+static int gsc_m2m_queue_setup(struct vb2_queue *vq, const struct v4l2_format *fmt,
+ unsigned int *num_buffers, unsigned int *num_planes,
+ unsigned int sizes[], void *allocators[])
+{
+ struct gsc_ctx *ctx = vb2_get_drv_priv(vq);
+ struct gsc_frame *frame;
+ int i;
+
+ frame = ctx_get_frame(ctx, vq->type);
+ if (IS_ERR(frame))
+ return PTR_ERR(frame);
+
+ if (!frame->fmt)
+ return -EINVAL;
+
+ *num_planes = frame->fmt->num_planes;
+ for (i = 0; i < frame->fmt->num_planes; i++) {
+ sizes[i] = get_plane_size(frame, i);
+ allocators[i] = ctx->gsc_dev->alloc_ctx;
+ }
+ return 0;
+}
+
+static int gsc_m2m_buf_prepare(struct vb2_buffer *vb)
+{
+ struct gsc_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+ struct gsc_frame *frame;
+ int i;
+
+ frame = ctx_get_frame(ctx, vb->vb2_queue->type);
+ if (IS_ERR(frame))
+ return PTR_ERR(frame);
+
+ if (!V4L2_TYPE_IS_OUTPUT(vb->vb2_queue->type)) {
+ for (i = 0; i < frame->fmt->num_planes; i++)
+ vb2_set_plane_payload(vb, i, frame->payload[i]);
+ }
+
+ return 0;
+}
+
+static void gsc_m2m_buf_queue(struct vb2_buffer *vb)
+{
+ struct gsc_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+
+ gsc_dbg("ctx: %p, ctx->state: 0x%x", ctx, ctx->state);
+
+ if (ctx->m2m_ctx)
+ v4l2_m2m_buf_queue(ctx->m2m_ctx, vb);
+}
+
+struct vb2_ops gsc_m2m_qops = {
+ .queue_setup = gsc_m2m_queue_setup,
+ .buf_prepare = gsc_m2m_buf_prepare,
+ .buf_queue = gsc_m2m_buf_queue,
+ .wait_prepare = gsc_unlock,
+ .wait_finish = gsc_lock,
+ .stop_streaming = gsc_m2m_stop_streaming,
+};
+
+static int gsc_m2m_querycap(struct file *file, void *fh,
+ struct v4l2_capability *cap)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+ struct gsc_dev *gsc = ctx->gsc_dev;
+
+ strncpy(cap->driver, gsc->pdev->name, sizeof(cap->driver) - 1);
+ strncpy(cap->card, gsc->pdev->name, sizeof(cap->card) - 1);
+ cap->bus_info[0] = 0;
+ cap->capabilities = V4L2_CAP_STREAMING |
+ V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT |
+ V4L2_CAP_VIDEO_CAPTURE_MPLANE | V4L2_CAP_VIDEO_OUTPUT_MPLANE;
+
+ return 0;
+}
+
+static int gsc_m2m_enum_fmt_mplane(struct file *file, void *priv,
+ struct v4l2_fmtdesc *f)
+{
+ return gsc_enum_fmt_mplane(f);
+}
+
+static int gsc_m2m_g_fmt_mplane(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+
+ if ((f->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) &&
+ (f->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE))
+ return -EINVAL;
+
+ return gsc_g_fmt_mplane(ctx, f);
+}
+
+static int gsc_m2m_try_fmt_mplane(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+
+ if ((f->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) &&
+ (f->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE))
+ return -EINVAL;
+
+ return gsc_try_fmt_mplane(ctx, f);
+}
+
+static int gsc_m2m_s_fmt_mplane(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+ struct vb2_queue *vq;
+ struct gsc_frame *frame;
+ struct v4l2_pix_format_mplane *pix;
+ int i, ret = 0;
+
+ ret = gsc_m2m_try_fmt_mplane(file, fh, f);
+ if (ret)
+ return ret;
+
+ vq = v4l2_m2m_get_vq(ctx->m2m_ctx, f->type);
+
+ if (vb2_is_streaming(vq)) {
+ gsc_err("queue (%d) busy", f->type);
+ return -EBUSY;
+ }
+
+ if (V4L2_TYPE_IS_OUTPUT(f->type))
+ frame = &ctx->s_frame;
+ else
+ frame = &ctx->d_frame;
+
+
+ pix = &f->fmt.pix_mp;
+ frame->fmt = find_fmt(&pix->pixelformat, NULL, 0);
+ if (!frame->fmt)
+ return -EINVAL;
+
+ for (i = 0; i < frame->fmt->num_planes; i++)
+ frame->payload[i] = pix->plane_fmt[i].sizeimage;
+
+ gsc_set_frame_size(frame, pix->width, pix->height);
+
+ if (f->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)
+ gsc_ctx_state_lock_set(GSC_PARAMS | GSC_DST_FMT, ctx);
+ else
+ gsc_ctx_state_lock_set(GSC_PARAMS | GSC_SRC_FMT, ctx);
+
+ gsc_dbg("f_w: %d, f_h: %d", frame->f_width, frame->f_height);
+
+ return 0;
+}
+
+static int gsc_m2m_reqbufs(struct file *file, void *fh,
+ struct v4l2_requestbuffers *reqbufs)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+ struct gsc_dev *gsc = ctx->gsc_dev;
+ struct gsc_frame *frame;
+ u32 max_cnt;
+
+ max_cnt = (reqbufs->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) ?
+ gsc->variant->in_buf_cnt : gsc->variant->out_buf_cnt;
+ if (reqbufs->count > max_cnt)
+ return -EINVAL;
+ else if (reqbufs->count == 0) {
+ if (reqbufs->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE)
+ gsc_ctx_state_lock_clear(GSC_SRC_FMT, ctx);
+ else
+ gsc_ctx_state_lock_clear(GSC_DST_FMT, ctx);
+ }
+
+	frame = ctx_get_frame(ctx, reqbufs->type);
+	if (IS_ERR(frame))
+		return PTR_ERR(frame);
+
+ return v4l2_m2m_reqbufs(file, ctx->m2m_ctx, reqbufs);
+}
+
+static int gsc_m2m_expbuf(struct file *file, void *fh,
+ struct v4l2_exportbuffer *eb)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+ return v4l2_m2m_expbuf(file, ctx->m2m_ctx, eb);
+}
+
+static int gsc_m2m_querybuf(struct file *file, void *fh, struct v4l2_buffer *buf)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+ return v4l2_m2m_querybuf(file, ctx->m2m_ctx, buf);
+}
+
+static int gsc_m2m_qbuf(struct file *file, void *fh,
+ struct v4l2_buffer *buf)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+ return v4l2_m2m_qbuf(file, ctx->m2m_ctx, buf);
+}
+
+static int gsc_m2m_dqbuf(struct file *file, void *fh,
+ struct v4l2_buffer *buf)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+ return v4l2_m2m_dqbuf(file, ctx->m2m_ctx, buf);
+}
+
+static int gsc_m2m_streamon(struct file *file, void *fh,
+ enum v4l2_buf_type type)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+
+ /* The source and target color format need to be set */
+ if (V4L2_TYPE_IS_OUTPUT(type)) {
+ if (!gsc_ctx_state_is_set(GSC_SRC_FMT, ctx))
+ return -EINVAL;
+ } else if (!gsc_ctx_state_is_set(GSC_DST_FMT, ctx)) {
+ return -EINVAL;
+ }
+
+ return v4l2_m2m_streamon(file, ctx->m2m_ctx, type);
+}
+
+static int gsc_m2m_streamoff(struct file *file, void *fh,
+ enum v4l2_buf_type type)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+ return v4l2_m2m_streamoff(file, ctx->m2m_ctx, type);
+}
+
+static int gsc_m2m_cropcap(struct file *file, void *fh,
+ struct v4l2_cropcap *cr)
+{
+ struct gsc_frame *frame;
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+
+ frame = ctx_get_frame(ctx, cr->type);
+ if (IS_ERR(frame))
+ return PTR_ERR(frame);
+
+ cr->bounds.left = 0;
+ cr->bounds.top = 0;
+ cr->bounds.width = frame->f_width;
+ cr->bounds.height = frame->f_height;
+ cr->defrect = cr->bounds;
+
+ return 0;
+}
+
+static int gsc_m2m_g_crop(struct file *file, void *fh, struct v4l2_crop *cr)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+
+ return gsc_g_crop(ctx, cr);
+}
+
+static int gsc_m2m_s_crop(struct file *file, void *fh, struct v4l2_crop *cr)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(fh);
+ struct gsc_variant *variant = ctx->gsc_dev->variant;
+ struct gsc_frame *f;
+ int ret;
+
+ ret = gsc_try_crop(ctx, cr);
+ if (ret)
+ return ret;
+
+ f = (cr->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) ?
+ &ctx->s_frame : &ctx->d_frame;
+
+ /* Check to see if scaling ratio is within supported range */
+ if (gsc_ctx_state_is_set(GSC_DST_FMT | GSC_SRC_FMT, ctx)) {
+ if (cr->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) {
+ ret = gsc_check_scaler_ratio(variant, cr->c.width,
+ cr->c.height, ctx->d_frame.crop.width,
+ ctx->d_frame.crop.height,
+ ctx->gsc_ctrls.rotate->val, ctx->out_path);
+ } else {
+ ret = gsc_check_scaler_ratio(variant, ctx->s_frame.crop.width,
+ ctx->s_frame.crop.height, cr->c.width,
+ cr->c.height, ctx->gsc_ctrls.rotate->val,
+ ctx->out_path);
+ }
+ if (ret) {
+ gsc_err("Out of scaler range");
+ return -EINVAL;
+ }
+ }
+
+ f->crop.left = cr->c.left;
+ f->crop.top = cr->c.top;
+ f->crop.width = cr->c.width;
+ f->crop.height = cr->c.height;
+
+ gsc_ctx_state_lock_set(GSC_PARAMS, ctx);
+
+ return 0;
+}
+
+static const struct v4l2_ioctl_ops gsc_m2m_ioctl_ops = {
+ .vidioc_querycap = gsc_m2m_querycap,
+
+ .vidioc_enum_fmt_vid_cap_mplane = gsc_m2m_enum_fmt_mplane,
+ .vidioc_enum_fmt_vid_out_mplane = gsc_m2m_enum_fmt_mplane,
+
+ .vidioc_g_fmt_vid_cap_mplane = gsc_m2m_g_fmt_mplane,
+ .vidioc_g_fmt_vid_out_mplane = gsc_m2m_g_fmt_mplane,
+
+ .vidioc_try_fmt_vid_cap_mplane = gsc_m2m_try_fmt_mplane,
+ .vidioc_try_fmt_vid_out_mplane = gsc_m2m_try_fmt_mplane,
+
+ .vidioc_s_fmt_vid_cap_mplane = gsc_m2m_s_fmt_mplane,
+ .vidioc_s_fmt_vid_out_mplane = gsc_m2m_s_fmt_mplane,
+
+ .vidioc_reqbufs = gsc_m2m_reqbufs,
+ .vidioc_querybuf = gsc_m2m_querybuf,
+
+ .vidioc_expbuf = gsc_m2m_expbuf,
+
+ .vidioc_qbuf = gsc_m2m_qbuf,
+ .vidioc_dqbuf = gsc_m2m_dqbuf,
+
+ .vidioc_streamon = gsc_m2m_streamon,
+ .vidioc_streamoff = gsc_m2m_streamoff,
+
+ .vidioc_g_crop = gsc_m2m_g_crop,
+ .vidioc_s_crop = gsc_m2m_s_crop,
+	.vidioc_cropcap = gsc_m2m_cropcap,
+};
+
+static int queue_init(void *priv, struct vb2_queue *src_vq,
+ struct vb2_queue *dst_vq)
+{
+ struct gsc_ctx *ctx = priv;
+ int ret;
+
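+	/*
+	 * Both queues use the dma-contig allocator and accept MMAP,
+	 * USERPTR and DMABUF buffers.
+	 */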
+ memset(src_vq, 0, sizeof(*src_vq));
+ src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
+ src_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
+ src_vq->drv_priv = ctx;
+ src_vq->ops = &gsc_m2m_qops;
+ src_vq->mem_ops = &vb2_dma_contig_memops;
+ src_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
+
+ ret = vb2_queue_init(src_vq);
+ if (ret)
+ return ret;
+
+ memset(dst_vq, 0, sizeof(*dst_vq));
+ dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
+ dst_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
+ dst_vq->drv_priv = ctx;
+ dst_vq->ops = &gsc_m2m_qops;
+ dst_vq->mem_ops = &vb2_dma_contig_memops;
+ dst_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
+
+ return vb2_queue_init(dst_vq);
+}
+
+static int gsc_m2m_open(struct file *file)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_ctx *ctx = NULL;
+ int ret;
+
+ gsc_dbg("pid: %d, state: 0x%lx", task_pid_nr(current), gsc->state);
+
+ if (gsc_out_opened(gsc) || gsc_cap_opened(gsc))
+ return -EBUSY;
+
+ ctx = kzalloc(sizeof *ctx, GFP_KERNEL);
+ if (!ctx)
+ return -ENOMEM;
+
+ v4l2_fh_init(&ctx->fh, gsc->m2m.vfd);
+ ret = gsc_ctrls_create(ctx);
+ if (ret)
+ goto error_fh;
+
+ /* Use separate control handler per file handle */
+ ctx->fh.ctrl_handler = &ctx->ctrl_handler;
+ file->private_data = &ctx->fh;
+ v4l2_fh_add(&ctx->fh);
+
+ ctx->gsc_dev = gsc;
+ /* Default color format */
+ ctx->s_frame.fmt = get_format(0);
+ ctx->d_frame.fmt = get_format(0);
+ /* Setup the device context for mem2mem mode. */
+ ctx->state |= GSC_CTX_M2M;
+ ctx->flags = 0;
+ ctx->in_path = GSC_DMA;
+ ctx->out_path = GSC_DMA;
+ spin_lock_init(&ctx->slock);
+
+ ctx->m2m_ctx = v4l2_m2m_ctx_init(gsc->m2m.m2m_dev, ctx, queue_init);
+ if (IS_ERR(ctx->m2m_ctx)) {
+ gsc_err("Failed to initialize m2m context");
+ ret = PTR_ERR(ctx->m2m_ctx);
+ goto error_fh;
+ }
+
+ if (gsc->m2m.refcnt++ == 0)
+ set_bit(ST_M2M_OPEN, &gsc->state);
+
+ gsc_dbg("gsc m2m driver is opened, ctx(0x%p)", ctx);
+ return 0;
+
+error_fh:
+ v4l2_fh_del(&ctx->fh);
+ v4l2_fh_exit(&ctx->fh);
+ kfree(ctx);
+ return ret;
+}
+
+static int gsc_m2m_release(struct file *file)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(file->private_data);
+ struct gsc_dev *gsc = ctx->gsc_dev;
+
+ gsc_dbg("pid: %d, state: 0x%lx, refcnt= %d",
+ task_pid_nr(current), gsc->state, gsc->m2m.refcnt);
+
+ v4l2_m2m_ctx_release(ctx->m2m_ctx);
+ gsc_ctrls_delete(ctx);
+ v4l2_fh_del(&ctx->fh);
+ v4l2_fh_exit(&ctx->fh);
+
+ if (--gsc->m2m.refcnt <= 0)
+ clear_bit(ST_M2M_OPEN, &gsc->state);
+ kfree(ctx);
+ return 0;
+}
+
+static unsigned int gsc_m2m_poll(struct file *file,
+ struct poll_table_struct *wait)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(file->private_data);
+
+ return v4l2_m2m_poll(file, ctx->m2m_ctx, wait);
+}
+
+static int gsc_m2m_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct gsc_ctx *ctx = fh_to_ctx(file->private_data);
+
+ return v4l2_m2m_mmap(file, ctx->m2m_ctx, vma);
+}
+
+static const struct v4l2_file_operations gsc_m2m_fops = {
+ .owner = THIS_MODULE,
+ .open = gsc_m2m_open,
+ .release = gsc_m2m_release,
+ .poll = gsc_m2m_poll,
+ .unlocked_ioctl = video_ioctl2,
+ .mmap = gsc_m2m_mmap,
+};
+
+static struct v4l2_m2m_ops gsc_m2m_ops = {
+ .device_run = gsc_m2m_device_run,
+ .job_abort = gsc_m2m_job_abort,
+};
+
+int gsc_register_m2m_device(struct gsc_dev *gsc)
+{
+ struct video_device *vfd;
+ struct platform_device *pdev;
+ int ret = 0;
+
+ if (!gsc)
+ return -ENODEV;
+
+ pdev = gsc->pdev;
+
+ vfd = video_device_alloc();
+ if (!vfd) {
+ dev_err(&pdev->dev, "Failed to allocate video device\n");
+ return -ENOMEM;
+ }
+
+ vfd->fops = &gsc_m2m_fops;
+ vfd->ioctl_ops = &gsc_m2m_ioctl_ops;
+ vfd->release = video_device_release;
+ vfd->lock = &gsc->lock;
+ snprintf(vfd->name, sizeof(vfd->name), "%s:m2m", dev_name(&pdev->dev));
+
+ video_set_drvdata(vfd, gsc);
+
+ gsc->m2m.vfd = vfd;
+ gsc->m2m.m2m_dev = v4l2_m2m_init(&gsc_m2m_ops);
+ if (IS_ERR(gsc->m2m.m2m_dev)) {
+ dev_err(&pdev->dev, "failed to initialize v4l2-m2m device\n");
+ ret = PTR_ERR(gsc->m2m.m2m_dev);
+ goto err_m2m_r1;
+ }
+
+ ret = video_register_device(vfd, VFL_TYPE_GRABBER, -1);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "%s(): failed to register video device\n", __func__);
+ goto err_m2m_r2;
+ }
+
+ gsc_dbg("gsc m2m driver registered as /dev/video%d", vfd->num);
+
+ return 0;
+
+err_m2m_r2:
+ v4l2_m2m_release(gsc->m2m.m2m_dev);
+err_m2m_r1:
+ video_device_release(gsc->m2m.vfd);
+
+ return ret;
+}
+
+void gsc_unregister_m2m_device(struct gsc_dev *gsc)
+{
+ if (gsc)
+ v4l2_m2m_release(gsc->m2m.m2m_dev);
+}
--- /dev/null
+/* linux/drivers/media/video/exynos/gsc/gsc-output.c
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * Samsung EXYNOS5 SoC series G-scaler driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation, either version 2 of the License,
+ * or (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/version.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/bug.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+#include <linux/device.h>
+#include <linux/platform_device.h>
+#include <linux/list.h>
+#include <linux/io.h>
+#include <linux/slab.h>
+#include <linux/clk.h>
+#include <linux/string.h>
+#include <linux/delay.h>
+#include <media/v4l2-ioctl.h>
+
+#include "gsc-core.h"
+
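+/*
+ * Soft-reset the G-Scaler core along with the display FIFO and the
+ * pixel-async FIFO, then wait until the engine reports that it has
+ * stopped.
+ */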
+int gsc_out_hw_reset_off(struct gsc_dev *gsc)
+{
+ int ret;
+
+ mdelay(1);
+ gsc_hw_set_sw_reset(gsc);
+ ret = gsc_wait_reset(gsc);
+ if (ret < 0) {
+ gsc_err("gscaler s/w reset timeout");
+ return ret;
+ }
+ gsc_pixelasync_sw_reset(gsc);
+ gsc_disp_fifo_sw_reset(gsc);
+ gsc_hw_enable_control(gsc, false);
+ ret = gsc_wait_stop(gsc);
+ if (ret < 0) {
+ gsc_err("gscaler stop timeout");
+ return ret;
+ }
+
+ return 0;
+}
+
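+/*
+ * Program the complete G-Scaler pipeline for output mode: IRQ masks,
+ * input/output paths, sizes and pixel formats, pre/main scaler ratios,
+ * rotation and global alpha.  Called just before streaming is enabled
+ * on the subdev.
+ */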
+int gsc_out_hw_set(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *gsc = ctx->gsc_dev;
+ int ret = 0;
+
+ ret = gsc_set_scaler_info(ctx);
+ if (ret) {
+ gsc_err("Scaler setup error");
+ return ret;
+ }
+
+ gsc_hw_set_frm_done_irq_mask(gsc, false);
+ gsc_hw_set_gsc_irq_enable(gsc, true);
+
+ gsc_hw_set_input_path(ctx);
+ gsc_hw_set_in_size(ctx);
+ gsc_hw_set_in_image_format(ctx);
+
+ gsc_hw_set_output_path(ctx);
+ gsc_hw_set_out_size(ctx);
+ gsc_hw_set_out_image_format(ctx);
+
+ gsc_hw_set_prescaler(ctx);
+ gsc_hw_set_mainscaler(ctx);
+ gsc_hw_set_rotation(ctx);
+ gsc_hw_set_global_alpha(ctx);
+ gsc_hw_set_input_buf_mask_all(gsc);
+
+ return 0;
+}
+
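+/*
+ * Clamp and align a requested crop rectangle against the per-variant
+ * pixel limits.  Width and height are swapped while validating when
+ * the rotator is set to 90 or 270 degrees.
+ */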
+static void gsc_subdev_try_crop(struct gsc_dev *gsc, struct v4l2_rect *cr)
+{
+ struct gsc_variant *variant = gsc->variant;
+ u32 max_w, max_h, min_w, min_h;
+ u32 tmp_w, tmp_h;
+
+ if (gsc->out.ctx->gsc_ctrls.rotate->val == 90 ||
+ gsc->out.ctx->gsc_ctrls.rotate->val == 270) {
+ max_w = variant->pix_max->target_rot_en_w;
+ max_h = variant->pix_max->target_rot_en_h;
+ min_w = variant->pix_min->target_rot_en_w;
+ min_h = variant->pix_min->target_rot_en_h;
+ tmp_w = cr->height;
+ tmp_h = cr->width;
+ } else {
+ max_w = variant->pix_max->target_rot_dis_w;
+ max_h = variant->pix_max->target_rot_dis_h;
+ min_w = variant->pix_min->target_rot_dis_w;
+ min_h = variant->pix_min->target_rot_dis_h;
+ tmp_w = cr->width;
+ tmp_h = cr->height;
+ }
+
+ gsc_dbg("min_w: %d, min_h: %d, max_w: %d, max_h = %d",
+ min_w, min_h, max_w, max_h);
+
+ v4l_bound_align_image(&tmp_w, min_w, max_w, 0,
+ &tmp_h, min_h, max_h, 0, 0);
+
+ if (gsc->out.ctx->gsc_ctrls.rotate->val == 90 ||
+ gsc->out.ctx->gsc_ctrls.rotate->val == 270)
+ gsc_check_crop_change(tmp_h, tmp_w, &cr->width, &cr->height);
+ else
+ gsc_check_crop_change(tmp_w, tmp_h, &cr->width, &cr->height);
+
+ gsc_dbg("Aligned l:%d, t:%d, w:%d, h:%d", cr->left, cr->top,
+ cr->width, cr->height);
+}
+
+static int gsc_subdev_get_fmt(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_format *fmt)
+{
+ struct gsc_dev *gsc = entity_data_to_gsc(v4l2_get_subdevdata(sd));
+ struct gsc_ctx *ctx = gsc->out.ctx;
+ struct v4l2_mbus_framefmt *mf = &fmt->format;
+ struct gsc_frame *f;
+
+ if (fmt->pad == GSC_PAD_SINK) {
+ gsc_err("Sink pad get_fmt is not supported");
+ return 0;
+ }
+
+ if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
+ fmt->format = *v4l2_subdev_get_try_format(fh, fmt->pad);
+ return 0;
+ }
+
+ f = &ctx->d_frame;
+ mf->code = f->fmt->mbus_code;
+ mf->width = f->f_width;
+ mf->height = f->f_height;
+ mf->colorspace = V4L2_COLORSPACE_JPEG;
+
+ return 0;
+}
+
+static int gsc_subdev_set_fmt(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_format *fmt)
+{
+ struct gsc_dev *gsc = entity_data_to_gsc(v4l2_get_subdevdata(sd));
+ struct v4l2_mbus_framefmt *mf;
+ struct gsc_ctx *ctx = gsc->out.ctx;
+ struct gsc_frame *f;
+
+ gsc_dbg("pad%d: code: 0x%x, %dx%d",
+ fmt->pad, fmt->format.code, fmt->format.width, fmt->format.height);
+
+ if (fmt->pad == GSC_PAD_SINK) {
+ gsc_err("Sink pad set_fmt is not supported");
+ return 0;
+ }
+
+ if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
+ mf = v4l2_subdev_get_try_format(fh, fmt->pad);
+ mf->width = fmt->format.width;
+ mf->height = fmt->format.height;
+ mf->code = fmt->format.code;
+ mf->colorspace = V4L2_COLORSPACE_JPEG;
+ } else {
+ f = &ctx->d_frame;
+ gsc_set_frame_size(f, fmt->format.width, fmt->format.height);
+ f->fmt = find_fmt(NULL, &fmt->format.code, 0);
+ ctx->state |= GSC_DST_FMT;
+ }
+
+ return 0;
+}
+
+static int gsc_subdev_get_crop(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_crop *crop)
+{
+ struct gsc_dev *gsc = entity_data_to_gsc(v4l2_get_subdevdata(sd));
+ struct gsc_ctx *ctx = gsc->out.ctx;
+ struct v4l2_rect *r = &crop->rect;
+ struct gsc_frame *f;
+
+ if (crop->pad == GSC_PAD_SINK) {
+ gsc_err("Sink pad get_crop is not supported");
+ return 0;
+ }
+
+ if (crop->which == V4L2_SUBDEV_FORMAT_TRY) {
+ crop->rect = *v4l2_subdev_get_try_crop(fh, crop->pad);
+ return 0;
+ }
+
+ f = &ctx->d_frame;
+ r->left = f->crop.left;
+ r->top = f->crop.top;
+ r->width = f->crop.width;
+ r->height = f->crop.height;
+
+ gsc_dbg("f:%p, pad%d: l:%d, t:%d, %dx%d, f_w: %d, f_h: %d",
+ f, crop->pad, r->left, r->top, r->width, r->height,
+ f->f_width, f->f_height);
+
+ return 0;
+}
+
+static int gsc_subdev_set_crop(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_crop *crop)
+{
+ struct gsc_dev *gsc = entity_data_to_gsc(v4l2_get_subdevdata(sd));
+ struct gsc_ctx *ctx = gsc->out.ctx;
+ struct v4l2_rect *r;
+ struct gsc_frame *f;
+
+ gsc_dbg("(%d,%d)/%dx%d", crop->rect.left, crop->rect.top, crop->rect.width, crop->rect.height);
+
+ if (crop->pad == GSC_PAD_SINK) {
+ gsc_err("Sink pad set_fmt is not supported\n");
+ return 0;
+ }
+
+ if (crop->which == V4L2_SUBDEV_FORMAT_TRY) {
+ r = v4l2_subdev_get_try_crop(fh, crop->pad);
+ r->left = crop->rect.left;
+ r->top = crop->rect.top;
+ r->width = crop->rect.width;
+ r->height = crop->rect.height;
+ } else {
+ f = &ctx->d_frame;
+ f->crop.left = crop->rect.left;
+ f->crop.top = crop->rect.top;
+ f->crop.width = crop->rect.width;
+ f->crop.height = crop->rect.height;
+ }
+
+ gsc_dbg("pad%d: (%d,%d)/%dx%d", crop->pad, crop->rect.left, crop->rect.top,
+ crop->rect.width, crop->rect.height);
+
+ return 0;
+}
+
+static int gsc_subdev_s_stream(struct v4l2_subdev *sd, int enable)
+{
+ struct gsc_dev *gsc = entity_data_to_gsc(v4l2_get_subdevdata(sd));
+ int ret;
+
+ if (enable) {
+ pm_runtime_get_sync(&gsc->pdev->dev);
+ ret = gsc_out_hw_set(gsc->out.ctx);
+ if (ret) {
+			gsc_err("GSC H/W setup failed");
+ return -EINVAL;
+ }
+ } else {
+ INIT_LIST_HEAD(&gsc->out.active_buf_q);
+ clear_bit(ST_OUTPUT_STREAMON, &gsc->state);
+ pm_runtime_put_sync(&gsc->pdev->dev);
+ }
+
+ return 0;
+}
+
+static struct v4l2_subdev_pad_ops gsc_subdev_pad_ops = {
+ .get_fmt = gsc_subdev_get_fmt,
+ .set_fmt = gsc_subdev_set_fmt,
+ .get_crop = gsc_subdev_get_crop,
+ .set_crop = gsc_subdev_set_crop,
+};
+
+static struct v4l2_subdev_video_ops gsc_subdev_video_ops = {
+ .s_stream = gsc_subdev_s_stream,
+};
+
+static struct v4l2_subdev_ops gsc_subdev_ops = {
+ .pad = &gsc_subdev_pad_ops,
+ .video = &gsc_subdev_video_ops,
+};
+
+static int gsc_out_power_off(struct v4l2_subdev *sd)
+{
+ struct gsc_dev *gsc = entity_data_to_gsc(v4l2_get_subdevdata(sd));
+ int ret;
+
+ ret = gsc_out_hw_reset_off(gsc);
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
+
+static struct exynos_media_ops gsc_out_link_callback = {
+ .power_off = gsc_out_power_off,
+};
+
+/*
+ * The video node ioctl operations
+ */
+static int gsc_output_querycap(struct file *file, void *priv,
+ struct v4l2_capability *cap)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+
+ strncpy(cap->driver, gsc->pdev->name, sizeof(cap->driver) - 1);
+ strncpy(cap->card, gsc->pdev->name, sizeof(cap->card) - 1);
+ cap->bus_info[0] = 0;
+ cap->capabilities = V4L2_CAP_STREAMING |
+ V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_VIDEO_OUTPUT_MPLANE;
+
+ return 0;
+}
+
+static int gsc_output_enum_fmt_mplane(struct file *file, void *priv,
+ struct v4l2_fmtdesc *f)
+{
+ return gsc_enum_fmt_mplane(f);
+}
+
+static int gsc_output_try_fmt_mplane(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+
+ if (!is_output(f->type)) {
+ gsc_err("Not supported buffer type");
+ return -EINVAL;
+ }
+
+ return gsc_try_fmt_mplane(gsc->out.ctx, f);
+}
+
+static int gsc_output_s_fmt_mplane(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_ctx *ctx = gsc->out.ctx;
+ struct gsc_frame *frame;
+ struct v4l2_pix_format_mplane *pix;
+ int i, ret = 0;
+
+ ret = gsc_output_try_fmt_mplane(file, fh, f);
+ if (ret) {
+ gsc_err("Invalid argument");
+ return ret;
+ }
+
+ if (vb2_is_streaming(&gsc->out.vbq)) {
+ gsc_err("queue (%d) busy", f->type);
+ return -EBUSY;
+ }
+
+ frame = &ctx->s_frame;
+
+ pix = &f->fmt.pix_mp;
+ frame->fmt = find_fmt(&pix->pixelformat, NULL, 0);
+ if (!frame->fmt) {
+ gsc_err("Not supported pixel format");
+ return -EINVAL;
+ }
+
+ for (i = 0; i < frame->fmt->num_planes; i++)
+ frame->payload[i] = pix->plane_fmt[i].sizeimage;
+
+ gsc_set_frame_size(frame, pix->width, pix->height);
+
+ ctx->state |= GSC_SRC_FMT;
+
+ gsc_dbg("f_w: %d, f_h: %d", frame->f_width, frame->f_height);
+
+ return 0;
+}
+
+static int gsc_output_g_fmt_mplane(struct file *file, void *fh,
+ struct v4l2_format *f)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_ctx *ctx = gsc->out.ctx;
+
+ if (!is_output(f->type)) {
+ gsc_err("Not supported buffer type");
+ return -EINVAL;
+ }
+
+ return gsc_g_fmt_mplane(ctx, f);
+}
+
+static int gsc_output_reqbufs(struct file *file, void *priv,
+ struct v4l2_requestbuffers *reqbufs)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_output_device *out = &gsc->out;
+ struct gsc_frame *frame;
+ int ret;
+
+	if (reqbufs->count > gsc->variant->in_buf_cnt) {
+		gsc_err("Requested count exceeds maximum number of input buffers");
+		return -EINVAL;
+	} else if (reqbufs->count == 0) {
+		gsc_ctx_state_lock_clear(GSC_SRC_FMT | GSC_DST_FMT,
+					 out->ctx);
+	}
+
+	frame = ctx_get_frame(out->ctx, reqbufs->type);
+	if (IS_ERR(frame))
+		return PTR_ERR(frame);
+
+ ret = vb2_reqbufs(&out->vbq, reqbufs);
+ if (ret)
+ return ret;
+ out->req_cnt = reqbufs->count;
+
+ return ret;
+}
+
+static int gsc_output_querybuf(struct file *file, void *priv,
+ struct v4l2_buffer *buf)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_output_device *out = &gsc->out;
+
+ return vb2_querybuf(&out->vbq, buf);
+}
+
+static int gsc_output_streamon(struct file *file, void *priv,
+ enum v4l2_buf_type type)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_output_device *out = &gsc->out;
+ struct media_pad *sink_pad;
+ int ret;
+
+	sink_pad = media_entity_remote_source(&out->sd_pads[GSC_PAD_SOURCE]);
+	if (sink_pad == NULL) {
+		gsc_err("No sink pad connected to the G-Scaler source pad");
+		return -EPIPE;
+	}
+
+ ret = gsc_out_link_validate(&out->sd_pads[GSC_PAD_SOURCE], sink_pad);
+ if (ret) {
+		gsc_err("Output link validation failed");
+ return ret;
+ }
+
+ media_entity_pipeline_start(&out->vfd->entity, gsc->pipeline.pipe);
+
+ return vb2_streamon(&gsc->out.vbq, type);
+}
+
+static int gsc_output_streamoff(struct file *file, void *priv,
+ enum v4l2_buf_type type)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+
+ return vb2_streamoff(&gsc->out.vbq, type);
+}
+
+static int gsc_output_qbuf(struct file *file, void *priv,
+ struct v4l2_buffer *buf)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_output_device *out = &gsc->out;
+
+ return vb2_qbuf(&out->vbq, buf);
+}
+
+static int gsc_output_dqbuf(struct file *file, void *priv,
+ struct v4l2_buffer *buf)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+
+ return vb2_dqbuf(&gsc->out.vbq, buf,
+ file->f_flags & O_NONBLOCK);
+}
+
+static int gsc_output_cropcap(struct file *file, void *fh,
+ struct v4l2_cropcap *cr)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_ctx *ctx = gsc->out.ctx;
+
+ if (!is_output(cr->type)) {
+ gsc_err("Not supported buffer type");
+ return -EINVAL;
+ }
+
+ cr->bounds.left = 0;
+ cr->bounds.top = 0;
+ cr->bounds.width = ctx->s_frame.f_width;
+ cr->bounds.height = ctx->s_frame.f_height;
+ cr->defrect = cr->bounds;
+
+	return 0;
+}
+
+static int gsc_output_g_crop(struct file *file, void *fh,
+ struct v4l2_crop *cr)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+
+ if (!is_output(cr->type)) {
+ gsc_err("Not supported buffer type");
+ return -EINVAL;
+ }
+
+ return gsc_g_crop(gsc->out.ctx, cr);
+}
+
+static int gsc_output_s_crop(struct file *file, void *fh, struct v4l2_crop *cr)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ struct gsc_ctx *ctx = gsc->out.ctx;
+ struct gsc_variant *variant = gsc->variant;
+ struct gsc_frame *f;
+ unsigned int mask = GSC_DST_FMT | GSC_SRC_FMT;
+ int ret;
+
+ if (!is_output(cr->type)) {
+ gsc_err("Not supported buffer type");
+ return -EINVAL;
+ }
+
+ ret = gsc_try_crop(ctx, cr);
+ if (ret)
+ return ret;
+
+ f = &ctx->s_frame;
+
+ /* Check to see if scaling ratio is within supported range */
+ if ((ctx->state & (GSC_DST_FMT | GSC_SRC_FMT)) == mask) {
+ ret = gsc_check_scaler_ratio(variant, f->crop.width,
+ f->crop.height, ctx->d_frame.crop.width,
+ ctx->d_frame.crop.height,
+ ctx->gsc_ctrls.rotate->val, ctx->out_path);
+ if (ret) {
+ gsc_err("Out of scaler range");
+ return -EINVAL;
+ }
+ gsc_subdev_try_crop(gsc, &ctx->d_frame.crop);
+ }
+
+ f->crop.left = cr->c.left;
+ f->crop.top = cr->c.top;
+ f->crop.width = cr->c.width;
+ f->crop.height = cr->c.height;
+
+ return 0;
+}
+
+static const struct v4l2_ioctl_ops gsc_output_ioctl_ops = {
+ .vidioc_querycap = gsc_output_querycap,
+ .vidioc_enum_fmt_vid_out_mplane = gsc_output_enum_fmt_mplane,
+
+ .vidioc_try_fmt_vid_out_mplane = gsc_output_try_fmt_mplane,
+ .vidioc_s_fmt_vid_out_mplane = gsc_output_s_fmt_mplane,
+ .vidioc_g_fmt_vid_out_mplane = gsc_output_g_fmt_mplane,
+
+ .vidioc_reqbufs = gsc_output_reqbufs,
+ .vidioc_querybuf = gsc_output_querybuf,
+
+ .vidioc_qbuf = gsc_output_qbuf,
+ .vidioc_dqbuf = gsc_output_dqbuf,
+
+ .vidioc_streamon = gsc_output_streamon,
+ .vidioc_streamoff = gsc_output_streamoff,
+
+ .vidioc_g_crop = gsc_output_g_crop,
+ .vidioc_s_crop = gsc_output_s_crop,
+ .vidioc_cropcap = gsc_output_cropcap,
+};
+
+static int gsc_out_video_s_stream(struct gsc_dev *gsc, int enable)
+{
+ struct gsc_output_device *out = &gsc->out;
+ struct media_pad *sink_pad;
+ struct v4l2_subdev *sd;
+ int ret = 0;
+
+	sink_pad = media_entity_remote_source(&out->vd_pad);
+	if (sink_pad == NULL) {
+		gsc_err("No sink pad connected to the G-Scaler video source pad");
+		return -EPIPE;
+	}
+ sd = media_entity_to_v4l2_subdev(sink_pad->entity);
+ ret = v4l2_subdev_call(sd, video, s_stream, enable);
+ if (ret)
+ gsc_err("G-Scaler subdev s_stream[%d] failed", enable);
+
+ return ret;
+}
+
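+/*
+ * vb2 start/stop_streaming hooks.  Starting propagates s_stream to the
+ * subdev connected to the video node's source pad; stopping also winds
+ * down the display pipeline and masks all input buffer slots.
+ */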
+static int gsc_out_start_streaming(struct vb2_queue *q, unsigned int count)
+{
+ struct gsc_ctx *ctx = q->drv_priv;
+ struct gsc_dev *gsc = ctx->gsc_dev;
+
+ return gsc_out_video_s_stream(gsc, 1);
+}
+
+static int gsc_out_stop_streaming(struct vb2_queue *q)
+{
+ struct gsc_ctx *ctx = q->drv_priv;
+ struct gsc_dev *gsc = ctx->gsc_dev;
+ int ret = 0;
+
+ ret = gsc_pipeline_s_stream(gsc, false);
+ if (ret)
+ return ret;
+
+ if (ctx->out_path == GSC_FIMD) {
+ gsc_hw_enable_control(gsc, false);
+ ret = gsc_wait_stop(gsc);
+ if (ret < 0)
+ return ret;
+ }
+ gsc_hw_set_input_buf_mask_all(gsc);
+
+ /* TODO: Add gscaler clock off function */
+ ret = gsc_out_video_s_stream(gsc, 0);
+ if (ret) {
+ gsc_err("G-Scaler video s_stream off failed");
+ return ret;
+ }
+ media_entity_pipeline_stop(&gsc->out.vfd->entity);
+
+ return ret;
+}
+
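+/*
+ * vb2 queue_setup hook: report the plane count and per-plane sizes for
+ * the currently configured source format and hand vb2 the device's DMA
+ * allocation context for every plane.
+ */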
+static int gsc_out_queue_setup(struct vb2_queue *vq, const struct v4l2_format *fmt,
+ unsigned int *num_buffers, unsigned int *num_planes,
+ unsigned int sizes[], void *allocators[])
+{
+ struct gsc_ctx *ctx = vq->drv_priv;
+ struct gsc_fmt *ffmt = ctx->s_frame.fmt;
+ int i;
+
+	if (ffmt == NULL) {
+		gsc_err("Invalid source format");
+		return -EINVAL;
+	}
+
+ *num_planes = ffmt->num_planes;
+
+ for (i = 0; i < ffmt->num_planes; i++) {
+ sizes[i] = get_plane_size(&ctx->s_frame, i);
+ allocators[i] = ctx->gsc_dev->alloc_ctx;
+ }
+
+ return 0;
+}
+
+static int gsc_out_buffer_prepare(struct vb2_buffer *vb)
+{
+ struct vb2_queue *vq = vb->vb2_queue;
+ struct gsc_ctx *ctx = vq->drv_priv;
+
+ if (!ctx->s_frame.fmt || !is_output(vq->type)) {
+ gsc_err("Invalid argument");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int gsc_out_set_in_addr(struct gsc_dev *gsc, struct gsc_ctx *ctx,
+ struct gsc_input_buf *buf, int index)
+{
+ int ret;
+
+ ret = gsc_prepare_addr(ctx, &buf->vb, &ctx->s_frame, &ctx->s_frame.addr);
+ if (ret) {
+		gsc_err("Failed to prepare G-Scaler address");
+ return -EINVAL;
+ }
+ gsc_hw_set_input_addr(gsc, &ctx->s_frame.addr, index);
+ active_queue_push(&gsc->out, buf, gsc);
+ buf->idx = index;
+
+ return 0;
+}
+
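+/*
+ * vb2 buf_queue hook: program the buffer's DMA addresses into the
+ * hardware ping-pong slot matching the buffer index, unmask that slot,
+ * and kick the pipeline when the first buffer arrives.
+ */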
+static void gsc_out_buffer_queue(struct vb2_buffer *vb)
+{
+ struct gsc_input_buf *buf
+ = container_of(vb, struct gsc_input_buf, vb);
+ struct vb2_queue *q = vb->vb2_queue;
+ struct gsc_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+ struct gsc_dev *gsc = ctx->gsc_dev;
+ int ret;
+
+ if (gsc->out.req_cnt >= atomic_read(&q->queued_count)) {
+ ret = gsc_out_set_in_addr(gsc, ctx, buf, vb->v4l2_buf.index);
+ if (ret) {
+ gsc_err("Failed to prepare G-Scaler address");
+ return;
+ }
+ gsc_hw_set_input_buf_masking(gsc, vb->v4l2_buf.index, false);
+ } else {
+ gsc_err("All requested buffers have been queued already");
+ return;
+ }
+
+ if (!test_and_set_bit(ST_OUTPUT_STREAMON, &gsc->state)) {
+ gsc_disp_fifo_sw_reset(gsc);
+ gsc_pixelasync_sw_reset(gsc);
+ gsc_hw_enable_control(gsc, true);
+ ret = gsc_wait_operating(gsc);
+ if (ret < 0) {
+ gsc_err("wait operation timeout");
+ return;
+ }
+ gsc_pipeline_s_stream(gsc, true);
+ }
+}
+
+static struct vb2_ops gsc_output_qops = {
+ .queue_setup = gsc_out_queue_setup,
+ .buf_prepare = gsc_out_buffer_prepare,
+ .buf_queue = gsc_out_buffer_queue,
+ .wait_prepare = gsc_unlock,
+ .wait_finish = gsc_lock,
+ .start_streaming = gsc_out_start_streaming,
+ .stop_streaming = gsc_out_stop_streaming,
+};
+
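+/*
+ * Media link setup callback for the subdev source pad: record which
+ * display entity (FIMD or Mixer) the link was enabled towards and
+ * route the G-Scaler local output path accordingly.
+ */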
+static int gsc_out_link_setup(struct media_entity *entity,
+ const struct media_pad *local,
+ const struct media_pad *remote, u32 flags)
+{
+ if (media_entity_type(entity) != MEDIA_ENT_T_V4L2_SUBDEV)
+ return 0;
+
+ if (local->flags == MEDIA_PAD_FL_SOURCE) {
+ struct gsc_dev *gsc = entity_to_gsc(entity);
+ struct v4l2_subdev *sd;
+ if (flags & MEDIA_LNK_FL_ENABLED) {
+ if (gsc->pipeline.disp == NULL) {
+				/* Gscaler 0 --> Window 0, Gscaler 1 --> Window 1,
+				 * Gscaler 2 --> Window 2, Gscaler 3 --> Window 2 */
+				char name[FIMD_NAME_SIZE];
+				snprintf(name, sizeof(name), "%s%d",
+					 FIMD_ENTITY_NAME, get_win_num(gsc));
+ gsc_hw_set_local_dst(gsc->id, true);
+ sd = media_entity_to_v4l2_subdev(remote->entity);
+ gsc->pipeline.disp = sd;
+ if (!strcmp(sd->name, name))
+ gsc->out.ctx->out_path = GSC_FIMD;
+ else
+ gsc->out.ctx->out_path = GSC_MIXER;
+			} else {
+				gsc_err("G-Scaler source pad was linked already");
+			}
+ } else if (!(flags & ~MEDIA_LNK_FL_ENABLED)) {
+ if (gsc->pipeline.disp != NULL) {
+ gsc_hw_set_local_dst(gsc->id, false);
+ gsc->pipeline.disp = NULL;
+ gsc->out.ctx->out_path = 0;
+			} else {
+				gsc_err("G-Scaler source pad was unlinked already");
+			}
+ }
+ }
+
+ return 0;
+}
+
+static const struct media_entity_operations gsc_out_media_ops = {
+ .link_setup = gsc_out_link_setup,
+};
+
+int gsc_output_ctrls_create(struct gsc_dev *gsc)
+{
+ int ret;
+
+ ret = gsc_ctrls_create(gsc->out.ctx);
+ if (ret) {
+ gsc_err("Failed to create controls of G-Scaler");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int gsc_output_open(struct file *file)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+ int ret = v4l2_fh_open(file);
+
+ if (ret)
+ return ret;
+
+ gsc_dbg("pid: %d, state: 0x%lx", task_pid_nr(current), gsc->state);
+
+	/*
+	 * Return if the corresponding mem2mem/output/capture video node
+	 * is already opened.
+	 */
+ if (gsc_m2m_opened(gsc) || gsc_cap_opened(gsc) || gsc_out_opened(gsc)) {
+ gsc_err("G-Scaler%d has been opened already", gsc->id);
+ return -EBUSY;
+ }
+
+ if (WARN_ON(gsc->out.ctx == NULL)) {
+ gsc_err("G-Scaler output context is NULL");
+ return -ENXIO;
+ }
+
+ set_bit(ST_OUTPUT_OPEN, &gsc->state);
+
+ ret = gsc_ctrls_create(gsc->out.ctx);
+ if (ret < 0) {
+ v4l2_fh_release(file);
+ clear_bit(ST_OUTPUT_OPEN, &gsc->state);
+ return ret;
+ }
+
+ return ret;
+}
+
+static int gsc_output_close(struct file *file)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+
+ gsc_dbg("pid: %d, state: 0x%lx", task_pid_nr(current), gsc->state);
+
+ clear_bit(ST_OUTPUT_OPEN, &gsc->state);
+ vb2_queue_release(&gsc->out.vbq);
+ gsc_ctrls_delete(gsc->out.ctx);
+ v4l2_fh_release(file);
+
+ return 0;
+}
+
+static unsigned int gsc_output_poll(struct file *file,
+ struct poll_table_struct *wait)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+
+ return vb2_poll(&gsc->out.vbq, file, wait);
+}
+
+static int gsc_output_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct gsc_dev *gsc = video_drvdata(file);
+
+ return vb2_mmap(&gsc->out.vbq, vma);
+}
+
+static const struct v4l2_file_operations gsc_output_fops = {
+ .owner = THIS_MODULE,
+ .open = gsc_output_open,
+ .release = gsc_output_close,
+ .poll = gsc_output_poll,
+ .unlocked_ioctl = video_ioctl2,
+ .mmap = gsc_output_mmap,
+};
+
+static int gsc_create_link(struct gsc_dev *gsc)
+{
+ struct media_entity *source, *sink;
+ int ret;
+
+ source = &gsc->out.vfd->entity;
+ sink = &gsc->out.sd->entity;
+ ret = media_entity_create_link(source, 0, sink, GSC_PAD_SINK,
+ MEDIA_LNK_FL_IMMUTABLE |
+ MEDIA_LNK_FL_ENABLED);
+ if (ret) {
+ gsc_err("Failed to create link between G-Scaler vfd and subdev");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int gsc_create_subdev(struct gsc_dev *gsc)
+{
+ struct v4l2_subdev *sd;
+ int ret;
+
+ sd = kzalloc(sizeof(*sd), GFP_KERNEL);
+ if (!sd)
+ return -ENOMEM;
+
+ v4l2_subdev_init(sd, &gsc_subdev_ops);
+ sd->flags = V4L2_SUBDEV_FL_HAS_DEVNODE;
+ snprintf(sd->name, sizeof(sd->name), "%s.%d", GSC_SUBDEV_NAME, gsc->id);
+
+ gsc->out.sd_pads[GSC_PAD_SINK].flags = MEDIA_PAD_FL_SINK;
+ gsc->out.sd_pads[GSC_PAD_SOURCE].flags = MEDIA_PAD_FL_SOURCE;
+ ret = media_entity_init(&sd->entity, GSC_PADS_NUM,
+ gsc->out.sd_pads, 0);
+ if (ret) {
+ gsc_err("Failed to initialize the G-Scaler media entity");
+ goto error;
+ }
+
+ sd->entity.ops = &gsc_out_media_ops;
+ ret = v4l2_device_register_subdev(&gsc->mdev[MDEV_OUTPUT]->v4l2_dev, sd);
+ if (ret) {
+ media_entity_cleanup(&sd->entity);
+ goto error;
+ }
+ gsc->mdev[MDEV_OUTPUT]->gsc_sd[gsc->id] = sd;
+	gsc_dbg("gsc_sd[%d] = %p", gsc->id,
+		gsc->mdev[MDEV_OUTPUT]->gsc_sd[gsc->id]);
+ gsc->out.sd = sd;
+ gsc->md_data.media_ops = &gsc_out_link_callback;
+ v4l2_set_subdevdata(sd, &gsc->md_data);
+
+ return 0;
+error:
+ kfree(sd);
+ return ret;
+}
+
+int gsc_register_output_device(struct gsc_dev *gsc)
+{
+ struct video_device *vfd;
+ struct gsc_output_device *gsc_out;
+ struct gsc_ctx *ctx;
+ struct vb2_queue *q;
+ int ret = -ENOMEM;
+
+ ctx = kzalloc(sizeof *ctx, GFP_KERNEL);
+ if (!ctx)
+ return -ENOMEM;
+
+ ctx->gsc_dev = gsc;
+ ctx->s_frame.fmt = get_format(GSC_OUT_DEF_SRC);
+ ctx->d_frame.fmt = get_format(GSC_OUT_DEF_DST);
+ ctx->in_path = GSC_DMA;
+ ctx->state = GSC_CTX_OUTPUT;
+
+ vfd = video_device_alloc();
+ if (!vfd) {
+ gsc_err("Failed to allocate video device");
+ goto err_ctx_alloc;
+ }
+
+ snprintf(vfd->name, sizeof(vfd->name), "%s.output",
+ dev_name(&gsc->pdev->dev));
+
+ vfd->fops = &gsc_output_fops;
+ vfd->ioctl_ops = &gsc_output_ioctl_ops;
+ vfd->v4l2_dev = &gsc->mdev[MDEV_OUTPUT]->v4l2_dev;
+ vfd->release = video_device_release;
+ vfd->lock = &gsc->lock;
+ vfd->minor = -1;
+ video_set_drvdata(vfd, gsc);
+
+ gsc_out = &gsc->out;
+ gsc_out->vfd = vfd;
+
+ INIT_LIST_HEAD(&gsc_out->active_buf_q);
+ spin_lock_init(&ctx->slock);
+ gsc_out->ctx = ctx;
+
+ q = &gsc->out.vbq;
+ memset(q, 0, sizeof(*q));
+ q->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
+ q->io_modes = VB2_MMAP | VB2_USERPTR;
+ q->drv_priv = gsc->out.ctx;
+ q->ops = &gsc_output_qops;
+ q->mem_ops = &vb2_dma_contig_memops;
+ q->buf_struct_size = sizeof(struct gsc_input_buf);
+
+	ret = vb2_queue_init(q);
+	if (ret)
+		goto err_ent;
+
+ ret = video_register_device(vfd, VFL_TYPE_GRABBER, -1);
+ if (ret) {
+ gsc_err("Failed to register video device");
+ goto err_ent;
+ }
+
+ gsc->out.vd_pad.flags = MEDIA_PAD_FL_SOURCE;
+ ret = media_entity_init(&vfd->entity, 1, &gsc->out.vd_pad, 0);
+ if (ret)
+ goto err_ent;
+
+ ret = gsc_create_subdev(gsc);
+ if (ret)
+ goto err_sd_reg;
+
+ ret = gsc_create_link(gsc);
+ if (ret)
+ goto err_sd_reg;
+
+ vfd->ctrl_handler = &ctx->ctrl_handler;
+	gsc_dbg("gsc output driver registered as /dev/video%d, ctx(%p)",
+		vfd->num, ctx);
+ return 0;
+
+err_sd_reg:
+ media_entity_cleanup(&vfd->entity);
+err_ent:
+ video_device_release(vfd);
+err_ctx_alloc:
+ kfree(ctx);
+ return ret;
+}
+
+static void gsc_destroy_subdev(struct gsc_dev *gsc)
+{
+ struct v4l2_subdev *sd = gsc->out.sd;
+
+ if (!sd)
+ return;
+ media_entity_cleanup(&sd->entity);
+ v4l2_device_unregister_subdev(sd);
+ kfree(sd);
+	gsc->out.sd = NULL;
+}
+
+void gsc_unregister_output_device(struct gsc_dev *gsc)
+{
+ struct video_device *vfd = gsc->out.vfd;
+
+ if (vfd) {
+ media_entity_cleanup(&vfd->entity);
+		/*
+		 * Can also be called if the video device was
+		 * not registered.
+		 */
+ video_unregister_device(vfd);
+ }
+ gsc_destroy_subdev(gsc);
+ kfree(gsc->out.ctx);
+ gsc->out.ctx = NULL;
+}
--- /dev/null
+/* linux/drivers/media/video/exynos/gsc/gsc-regs.c
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * Samsung EXYNOS5 SoC series G-scaler driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation, either version 2 of the License,
+ * or (at your option) any later version.
+ */
+
+#include <linux/io.h>
+#include <linux/delay.h>
+#include <mach/map.h>
+#include "gsc-core.h"
+
+void gsc_hw_set_sw_reset(struct gsc_dev *dev)
+{
+ u32 cfg = 0;
+
+ cfg |= GSC_SW_RESET_SRESET;
+ writel(cfg, dev->regs + GSC_SW_RESET);
+}
+
+void gsc_disp_fifo_sw_reset(struct gsc_dev *dev)
+{
+ u32 cfg = readl(SYSREG_DISP1BLK_CFG);
+	/*
+	 * DISPBLK1 FIFO S/W reset sequence:
+	 * clear FIFORST_DISP1, then set it again.
+	 */
+ cfg &= ~FIFORST_DISP1;
+ writel(cfg, SYSREG_DISP1BLK_CFG);
+ cfg |= FIFORST_DISP1;
+ writel(cfg, SYSREG_DISP1BLK_CFG);
+}
+
+void gsc_pixelasync_sw_reset(struct gsc_dev *dev)
+{
+ u32 cfg = readl(SYSREG_GSCBLK_CFG0);
+	/*
+	 * GSCBLK pixel-async FIFO S/W reset sequence:
+	 * clear PXLASYNC_SW_RESET, then set it again.
+	 */
+ cfg &= ~GSC_PXLASYNC_RST(dev->id);
+ writel(cfg, SYSREG_GSCBLK_CFG0);
+ cfg |= GSC_PXLASYNC_RST(dev->id);
+ writel(cfg, SYSREG_GSCBLK_CFG0);
+}
+
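+/*
+ * The gsc_wait_*() helpers below poll a status register until the
+ * expected state is reached, sleeping briefly between reads and giving
+ * up after roughly 50 ms.
+ */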
+int gsc_wait_reset(struct gsc_dev *dev)
+{
+	unsigned long timeo = jiffies + msecs_to_jiffies(50);
+ u32 cfg;
+
+ while (time_before(jiffies, timeo)) {
+ cfg = readl(dev->regs + GSC_SW_RESET);
+ if (!cfg)
+ return 0;
+ usleep_range(10, 20);
+ }
+	gsc_dbg("wait for reset timed out");
+
+ return -EBUSY;
+}
+
+int gsc_wait_operating(struct gsc_dev *dev)
+{
+	unsigned long timeo = jiffies + msecs_to_jiffies(50);
+ u32 cfg;
+
+ while (time_before(jiffies, timeo)) {
+ cfg = readl(dev->regs + GSC_ENABLE);
+ if ((cfg & GSC_ENABLE_OP_STATUS) == GSC_ENABLE_OP_STATUS)
+ return 0;
+ usleep_range(10, 20);
+ }
+	gsc_dbg("wait for operating state timed out");
+
+ return -EBUSY;
+}
+
+int gsc_wait_stop(struct gsc_dev *dev)
+{
+	unsigned long timeo = jiffies + msecs_to_jiffies(50);
+ u32 cfg;
+
+ while (time_before(jiffies, timeo)) {
+ cfg = readl(dev->regs + GSC_ENABLE);
+ if (!(cfg & GSC_ENABLE_OP_STATUS))
+ return 0;
+ usleep_range(10, 20);
+ }
+	gsc_dbg("wait for stop timed out");
+
+ return -EBUSY;
+}
+
+void gsc_hw_set_one_frm_mode(struct gsc_dev *dev, bool mask)
+{
+ u32 cfg;
+
+ cfg = readl(dev->regs + GSC_ENABLE);
+ if (mask)
+ cfg |= GSC_ENABLE_ON_CLEAR;
+ else
+ cfg &= ~GSC_ENABLE_ON_CLEAR;
+ writel(cfg, dev->regs + GSC_ENABLE);
+}
+
+int gsc_hw_get_input_buf_mask_status(struct gsc_dev *dev)
+{
+ u32 cfg, status, bits = 0;
+
+ cfg = readl(dev->regs + GSC_IN_BASE_ADDR_Y_MASK);
+ status = cfg & GSC_IN_BASE_ADDR_MASK;
+ while (status) {
+ status = status & (status - 1);
+ bits++;
+ }
+ return bits;
+}
+
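+/* Find the index of the most recently completed input buffer slot. */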
+int gsc_hw_get_done_input_buf_index(struct gsc_dev *dev)
+{
+ u32 cfg, curr_index, i;
+
+ cfg = readl(dev->regs + GSC_IN_BASE_ADDR_Y_MASK);
+ curr_index = GSC_IN_CURR_GET_INDEX(cfg);
+ for (i = curr_index; i > 1; i--) {
+ if (cfg ^ (1 << (i - 2)))
+ return i - 2;
+ }
+
+ for (i = dev->variant->in_buf_cnt; i > curr_index; i--) {
+ if (cfg ^ (1 << (i - 1)))
+ return i - 1;
+ }
+
+ return curr_index - 1;
+}
+
+int gsc_hw_get_done_output_buf_index(struct gsc_dev *dev)
+{
+ u32 cfg, curr_index, done_buf_index;
+ unsigned long state_mask;
+ u32 reqbufs_cnt = dev->cap.reqbufs_cnt;
+
+ cfg = readl(dev->regs + GSC_OUT_BASE_ADDR_Y_MASK);
+ curr_index = GSC_OUT_CURR_GET_INDEX(cfg);
+ gsc_dbg("curr_index : %d", curr_index);
+ state_mask = cfg & GSC_OUT_BASE_ADDR_MASK;
+
+ done_buf_index = (curr_index == 0) ? reqbufs_cnt - 1 : curr_index - 1;
+
+ do {
+ /* Test done_buf_index whether masking or not */
+ if (test_bit(done_buf_index, &state_mask))
+ done_buf_index = (done_buf_index == 0) ?
+ reqbufs_cnt - 1 : done_buf_index - 1;
+ else
+ return done_buf_index;
+ } while (done_buf_index != curr_index);
+
+ return -EBUSY;
+}
+
+void gsc_hw_set_frm_done_irq_mask(struct gsc_dev *dev, bool mask)
+{
+ u32 cfg;
+
+ cfg = readl(dev->regs + GSC_IRQ);
+ if (mask)
+ cfg |= GSC_IRQ_FRMDONE_MASK;
+ else
+ cfg &= ~GSC_IRQ_FRMDONE_MASK;
+ writel(cfg, dev->regs + GSC_IRQ);
+}
+
+void gsc_hw_set_overflow_irq_mask(struct gsc_dev *dev, bool mask)
+{
+ u32 cfg;
+
+ cfg = readl(dev->regs + GSC_IRQ);
+ if (mask)
+ cfg |= GSC_IRQ_OR_MASK;
+ else
+ cfg &= ~GSC_IRQ_OR_MASK;
+ writel(cfg, dev->regs + GSC_IRQ);
+}
+
+void gsc_hw_set_gsc_irq_enable(struct gsc_dev *dev, bool mask)
+{
+	gsc_dbg("(%d,%d)/%dx%d", crop->rect.left, crop->rect.top,
+		crop->rect.width, crop->rect.height);
+
+ cfg = readl(dev->regs + GSC_IRQ);
+		gsc_err("Sink pad set_crop is not supported");
+ cfg |= GSC_IRQ_ENABLE;
+ else
+ cfg &= ~GSC_IRQ_ENABLE;
+ writel(cfg, dev->regs + GSC_IRQ);
+}
+
+void gsc_hw_set_input_buf_mask_all(struct gsc_dev *dev)
+{
+ u32 cfg;
+
+ cfg = readl(dev->regs + GSC_IN_BASE_ADDR_Y_MASK);
+ cfg |= GSC_IN_BASE_ADDR_MASK;
+ cfg |= GSC_IN_BASE_ADDR_PINGPONG(dev->variant->in_buf_cnt);
+
+ writel(cfg, dev->regs + GSC_IN_BASE_ADDR_Y_MASK);
+ writel(cfg, dev->regs + GSC_IN_BASE_ADDR_CB_MASK);
+ writel(cfg, dev->regs + GSC_IN_BASE_ADDR_CR_MASK);
+}
+
+void gsc_hw_set_output_buf_mask_all(struct gsc_dev *dev)
+{
+ u32 cfg;
+
+ cfg = readl(dev->regs + GSC_OUT_BASE_ADDR_Y_MASK);
+ cfg |= GSC_OUT_BASE_ADDR_MASK;
+ cfg |= GSC_OUT_BASE_ADDR_PINGPONG(dev->variant->out_buf_cnt);
+
+ writel(cfg, dev->regs + GSC_OUT_BASE_ADDR_Y_MASK);
+ writel(cfg, dev->regs + GSC_OUT_BASE_ADDR_CB_MASK);
+ writel(cfg, dev->regs + GSC_OUT_BASE_ADDR_CR_MASK);
+}
+
+void gsc_hw_set_input_buf_masking(struct gsc_dev *dev, u32 shift,
+ bool enable)
+{
+ u32 cfg = readl(dev->regs + GSC_IN_BASE_ADDR_Y_MASK);
+ u32 mask = 1 << shift;
+
+ cfg &= (~mask);
+ cfg |= enable << shift;
+
+ writel(cfg, dev->regs + GSC_IN_BASE_ADDR_Y_MASK);
+ writel(cfg, dev->regs + GSC_IN_BASE_ADDR_CB_MASK);
+ writel(cfg, dev->regs + GSC_IN_BASE_ADDR_CR_MASK);
+}
+
+void gsc_hw_set_output_buf_masking(struct gsc_dev *dev, u32 shift,
+ bool enable)
+{
+ u32 cfg = readl(dev->regs + GSC_OUT_BASE_ADDR_Y_MASK);
+ u32 mask = 1 << shift;
+
+ cfg &= (~mask);
+ cfg |= enable << shift;
+
+ writel(cfg, dev->regs + GSC_OUT_BASE_ADDR_Y_MASK);
+ writel(cfg, dev->regs + GSC_OUT_BASE_ADDR_CB_MASK);
+ writel(cfg, dev->regs + GSC_OUT_BASE_ADDR_CR_MASK);
+}
+
+int gsc_hw_get_nr_unmask_bits(struct gsc_dev *dev)
+{
+ u32 bits = 0;
+ u32 mask_bits = readl(dev->regs + GSC_OUT_BASE_ADDR_Y_MASK);
+ mask_bits &= GSC_OUT_BASE_ADDR_MASK;
+
+ while (mask_bits) {
+ mask_bits = mask_bits & (mask_bits - 1);
+ bits++;
+ }
+ bits = 16 - bits;
+
+ return bits;
+}
+
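+/* Program the Y/Cb/Cr DMA base addresses for one input buffer slot. */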
+void gsc_hw_set_input_addr(struct gsc_dev *dev, struct gsc_addr *addr,
+ int index)
+{
+ gsc_dbg("src_buf[%d]: 0x%X, cb: 0x%X, cr: 0x%X", index,
+ addr->y, addr->cb, addr->cr);
+ writel(addr->y, dev->regs + GSC_IN_BASE_ADDR_Y(index));
+ writel(addr->cb, dev->regs + GSC_IN_BASE_ADDR_CB(index));
+ writel(addr->cr, dev->regs + GSC_IN_BASE_ADDR_CR(index));
+}
+
+void gsc_hw_set_output_addr(struct gsc_dev *dev,
+ struct gsc_addr *addr, int index)
+{
+ gsc_dbg("dst_buf[%d]: 0x%X, cb: 0x%X, cr: 0x%X",
+ index, addr->y, addr->cb, addr->cr);
+ writel(addr->y, dev->regs + GSC_OUT_BASE_ADDR_Y(index));
+ writel(addr->cb, dev->regs + GSC_OUT_BASE_ADDR_CB(index));
+ writel(addr->cr, dev->regs + GSC_OUT_BASE_ADDR_CR(index));
+}
+
+void gsc_hw_set_input_path(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+
+ u32 cfg = readl(dev->regs + GSC_IN_CON);
+ cfg &= ~(GSC_IN_PATH_MASK | GSC_IN_LOCAL_SEL_MASK);
+
+ if (ctx->in_path == GSC_DMA) {
+ cfg |= GSC_IN_PATH_MEMORY;
+ } else {
+ cfg |= GSC_IN_PATH_LOCAL;
+ if (ctx->in_path == GSC_WRITEBACK) {
+ cfg |= GSC_IN_LOCAL_FIMD_WB;
+ } else {
+ struct v4l2_subdev *sd = dev->pipeline.sensor;
+ struct gsc_sensor_info *s_info =
+ v4l2_get_subdev_hostdata(sd);
+ if (s_info->pdata->cam_port == CAM_PORT_A)
+ cfg |= GSC_IN_LOCAL_CAM0;
+ else
+ cfg |= GSC_IN_LOCAL_CAM1;
+ }
+ }
+
+ writel(cfg, dev->regs + GSC_IN_CON);
+}
+
+void gsc_hw_set_in_size(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+ struct gsc_frame *frame = &ctx->s_frame;
+ u32 cfg;
+
+ /* Set input pixel offset */
+ cfg = GSC_SRCIMG_OFFSET_X(frame->crop.left);
+ cfg |= GSC_SRCIMG_OFFSET_Y(frame->crop.top);
+ writel(cfg, dev->regs + GSC_SRCIMG_OFFSET);
+
+ /* Set input original size */
+ cfg = GSC_SRCIMG_WIDTH(frame->f_width);
+ cfg |= GSC_SRCIMG_HEIGHT(frame->f_height);
+ writel(cfg, dev->regs + GSC_SRCIMG_SIZE);
+
+ /* Set input cropped size */
+ cfg = GSC_CROPPED_WIDTH(frame->crop.width);
+ cfg |= GSC_CROPPED_HEIGHT(frame->crop.height);
+ writel(cfg, dev->regs + GSC_CROPPED_SIZE);
+}
+
+void gsc_hw_set_in_image_rgb(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+ struct gsc_frame *frame = &ctx->s_frame;
+ u32 cfg;
+
+ cfg = readl(dev->regs + GSC_IN_CON);
+ if (ctx->gsc_ctrls.csc_eq->val) {
+ if (ctx->gsc_ctrls.csc_range->val)
+ cfg |= GSC_IN_RGB_HD_WIDE;
+ else
+ cfg |= GSC_IN_RGB_HD_NARROW;
+ } else {
+ if (ctx->gsc_ctrls.csc_range->val)
+ cfg |= GSC_IN_RGB_SD_WIDE;
+ else
+ cfg |= GSC_IN_RGB_SD_NARROW;
+ }
+
+ if (frame->fmt->pixelformat == V4L2_PIX_FMT_RGB565X)
+ cfg |= GSC_IN_RGB565;
+ else if (frame->fmt->pixelformat == V4L2_PIX_FMT_RGB32)
+ cfg |= GSC_IN_XRGB8888;
+
+ writel(cfg, dev->regs + GSC_IN_CON);
+}
+
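+/*
+ * Select the input color format bits from the format's component count
+ * and total bit depth; RGB formats are handled separately.
+ */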
+void gsc_hw_set_in_image_format(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+ struct gsc_frame *frame = &ctx->s_frame;
+ u32 i, depth = 0;
+ u32 cfg;
+
+ cfg = readl(dev->regs + GSC_IN_CON);
+ cfg &= ~(GSC_IN_RGB_TYPE_MASK | GSC_IN_YUV422_1P_ORDER_MASK |
+ GSC_IN_CHROMA_ORDER_MASK | GSC_IN_FORMAT_MASK |
+ GSC_IN_TILE_TYPE_MASK | GSC_IN_TILE_MODE);
+ writel(cfg, dev->regs + GSC_IN_CON);
+
+ if (is_rgb(frame->fmt->color)) {
+ gsc_hw_set_in_image_rgb(ctx);
+ return;
+ }
+ for (i = 0; i < frame->fmt->num_planes; i++)
+ depth += frame->fmt->depth[i];
+
+ switch (frame->fmt->nr_comp) {
+ case 1:
+ cfg |= GSC_IN_YUV422_1P;
+ if (frame->fmt->yorder == GSC_LSB_Y)
+ cfg |= GSC_IN_YUV422_1P_ORDER_LSB_Y;
+ else
+			cfg |= GSC_IN_YUV422_1P_ORDER_LSB_C;
+ if (frame->fmt->corder == GSC_CBCR)
+ cfg |= GSC_IN_CHROMA_ORDER_CBCR;
+ else
+ cfg |= GSC_IN_CHROMA_ORDER_CRCB;
+ break;
+ case 2:
+ if (depth == 12)
+ cfg |= GSC_IN_YUV420_2P;
+ else
+ cfg |= GSC_IN_YUV422_2P;
+ if (frame->fmt->corder == GSC_CBCR)
+ cfg |= GSC_IN_CHROMA_ORDER_CBCR;
+ else
+ cfg |= GSC_IN_CHROMA_ORDER_CRCB;
+ break;
+ case 3:
+ if (depth == 12)
+ cfg |= GSC_IN_YUV420_3P;
+ else
+ cfg |= GSC_IN_YUV422_3P;
+ break;
+	}
+
+ if (is_tiled(frame->fmt))
+ cfg |= GSC_IN_TILE_C_16x8 | GSC_IN_TILE_MODE;
+
+ writel(cfg, dev->regs + GSC_IN_CON);
+}
+
+void gsc_hw_set_output_path(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+
+ u32 cfg = readl(dev->regs + GSC_OUT_CON);
+ cfg &= ~GSC_OUT_PATH_MASK;
+
+ if (ctx->out_path == GSC_DMA)
+ cfg |= GSC_OUT_PATH_MEMORY;
+ else
+ cfg |= GSC_OUT_PATH_LOCAL;
+
+ writel(cfg, dev->regs + GSC_OUT_CON);
+}
+
+void gsc_hw_set_out_size(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+ struct gsc_frame *frame = &ctx->d_frame;
+ u32 cfg;
+
+ /* Set output original size */
+ if (ctx->out_path == GSC_DMA) {
+ cfg = GSC_DSTIMG_OFFSET_X(frame->crop.left);
+ cfg |= GSC_DSTIMG_OFFSET_Y(frame->crop.top);
+ writel(cfg, dev->regs + GSC_DSTIMG_OFFSET);
+
+ cfg = GSC_DSTIMG_WIDTH(frame->f_width);
+ cfg |= GSC_DSTIMG_HEIGHT(frame->f_height);
+ writel(cfg, dev->regs + GSC_DSTIMG_SIZE);
+ }
+
+ /* Set output scaled size */
+ if (ctx->gsc_ctrls.rotate->val == 90 ||
+ ctx->gsc_ctrls.rotate->val == 270) {
+ cfg = GSC_SCALED_WIDTH(frame->crop.height);
+ cfg |= GSC_SCALED_HEIGHT(frame->crop.width);
+ } else {
+ cfg = GSC_SCALED_WIDTH(frame->crop.width);
+ cfg |= GSC_SCALED_HEIGHT(frame->crop.height);
+ }
+ writel(cfg, dev->regs + GSC_SCALED_SIZE);
+}
+
+void gsc_hw_set_out_image_rgb(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+ struct gsc_frame *frame = &ctx->d_frame;
+ u32 cfg;
+
+ cfg = readl(dev->regs + GSC_OUT_CON);
+ if (ctx->gsc_ctrls.csc_eq->val) {
+ if (ctx->gsc_ctrls.csc_range->val)
+ cfg |= GSC_OUT_RGB_HD_WIDE;
+ else
+ cfg |= GSC_OUT_RGB_HD_NARROW;
+ } else {
+ if (ctx->gsc_ctrls.csc_range->val)
+ cfg |= GSC_OUT_RGB_SD_WIDE;
+ else
+ cfg |= GSC_OUT_RGB_SD_NARROW;
+ }
+
+ if (frame->fmt->pixelformat == V4L2_PIX_FMT_RGB565X)
+ cfg |= GSC_OUT_RGB565;
+ else if (frame->fmt->pixelformat == V4L2_PIX_FMT_RGB32)
+ cfg |= GSC_OUT_XRGB8888;
+
+ writel(cfg, dev->regs + GSC_OUT_CON);
+}
+
+void gsc_hw_set_out_image_format(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+ struct gsc_frame *frame = &ctx->d_frame;
+ u32 i, depth = 0;
+ u32 cfg;
+
+ cfg = readl(dev->regs + GSC_OUT_CON);
+ cfg &= ~(GSC_OUT_RGB_TYPE_MASK | GSC_OUT_YUV422_1P_ORDER_MASK |
+ GSC_OUT_CHROMA_ORDER_MASK | GSC_OUT_FORMAT_MASK |
+ GSC_OUT_TILE_TYPE_MASK | GSC_OUT_TILE_MODE);
+ writel(cfg, dev->regs + GSC_OUT_CON);
+
+ if (is_rgb(frame->fmt->color)) {
+ gsc_hw_set_out_image_rgb(ctx);
+ return;
+ }
+
+ if (ctx->out_path != GSC_DMA) {
+ cfg |= GSC_OUT_YUV444;
+ goto end_set;
+ }
+
+ for (i = 0; i < frame->fmt->num_planes; i++)
+ depth += frame->fmt->depth[i];
+
+ switch (frame->fmt->nr_comp) {
+ case 1:
+ cfg |= GSC_OUT_YUV422_1P;
+ if (frame->fmt->yorder == GSC_LSB_Y)
+ cfg |= GSC_OUT_YUV422_1P_ORDER_LSB_Y;
+ else
+			cfg |= GSC_OUT_YUV422_1P_ORDER_LSB_C;
+ if (frame->fmt->corder == GSC_CBCR)
+ cfg |= GSC_OUT_CHROMA_ORDER_CBCR;
+ else
+ cfg |= GSC_OUT_CHROMA_ORDER_CRCB;
+ break;
+ case 2:
+ if (depth == 12)
+ cfg |= GSC_OUT_YUV420_2P;
+ else
+ cfg |= GSC_OUT_YUV422_2P;
+ if (frame->fmt->corder == GSC_CBCR)
+ cfg |= GSC_OUT_CHROMA_ORDER_CBCR;
+ else
+ cfg |= GSC_OUT_CHROMA_ORDER_CRCB;
+ break;
+ case 3:
+ cfg |= GSC_OUT_YUV420_3P;
+ break;
+	}
+
+ if (is_tiled(frame->fmt))
+ cfg |= GSC_OUT_TILE_C_16x8 | GSC_OUT_TILE_MODE;
+
+end_set:
+ writel(cfg, dev->regs + GSC_OUT_CON);
+}
+
+void gsc_hw_set_prescaler(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+ struct gsc_scaler *sc = &ctx->scaler;
+ u32 cfg;
+
+ cfg = GSC_PRESC_SHFACTOR(sc->pre_shfactor);
+ cfg |= GSC_PRESC_H_RATIO(sc->pre_hratio);
+ cfg |= GSC_PRESC_V_RATIO(sc->pre_vratio);
+ writel(cfg, dev->regs + GSC_PRE_SCALE_RATIO);
+}
+
+void gsc_hw_set_mainscaler(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+ struct gsc_scaler *sc = &ctx->scaler;
+ u32 cfg;
+
+ cfg = GSC_MAIN_H_RATIO_VALUE(sc->main_hratio);
+ writel(cfg, dev->regs + GSC_MAIN_H_RATIO);
+
+ cfg = GSC_MAIN_V_RATIO_VALUE(sc->main_vratio);
+ writel(cfg, dev->regs + GSC_MAIN_V_RATIO);
+}
+
+void gsc_hw_set_rotation(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+ u32 cfg;
+
+ cfg = readl(dev->regs + GSC_IN_CON);
+ cfg &= ~GSC_IN_ROT_MASK;
+
+ switch (ctx->gsc_ctrls.rotate->val) {
+ case 270:
+ cfg |= GSC_IN_ROT_270;
+ break;
+ case 180:
+ cfg |= GSC_IN_ROT_180;
+ break;
+ case 90:
+ if (ctx->gsc_ctrls.hflip->val)
+ cfg |= GSC_IN_ROT_90_XFLIP;
+ else if (ctx->gsc_ctrls.vflip->val)
+ cfg |= GSC_IN_ROT_90_YFLIP;
+ else
+ cfg |= GSC_IN_ROT_90;
+ break;
+ case 0:
+ if (ctx->gsc_ctrls.hflip->val)
+ cfg |= GSC_IN_ROT_XFLIP;
+ else if (ctx->gsc_ctrls.vflip->val)
+ cfg |= GSC_IN_ROT_YFLIP;
+ }
+
+ writel(cfg, dev->regs + GSC_IN_CON);
+}
+
+void gsc_hw_set_global_alpha(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+ struct gsc_frame *frame = &ctx->d_frame;
+ u32 cfg;
+
+ cfg = readl(dev->regs + GSC_OUT_CON);
+ cfg &= ~GSC_OUT_GLOBAL_ALPHA_MASK;
+
+ if (!is_rgb(frame->fmt->color)) {
+ gsc_dbg("Not a RGB format");
+ return;
+ }
+
+ cfg |= GSC_OUT_GLOBAL_ALPHA(ctx->gsc_ctrls.global_alpha->val);
+ writel(cfg, dev->regs + GSC_OUT_CON);
+}
+
+void gsc_hw_set_sfr_update(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+ u32 cfg;
+
+ cfg = readl(dev->regs + GSC_ENABLE);
+ cfg |= GSC_ENABLE_SFR_UPDATE;
+ writel(cfg, dev->regs + GSC_ENABLE);
+}
+
+void gsc_hw_set_local_dst(int id, bool on)
+{
+ u32 cfg = readl(SYSREG_GSCBLK_CFG0);
+
+ if (on)
+ cfg |= GSC_OUT_DST_SEL(id);
+ else
+ cfg &= ~(GSC_OUT_DST_SEL(id));
+ writel(cfg, SYSREG_GSCBLK_CFG0);
+}
+
+void gsc_hw_set_sysreg_writeback(struct gsc_ctx *ctx)
+{
+ struct gsc_dev *dev = ctx->gsc_dev;
+
+ u32 cfg = readl(SYSREG_GSCBLK_CFG1);
+
+ cfg |= GSC_BLK_DISP1WB_DEST(dev->id);
+ cfg |= GSC_BLK_GSCL_WB_IN_SRC_SEL(dev->id);
+ cfg |= GSC_BLK_SW_RESET_WB_DEST(dev->id);
+
+ writel(cfg, SYSREG_GSCBLK_CFG1);
+}
+
+void gsc_hw_set_sysreg_camif(bool on)
+{
+ u32 cfg = readl(SYSREG_GSCBLK_CFG0);
+
+ if (on)
+ cfg |= GSC_PXLASYNC_CAMIF_TOP;
+ else
+ cfg &= ~(GSC_PXLASYNC_CAMIF_TOP);
+
+ writel(cfg, SYSREG_GSCBLK_CFG0);
+}
--- /dev/null
+/* linux/drivers/media/video/exynos/gsc-vb2.c
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
+ *
+ * Videobuf2 allocator operations file
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#include <linux/platform_device.h>
+#include "gsc-core.h"
+#ifdef CONFIG_VIDEOBUF2_SDVMM
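+/*
+ * Prepare the VCM and CMA parameters (a 64 MiB region per instance,
+ * 4 KiB alignment, CMA area named "gsc<N>") and initialize the SDVMM
+ * videobuf2 allocator for this G-Scaler instance.
+ */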
+void *gsc_sdvmm_init(struct gsc_dev *gsc)
+{
+ struct vb2_vcm vb2_vcm;
+ struct vb2_cma vb2_cma;
+ char cma_name[16] = {0,};
+ struct vb2_drv vb2_drv;
+
+ gsc->vcm_id = VCM_DEV_GSC0 + gsc->id;
+
+ vb2_vcm.vcm_id = gsc->vcm_id;
+ vb2_vcm.size = SZ_64M;
+
+ vb2_cma.dev = &gsc->pdev->dev;
+ sprintf(cma_name, "gsc%d", gsc->id);
+ vb2_cma.type = cma_name;
+ vb2_cma.alignment = SZ_4K;
+
+ vb2_drv.cacheable = true;
+ vb2_drv.remap_dva = false;
+
+ return vb2_sdvmm_init(&vb2_vcm, NULL, &vb2_drv);
+}
+
+static int gsc_sdvmm_cache_flush(struct vb2_buffer *vb, u32 num_planes)
+{
+	return vb2_sdvmm_cache_flush(NULL, vb, num_planes);
+}
+
+const struct gsc_vb2 gsc_vb2_sdvmm = {
+ .ops = &vb2_sdvmm_memops,
+ .init = gsc_sdvmm_init,
+ .cleanup = vb2_sdvmm_cleanup,
+/* .plane_addr = vb2_sdvmm_plane_dvaddr, */
+ .resume = vb2_sdvmm_resume,
+ .suspend = vb2_sdvmm_suspend,
+ .cache_flush = gsc_sdvmm_cache_flush,
+ .set_cacheable = vb2_sdvmm_set_cacheable,
+/* .set_sharable = vb2_sdvmm_set_sharable, */
+};
+#endif
--- /dev/null
+/* linux/drivers/media/video/exynos/gsc/regs-gsc.h
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * Register definition file for Samsung G-Scaler driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef REGS_GSC_H_
+#define REGS_GSC_H_
+
+/* SYSCON. GSCBLK_CFG */
+#include <plat/map-base.h>
+#define SYSREG_DISP1BLK_CFG (S3C_VA_SYS + 0x0214)
+#define FIFORST_DISP1 (1 << 23)
+#define SYSREG_GSCBLK_CFG0 (S3C_VA_SYS + 0x0220)
+#define GSC_OUT_DST_SEL(x) (1 << (8 + 2 * (x)))
+#define GSC_PXLASYNC_RST(x) (1 << (x))
+#define GSC_PXLASYNC_CAMIF_TOP (1 << 20)
+#define SYSREG_GSCBLK_CFG1 (S3C_VA_SYS + 0x0224)
+#define GSC_BLK_DISP1WB_DEST(x)		((x) << 10)
+#define GSC_BLK_SW_RESET_WB_DEST(x)	(1 << (18 + (x)))
+#define GSC_BLK_GSCL_WB_IN_SRC_SEL(x)	(1 << (2 * (x)))
+
+/* G-Scaler enable */
+#define GSC_ENABLE 0x00
+#define GSC_ENABLE_ON_CLEAR (1 << 4)
+#define GSC_ENABLE_QOS_ENABLE (1 << 3)
+#define GSC_ENABLE_OP_STATUS (1 << 2)
+#define GSC_ENABLE_SFR_UPDATE (1 << 1)
+#define GSC_ENABLE_ON (1 << 0)
+
+/* G-Scaler S/W reset */
+#define GSC_SW_RESET 0x04
+#define GSC_SW_RESET_SRESET (1 << 0)
+
+/* G-Scaler IRQ */
+#define GSC_IRQ 0x08
+#define GSC_IRQ_STATUS_OR_IRQ (1 << 17)
+#define GSC_IRQ_STATUS_OR_FRM_DONE (1 << 16)
+#define GSC_IRQ_OR_MASK (1 << 2)
+#define GSC_IRQ_FRMDONE_MASK (1 << 1)
+#define GSC_IRQ_ENABLE (1 << 0)
+
+/* G-Scaler input control */
+#define GSC_IN_CON 0x10
+#define GSC_IN_ROT_MASK (7 << 16)
+#define GSC_IN_ROT_270 (7 << 16)
+#define GSC_IN_ROT_90_YFLIP (6 << 16)
+#define GSC_IN_ROT_90_XFLIP (5 << 16)
+#define GSC_IN_ROT_90 (4 << 16)
+#define GSC_IN_ROT_180 (3 << 16)
+#define GSC_IN_ROT_YFLIP (2 << 16)
+#define GSC_IN_ROT_XFLIP (1 << 16)
+#define GSC_IN_RGB_TYPE_MASK (3 << 14)
+#define GSC_IN_RGB_HD_WIDE (3 << 14)
+#define GSC_IN_RGB_HD_NARROW (2 << 14)
+#define GSC_IN_RGB_SD_WIDE (1 << 14)
+#define GSC_IN_RGB_SD_NARROW (0 << 14)
+#define GSC_IN_YUV422_1P_ORDER_MASK (1 << 13)
+#define GSC_IN_YUV422_1P_ORDER_LSB_Y (0 << 13)
+#define GSC_IN_YUV422_1P_ORDER_LSB_C	(1 << 13)
+#define GSC_IN_CHROMA_ORDER_MASK (1 << 12)
+#define GSC_IN_CHROMA_ORDER_CBCR (0 << 12)
+#define GSC_IN_CHROMA_ORDER_CRCB (1 << 12)
+#define GSC_IN_FORMAT_MASK (7 << 8)
+#define GSC_IN_XRGB8888 (0 << 8)
+#define GSC_IN_RGB565 (1 << 8)
+#define GSC_IN_YUV420_2P (2 << 8)
+#define GSC_IN_YUV420_3P (3 << 8)
+#define GSC_IN_YUV422_1P (4 << 8)
+#define GSC_IN_YUV422_2P (5 << 8)
+#define GSC_IN_YUV422_3P (6 << 8)
+#define GSC_IN_TILE_TYPE_MASK (1 << 4)
+#define GSC_IN_TILE_C_16x8 (0 << 4)
+#define GSC_IN_TILE_C_16x16 (1 << 4)
+#define GSC_IN_TILE_MODE (1 << 3)
+#define GSC_IN_LOCAL_SEL_MASK (3 << 1)
+#define GSC_IN_LOCAL_FIMD_WB (2 << 1)
+#define GSC_IN_LOCAL_CAM1 (1 << 1)
+#define GSC_IN_LOCAL_CAM0 (0 << 1)
+#define GSC_IN_PATH_MASK (1 << 0)
+#define GSC_IN_PATH_LOCAL (1 << 0)
+#define GSC_IN_PATH_MEMORY (0 << 0)
+
+/* G-Scaler source image size */
+#define GSC_SRCIMG_SIZE 0x14
+#define GSC_SRCIMG_HEIGHT_MASK (0x1fff << 16)
+#define GSC_SRCIMG_HEIGHT(x) ((x) << 16)
+#define GSC_SRCIMG_WIDTH_MASK (0x1fff << 0)
+#define GSC_SRCIMG_WIDTH(x) ((x) << 0)
+
+/* G-Scaler source image offset */
+#define GSC_SRCIMG_OFFSET 0x18
+#define GSC_SRCIMG_OFFSET_Y_MASK (0x1fff << 16)
+#define GSC_SRCIMG_OFFSET_Y(x) ((x) << 16)
+#define GSC_SRCIMG_OFFSET_X_MASK (0x1fff << 0)
+#define GSC_SRCIMG_OFFSET_X(x) ((x) << 0)
+
+/* G-Scaler cropped source image size */
+#define GSC_CROPPED_SIZE 0x1C
+#define GSC_CROPPED_HEIGHT_MASK (0x1fff << 16)
+#define GSC_CROPPED_HEIGHT(x) ((x) << 16)
+#define GSC_CROPPED_WIDTH_MASK (0x1fff << 0)
+#define GSC_CROPPED_WIDTH(x) ((x) << 0)
+
+/* G-Scaler output control */
+#define GSC_OUT_CON 0x20
+#define GSC_OUT_GLOBAL_ALPHA_MASK (0xff << 24)
+#define GSC_OUT_GLOBAL_ALPHA(x) ((x) << 24)
+#define GSC_OUT_RGB_TYPE_MASK (3 << 10)
+#define GSC_OUT_RGB_HD_NARROW (3 << 10)
+#define GSC_OUT_RGB_HD_WIDE (2 << 10)
+#define GSC_OUT_RGB_SD_NARROW (1 << 10)
+#define GSC_OUT_RGB_SD_WIDE (0 << 10)
+#define GSC_OUT_YUV422_1P_ORDER_MASK (1 << 9)
+#define GSC_OUT_YUV422_1P_ORDER_LSB_Y (0 << 9)
+#define GSC_OUT_YUV422_1P_ORDER_LSB_C	(1 << 9)
+#define GSC_OUT_CHROMA_ORDER_MASK (1 << 8)
+#define GSC_OUT_CHROMA_ORDER_CBCR (0 << 8)
+#define GSC_OUT_CHROMA_ORDER_CRCB (1 << 8)
+#define GSC_OUT_FORMAT_MASK (7 << 4)
+#define GSC_OUT_XRGB8888 (0 << 4)
+#define GSC_OUT_RGB565 (1 << 4)
+#define GSC_OUT_YUV420_2P (2 << 4)
+#define GSC_OUT_YUV420_3P (3 << 4)
+#define GSC_OUT_YUV422_1P (4 << 4)
+#define GSC_OUT_YUV422_2P (5 << 4)
+#define GSC_OUT_YUV444 (7 << 4)
+#define GSC_OUT_TILE_TYPE_MASK (1 << 2)
+#define GSC_OUT_TILE_C_16x8 (0 << 2)
+#define GSC_OUT_TILE_C_16x16 (1 << 2)
+#define GSC_OUT_TILE_MODE (1 << 1)
+#define GSC_OUT_PATH_MASK (1 << 0)
+#define GSC_OUT_PATH_LOCAL (1 << 0)
+#define GSC_OUT_PATH_MEMORY (0 << 0)
+
+/* G-Scaler scaled destination image size */
+#define GSC_SCALED_SIZE 0x24
+#define GSC_SCALED_HEIGHT_MASK (0x1fff << 16)
+#define GSC_SCALED_HEIGHT(x) ((x) << 16)
+#define GSC_SCALED_WIDTH_MASK (0x1fff << 0)
+#define GSC_SCALED_WIDTH(x) ((x) << 0)
+
+/* G-Scaler pre scale ratio */
+#define GSC_PRE_SCALE_RATIO 0x28
+#define GSC_PRESC_SHFACTOR_MASK (7 << 28)
+#define GSC_PRESC_SHFACTOR(x) ((x) << 28)
+#define GSC_PRESC_V_RATIO_MASK (7 << 16)
+#define GSC_PRESC_V_RATIO(x) ((x) << 16)
+#define GSC_PRESC_H_RATIO_MASK (7 << 0)
+#define GSC_PRESC_H_RATIO(x) ((x) << 0)
+
+/* G-Scaler main scale horizontal ratio */
+#define GSC_MAIN_H_RATIO 0x2C
+#define GSC_MAIN_H_RATIO_MASK (0xfffff << 0)
+#define GSC_MAIN_H_RATIO_VALUE(x) ((x) << 0)
+
+/* G-Scaler main scale vertical ratio */
+#define GSC_MAIN_V_RATIO 0x30
+#define GSC_MAIN_V_RATIO_MASK (0xfffff << 0)
+#define GSC_MAIN_V_RATIO_VALUE(x) ((x) << 0)
+
+/* G-Scaler destination image size */
+#define GSC_DSTIMG_SIZE 0x40
+#define GSC_DSTIMG_HEIGHT_MASK (0x1fff << 16)
+#define GSC_DSTIMG_HEIGHT(x) ((x) << 16)
+#define GSC_DSTIMG_WIDTH_MASK (0x1fff << 0)
+#define GSC_DSTIMG_WIDTH(x) ((x) << 0)
+
+/* G-Scaler destination image offset */
+#define GSC_DSTIMG_OFFSET 0x44
+#define GSC_DSTIMG_OFFSET_Y_MASK (0x1fff << 16)
+#define GSC_DSTIMG_OFFSET_Y(x) ((x) << 16)
+#define GSC_DSTIMG_OFFSET_X_MASK (0x1fff << 0)
+#define GSC_DSTIMG_OFFSET_X(x) ((x) << 0)
+
+/* G-Scaler input y address mask */
+#define GSC_IN_BASE_ADDR_Y_MASK 0x4C
+/* G-Scaler input y base address */
+#define GSC_IN_BASE_ADDR_Y(n) (0x50 + (n) * 0x4)
+
+/* G-Scaler input cb address mask */
+#define GSC_IN_BASE_ADDR_CB_MASK 0x7C
+/* G-Scaler input cb base address */
+#define GSC_IN_BASE_ADDR_CB(n) (0x80 + (n) * 0x4)
+
+/* G-Scaler input cr address mask */
+#define GSC_IN_BASE_ADDR_CR_MASK 0xAC
+/* G-Scaler input cr base address */
+#define GSC_IN_BASE_ADDR_CR(n) (0xB0 + (n) * 0x4)
+
+/* G-Scaler input address mask */
+#define GSC_IN_CURR_ADDR_INDEX (0xf << 12)
+#define GSC_IN_CURR_GET_INDEX(x) ((x) >> 12)
+#define GSC_IN_BASE_ADDR_PINGPONG(x) ((x) << 8)
+#define GSC_IN_BASE_ADDR_MASK (0xff << 0)
+
+/* G-Scaler output y address mask */
+#define GSC_OUT_BASE_ADDR_Y_MASK 0x10C
+/* G-Scaler output y base address */
+#define GSC_OUT_BASE_ADDR_Y(n) (0x110 + (n) * 0x4)
+
+/* G-Scaler output cb address mask */
+#define GSC_OUT_BASE_ADDR_CB_MASK 0x15C
+/* G-Scaler output cb base address */
+#define GSC_OUT_BASE_ADDR_CB(n) (0x160 + (n) * 0x4)
+
+/* G-Scaler output cr address mask */
+#define GSC_OUT_BASE_ADDR_CR_MASK 0x1AC
+/* G-Scaler output cr base address */
+#define GSC_OUT_BASE_ADDR_CR(n) (0x1B0 + (n) * 0x4)
+
+/* G-Scaler output address mask */
+#define GSC_OUT_CURR_ADDR_INDEX (0xf << 24)
+#define GSC_OUT_CURR_GET_INDEX(x) ((x) >> 24)
+#define GSC_OUT_BASE_ADDR_PINGPONG(x) ((x) << 16)
+#define GSC_OUT_BASE_ADDR_MASK (0xffff << 0)
+
+#endif /* REGS_GSC_H_ */
--- /dev/null
+config EXYNOS_MEDIA_DEVICE
+ bool
+ depends on MEDIA_EXYNOS
+ select MEDIA_CONTROLLER
+ select VIDEO_V4L2_SUBDEV_API
+ default y
+ help
+	  This is a V4L2 media device driver for the Exynos SoC series.
--- /dev/null
+mdev-objs := exynos-mdev.o
+obj-$(CONFIG_EXYNOS_MEDIA_DEVICE) += mdev.o
--- /dev/null
+/* drivers/media/video/exynos/mdev/exynos-mdev.c
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * EXYNOS5 SoC series media device driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#include <linux/bug.h>
+#include <linux/device.h>
+#include <linux/errno.h>
+#include <linux/i2c.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/version.h>
+#include <media/v4l2-ctrls.h>
+#include <media/media-device.h>
+#include <media/exynos_mc.h>
+
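+/*
+ * Probe: allocate the media device glue structure, register the
+ * v4l2_device first and the media_device on top of it, then publish
+ * the result through the platform drvdata.
+ */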
+static int __devinit mdev_probe(struct platform_device *pdev)
+{
+ struct v4l2_device *v4l2_dev;
+ struct exynos_md *mdev;
+ int ret;
+
+ mdev = kzalloc(sizeof(struct exynos_md), GFP_KERNEL);
+ if (!mdev)
+ return -ENOMEM;
+
+ mdev->id = pdev->id;
+ mdev->pdev = pdev;
+ spin_lock_init(&mdev->slock);
+
+ snprintf(mdev->media_dev.model, sizeof(mdev->media_dev.model), "%s%d",
+ dev_name(&pdev->dev), mdev->id);
+
+ mdev->media_dev.dev = &pdev->dev;
+
+ v4l2_dev = &mdev->v4l2_dev;
+ v4l2_dev->mdev = &mdev->media_dev;
+ snprintf(v4l2_dev->name, sizeof(v4l2_dev->name), "%s",
+ dev_name(&pdev->dev));
+
+ ret = v4l2_device_register(&pdev->dev, &mdev->v4l2_dev);
+ if (ret < 0) {
+ v4l2_err(v4l2_dev, "Failed to register v4l2_device: %d\n", ret);
+ goto err_v4l2_reg;
+ }
+ ret = media_device_register(&mdev->media_dev);
+ if (ret < 0) {
+ v4l2_err(v4l2_dev, "Failed to register media device: %d\n", ret);
+ goto err_mdev_reg;
+ }
+
+ platform_set_drvdata(pdev, mdev);
+	v4l2_info(v4l2_dev, "Media%d[%p] was registered successfully\n",
+		  mdev->id, mdev);
+ return 0;
+
+err_mdev_reg:
+ v4l2_device_unregister(&mdev->v4l2_dev);
+err_v4l2_reg:
+ kfree(mdev);
+ return ret;
+}
+
+static int __devexit mdev_remove(struct platform_device *pdev)
+{
+ struct exynos_md *mdev = platform_get_drvdata(pdev);
+
+ if (!mdev)
+ return 0;
+ media_device_unregister(&mdev->media_dev);
+ v4l2_device_unregister(&mdev->v4l2_dev);
+ kfree(mdev);
+ return 0;
+}
+
+static struct platform_driver mdev_driver = {
+ .probe = mdev_probe,
+ .remove = __devexit_p(mdev_remove),
+ .driver = {
+ .name = MDEV_MODULE_NAME,
+ .owner = THIS_MODULE,
+ }
+};
+
+static int __init mdev_init(void)
+{
+	int ret = platform_driver_register(&mdev_driver);
+	if (ret)
+		pr_err("platform_driver_register failed: %d\n", ret);
+	return ret;
+}
+
+static void __exit mdev_exit(void)
+{
+ platform_driver_unregister(&mdev_driver);
+}
+
+module_init(mdev_init);
+module_exit(mdev_exit);
+
+MODULE_AUTHOR("Hyunwoong Kim <khw0178.kim@samsung.com>");
+MODULE_DESCRIPTION("EXYNOS5 SoC series media device driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+# drivers/media/video/s5p-tv/Kconfig
+#
+# Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+# http://www.samsung.com/
+# Tomasz Stanislawski <t.stanislaws@samsung.com>
+#
+# Licensed under GPL
+
+comment "Exynos TV support"
+
+config VIDEO_EXYNOS_TV
+ bool "Samsung TV driver for S5P platform (experimental)"
+ depends on PLAT_S5P
+ depends on EXPERIMENTAL
+ select MEDIA_EXYNOS
+ select VIDEO_EXYNOS_HDMI
+ select VIDEO_EXYNOS_MIXER
+ default n
+ ---help---
+ Say Y here to enable selecting the TV output devices for
+	  the Samsung S5P platform.
+
+if VIDEO_EXYNOS_TV
+
+config VIDEO_EXYNOS_HDMI
+ tristate "Samsung HDMI Driver"
+ depends on VIDEO_V4L2
+ depends on VIDEO_EXYNOS_TV
+ select VIDEO_EXYNOS_HDMIPHY
+ help
+	  Say Y here if you want support for the HDMI output
+	  interface in Samsung S5P SoCs. The driver can be compiled
+	  as a module. It is an auxiliary driver that exposes a V4L2
+	  subdev for use by other drivers. This driver requires the
+	  hdmiphy driver to work correctly.
+
+config VIDEO_EXYNOS_HDMI_AUDIO_I2S
+ bool "Enable HDMI audio using I2S path"
+ depends on VIDEO_EXYNOS_HDMI
+ depends on SND_SOC_SAMSUNG_SMDK_WM8994
+ default y
+ help
+	  Enables HDMI audio through the I2S path.
+
+config VIDEO_EXYNOS_HDMI_AUDIO_SPDIF
+ bool "Enable HDMI audio using SPDIF path"
+ depends on VIDEO_EXYNOS_HDMI
+ depends on SND_SOC_SAMSUNG_SMDK_SPDIF
+ default n
+ help
+	  Enables HDMI audio through the SPDIF path.
+
+config VIDEO_EXYNOS_HDCP
+ bool "Enable HDCP"
+ depends on VIDEO_EXYNOS_HDMI
+ depends on I2C
+ default n
+ help
+	  Enables the HDCP feature. Note that to use HDCP, the device
+	  private key must be e-fused in the SoC.
+
+config VIDEO_EXYNOS_HDMI_DEBUG
+ bool "Enable debug for HDMI Driver"
+ depends on VIDEO_EXYNOS_HDMI
+ default n
+ help
+	  Enables debugging for the HDMI driver.
+
+config VIDEO_EXYNOS_HDMIPHY
+ tristate "Samsung HDMIPHY Driver"
+ depends on VIDEO_DEV && VIDEO_V4L2 && I2C
+ depends on VIDEO_EXYNOS_TV
+ help
+	  Say Y here if you want support for the physical HDMI
+	  interface in Samsung S5P SoCs. The driver can be compiled
+	  as a module. It is an I2C driver that exposes a V4L2
+	  subdev for use by other drivers.
+
+config VIDEO_EXYNOS_SDO
+ tristate "Samsung Analog TV Driver"
+ depends on VIDEO_DEV && VIDEO_V4L2
+ depends on VIDEO_EXYNOS_TV
+ depends on CPU_EXYNOS4210
+ help
+	  Say Y here if you want support for the analog TV output
+	  interface in Samsung S5P SoCs. The driver can be compiled
+	  as a module. It is an auxiliary driver that exposes a V4L2
+	  subdev for use by other drivers. This driver requires the
+	  hdmiphy driver to work correctly.
+
+config VIDEO_EXYNOS_MIXER
+ tristate "Samsung Mixer and Video Processor Driver"
+ depends on VIDEO_DEV && VIDEO_V4L2
+ depends on VIDEO_EXYNOS_TV
+ #select VIDEOBUF2_DMA_CONTIG
+ select VIDEOBUF2_FB
+ help
+	  Say Y here if you want support for the Mixer in Samsung S5P SoCs.
+	  This device produces image data for one of the output interfaces.
+
+config VIDEO_EXYNOS_HDMI_CEC
+ tristate "Samsung HDMI CEC Driver"
+ depends on VIDEO_DEV && VIDEO_V4L2 && I2C
+ depends on VIDEO_EXYNOS_TV
+ help
+	  Say Y here if you want support for the HDMI CEC
+	  interface in Samsung S5P SoCs. The driver can be compiled
+	  as a module.
+
+config VIDEO_SAMSUNG_MEMSIZE_TV
+ int "Memory size in kbytes for TV"
+ depends on VIDEO_EXYNOS_MIXER && VIDEOBUF2_CMA_PHYS
+ default "16200"
+
+config VIDEO_EXYNOS_MIXER_DEBUG
+ bool "Enable debug for Mixer Driver"
+ depends on VIDEO_EXYNOS_MIXER
+ default n
+ help
+	  Enables debugging for the Mixer driver.
+
+endif # VIDEO_EXYNOS_TV
--- /dev/null
+# drivers/media/video/samsung/tvout/Makefile
+#
+# Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+# http://www.samsung.com/
+# Tomasz Stanislawski <t.stanislaws@samsung.com>
+#
+# Licensed under GPL
+obj-$(CONFIG_VIDEO_EXYNOS_HDMIPHY) += s5p-hdmiphy.o
+s5p-hdmiphy-y += hdmiphy_drv.o
+obj-$(CONFIG_VIDEO_EXYNOS_HDMI) += s5p-hdmi.o
+s5p-hdmi-y += hdcp_drv.o hdmi_drv.o
+obj-$(CONFIG_VIDEO_EXYNOS_SDO) += s5p-sdo.o
+s5p-sdo-y += sdo_drv.o
+obj-$(CONFIG_VIDEO_EXYNOS_MIXER) += s5p-mixer.o
+s5p-mixer-y += mixer_vb2.o mixer_drv.o mixer_video.o mixer_reg.o mixer_grp_layer.o
+obj-$(CONFIG_VIDEO_EXYNOS_HDMI_CEC) += s5p-hdmi_cec.o
+s5p-hdmi_cec-y += hdmi_cec.o hdmi_cec_ctrl.o
+
+ifeq ($(CONFIG_ARCH_EXYNOS4), y)
+ s5p-mixer-y += mixer_vp_layer.o
+else
+ s5p-mixer-y += mixer_video_layer.o
+endif
+
+ifeq ($(CONFIG_VIDEO_EXYNOS_HDMI),y)
+ ifeq ($(CONFIG_CPU_EXYNOS4210), y)
+ s5p-hdmi-y += hdmi_reg_4210.o hdmiphy_conf_4210.o
+ else
+ s5p-hdmi-y += hdmi_reg_5250.o hdmiphy_conf_5250.o
+ endif
+endif
--- /dev/null
+/* linux/drivers/media/video/samsung/tvout/hw_if/hw_if.h
+ *
+ * Copyright (c) 2010 Samsung Electronics
+ * http://www.samsung.com/
+ *
+ * Header file for interface of Samsung TVOUT-related hardware
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _SAMSUNG_TVOUT_CEC_H_
+#define _SAMSUNG_TVOUT_CEC_H_ __FILE__
+
+/*****************************************************************************
+ * This file contains declarations for the external functions of
+ * Samsung TVOUT-related hardware; only functions meant to be used by
+ * higher layers belong here.
+ *
+ * Higher layers must use only the declarations included in this file.
+ ****************************************************************************/
+
+#define to_tvout_plat(d) (to_platform_device(d)->dev.platform_data)
+
+#ifndef tvout_dbg
+#ifdef CONFIG_TV_DEBUG
+#define tvout_dbg(fmt, ...) \
+ printk(KERN_INFO "[%s] %s(): " fmt, \
+ DRV_NAME, __func__, ##__VA_ARGS__)
+#else
+#define tvout_dbg(fmt, ...)
+#endif
+#endif
+
+enum s5p_tvout_endian {
+ TVOUT_LITTLE_ENDIAN = 0,
+ TVOUT_BIG_ENDIAN = 1
+};
+
+enum cec_state {
+ STATE_RX,
+ STATE_TX,
+ STATE_DONE,
+ STATE_ERROR
+};
+
+struct cec_rx_struct {
+ spinlock_t lock;
+ wait_queue_head_t waitq;
+ atomic_t state;
+ u8 *buffer;
+ unsigned int size;
+};
+
+struct cec_tx_struct {
+ wait_queue_head_t waitq;
+ atomic_t state;
+};
+
+extern struct cec_rx_struct cec_rx_struct;
+extern struct cec_tx_struct cec_tx_struct;
+
+void s5p_cec_set_divider(void);
+void s5p_cec_enable_rx(void);
+void s5p_cec_mask_rx_interrupts(void);
+void s5p_cec_unmask_rx_interrupts(void);
+void s5p_cec_mask_tx_interrupts(void);
+void s5p_cec_unmask_tx_interrupts(void);
+void s5p_cec_reset(void);
+void s5p_cec_tx_reset(void);
+void s5p_cec_rx_reset(void);
+void s5p_cec_threshold(void);
+void s5p_cec_set_tx_state(enum cec_state state);
+void s5p_cec_set_rx_state(enum cec_state state);
+void s5p_cec_copy_packet(char *data, size_t count);
+void s5p_cec_set_addr(u32 addr);
+u32 s5p_cec_get_status(void);
+void s5p_clr_pending_tx(void);
+void s5p_clr_pending_rx(void);
+void s5p_cec_get_rx_buf(u32 size, u8 *buffer);
+void __init s5p_cec_mem_probe(struct platform_device *pdev);
+
+#endif /* _SAMSUNG_TVOUT_CEC_H_ */
--- /dev/null
+/* linux/drivers/media/video/exynos/tv/hdcp_drv.c
+ *
+ * Copyright (c) 2011 Samsung Electronics
+ * http://www.samsung.com/
+ *
+ * HDCP function for Samsung TV driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/i2c.h>
+#include <linux/io.h>
+#include <linux/delay.h>
+#include <linux/sched.h>
+#include <linux/workqueue.h>
+#include <linux/export.h>
+#include <linux/module.h>
+
+#include "hdmi.h"
+#include "regs-hdmi-5250.h"
+
+#define AN_SIZE 8
+#define AKSV_SIZE 5
+#define BKSV_SIZE 5
+#define MAX_KEY_SIZE 16
+
+#define BKSV_RETRY_CNT 14
+#define BKSV_DELAY 100
+
+#define DDC_RETRY_CNT 400000
+#define DDC_DELAY 25
+
+#define KEY_LOAD_RETRY_CNT 1000
+#define ENCRYPT_CHECK_CNT 10
+
+#define KSV_FIFO_RETRY_CNT 50
+#define KSV_FIFO_CHK_DELAY 100 /* ms */
+#define KSV_LIST_RETRY_CNT 10000
+
+#define BCAPS_SIZE 1
+#define BSTATUS_SIZE 2
+#define SHA_1_HASH_SIZE 20
+#define HDCP_MAX_DEVS 128
+#define HDCP_KSV_SIZE 5
+
+/* offset of HDCP port */
+#define HDCP_BKSV 0x00
+#define HDCP_RI 0x08
+#define HDCP_AKSV 0x10
+#define HDCP_AN 0x18
+#define HDCP_SHA1 0x20
+#define HDCP_BCAPS 0x40
+#define HDCP_BSTATUS 0x41
+#define HDCP_KSVFIFO 0x43
+
+#define KSV_FIFO_READY (0x1 << 5)
+
+#define MAX_CASCADE_EXCEEDED_ERROR (-2)
+#define MAX_DEVS_EXCEEDED_ERROR (-3)
+#define REPEATER_ILLEGAL_DEVICE_ERROR (-4)
+#define REPEATER_TIMEOUT_ERROR (-5)
+
+#define MAX_CASCADE_EXCEEDED (0x1 << 3)
+#define MAX_DEVS_EXCEEDED (0x1 << 7)
+
+struct i2c_client *hdcp_client;
+
+int hdcp_i2c_read(struct hdmi_device *hdev, u8 offset, int bytes, u8 *buf)
+{
+ struct device *dev = hdev->dev;
+ struct i2c_client *i2c = hdcp_client;
+ int ret, cnt = 0;
+
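+	/*
+	 * DDC access is a combined I2C transaction: message 0 writes the
+	 * one-byte register offset, message 1 reads 'bytes' bytes back
+	 * from the same slave address.
+	 */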
+ struct i2c_msg msg[] = {
+ [0] = {
+ .addr = i2c->addr,
+ .flags = 0,
+ .len = 1,
+ .buf = &offset
+ },
+ [1] = {
+ .addr = i2c->addr,
+ .flags = I2C_M_RD,
+ .len = bytes,
+ .buf = buf
+ }
+ };
+
+ do {
+ if (!is_hdmi_streaming(hdev))
+ goto ddc_read_err;
+
+ ret = i2c_transfer(i2c->adapter, msg, 2);
+
+		if (ret != 2)
+ dev_dbg(dev, "%s: can't read data, retry %d\n",
+ __func__, cnt);
+ else
+ break;
+
+ if (hdev->hdcp_info.auth_status == FIRST_AUTHENTICATION_DONE
+ || hdev->hdcp_info.auth_status
+ == SECOND_AUTHENTICATION_DONE)
+ goto ddc_read_err;
+
+ msleep(DDC_DELAY);
+ cnt++;
+ } while (cnt < DDC_RETRY_CNT);
+
+ if (cnt == DDC_RETRY_CNT)
+ goto ddc_read_err;
+
+ dev_dbg(dev, "%s: read data ok\n", __func__);
+
+ return 0;
+
+ddc_read_err:
+ dev_err(dev, "%s: can't read data, timeout\n", __func__);
+ return -ETIME;
+}
+
+int hdcp_i2c_write(struct hdmi_device *hdev, u8 offset, int bytes, u8 *buf)
+{
+ struct device *dev = hdev->dev;
+ struct i2c_client *i2c = hdcp_client;
+ u8 msg[bytes + 1];
+ int ret, cnt = 0;
+
+ msg[0] = offset;
+ memcpy(&msg[1], buf, bytes);
+
+ do {
+ if (!is_hdmi_streaming(hdev))
+ goto ddc_write_err;
+
+ ret = i2c_master_send(i2c, msg, bytes + 1);
+
+		if (ret < bytes + 1)
+ dev_dbg(dev, "%s: can't write data, retry %d\n",
+ __func__, cnt);
+ else
+ break;
+
+ msleep(DDC_DELAY);
+ cnt++;
+ } while (cnt < DDC_RETRY_CNT);
+
+ if (cnt == DDC_RETRY_CNT)
+ goto ddc_write_err;
+
+ dev_dbg(dev, "%s: write data ok\n", __func__);
+ return 0;
+
+ddc_write_err:
+ dev_err(dev, "%s: can't write data, timeout\n", __func__);
+ return -ETIME;
+}
+
+static int __devinit hdcp_probe(struct i2c_client *client,
+ const struct i2c_device_id *dev_id)
+{
+ int ret = 0;
+
+ hdcp_client = client;
+
+	dev_info(&client->adapter->dev, "attached exynos hdcp "
+		"to i2c adapter successfully\n");
+
+ return ret;
+}
+
+static int hdcp_remove(struct i2c_client *client)
+{
+ dev_info(&client->adapter->dev, "detached exynos hdcp "
+ "from i2c adapter successfully\n");
+
+ return 0;
+}
+
+static int hdcp_suspend(struct i2c_client *cl, pm_message_t mesg)
+{
+	return 0;
+}
+
+static int hdcp_resume(struct i2c_client *cl)
+{
+	return 0;
+}
+
+static struct i2c_device_id hdcp_idtable[] = {
+	{"exynos_hdcp", 0},
+	{ },
+};
+MODULE_DEVICE_TABLE(i2c, hdcp_idtable);
+
+static struct i2c_driver hdcp_driver = {
+ .driver = {
+ .name = "exynos_hdcp",
+ .owner = THIS_MODULE,
+ },
+ .id_table = hdcp_idtable,
+ .probe = hdcp_probe,
+ .remove = __devexit_p(hdcp_remove),
+ .suspend = hdcp_suspend,
+ .resume = hdcp_resume,
+};
+
+static int __init hdcp_init(void)
+{
+ return i2c_add_driver(&hdcp_driver);
+}
+
+static void __exit hdcp_exit(void)
+{
+ i2c_del_driver(&hdcp_driver);
+}
+
+module_init(hdcp_init);
+module_exit(hdcp_exit);
+
+/* internal functions of HDCP */
+static void hdcp_encryption(struct hdmi_device *hdev, bool on)
+{
+ if (on)
+ hdmi_write_mask(hdev, HDMI_ENC_EN, ~0, HDMI_HDCP_ENC_ENABLE);
+ else
+ hdmi_write_mask(hdev, HDMI_ENC_EN, 0, HDMI_HDCP_ENC_ENABLE);
+
+ hdmi_reg_mute(hdev, !on);
+}
+
+static int hdcp_write_key(struct hdmi_device *hdev, int size,
+ int reg, int offset)
+{
+ struct device *dev = hdev->dev;
+ u8 buf[MAX_KEY_SIZE];
+ int cnt, zero = 0;
+ int i;
+
+ memset(buf, 0, sizeof(buf));
+ hdmi_read_bytes(hdev, reg, buf, size);
+
+ for (cnt = 0; cnt < size; cnt++)
+ if (buf[cnt] == 0)
+ zero++;
+
+ if (zero == size) {
+ dev_dbg(dev, "%s: %s is null\n", __func__,
+ offset == HDCP_AN ? "An" : "Aksv");
+ goto write_key_err;
+ }
+
+ if (hdcp_i2c_write(hdev, offset, size, buf) < 0)
+ goto write_key_err;
+
+	for (i = 0; i < size; i++)
+		dev_dbg(dev, "%s: %s[%d] : 0x%02x\n", __func__,
+			offset == HDCP_AN ? "An" : "Aksv", i + 1, buf[i]);
+
+ return 0;
+
+write_key_err:
+	dev_dbg(dev, "%s: writing %s failed\n", __func__,
+ offset == HDCP_AN ? "An" : "Aksv");
+ return -1;
+}
+
+static int hdcp_read_bcaps(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+ u8 bcaps = 0;
+
+ if (hdcp_i2c_read(hdev, HDCP_BCAPS, BCAPS_SIZE, &bcaps) < 0)
+ goto bcaps_read_err;
+
+ if (!is_hdmi_streaming(hdev))
+ goto bcaps_read_err;
+
+ hdmi_writeb(hdev, HDMI_HDCP_BCAPS, bcaps);
+
+ if (bcaps & HDMI_HDCP_BCAPS_REPEATER)
+ hdev->hdcp_info.is_repeater = 1;
+ else
+ hdev->hdcp_info.is_repeater = 0;
+
+ dev_dbg(dev, "%s: device is %s\n", __func__,
+ hdev->hdcp_info.is_repeater ? "REPEAT" : "SINK");
+ dev_dbg(dev, "%s: [i2c] bcaps : 0x%02x\n", __func__, bcaps);
+
+ return 0;
+
+bcaps_read_err:
+ dev_err(dev, "can't read bcaps : timeout\n");
+ return -ETIME;
+}
+
+static int hdcp_read_bksv(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+ u8 bksv[BKSV_SIZE];
+ int i, j;
+ u32 one = 0, zero = 0, result = 0;
+ u32 cnt = 0;
+
+ memset(bksv, 0, sizeof(bksv));
+
+ do {
+ if (hdcp_i2c_read(hdev, HDCP_BKSV, BKSV_SIZE, bksv) < 0)
+ goto bksv_read_err;
+
+ for (i = 0; i < BKSV_SIZE; i++)
+ dev_dbg(dev, "%s: i2c read : bksv[%d]: 0x%x\n",
+ __func__, i, bksv[i]);
+
+		/* recount the bit totals on every retry */
+		one = 0;
+		zero = 0;
+
+		for (i = 0; i < BKSV_SIZE; i++) {
+
+ for (j = 0; j < 8; j++) {
+ result = bksv[i] & (0x1 << j);
+
+ if (result == 0)
+ zero++;
+ else
+ one++;
+ }
+
+ }
+
+ if (!is_hdmi_streaming(hdev))
+ goto bksv_read_err;
+
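+		/* a valid BKSV contains exactly 20 ones and 20 zeros */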
+ if ((zero == 20) && (one == 20)) {
+ hdmi_write_bytes(hdev, HDMI_HDCP_BKSV_(0),
+ bksv, BKSV_SIZE);
+ break;
+ }
+ dev_dbg(dev, "%s: invalid bksv, retry : %d\n", __func__, cnt);
+
+ msleep(BKSV_DELAY);
+ cnt++;
+ } while (cnt < BKSV_RETRY_CNT);
+
+ if (cnt == BKSV_RETRY_CNT)
+ goto bksv_read_err;
+
+ dev_dbg(dev, "%s: bksv read OK, retry : %d\n", __func__, cnt);
+ return 0;
+
+bksv_read_err:
+ dev_err(dev, "%s: can't read bksv : timeout\n", __func__);
+ return -ETIME;
+}
+
+static int hdcp_read_ri(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+ u8 ri[2] = {0, 0};
+ u8 rj[2] = {0, 0};
+
+
+ ri[0] = hdmi_readb(hdev, HDMI_HDCP_RI_0);
+ ri[1] = hdmi_readb(hdev, HDMI_HDCP_RI_1);
+
+ if (hdcp_i2c_read(hdev, HDCP_RI, 2, rj) < 0)
+ goto compare_err;
+
+ dev_dbg(dev, "%s: Rx -> rj[0]: 0x%02x, rj[1]: 0x%02x\n", __func__,
+ rj[0], rj[1]);
+ dev_dbg(dev, "%s: Tx -> ri[0]: 0x%02x, ri[1]: 0x%02x\n", __func__,
+ ri[0], ri[1]);
+
+ if ((ri[0] == rj[0]) && (ri[1] == rj[1]) && (ri[0] | ri[1]))
+ hdmi_writeb(hdev, HDMI_HDCP_CHECK_RESULT,
+ HDMI_HDCP_RI_MATCH_RESULT_Y);
+ else {
+ hdmi_writeb(hdev, HDMI_HDCP_CHECK_RESULT,
+ HDMI_HDCP_RI_MATCH_RESULT_N);
+ goto compare_err;
+ }
+
+ memset(ri, 0, sizeof(ri));
+ memset(rj, 0, sizeof(rj));
+
+	dev_dbg(dev, "%s: Ri and Ri' match\n", __func__);
+
+ return 0;
+
+compare_err:
+ hdev->hdcp_info.event = HDCP_EVENT_STOP;
+ hdev->hdcp_info.auth_status = NOT_AUTHENTICATED;
+	dev_err(dev, "%s: Ri and Ri' do not match\n", __func__);
+ msleep(10);
+ return -1;
+}
+
+static void hdcp_sw_reset(struct hdmi_device *hdev)
+{
+ u8 val;
+
+ val = hdmi_get_int_mask(hdev);
+
+ hdmi_set_int_mask(hdev, HDMI_INTC_EN_HPD_PLUG, 0);
+ hdmi_set_int_mask(hdev, HDMI_INTC_EN_HPD_UNPLUG, 0);
+
+ hdmi_sw_hpd_enable(hdev, 1);
+ hdmi_sw_hpd_plug(hdev, 0);
+ hdmi_sw_hpd_plug(hdev, 1);
+ hdmi_sw_hpd_enable(hdev, 0);
+
+ if (val & HDMI_INTC_EN_HPD_PLUG)
+ hdmi_set_int_mask(hdev, HDMI_INTC_EN_HPD_PLUG, 1);
+ if (val & HDMI_INTC_EN_HPD_UNPLUG)
+ hdmi_set_int_mask(hdev, HDMI_INTC_EN_HPD_UNPLUG, 1);
+}
+
+static int hdcp_reset_auth(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+ u8 val;
+ unsigned long spin_flags;
+
+ if (!is_hdmi_streaming(hdev))
+ return -ENODEV;
+
+ spin_lock_irqsave(&hdev->hdcp_info.reset_lock, spin_flags);
+
+ hdev->hdcp_info.event = HDCP_EVENT_STOP;
+ hdev->hdcp_info.auth_status = NOT_AUTHENTICATED;
+
+ hdmi_write(hdev, HDMI_HDCP_CTRL1, 0x0);
+ hdmi_write(hdev, HDMI_HDCP_CTRL2, 0x0);
+ hdmi_reg_mute(hdev, 1);
+
+ hdcp_encryption(hdev, 0);
+
+ dev_dbg(dev, "%s: reset authentication\n", __func__);
+
+ val = HDMI_UPDATE_RI_INT_EN | HDMI_WRITE_INT_EN |
+ HDMI_WATCHDOG_INT_EN | HDMI_WTFORACTIVERX_INT_EN;
+ hdmi_write_mask(hdev, HDMI_STATUS_EN, 0, val);
+
+ hdmi_writeb(hdev, HDMI_HDCP_CHECK_RESULT, HDMI_HDCP_CLR_ALL_RESULTS);
+
+ /* need some delay (at least 1 frame) */
+ mdelay(16);
+
+ hdcp_sw_reset(hdev);
+
+ val = HDMI_UPDATE_RI_INT_EN | HDMI_WRITE_INT_EN |
+ HDMI_WATCHDOG_INT_EN | HDMI_WTFORACTIVERX_INT_EN;
+ hdmi_write_mask(hdev, HDMI_STATUS_EN, ~0, val);
+ hdmi_write_mask(hdev, HDMI_HDCP_CTRL1, ~0, HDMI_HDCP_CP_DESIRED_EN);
+ spin_unlock_irqrestore(&hdev->hdcp_info.reset_lock, spin_flags);
+
+ return 0;
+}
+
+static int hdcp_loadkey(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+ u8 val;
+ int cnt = 0;
+
+ hdmi_write_mask(hdev, HDMI_EFUSE_CTRL, ~0,
+ HDMI_EFUSE_CTRL_HDCP_KEY_READ);
+
+ do {
+ val = hdmi_readb(hdev, HDMI_EFUSE_STATUS);
+ if (val & HDMI_EFUSE_ECC_DONE)
+ break;
+ cnt++;
+ mdelay(1);
+ } while (cnt < KEY_LOAD_RETRY_CNT);
+
+ if (cnt == KEY_LOAD_RETRY_CNT)
+ goto key_load_err;
+
+ val = hdmi_readb(hdev, HDMI_EFUSE_STATUS);
+
+ if (val & HDMI_EFUSE_ECC_FAIL)
+ goto key_load_err;
+
+ dev_dbg(dev, "%s: load key is ok\n", __func__);
+ return 0;
+
+key_load_err:
+ dev_err(dev, "%s: can't load key\n", __func__);
+ return -1;
+}
+
+static int hdmi_start_encryption(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+ u8 val;
+ u32 cnt = 0;
+
+ do {
+ val = hdmi_readb(hdev, HDMI_STATUS);
+
+ if (val & HDMI_AUTHEN_ACK_AUTH) {
+ hdcp_encryption(hdev, 1);
+ break;
+ }
+
+ mdelay(1);
+
+ cnt++;
+ } while (cnt < ENCRYPT_CHECK_CNT);
+
+ if (cnt == ENCRYPT_CHECK_CNT)
+ goto encrypt_err;
+
+	dev_dbg(dev, "%s: encryption started\n", __func__);
+ return 0;
+
+encrypt_err:
+ hdcp_encryption(hdev, 0);
+	dev_err(dev, "%s: encryption failed\n", __func__);
+ return -1;
+}
+
+static int hdmi_check_repeater(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+ int val, i;
+ int cnt = 0, cnt2 = 0;
+
+ u8 bcaps = 0;
+ u8 status[BSTATUS_SIZE];
+ u8 rx_v[SHA_1_HASH_SIZE];
+ u8 ksv_list[HDCP_MAX_DEVS * HDCP_KSV_SIZE];
+
+ u32 dev_cnt;
+
+ memset(status, 0, sizeof(status));
+ memset(rx_v, 0, sizeof(rx_v));
+ memset(ksv_list, 0, sizeof(ksv_list));
+
+ do {
+ if (hdcp_read_bcaps(hdev) < 0)
+ goto check_repeater_err;
+
+ bcaps = hdmi_readb(hdev, HDMI_HDCP_BCAPS);
+
+		if (bcaps & KSV_FIFO_READY) {
+			dev_dbg(dev, "%s: repeater : ksv fifo ready\n",
+				__func__);
+ dev_dbg(dev, "%s: retries = %d\n", __func__, cnt);
+ break;
+ }
+
+ msleep(KSV_FIFO_CHK_DELAY);
+
+ cnt++;
+ } while (cnt < KSV_FIFO_RETRY_CNT);
+
+ if (cnt == KSV_FIFO_RETRY_CNT)
+ return REPEATER_TIMEOUT_ERROR;
+
+ dev_dbg(dev, "%s: repeater : ksv fifo ready\n", __func__);
+
+ if (hdcp_i2c_read(hdev, HDCP_BSTATUS, BSTATUS_SIZE, status) < 0)
+ goto check_repeater_err;
+
+ if (status[1] & MAX_CASCADE_EXCEEDED)
+ return MAX_CASCADE_EXCEEDED_ERROR;
+ else if (status[0] & MAX_DEVS_EXCEEDED)
+ return MAX_DEVS_EXCEEDED_ERROR;
+
+ hdmi_writeb(hdev, HDMI_HDCP_BSTATUS_0, status[0]);
+ hdmi_writeb(hdev, HDMI_HDCP_BSTATUS_1, status[1]);
+
+ dev_dbg(dev, "%s: status[0] :0x%02x\n", __func__, status[0]);
+ dev_dbg(dev, "%s: status[1] :0x%02x\n", __func__, status[1]);
+
+ dev_cnt = status[0] & 0x7f;
+
+ dev_dbg(dev, "%s: repeater : dev cnt = %d\n", __func__, dev_cnt);
+
+ if (dev_cnt) {
+
+ if (hdcp_i2c_read(hdev, HDCP_KSVFIFO, dev_cnt * HDCP_KSV_SIZE,
+ ksv_list) < 0)
+ goto check_repeater_err;
+
+ cnt = 0;
+
+ do {
+ hdmi_write_bytes(hdev, HDMI_HDCP_KSV_LIST_(0),
+ &ksv_list[cnt * 5], HDCP_KSV_SIZE);
+
+ val = HDMI_HDCP_KSV_WRITE_DONE;
+
+ if (cnt == dev_cnt - 1)
+ val |= HDMI_HDCP_KSV_END;
+
+ hdmi_write(hdev, HDMI_HDCP_KSV_LIST_CON, val);
+
+ if (cnt < dev_cnt - 1) {
+ cnt2 = 0;
+ do {
+ val = hdmi_readb(hdev,
+ HDMI_HDCP_KSV_LIST_CON);
+ if (val & HDMI_HDCP_KSV_READ)
+ break;
+ cnt2++;
+ } while (cnt2 < KSV_LIST_RETRY_CNT);
+
+ if (cnt2 == KSV_LIST_RETRY_CNT)
+					dev_dbg(dev, "%s: ksv list not read\n",
+						__func__);
+ }
+ cnt++;
+ } while (cnt < dev_cnt);
+ } else
+ hdmi_writeb(hdev, HDMI_HDCP_KSV_LIST_CON,
+ HDMI_HDCP_KSV_LIST_EMPTY);
+
+ if (hdcp_i2c_read(hdev, HDCP_SHA1, SHA_1_HASH_SIZE, rx_v) < 0)
+ goto check_repeater_err;
+
+ for (i = 0; i < SHA_1_HASH_SIZE; i++)
+ dev_dbg(dev, "%s: [i2c] SHA-1 rx :: %02x\n", __func__, rx_v[i]);
+
+ hdmi_write_bytes(hdev, HDMI_HDCP_SHA1_(0), rx_v, SHA_1_HASH_SIZE);
+
+ val = hdmi_readb(hdev, HDMI_HDCP_SHA_RESULT);
+ if (val & HDMI_HDCP_SHA_VALID_RD) {
+ if (val & HDMI_HDCP_SHA_VALID) {
+ dev_dbg(dev, "%s: SHA-1 result is ok\n", __func__);
+ hdmi_writeb(hdev, HDMI_HDCP_SHA_RESULT, 0x0);
+ } else {
+			dev_dbg(dev, "%s: SHA-1 result is not valid\n",
+ __func__);
+ hdmi_writeb(hdev, HDMI_HDCP_SHA_RESULT, 0x0);
+ goto check_repeater_err;
+ }
+ } else {
+ dev_dbg(dev, "%s: SHA-1 result is not ready\n", __func__);
+ hdmi_writeb(hdev, HDMI_HDCP_SHA_RESULT, 0x0);
+ goto check_repeater_err;
+ }
+
+	dev_dbg(dev, "%s: repeater check succeeded\n", __func__);
+ return 0;
+
+check_repeater_err:
+	dev_err(dev, "%s: repeater check failed\n", __func__);
+ return -1;
+}
+
+static int hdcp_bksv(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+
+ dev_dbg(dev, "%s\n", __func__);
+
+ hdev->hdcp_info.auth_status = RECEIVER_READ_READY;
+
+ if (hdcp_read_bcaps(hdev) < 0)
+ goto bksv_start_err;
+
+ hdev->hdcp_info.auth_status = BCAPS_READ_DONE;
+
+ if (hdcp_read_bksv(hdev) < 0)
+ goto bksv_start_err;
+
+ hdev->hdcp_info.auth_status = BKSV_READ_DONE;
+
+ dev_dbg(dev, "%s: bksv start is ok\n", __func__);
+
+ return 0;
+
+bksv_start_err:
+ dev_err(dev, "%s: failed to start bksv\n", __func__);
+ msleep(100);
+ return -1;
+}
+
+static int hdcp_second_auth(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+ int ret = 0;
+
+ dev_dbg(dev, "%s\n", __func__);
+
+ if (!hdev->hdcp_info.hdcp_start)
+ goto second_auth_err;
+
+ if (!is_hdmi_streaming(hdev))
+ goto second_auth_err;
+
+ ret = hdmi_check_repeater(hdev);
+
+ if (!ret) {
+ hdev->hdcp_info.auth_status = SECOND_AUTHENTICATION_DONE;
+ hdmi_start_encryption(hdev);
+ } else {
+ switch (ret) {
+
+ case REPEATER_ILLEGAL_DEVICE_ERROR:
+ hdmi_writeb(hdev, HDMI_HDCP_CTRL2, 0x1);
+ mdelay(1);
+ hdmi_writeb(hdev, HDMI_HDCP_CTRL2, 0x0);
+
+ dev_dbg(dev, "%s: repeater : illegal device\n",
+ __func__);
+ break;
+ case REPEATER_TIMEOUT_ERROR:
+ hdmi_write_mask(hdev, HDMI_HDCP_CTRL1, ~0,
+ HDMI_HDCP_SET_REPEATER_TIMEOUT);
+ hdmi_write_mask(hdev, HDMI_HDCP_CTRL1, 0,
+ HDMI_HDCP_SET_REPEATER_TIMEOUT);
+
+ dev_dbg(dev, "%s: repeater : timeout\n", __func__);
+ break;
+ case MAX_CASCADE_EXCEEDED_ERROR:
+
+ dev_dbg(dev, "%s: repeater : exceeded MAX_CASCADE\n",
+ __func__);
+ break;
+ case MAX_DEVS_EXCEEDED_ERROR:
+
+ dev_dbg(dev, "%s: repeater : exceeded MAX_DEVS\n",
+ __func__);
+ break;
+ default:
+ break;
+ }
+
+ hdev->hdcp_info.auth_status = NOT_AUTHENTICATED;
+
+ goto second_auth_err;
+ }
+
+ dev_dbg(dev, "%s: second authentication is OK\n", __func__);
+ return 0;
+
+second_auth_err:
+	dev_dbg(dev, "%s: second authentication failed\n", __func__);
+ return -1;
+}
+
+static int hdcp_write_aksv(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+ dev_dbg(dev, "%s\n", __func__);
+
+ if (hdev->hdcp_info.auth_status != BKSV_READ_DONE) {
+ dev_err(dev, "%s: bksv is not ready\n", __func__);
+ goto aksv_write_err;
+ }
+ if (!is_hdmi_streaming(hdev))
+ goto aksv_write_err;
+
+ if (hdcp_write_key(hdev, AN_SIZE, HDMI_HDCP_AN_(0), HDCP_AN) < 0)
+ goto aksv_write_err;
+
+ hdev->hdcp_info.auth_status = AN_WRITE_DONE;
+
+ dev_dbg(dev, "%s: write An is done\n", __func__);
+
+ if (hdcp_write_key(hdev, AKSV_SIZE, HDMI_HDCP_AKSV_(0), HDCP_AKSV) < 0)
+ goto aksv_write_err;
+
+ msleep(100);
+
+ hdev->hdcp_info.auth_status = AKSV_WRITE_DONE;
+
+ dev_dbg(dev, "%s: write aksv is done\n", __func__);
+ dev_dbg(dev, "%s: aksv start is OK\n", __func__);
+ return 0;
+
+aksv_write_err:
+	dev_err(dev, "%s: aksv start failed\n", __func__);
+ return -1;
+}
+
+static int hdcp_check_ri(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+
+ dev_dbg(dev, "%s\n", __func__);
+
+ if (hdev->hdcp_info.auth_status < AKSV_WRITE_DONE) {
+ dev_dbg(dev, "%s: ri check is not ready\n", __func__);
+ goto check_ri_err;
+ }
+
+ if (!is_hdmi_streaming(hdev))
+ goto check_ri_err;
+
+ if (hdcp_read_ri(hdev) < 0)
+ goto check_ri_err;
+
+ if (hdev->hdcp_info.is_repeater)
+ hdev->hdcp_info.auth_status
+ = SECOND_AUTHENTICATION_RDY;
+ else {
+ hdev->hdcp_info.auth_status
+ = FIRST_AUTHENTICATION_DONE;
+ hdmi_start_encryption(hdev);
+ }
+
+ dev_dbg(dev, "%s: ri check is OK\n", __func__);
+ return 0;
+
+check_ri_err:
+	dev_err(dev, "%s: ri check failed\n", __func__);
+ return -1;
+}
+
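+/*
+ * Worker that consumes the event bits hdcp_irq_handler() ORs into
+ * hdcp_info.event: each pending authentication step is run once and
+ * its bit cleared on success; any failure falls through to
+ * hdcp_reset_auth(), which restarts the whole authentication.
+ */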
+static void hdcp_work(struct work_struct *work)
+{
+ struct hdmi_device *hdev = container_of(work, struct hdmi_device, work);
+
+ if (!hdev->hdcp_info.hdcp_start)
+ return;
+
+ if (!is_hdmi_streaming(hdev))
+ return;
+
+ if (hdev->hdcp_info.event & HDCP_EVENT_READ_BKSV_START) {
+ if (hdcp_bksv(hdev) < 0)
+ goto work_err;
+ else
+ hdev->hdcp_info.event &= ~HDCP_EVENT_READ_BKSV_START;
+ }
+
+ if (hdev->hdcp_info.event & HDCP_EVENT_SECOND_AUTH_START) {
+ if (hdcp_second_auth(hdev) < 0)
+ goto work_err;
+ else
+ hdev->hdcp_info.event &= ~HDCP_EVENT_SECOND_AUTH_START;
+ }
+
+ if (hdev->hdcp_info.event & HDCP_EVENT_WRITE_AKSV_START) {
+ if (hdcp_write_aksv(hdev) < 0)
+ goto work_err;
+ else
+ hdev->hdcp_info.event &= ~HDCP_EVENT_WRITE_AKSV_START;
+ }
+
+ if (hdev->hdcp_info.event & HDCP_EVENT_CHECK_RI_START) {
+ if (hdcp_check_ri(hdev) < 0)
+ goto work_err;
+ else
+ hdev->hdcp_info.event &= ~HDCP_EVENT_CHECK_RI_START;
+ }
+ return;
+work_err:
+ if (!hdev->hdcp_info.hdcp_start)
+ return;
+ if (!is_hdmi_streaming(hdev))
+ return;
+
+ hdcp_reset_auth(hdev);
+}
+
+/* HDCP APIs for hdmi driver */
+irqreturn_t hdcp_irq_handler(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+	u32 event = 0;
+	u8 flag;
+
+ if (!hdev->streaming) {
+ hdev->hdcp_info.event = HDCP_EVENT_STOP;
+ hdev->hdcp_info.auth_status = NOT_AUTHENTICATED;
+ return IRQ_HANDLED;
+ }
+
+ flag = hdmi_readb(hdev, HDMI_STATUS);
+
+ if (flag & HDMI_WTFORACTIVERX_INT_OCC) {
+ event |= HDCP_EVENT_READ_BKSV_START;
+ hdmi_write_mask(hdev, HDMI_STATUS, ~0,
+ HDMI_WTFORACTIVERX_INT_OCC);
+ hdmi_write(hdev, HDMI_HDCP_I2C_INT, 0x0);
+ }
+
+ if (flag & HDMI_WRITE_INT_OCC) {
+ event |= HDCP_EVENT_WRITE_AKSV_START;
+ hdmi_write_mask(hdev, HDMI_STATUS, ~0, HDMI_WRITE_INT_OCC);
+ hdmi_write(hdev, HDMI_HDCP_AN_INT, 0x0);
+ }
+
+ if (flag & HDMI_UPDATE_RI_INT_OCC) {
+ event |= HDCP_EVENT_CHECK_RI_START;
+ hdmi_write_mask(hdev, HDMI_STATUS, ~0, HDMI_UPDATE_RI_INT_OCC);
+ hdmi_write(hdev, HDMI_HDCP_RI_INT, 0x0);
+ }
+
+ if (flag & HDMI_WATCHDOG_INT_OCC) {
+ event |= HDCP_EVENT_SECOND_AUTH_START;
+ hdmi_write_mask(hdev, HDMI_STATUS, ~0, HDMI_WATCHDOG_INT_OCC);
+ hdmi_write(hdev, HDMI_HDCP_WDT_INT, 0x0);
+ }
+
+ if (!event) {
+ dev_dbg(dev, "%s: unknown irq\n", __func__);
+ return IRQ_HANDLED;
+ }
+
+ if (is_hdmi_streaming(hdev)) {
+ hdev->hdcp_info.event |= event;
+ queue_work(hdev->hdcp_wq, &hdev->work);
+ } else {
+ hdev->hdcp_info.event = HDCP_EVENT_STOP;
+ hdev->hdcp_info.auth_status = NOT_AUTHENTICATED;
+ }
+
+ return IRQ_HANDLED;
+}
+
+int hdcp_prepare(struct hdmi_device *hdev)
+{
+ hdev->hdcp_wq = create_workqueue("khdcpd");
+ if (hdev->hdcp_wq == NULL)
+ return -ENOMEM;
+
+ INIT_WORK(&hdev->work, hdcp_work);
+
+ spin_lock_init(&hdev->hdcp_info.reset_lock);
+
+#if defined(CONFIG_VIDEO_EXYNOS_HDCP)
+ hdev->hdcp_info.hdcp_enable = 1;
+#else
+ hdev->hdcp_info.hdcp_enable = 0;
+#endif
+ return 0;
+}
+
+int hdcp_start(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+
+ hdev->hdcp_info.event = HDCP_EVENT_STOP;
+ hdev->hdcp_info.auth_status = NOT_AUTHENTICATED;
+
+ dev_dbg(dev, "%s\n", __func__);
+
+ hdcp_sw_reset(hdev);
+
+ dev_dbg(dev, "%s: stop encryption\n", __func__);
+
+ hdcp_encryption(hdev, 0);
+
+ msleep(120);
+ if (hdcp_loadkey(hdev) < 0)
+ return -1;
+
+ hdmi_write(hdev, HDMI_GCP_CON, HDMI_GCP_CON_NO_TRAN);
+ hdmi_write(hdev, HDMI_STATUS_EN, HDMI_INT_EN_ALL);
+
+ hdmi_write(hdev, HDMI_HDCP_CTRL1, HDMI_HDCP_CP_DESIRED_EN);
+
+ hdmi_set_int_mask(hdev, HDMI_INTC_EN_HDCP, 1);
+
+ hdev->hdcp_info.hdcp_start = 1;
+
+ return 0;
+}
+
+int hdcp_stop(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+ u8 val;
+
+ dev_dbg(dev, "%s\n", __func__);
+
+ hdmi_set_int_mask(hdev, HDMI_INTC_EN_HDCP, 0);
+
+ hdev->hdcp_info.event = HDCP_EVENT_STOP;
+ hdev->hdcp_info.auth_status = NOT_AUTHENTICATED;
+ hdev->hdcp_info.hdcp_start = false;
+
+ hdmi_writeb(hdev, HDMI_HDCP_CTRL1, 0x0);
+
+ hdmi_sw_hpd_enable(hdev, 0);
+
+ val = HDMI_UPDATE_RI_INT_EN | HDMI_WRITE_INT_EN |
+ HDMI_WATCHDOG_INT_EN | HDMI_WTFORACTIVERX_INT_EN;
+ hdmi_write_mask(hdev, HDMI_STATUS_EN, 0, val);
+ hdmi_write_mask(hdev, HDMI_STATUS_EN, ~0, val);
+
+ hdmi_write_mask(hdev, HDMI_STATUS, ~0, HDMI_INT_EN_ALL);
+
+ dev_dbg(dev, "%s: stop encryption\n", __func__);
+ hdcp_encryption(hdev, 0);
+
+ hdmi_writeb(hdev, HDMI_HDCP_CHECK_RESULT, HDMI_HDCP_CLR_ALL_RESULTS);
+
+ return 0;
+}
--- /dev/null
+/*
+ * Samsung HDMI interface driver
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ *
+ * Jiun Yu, <jiun.yu@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License,
+ * or (at your option) any later version.
+ */
+
+#ifndef SAMSUNG_HDMI_H
+#define SAMSUNG_HDMI_H
+
+#ifdef CONFIG_VIDEO_EXYNOS_HDMI_DEBUG
+#define DEBUG
+#endif
+
+#include <linux/io.h>
+#include <linux/clk.h>
+#include <linux/interrupt.h>
+#include <linux/regulator/consumer.h>
+
+#include <media/v4l2-subdev.h>
+#include <media/v4l2-device.h>
+
+#define INFOFRAME_CNT 2
+
+#define HDMI_VSI_VERSION 0x01
+#define HDMI_AVI_VERSION 0x02
+#define HDMI_VSI_LENGTH 0x05
+#define HDMI_AVI_LENGTH 0x0d
+
+/* HDMI audio configuration value */
+#define DEFAULT_SAMPLE_RATE 44100
+#define DEFAULT_BITS_PER_SAMPLE 16
+
+/* HDMI pad definitions */
+#define HDMI_PAD_SINK 0
+#define HDMI_PADS_NUM 1
+
+/* HPD state definitions */
+#define HPD_LOW 0
+#define HPD_HIGH 1
+
+enum HDMI_VIDEO_FORMAT {
+ HDMI_VIDEO_FORMAT_2D = 0x0,
+ /** refer to Table 8-12 HDMI_Video_Format in HDMI specification v1.4a */
+ HDMI_VIDEO_FORMAT_3D = 0x2
+};
+
+enum HDMI_3D_FORMAT {
+ /** refer to Table 8-13 3D_Structure in HDMI specification v1.4a */
+
+ /** Frame Packing */
+ HDMI_3D_FORMAT_FP = 0x0,
+ /** Top-and-Bottom */
+ HDMI_3D_FORMAT_TB = 0x6,
+ /** Side-by-Side Half */
+ HDMI_3D_FORMAT_SB_HALF = 0x8
+};
+
+enum HDMI_3D_EXT_DATA {
+ /* refer to Table H-3 3D_Ext_Data - Additional video format
+ * information for Side-by-side(half) 3D structure */
+
+	/** Horizontal sub-sampling */
+ HDMI_H_SUB_SAMPLE = 0x1
+};
+
+enum HDMI_OUTPUT_FMT {
+ HDMI_OUTPUT_RGB888 = 0x0,
+ HDMI_OUTPUT_YUV444 = 0x2
+};
+
+enum HDMI_PACKET_TYPE {
+ /** refer to Table 5-8 Packet Type in HDMI specification v1.4a */
+
+ /** InfoFrame packet type */
+	HDMI_PACKET_TYPE_INFOFRAME = 0x80,
+ /** Vendor-Specific InfoFrame */
+ HDMI_PACKET_TYPE_VSI = HDMI_PACKET_TYPE_INFOFRAME + 1,
+ /** Auxiliary Video information InfoFrame */
+ HDMI_PACKET_TYPE_AVI = HDMI_PACKET_TYPE_INFOFRAME + 2
+};
+
+enum HDMI_AUDIO_CODEC {
+ HDMI_AUDIO_PCM,
+ HDMI_AUDIO_AC3,
+ HDMI_AUDIO_MP3
+};
+
+enum HDCP_EVENT {
+ HDCP_EVENT_STOP = 1 << 0,
+ HDCP_EVENT_START = 1 << 1,
+ HDCP_EVENT_READ_BKSV_START = 1 << 2,
+ HDCP_EVENT_WRITE_AKSV_START = 1 << 4,
+ HDCP_EVENT_CHECK_RI_START = 1 << 8,
+ HDCP_EVENT_SECOND_AUTH_START = 1 << 16
+};
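+/*
+ * The HDCP_EVENT values above are bitmask flags: hdcp_irq_handler()
+ * ORs them into hdcp_info.event and hdcp_work() tests and clears them
+ * individually, so several authentication steps may be pending at once.
+ */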
+
+enum HDCP_STATE {
+ NOT_AUTHENTICATED,
+ RECEIVER_READ_READY,
+ BCAPS_READ_DONE,
+ BKSV_READ_DONE,
+ AN_WRITE_DONE,
+ AKSV_WRITE_DONE,
+ FIRST_AUTHENTICATION_DONE,
+ SECOND_AUTHENTICATION_RDY,
+ SECOND_AUTHENTICATION_DONE,
+};
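+/*
+ * The HDCP_STATE values above are ordered by authentication progress,
+ * which is why hdcp_check_ri() may compare auth_status against
+ * AKSV_WRITE_DONE with a plain '<'.
+ */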
+
+#define DEFAULT_AUDIO_CODEC HDMI_AUDIO_PCM
+
+struct hdmi_resources {
+ struct clk *hdmi;
+ struct clk *sclk_hdmi;
+ struct clk *sclk_pixel;
+ struct clk *sclk_hdmiphy;
+ struct clk *hdmiphy;
+ struct regulator_bulk_data *regul_bulk;
+ int regul_count;
+};
+
+struct hdmi_tg_regs {
+ u8 cmd;
+ u8 h_fsz_l;
+ u8 h_fsz_h;
+ u8 hact_st_l;
+ u8 hact_st_h;
+ u8 hact_sz_l;
+ u8 hact_sz_h;
+ u8 v_fsz_l;
+ u8 v_fsz_h;
+ u8 vsync_l;
+ u8 vsync_h;
+ u8 vsync2_l;
+ u8 vsync2_h;
+ u8 vact_st_l;
+ u8 vact_st_h;
+ u8 vact_sz_l;
+ u8 vact_sz_h;
+ u8 field_chg_l;
+ u8 field_chg_h;
+ u8 vact_st2_l;
+ u8 vact_st2_h;
+#ifndef CONFIG_CPU_EXYNOS4210
+ u8 vact_st3_l;
+ u8 vact_st3_h;
+ u8 vact_st4_l;
+ u8 vact_st4_h;
+#endif
+ u8 vsync_top_hdmi_l;
+ u8 vsync_top_hdmi_h;
+ u8 vsync_bot_hdmi_l;
+ u8 vsync_bot_hdmi_h;
+ u8 field_top_hdmi_l;
+ u8 field_top_hdmi_h;
+ u8 field_bot_hdmi_l;
+ u8 field_bot_hdmi_h;
+#ifndef CONFIG_CPU_EXYNOS4210
+ u8 tg_3d;
+#endif
+};
+
+struct hdmi_core_regs {
+#ifndef CONFIG_CPU_EXYNOS4210
+ u8 h_blank[2];
+ u8 v2_blank[2];
+ u8 v1_blank[2];
+ u8 v_line[2];
+ u8 h_line[2];
+ u8 hsync_pol[1];
+ u8 vsync_pol[1];
+ u8 int_pro_mode[1];
+ u8 v_blank_f0[2];
+ u8 v_blank_f1[2];
+ u8 h_sync_start[2];
+ u8 h_sync_end[2];
+ u8 v_sync_line_bef_2[2];
+ u8 v_sync_line_bef_1[2];
+ u8 v_sync_line_aft_2[2];
+ u8 v_sync_line_aft_1[2];
+ u8 v_sync_line_aft_pxl_2[2];
+ u8 v_sync_line_aft_pxl_1[2];
+ u8 v_blank_f2[2]; /* for 3D mode */
+ u8 v_blank_f3[2]; /* for 3D mode */
+ u8 v_blank_f4[2]; /* for 3D mode */
+ u8 v_blank_f5[2]; /* for 3D mode */
+ u8 v_sync_line_aft_3[2];
+ u8 v_sync_line_aft_4[2];
+ u8 v_sync_line_aft_5[2];
+ u8 v_sync_line_aft_6[2];
+ u8 v_sync_line_aft_pxl_3[2];
+ u8 v_sync_line_aft_pxl_4[2];
+ u8 v_sync_line_aft_pxl_5[2];
+ u8 v_sync_line_aft_pxl_6[2];
+ u8 vact_space_1[2];
+ u8 vact_space_2[2];
+ u8 vact_space_3[2];
+ u8 vact_space_4[2];
+ u8 vact_space_5[2];
+ u8 vact_space_6[2];
+#else
+ u8 h_blank[2];
+ u8 v_blank[3];
+ u8 h_v_line[3];
+ u8 vsync_pol[1];
+ u8 int_pro_mode[1];
+ u8 v_blank_f[3];
+ u8 h_sync_gen[3];
+ u8 v_sync_gen1[3];
+ u8 v_sync_gen2[3];
+ u8 v_sync_gen3[3];
+#endif
+};
+
+struct hdmi_3d_info {
+ enum HDMI_VIDEO_FORMAT is_3d;
+ enum HDMI_3D_FORMAT fmt_3d;
+};
+
+struct hdmi_preset_conf {
+ struct hdmi_core_regs core;
+ struct hdmi_tg_regs tg;
+ struct v4l2_mbus_framefmt mbus_fmt;
+};
+
+struct hdmi_driver_data {
+ int hdmiphy_bus;
+};
+
+struct hdmi_infoframe {
+ enum HDMI_PACKET_TYPE type;
+ u8 ver;
+ u8 len;
+};
+
+struct hdcp_info {
+ u8 is_repeater;
+ u32 hdcp_start;
+ int hdcp_enable;
+ spinlock_t reset_lock;
+
+ enum HDCP_EVENT event;
+ enum HDCP_STATE auth_status;
+};
+
+struct hdmi_device {
+ /** base address of HDMI registers */
+ void __iomem *regs;
+
+ /** HDMI interrupt */
+ unsigned int int_irq;
+ unsigned int ext_irq;
+ unsigned int curr_irq;
+
+ /** pointer to device parent */
+ struct device *dev;
+ /** subdev generated by HDMI device */
+ struct v4l2_subdev sd;
+ /** sink pad connected to mixer */
+ struct media_pad pad;
+ /** V4L2 device structure */
+ struct v4l2_device v4l2_dev;
+ /** subdev of HDMIPHY interface */
+ struct v4l2_subdev *phy_sd;
+ /** configuration of current graphic mode */
+ const struct hdmi_preset_conf *cur_conf;
+ /** current preset */
+ u32 cur_preset;
+ /** other resources */
+ struct hdmi_resources res;
+ /** HDMI is streaming or not */
+ int streaming;
+ /** supported HDMI InfoFrame */
+ struct hdmi_infoframe infoframe[INFOFRAME_CNT];
+ /** audio on/off control flag */
+ int audio_enable;
+ /** audio sample rate */
+ int sample_rate;
+ /** audio bits per sample */
+ int bits_per_sample;
+ /** current audio codec type */
+ enum HDMI_AUDIO_CODEC audio_codec;
+ /** HDMI output format */
+ enum HDMI_OUTPUT_FMT output_fmt;
+
+ /** HDCP information */
+ struct hdcp_info hdcp_info;
+ struct work_struct work;
+ struct work_struct hpd_work;
+ struct workqueue_struct *hdcp_wq;
+ struct workqueue_struct *hpd_wq;
+
+ /* HPD releated */
+ bool hpd_user_checked;
+ atomic_t hpd_state;
+ spinlock_t hpd_lock;
+};
+
+struct hdmi_conf {
+ u32 preset;
+ const struct hdmi_preset_conf *conf;
+ const struct hdmi_3d_info *info;
+};
+extern const struct hdmi_conf hdmi_conf[];
+
+struct hdmiphy_conf {
+ u32 preset;
+ const u8 *data;
+};
+extern const struct hdmiphy_conf hdmiphy_conf[];
+extern const int hdmi_pre_cnt;
+extern const int hdmiphy_conf_cnt;
+
+const struct hdmi_3d_info *hdmi_preset2info(u32 preset);
+
+irqreturn_t hdmi_irq_handler(int irq, void *dev_data);
+int hdmi_conf_apply(struct hdmi_device *hdmi_dev);
+int is_hdmiphy_ready(struct hdmi_device *hdev);
+void hdmi_enable(struct hdmi_device *hdev, int on);
+void hdmi_hpd_enable(struct hdmi_device *hdev, int on);
+void hdmi_tg_enable(struct hdmi_device *hdev, int on);
+void hdmi_reg_stop_vsi(struct hdmi_device *hdev);
+void hdmi_reg_infoframe(struct hdmi_device *hdev,
+ struct hdmi_infoframe *infoframe);
+void hdmi_reg_set_acr(struct hdmi_device *hdev);
+void hdmi_reg_spdif_audio_init(struct hdmi_device *hdev);
+void hdmi_reg_i2s_audio_init(struct hdmi_device *hdev);
+void hdmi_audio_enable(struct hdmi_device *hdev, int on);
+void hdmi_bluescreen_enable(struct hdmi_device *hdev, int on);
+void hdmi_reg_mute(struct hdmi_device *hdev, int on);
+int hdmi_hpd_status(struct hdmi_device *hdev);
+int is_hdmi_streaming(struct hdmi_device *hdev);
+u8 hdmi_get_int_mask(struct hdmi_device *hdev);
+void hdmi_set_int_mask(struct hdmi_device *hdev, u8 mask, int en);
+void hdmi_sw_hpd_enable(struct hdmi_device *hdev, int en);
+void hdmi_sw_hpd_plug(struct hdmi_device *hdev, int en);
+void hdmi_phy_sw_reset(struct hdmi_device *hdev);
+void hdmi_dumpregs(struct hdmi_device *hdev, char *prefix);
+void hdmi_set_3d_info(struct hdmi_device *hdev);
+
+/** HDCP functions */
+irqreturn_t hdcp_irq_handler(struct hdmi_device *hdev);
+int hdcp_stop(struct hdmi_device *hdev);
+int hdcp_start(struct hdmi_device *hdev);
+int hdcp_prepare(struct hdmi_device *hdev);
+int hdcp_i2c_read(struct hdmi_device *hdev, u8 offset, int bytes, u8 *buf);
+int hdcp_i2c_write(struct hdmi_device *hdev, u8 offset, int bytes, u8 *buf);
+
+static inline
+void hdmi_write(struct hdmi_device *hdev, u32 reg_id, u32 value)
+{
+ writel(value, hdev->regs + reg_id);
+}
+
+static inline
+void hdmi_write_mask(struct hdmi_device *hdev, u32 reg_id, u32 value, u32 mask)
+{
+ u32 old = readl(hdev->regs + reg_id);
+ value = (value & mask) | (old & ~mask);
+ writel(value, hdev->regs + reg_id);
+}
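+/*
+ * hdmi_write_mask() is a read-modify-write helper: only the bits set in
+ * 'mask' are taken from 'value', the remaining bits keep their current
+ * register contents. Passing ~0 or 0 as 'value' therefore sets or
+ * clears the masked bits, e.g.:
+ *
+ *	hdmi_write_mask(hdev, HDMI_ENC_EN, ~0, HDMI_HDCP_ENC_ENABLE);
+ */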
+
+static inline
+void hdmi_writeb(struct hdmi_device *hdev, u32 reg_id, u8 value)
+{
+ writeb(value, hdev->regs + reg_id);
+}
+
+static inline void hdmi_write_bytes(struct hdmi_device *hdev, u32 reg_id,
+ u8 *buf, int bytes)
+{
+ int i;
+
+ for (i = 0; i < bytes; ++i)
+ writeb(buf[i], hdev->regs + reg_id + i * 4);
+}
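+/*
+ * The multi-byte HDCP values (An, AKSV, BKSV, the SHA-1 digest, ...)
+ * live in byte-wide registers spaced 4 bytes apart, hence the 'i * 4'
+ * stride used by hdmi_write_bytes() and hdmi_read_bytes().
+ */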
+
+static inline u32 hdmi_read(struct hdmi_device *hdev, u32 reg_id)
+{
+ return readl(hdev->regs + reg_id);
+}
+
+static inline u8 hdmi_readb(struct hdmi_device *hdev, u32 reg_id)
+{
+ return readb(hdev->regs + reg_id);
+}
+
+static inline void hdmi_read_bytes(struct hdmi_device *hdev, u32 reg_id,
+ u8 *buf, int bytes)
+{
+ int i;
+
+ for (i = 0; i < bytes; ++i)
+ buf[i] = readb(hdev->regs + reg_id + i * 4);
+}
+
+#endif /* SAMSUNG_HDMI_H */
--- /dev/null
+/* linux/drivers/media/video/samsung/tvout/s5p_cec_ctrl.c
+ *
+ * Copyright (c) 2009 Samsung Electronics
+ * http://www.samsung.com/
+ *
+ * cec interface file for Samsung TVOut driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/poll.h>
+#include <linux/miscdevice.h>
+#include <linux/clk.h>
+#include <linux/sched.h>
+#include <linux/platform_device.h>
+#include <linux/interrupt.h>
+#include <linux/export.h>
+#include <linux/module.h>
+#include <plat/devs.h>
+#include <plat/tv-core.h>
+#include <plat/tvout.h>
+
+#include "cec.h"
+
+MODULE_AUTHOR("SangPil Moon");
+MODULE_DESCRIPTION("S5P CEC driver");
+MODULE_LICENSE("GPL");
+
+#define CEC_IOC_MAGIC 'c'
+#define CEC_IOC_SETLADDR _IOW(CEC_IOC_MAGIC, 0, unsigned int)
+
+#define VERSION "1.0" /* Driver version number */
+#define CEC_MINOR	243	/* Major 10, Minor 243, /dev/cec */
+
+
+#define CEC_STATUS_TX_RUNNING (1<<0)
+#define CEC_STATUS_TX_TRANSFERRING (1<<1)
+#define CEC_STATUS_TX_DONE (1<<2)
+#define CEC_STATUS_TX_ERROR (1<<3)
+#define CEC_STATUS_TX_BYTES (0xFF<<8)
+#define CEC_STATUS_RX_RUNNING (1<<16)
+#define CEC_STATUS_RX_RECEIVING (1<<17)
+#define CEC_STATUS_RX_DONE (1<<18)
+#define CEC_STATUS_RX_ERROR (1<<19)
+#define CEC_STATUS_RX_BCAST (1<<20)
+#define CEC_STATUS_RX_BYTES (0xFF<<24)
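+/*
+ * These bits mirror the 32-bit status word assembled by
+ * s5p_cec_get_status(): TX_STATUS_0/1 fill bits 0-15 and RX_STATUS_0/1
+ * fill bits 16-31, so the received byte count is 'status >> 24'
+ * (CEC_STATUS_RX_BYTES).
+ */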
+
+
+/* CEC Rx buffer size */
+#define CEC_RX_BUFF_SIZE 16
+/* CEC Tx buffer size */
+#define CEC_TX_BUFF_SIZE 16
+
+#define TV_CLK_GET_WITH_ERR_CHECK(clk, pdev, clk_name) \
+	do { \
+		clk = clk_get(&pdev->dev, clk_name); \
+		if (IS_ERR(clk)) { \
+			printk(KERN_ERR \
+			"failed to find clock %s\n", clk_name); \
+			return -ENOENT; \
+		} \
+	} while (0)
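+/*
+ * Note: on failure the macro above executes 'return -ENOENT' in the
+ * *calling* function, so it may only be used inside a function that
+ * returns int (here: s5p_cec_probe()).
+ */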
+
+static atomic_t hdmi_on = ATOMIC_INIT(0);
+static DEFINE_MUTEX(cec_lock);
+struct clk *hdmi_cec_clk;
+
+static int s5p_cec_open(struct inode *inode, struct file *file)
+{
+ int ret = 0;
+
+ mutex_lock(&cec_lock);
+ clk_enable(hdmi_cec_clk);
+
+ if (atomic_read(&hdmi_on)) {
+ tvout_dbg("do not allow multiple open for tvout cec\n");
+ ret = -EBUSY;
+ goto err_multi_open;
+ } else
+ atomic_inc(&hdmi_on);
+
+ s5p_cec_reset();
+
+ s5p_cec_set_divider();
+
+ s5p_cec_threshold();
+
+ s5p_cec_unmask_tx_interrupts();
+
+ s5p_cec_set_rx_state(STATE_RX);
+ s5p_cec_unmask_rx_interrupts();
+ s5p_cec_enable_rx();
+
+err_multi_open:
+ mutex_unlock(&cec_lock);
+
+ return ret;
+}
+
+static int s5p_cec_release(struct inode *inode, struct file *file)
+{
+ atomic_dec(&hdmi_on);
+
+ s5p_cec_mask_tx_interrupts();
+ s5p_cec_mask_rx_interrupts();
+
+	/* keep the clk reference obtained at probe time; only disable it */
+	clk_disable(hdmi_cec_clk);
+
+ return 0;
+}
+
+static ssize_t s5p_cec_read(struct file *file, char __user *buffer,
+ size_t count, loff_t *ppos)
+{
+ ssize_t retval;
+ unsigned long spin_flags;
+
+ if (wait_event_interruptible(cec_rx_struct.waitq,
+ atomic_read(&cec_rx_struct.state) == STATE_DONE)) {
+ return -ERESTARTSYS;
+ }
+ spin_lock_irqsave(&cec_rx_struct.lock, spin_flags);
+
+ if (cec_rx_struct.size > count) {
+ spin_unlock_irqrestore(&cec_rx_struct.lock, spin_flags);
+
+ return -1;
+ }
+
+ if (copy_to_user(buffer, cec_rx_struct.buffer, cec_rx_struct.size)) {
+ spin_unlock_irqrestore(&cec_rx_struct.lock, spin_flags);
+ printk(KERN_ERR " copy_to_user() failed!\n");
+
+ return -EFAULT;
+ }
+
+ retval = cec_rx_struct.size;
+
+ s5p_cec_set_rx_state(STATE_RX);
+ spin_unlock_irqrestore(&cec_rx_struct.lock, spin_flags);
+
+ return retval;
+}
+
+static ssize_t s5p_cec_write(struct file *file, const char __user *buffer,
+ size_t count, loff_t *ppos)
+{
+ char *data;
+
+ /* check data size */
+
+	if (count > CEC_TX_BUFF_SIZE || count == 0)
+		return -EINVAL;
+
+	data = kmalloc(count, GFP_KERNEL);
+	if (!data) {
+		printk(KERN_ERR "kmalloc() failed!\n");
+		return -ENOMEM;
+	}
+
+ if (copy_from_user(data, buffer, count)) {
+ printk(KERN_ERR " copy_from_user() failed!\n");
+ kfree(data);
+
+ return -EFAULT;
+ }
+
+ s5p_cec_copy_packet(data, count);
+
+ kfree(data);
+
+ /* wait for interrupt */
+ if (wait_event_interruptible(cec_tx_struct.waitq,
+ atomic_read(&cec_tx_struct.state)
+ != STATE_TX)) {
+
+ return -ERESTARTSYS;
+ }
+
+ if (atomic_read(&cec_tx_struct.state) == STATE_ERROR)
+ return -1;
+
+ return count;
+}
+
+static long s5p_cec_ioctl(struct file *file, unsigned int cmd,
+		unsigned long arg)
+{
+ u32 laddr;
+
+ switch (cmd) {
+ case CEC_IOC_SETLADDR:
+ if (get_user(laddr, (u32 __user *) arg))
+ return -EFAULT;
+
+ tvout_dbg("logical address = 0x%02x\n", laddr);
+
+ s5p_cec_set_addr(laddr);
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static u32 s5p_cec_poll(struct file *file, poll_table *wait)
+{
+ poll_wait(file, &cec_rx_struct.waitq, wait);
+
+ if (atomic_read(&cec_rx_struct.state) == STATE_DONE)
+ return POLLIN | POLLRDNORM;
+
+ return 0;
+}
+
+static const struct file_operations cec_fops = {
+ .owner = THIS_MODULE,
+ .open = s5p_cec_open,
+ .release = s5p_cec_release,
+ .read = s5p_cec_read,
+ .write = s5p_cec_write,
+	.unlocked_ioctl = s5p_cec_ioctl,
+ .poll = s5p_cec_poll,
+};
+
+static struct miscdevice cec_misc_device = {
+ .minor = CEC_MINOR,
+ .name = "CEC",
+ .fops = &cec_fops,
+};
+
+static irqreturn_t s5p_cec_irq_handler(int irq, void *dev_id)
+{
+
+ u32 status = 0;
+
+ status = s5p_cec_get_status();
+
+ if (status & CEC_STATUS_TX_DONE) {
+ if (status & CEC_STATUS_TX_ERROR) {
+ tvout_dbg(" CEC_STATUS_TX_ERROR!\n");
+ s5p_cec_set_tx_state(STATE_ERROR);
+ } else {
+ tvout_dbg(" CEC_STATUS_TX_DONE!\n");
+ s5p_cec_set_tx_state(STATE_DONE);
+ }
+
+ s5p_clr_pending_tx();
+
+ wake_up_interruptible(&cec_tx_struct.waitq);
+ }
+
+ if (status & CEC_STATUS_RX_DONE) {
+ if (status & CEC_STATUS_RX_ERROR) {
+ tvout_dbg(" CEC_STATUS_RX_ERROR!\n");
+ s5p_cec_rx_reset();
+
+ } else {
+ u32 size;
+
+ tvout_dbg(" CEC_STATUS_RX_DONE!\n");
+
+ /* copy data from internal buffer */
+ size = status >> 24;
+
+ spin_lock(&cec_rx_struct.lock);
+
+ s5p_cec_get_rx_buf(size, cec_rx_struct.buffer);
+
+ cec_rx_struct.size = size;
+
+ s5p_cec_set_rx_state(STATE_DONE);
+
+ spin_unlock(&cec_rx_struct.lock);
+
+ s5p_cec_enable_rx();
+ }
+
+ /* clear interrupt pending bit */
+ s5p_clr_pending_rx();
+
+ wake_up_interruptible(&cec_rx_struct.waitq);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static int __devinit s5p_cec_probe(struct platform_device *pdev)
+{
+ struct s5p_platform_cec *pdata;
+ u8 *buffer;
+ int ret;
+ struct resource *res;
+
+ pdata = to_tvout_plat(&pdev->dev);
+
+ if (pdata->cfg_gpio)
+ pdata->cfg_gpio(pdev);
+
+
+ s5p_cec_mem_probe(pdev);
+
+ if (misc_register(&cec_misc_device)) {
+ printk(KERN_WARNING " Couldn't register device 10, %d.\n",
+ CEC_MINOR);
+
+ return -EBUSY;
+ }
+
+ res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ if (res == NULL) {
+ dev_err(&pdev->dev, "failed to get irq resource.\n");
+ ret = -ENOENT;
+ return ret;
+ }
+
+ ret = request_irq(res->start, s5p_cec_irq_handler, IRQF_DISABLED,
+ pdev->name, &pdev->id);
+
+ if (ret != 0) {
+ printk(KERN_ERR "failed to install %s irq (%d)\n", "cec", ret);
+
+ return ret;
+ }
+
+ init_waitqueue_head(&cec_rx_struct.waitq);
+ spin_lock_init(&cec_rx_struct.lock);
+ init_waitqueue_head(&cec_tx_struct.waitq);
+
+ buffer = kmalloc(CEC_TX_BUFF_SIZE, GFP_KERNEL);
+
+ if (!buffer) {
+ printk(KERN_ERR " kmalloc() failed!\n");
+ misc_deregister(&cec_misc_device);
+
+ return -EIO;
+ }
+
+ cec_rx_struct.buffer = buffer;
+
+ cec_rx_struct.size = 0;
+ TV_CLK_GET_WITH_ERR_CHECK(hdmi_cec_clk, pdev, "sclk_cec");
+
+ dev_info(&pdev->dev, "probe successful\n");
+
+ return 0;
+}
+
+static int __devexit s5p_cec_remove(struct platform_device *pdev)
+{
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static int s5p_cec_suspend(struct platform_device *dev, pm_message_t state)
+{
+ return 0;
+}
+
+static int s5p_cec_resume(struct platform_device *dev)
+{
+ return 0;
+}
+#else
+#define s5p_cec_suspend NULL
+#define s5p_cec_resume NULL
+#endif
+
+static struct platform_driver s5p_cec_driver = {
+ .probe = s5p_cec_probe,
+ .remove = __devexit_p(s5p_cec_remove),
+ .suspend = s5p_cec_suspend,
+ .resume = s5p_cec_resume,
+ .driver = {
+ .name = "s5p-tvout-cec",
+ .owner = THIS_MODULE,
+ },
+};
+
+static char banner[] __initdata =
+ "S5P CEC for Exynos4 Driver, (c) 2009 Samsung Electronics\n";
+
+static int __init s5p_cec_init(void)
+{
+ int ret;
+
+ printk(banner);
+
+ ret = platform_driver_register(&s5p_cec_driver);
+
+ if (ret) {
+ printk(KERN_ERR "Platform Device Register Failed %d\n", ret);
+
+ return -1;
+ }
+
+ return 0;
+}
+
+static void __exit s5p_cec_exit(void)
+{
+ kfree(cec_rx_struct.buffer);
+
+ platform_driver_unregister(&s5p_cec_driver);
+}
+
+module_init(s5p_cec_init);
+module_exit(s5p_cec_exit);
--- /dev/null
+/* linux/drivers/media/video/samsung/tvout/hw_if/cec.c
+ *
+ * Copyright (c) 2009 Samsung Electronics
+ * http://www.samsung.com/
+ *
+ * cec ftn file for Samsung TVOUT driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/io.h>
+#include <linux/slab.h>
+#include <linux/platform_device.h>
+#include <linux/videodev2.h>
+#include <linux/irqreturn.h>
+#include <linux/stddef.h>
+
+#include <mach/regs-clock.h>
+#include <mach/regs-cec.h>
+#include <mach/regs-pmu.h>
+
+#include "cec.h"
+
+#undef tvout_dbg
+
+#ifdef CONFIG_CEC_DEBUG
+#define tvout_dbg(fmt, ...) \
+ printk(KERN_INFO "\t\t[CEC] %s(): " fmt, \
+ __func__, ##__VA_ARGS__)
+#else
+#define tvout_dbg(fmt, ...)
+#endif
+
+#define S5P_HDMI_FIN 24000000
+#define CEC_DIV_RATIO 320000
+
+#define CEC_MESSAGE_BROADCAST_MASK 0x0F
+#define CEC_MESSAGE_BROADCAST 0x0F
+#define CEC_FILTER_THRESHOLD 0x15
+
+static struct resource *cec_mem;
+void __iomem *cec_base;
+
+struct cec_rx_struct cec_rx_struct;
+struct cec_tx_struct cec_tx_struct;
+
+void s5p_cec_set_divider(void)
+{
+ u32 div_ratio, reg, div_val;
+
+ div_ratio = S5P_HDMI_FIN / CEC_DIV_RATIO - 1;
+
+ reg = readl(S5P_HDMI_PHY_CONTROL);
+ reg = (reg & ~(0x3FF << 16)) | (div_ratio << 16);
+
+ writel(reg, S5P_HDMI_PHY_CONTROL);
+
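+	/*
+	 * 320000 * 0.00005 = 16, so the divisor register is written with
+	 * 16 - 1 = 15; presumably this derives a 50 us tick from the
+	 * 320 kHz rate configured above. The integer division below is
+	 * exactly equivalent to the original floating-point expression
+	 * 'CEC_DIV_RATIO * 0.00005 - 1', which must not be used in
+	 * kernel code.
+	 */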
+	div_val = CEC_DIV_RATIO / 20000 - 1;
+
+ writeb(0x0, cec_base + S5P_CEC_DIVISOR_3);
+ writeb(0x0, cec_base + S5P_CEC_DIVISOR_2);
+ writeb(0x0, cec_base + S5P_CEC_DIVISOR_1);
+ writeb(div_val, cec_base + S5P_CEC_DIVISOR_0);
+}
+
+void s5p_cec_enable_rx(void)
+{
+ u8 reg;
+
+ reg = readb(cec_base + S5P_CEC_RX_CTRL);
+ reg |= S5P_CEC_RX_CTRL_ENABLE;
+ writeb(reg, cec_base + S5P_CEC_RX_CTRL);
+}
+
+void s5p_cec_mask_rx_interrupts(void)
+{
+ u8 reg;
+
+ reg = readb(cec_base + S5P_CEC_IRQ_MASK);
+ reg |= S5P_CEC_IRQ_RX_DONE;
+ reg |= S5P_CEC_IRQ_RX_ERROR;
+ writeb(reg, cec_base + S5P_CEC_IRQ_MASK);
+}
+
+void s5p_cec_unmask_rx_interrupts(void)
+{
+ u8 reg;
+
+ reg = readb(cec_base + S5P_CEC_IRQ_MASK);
+ reg &= ~S5P_CEC_IRQ_RX_DONE;
+ reg &= ~S5P_CEC_IRQ_RX_ERROR;
+ writeb(reg, cec_base + S5P_CEC_IRQ_MASK);
+}
+
+void s5p_cec_mask_tx_interrupts(void)
+{
+ u8 reg;
+ reg = readb(cec_base + S5P_CEC_IRQ_MASK);
+ reg |= S5P_CEC_IRQ_TX_DONE;
+ reg |= S5P_CEC_IRQ_TX_ERROR;
+ writeb(reg, cec_base + S5P_CEC_IRQ_MASK);
+
+}
+
+void s5p_cec_unmask_tx_interrupts(void)
+{
+ u8 reg;
+
+ reg = readb(cec_base + S5P_CEC_IRQ_MASK);
+ reg &= ~S5P_CEC_IRQ_TX_DONE;
+ reg &= ~S5P_CEC_IRQ_TX_ERROR;
+ writeb(reg, cec_base + S5P_CEC_IRQ_MASK);
+}
+
+void s5p_cec_reset(void)
+{
+ writeb(S5P_CEC_RX_CTRL_RESET, cec_base + S5P_CEC_RX_CTRL);
+ writeb(S5P_CEC_TX_CTRL_RESET, cec_base + S5P_CEC_TX_CTRL);
+}
+
+void s5p_cec_tx_reset(void)
+{
+ writeb(S5P_CEC_TX_CTRL_RESET, cec_base + S5P_CEC_TX_CTRL);
+}
+
+void s5p_cec_rx_reset(void)
+{
+ writeb(S5P_CEC_RX_CTRL_RESET, cec_base + S5P_CEC_RX_CTRL);
+}
+
+void s5p_cec_threshold(void)
+{
+ writeb(CEC_FILTER_THRESHOLD, cec_base + S5P_CEC_RX_FILTER_TH);
+ writeb(0, cec_base + S5P_CEC_RX_FILTER_CTRL);
+}
+
+void s5p_cec_set_tx_state(enum cec_state state)
+{
+ atomic_set(&cec_tx_struct.state, state);
+}
+
+void s5p_cec_set_rx_state(enum cec_state state)
+{
+ atomic_set(&cec_rx_struct.state, state);
+}
+
+void s5p_cec_copy_packet(char *data, size_t count)
+{
+ int i = 0;
+ u8 reg;
+
+ while (i < count) {
+ writeb(data[i], cec_base + (S5P_CEC_TX_BUFF0 + (i * 4)));
+ i++;
+ }
+
+ writeb(count, cec_base + S5P_CEC_TX_BYTES);
+ s5p_cec_set_tx_state(STATE_TX);
+ reg = readb(cec_base + S5P_CEC_TX_CTRL);
+ reg |= S5P_CEC_TX_CTRL_START;
+
+ if ((data[0] & CEC_MESSAGE_BROADCAST_MASK) == CEC_MESSAGE_BROADCAST)
+ reg |= S5P_CEC_TX_CTRL_BCAST;
+ else
+ reg &= ~S5P_CEC_TX_CTRL_BCAST;
+
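+	/*
+	 * The 0x50 written to TX_CTRL appears to program the retransmit
+	 * count; the exact field layout is undocumented here, so the
+	 * vendor value is kept as-is.
+	 */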
+ reg |= 0x50;
+ writeb(reg, cec_base + S5P_CEC_TX_CTRL);
+}
+
+void s5p_cec_set_addr(u32 addr)
+{
+ writeb(addr & 0x0F, cec_base + S5P_CEC_LOGIC_ADDR);
+}
+
+u32 s5p_cec_get_status(void)
+{
+ u32 status = 0;
+
+ status = readb(cec_base + S5P_CEC_TX_STATUS_0);
+ status |= readb(cec_base + S5P_CEC_TX_STATUS_1) << 8;
+ status |= readb(cec_base + S5P_CEC_RX_STATUS_0) << 16;
+ status |= readb(cec_base + S5P_CEC_RX_STATUS_1) << 24;
+
+ tvout_dbg("status = 0x%x!\n", status);
+
+ return status;
+}
+
+void s5p_clr_pending_tx(void)
+{
+ writeb(S5P_CEC_IRQ_TX_DONE | S5P_CEC_IRQ_TX_ERROR,
+ cec_base + S5P_CEC_IRQ_CLEAR);
+}
+
+void s5p_clr_pending_rx(void)
+{
+ writeb(S5P_CEC_IRQ_RX_DONE | S5P_CEC_IRQ_RX_ERROR,
+ cec_base + S5P_CEC_IRQ_CLEAR);
+}
+
+void s5p_cec_get_rx_buf(u32 size, u8 *buffer)
+{
+ u32 i = 0;
+
+ while (i < size) {
+ buffer[i] = readb(cec_base + S5P_CEC_RX_BUFF0 + (i * 4));
+ i++;
+ }
+}
+
+void __init s5p_cec_mem_probe(struct platform_device *pdev)
+{
+	struct resource *res;
+	size_t size;
+
+	dev_dbg(&pdev->dev, "%s\n", __func__);
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (res == NULL) {
+		dev_err(&pdev->dev,
+			"failed to get memory region resource for cec\n");
+		return;
+	}
+
+	size = resource_size(res);
+	cec_mem = request_mem_region(res->start, size, pdev->name);
+	if (cec_mem == NULL) {
+		dev_err(&pdev->dev,
+			"failed to get memory region for cec\n");
+		return;
+	}
+
+	cec_base = ioremap(res->start, size);
+	if (cec_base == NULL) {
+		dev_err(&pdev->dev,
+			"failed to ioremap address region for cec\n");
+		release_resource(cec_mem);
+		kfree(cec_mem);
+		cec_mem = NULL;
+		return;
+	}
+}
+
+int s5p_cec_mem_release(struct platform_device *pdev)
+{
+ iounmap(cec_base);
+
+ if (cec_mem != NULL) {
+ if (release_resource(cec_mem))
+ dev_err(&pdev->dev,
+ "Can't remove tvout drv !!\n");
+
+ kfree(cec_mem);
+
+ cec_mem = NULL;
+ }
+
+ return 0;
+}
--- /dev/null
+/*
+ * Samsung HDMI interface driver
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ *
+ * Tomasz Stanislawski, <t.stanislaws@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License,
+ * or (at your option) any later version.
+ */
+#include "hdmi.h"
+
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/io.h>
+#include <linux/i2c.h>
+#include <linux/platform_device.h>
+#include <media/v4l2-subdev.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/delay.h>
+#include <linux/bug.h>
+#include <linux/pm_runtime.h>
+#include <linux/clk.h>
+#include <linux/regulator/consumer.h>
+#include <linux/sched.h>
+#include <linux/of_i2c.h>
+#include <plat/devs.h>
+#include <plat/tv-core.h>
+
+#include <media/v4l2-common.h>
+#include <media/v4l2-dev.h>
+#include <media/v4l2-device.h>
+#include <media/exynos_mc.h>
+
+MODULE_AUTHOR("Tomasz Stanislawski, <t.stanislaws@samsung.com>");
+MODULE_DESCRIPTION("Samsung HDMI");
+MODULE_LICENSE("GPL");
+
+/* default preset configured on probe */
+#define HDMI_DEFAULT_PRESET V4L2_DV_1080P60
+
+/* I2C module and id for HDMIPHY */
+static struct i2c_board_info hdmiphy_info = {
+ I2C_BOARD_INFO("hdmiphy", 0x38),
+};
+
+static struct hdmi_driver_data hdmi_driver_data[] = {
+ { .hdmiphy_bus = 3 },
+ { .hdmiphy_bus = 8 },
+};
+
+#ifdef CONFIG_OF
+static const struct of_device_id exynos_hdmi_match[] = {
+ {
+ .compatible = "samsung,exynos5-hdmi",
+ .data = &hdmi_driver_data[1],
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(of, exynos_hdmi_match);
+#endif
+
+static inline struct hdmi_driver_data *get_drvdata(struct platform_device *pdev)
+{
+ if (pdev->dev.of_node) {
+ const struct of_device_id *match;
+ match = of_match_node(of_match_ptr(exynos_hdmi_match), pdev->dev.of_node);
+ return (struct hdmi_driver_data *) match->data;
+ }
+ return (struct hdmi_driver_data *)platform_get_device_id(pdev)->driver_data;
+}
+
+static struct platform_device_id hdmi_driver_types[] = {
+ {
+ .name = "s5pv210-hdmi",
+ .driver_data = (unsigned long)&hdmi_driver_data[0],
+ }, {
+ .name = "exynos4-hdmi",
+ .driver_data = (unsigned long)&hdmi_driver_data[1],
+ }, {
+ .name = "exynos5-hdmi",
+ .driver_data = (unsigned long)&hdmi_driver_data[1],
+ }, {
+ /* end node */
+ }
+};
+
+static const struct v4l2_subdev_ops hdmi_sd_ops;
+
+static struct hdmi_device *sd_to_hdmi_dev(struct v4l2_subdev *sd)
+{
+ return container_of(sd, struct hdmi_device, sd);
+}
+
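+/*
+ * Route hot-plug detection to the external GPIO interrupt; used while the
+ * HDMI block is powered down and its internal HPD logic is unavailable.
+ */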
+static int set_external_hpd_int(struct hdmi_device *hdev)
+{
+ int ret = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&hdev->hpd_lock, flags);
+
+ s5p_v4l2_int_src_ext_hpd();
+ /* irq change by TV power status */
+ if (hdev->curr_irq != hdev->ext_irq) {
+ disable_irq(hdev->curr_irq);
+ free_irq(hdev->curr_irq, hdev);
+ } else {
+ spin_unlock_irqrestore(&hdev->hpd_lock, flags);
+ return ret;
+ }
+
+ hdev->curr_irq = hdev->ext_irq;
+ ret = request_irq(hdev->curr_irq, hdmi_irq_handler,
+ IRQ_TYPE_EDGE_BOTH, "hdmi", hdev);
+
+ if (ret)
+ dev_err(hdev->dev, "request change failed.\n");
+
+ dev_info(hdev->dev, "HDMI interrupt source is changed : external\n");
+
+ spin_unlock_irqrestore(&hdev->hpd_lock, flags);
+ return ret;
+}
+
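+/*
+ * Route hot-plug detection to the HDMI block's internal HPD interrupt;
+ * used while the HDMI block is powered up.
+ */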
+static int set_internal_hpd_int(struct hdmi_device *hdev)
+{
+ int ret = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&hdev->hpd_lock, flags);
+
+ s5p_v4l2_int_src_hdmi_hpd();
+ /* irq change by TV power status */
+ if (hdev->curr_irq != hdev->int_irq) {
+ disable_irq(hdev->curr_irq);
+ free_irq(hdev->curr_irq, hdev);
+ } else {
+ spin_unlock_irqrestore(&hdev->hpd_lock, flags);
+ return ret;
+ }
+
+ hdev->curr_irq = hdev->int_irq;
+ ret = request_irq(hdev->curr_irq, hdmi_irq_handler,
+ 0, "hdmi", hdev);
+ if (ret)
+ dev_err(hdev->dev, "request change failed.\n");
+
+ dev_info(hdev->dev, "HDMI interrupt source is changed : internal\n");
+ spin_unlock_irqrestore(&hdev->hpd_lock, flags);
+
+ return ret;
+}
+
+static const struct hdmi_preset_conf *hdmi_preset2conf(u32 preset)
+{
+ int i;
+
+ for (i = 0; i < hdmi_pre_cnt; ++i)
+ if (hdmi_conf[i].preset == preset)
+ return hdmi_conf[i].conf;
+ return NULL;
+}
+
+const struct hdmi_3d_info *hdmi_preset2info(u32 preset)
+{
+ int i;
+
+ for (i = 0; i < hdmi_pre_cnt; ++i)
+ if (hdmi_conf[i].preset == preset)
+ return hdmi_conf[i].info;
+ return NULL;
+}
+
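+/*
+ * Program the AVI infoframe and, for 3D presets, the vendor-specific
+ * infoframe (VSI); VSI transmission is stopped for 2D presets.
+ */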
+static int hdmi_set_infoframe(struct hdmi_device *hdev)
+{
+ struct hdmi_infoframe infoframe;
+ const struct hdmi_3d_info *info;
+
+ info = hdmi_preset2info(hdev->cur_preset);
+
+ if (info->is_3d == HDMI_VIDEO_FORMAT_3D) {
+ infoframe.type = HDMI_PACKET_TYPE_VSI;
+ infoframe.ver = HDMI_VSI_VERSION;
+ infoframe.len = HDMI_VSI_LENGTH;
+ hdmi_reg_infoframe(hdev, &infoframe);
+	} else {
+		hdmi_reg_stop_vsi(hdev);
+	}
+
+ infoframe.type = HDMI_PACKET_TYPE_AVI;
+ infoframe.ver = HDMI_AVI_VERSION;
+ infoframe.len = HDMI_AVI_LENGTH;
+ hdmi_reg_infoframe(hdev, &infoframe);
+
+ return 0;
+}
+
+static int hdmi_set_packets(struct hdmi_device *hdev)
+{
+ hdmi_reg_set_acr(hdev);
+ return 0;
+}
+
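+/*
+ * Start streaming: power up the PHY, wait for its PLL to lock, switch
+ * sclk_hdmi to the PHY clock, program infoframes and audio, then enable
+ * the HDMI core and timing generator (and HDCP when enabled).
+ */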
+static int hdmi_streamon(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+ struct hdmi_resources *res = &hdev->res;
+ int ret, tries;
+
+ dev_dbg(dev, "%s\n", __func__);
+
+ hdev->streaming = 1;
+ ret = v4l2_subdev_call(hdev->phy_sd, video, s_stream, 1);
+ if (ret)
+ return ret;
+
+ /* waiting for HDMIPHY's PLL to get to steady state */
+ for (tries = 100; tries; --tries) {
+ if (is_hdmiphy_ready(hdev))
+ break;
+
+ mdelay(1);
+ }
+ /* steady state not achieved */
+ if (tries == 0) {
+ dev_err(dev, "hdmiphy's pll could not reach steady state.\n");
+ v4l2_subdev_call(hdev->phy_sd, video, s_stream, 0);
+ hdmi_dumpregs(hdev, "s_stream");
+ return -EIO;
+ }
+
+ /* hdmiphy clock is used for HDMI in streaming mode */
+ clk_disable(res->sclk_hdmi);
+ clk_set_parent(res->sclk_hdmi, res->sclk_hdmiphy);
+ clk_enable(res->sclk_hdmi);
+
+ /* 3D test */
+ hdmi_set_infoframe(hdev);
+
+ /* set packets for audio */
+ hdmi_set_packets(hdev);
+
+ /* init audio */
+#if defined(CONFIG_VIDEO_EXYNOS_HDMI_AUDIO_I2S)
+ hdmi_reg_i2s_audio_init(hdev);
+#elif defined(CONFIG_VIDEO_EXYNOS_HDMI_AUDIO_SPDIF)
+ hdmi_reg_spdif_audio_init(hdev);
+#endif
+	/* enable HDMI audio */
+ if (hdev->audio_enable)
+ hdmi_audio_enable(hdev, 1);
+
+ /* enable HDMI and timing generator */
+ hdmi_enable(hdev, 1);
+ hdmi_tg_enable(hdev, 1);
+
+ /* start HDCP if enabled */
+ if (hdev->hdcp_info.hdcp_enable) {
+ ret = hdcp_start(hdev);
+ if (ret)
+ return ret;
+ }
+
+ hdmi_dumpregs(hdev, "streamon");
+ return 0;
+}
+
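+/* Stop streaming and switch sclk_hdmi back to the pixel clock. */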
+static int hdmi_streamoff(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+ struct hdmi_resources *res = &hdev->res;
+
+ dev_dbg(dev, "%s\n", __func__);
+
+ if (hdev->hdcp_info.hdcp_enable)
+ hdcp_stop(hdev);
+
+ hdmi_audio_enable(hdev, 0);
+ hdmi_enable(hdev, 0);
+ hdmi_tg_enable(hdev, 0);
+
+ /* pixel(vpll) clock is used for HDMI in config mode */
+ clk_disable(res->sclk_hdmi);
+ clk_set_parent(res->sclk_hdmi, res->sclk_pixel);
+ clk_enable(res->sclk_hdmi);
+
+ v4l2_subdev_call(hdev->phy_sd, video, s_stream, 0);
+
+ hdev->streaming = 0;
+ hdmi_dumpregs(hdev, "streamoff");
+ return 0;
+}
+
+static int hdmi_s_stream(struct v4l2_subdev *sd, int enable)
+{
+ struct hdmi_device *hdev = sd_to_hdmi_dev(sd);
+ struct device *dev = hdev->dev;
+
+ dev_dbg(dev, "%s(%d)\n", __func__, enable);
+ if (enable)
+ return hdmi_streamon(hdev);
+ return hdmi_streamoff(hdev);
+}
+
+static void hdmi_resource_poweron(struct hdmi_resources *res)
+{
+ /* power-on hdmi physical interface */
+ clk_enable(res->hdmiphy);
+ /* use VPP as parent clock; HDMIPHY is not working yet */
+ clk_set_parent(res->sclk_hdmi, res->sclk_pixel);
+ /* turn clocks on */
+ clk_enable(res->sclk_hdmi);
+}
+
+static int hdmi_runtime_resume(struct device *dev);
+static int hdmi_runtime_suspend(struct device *dev);
+
+static int hdmi_s_power(struct v4l2_subdev *sd, int on)
+{
+ struct hdmi_device *hdev = sd_to_hdmi_dev(sd);
+
+ /* If runtime PM is not implemented, hdmi_runtime_resume
+ * and hdmi_runtime_suspend functions are directly called.
+ */
+#ifdef CONFIG_PM_RUNTIME
+ int ret;
+
+ if (on) {
+ clk_enable(hdev->res.hdmi);
+ hdmi_hpd_enable(hdev, 1);
+ ret = pm_runtime_get_sync(hdev->dev);
+ set_internal_hpd_int(hdev);
+ } else {
+ hdmi_hpd_enable(hdev, 0);
+ set_external_hpd_int(hdev);
+ ret = pm_runtime_put_sync(hdev->dev);
+ clk_disable(hdev->res.hdmi);
+ }
+ /* only values < 0 indicate errors */
+ return IS_ERR_VALUE(ret) ? ret : 0;
+#else
+ if (on)
+ hdmi_runtime_resume(hdev->dev);
+ else
+ hdmi_runtime_suspend(hdev->dev);
+ return 0;
+#endif
+}
+
+int hdmi_s_ctrl(struct v4l2_subdev *sd, struct v4l2_control *ctrl)
+{
+ struct hdmi_device *hdev = sd_to_hdmi_dev(sd);
+ struct device *dev = hdev->dev;
+
+ dev_info(dev, "S_CTRL is not applied yet.\n");
+
+ return 0;
+}
+
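+/*
+ * Report cable status: read the HPD line directly while the device is
+ * powered and user space has not been notified yet, otherwise return the
+ * cached hot-plug state.
+ */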
+int hdmi_g_ctrl(struct v4l2_subdev *sd, struct v4l2_control *ctrl)
+{
+ struct hdmi_device *hdev = sd_to_hdmi_dev(sd);
+ struct device *dev = hdev->dev;
+
+ if (!pm_runtime_suspended(hdev->dev) && !hdev->hpd_user_checked)
+ ctrl->value = hdmi_hpd_status(hdev);
+ else
+ ctrl->value = atomic_read(&hdev->hpd_state);
+
+ dev_dbg(dev, "HDMI cable is %s\n", ctrl->value ?
+ "connected" : "disconnected");
+
+ return 0;
+}
+
+static int hdmi_s_dv_preset(struct v4l2_subdev *sd,
+ struct v4l2_dv_preset *preset)
+{
+ struct hdmi_device *hdev = sd_to_hdmi_dev(sd);
+ struct device *dev = hdev->dev;
+ const struct hdmi_preset_conf *conf;
+
+ conf = hdmi_preset2conf(preset->preset);
+ if (conf == NULL) {
+ dev_err(dev, "preset (%u) not supported\n", preset->preset);
+ return -EINVAL;
+ }
+ hdev->cur_conf = conf;
+ hdev->cur_preset = preset->preset;
+ return 0;
+}
+
+static int hdmi_g_dv_preset(struct v4l2_subdev *sd,
+ struct v4l2_dv_preset *preset)
+{
+ memset(preset, 0, sizeof(*preset));
+ preset->preset = sd_to_hdmi_dev(sd)->cur_preset;
+ return 0;
+}
+
+static int hdmi_g_mbus_fmt(struct v4l2_subdev *sd,
+ struct v4l2_mbus_framefmt *fmt)
+{
+ struct hdmi_device *hdev = sd_to_hdmi_dev(sd);
+ struct device *dev = hdev->dev;
+
+ dev_dbg(dev, "%s\n", __func__);
+ if (!hdev->cur_conf)
+ return -EINVAL;
+ *fmt = hdev->cur_conf->mbus_fmt;
+ return 0;
+}
+
+static int hdmi_s_mbus_fmt(struct v4l2_subdev *sd,
+ struct v4l2_mbus_framefmt *fmt)
+{
+ struct hdmi_device *hdev = sd_to_hdmi_dev(sd);
+ struct device *dev = hdev->dev;
+
+ dev_dbg(dev, "%s\n", __func__);
+ if (fmt->code == V4L2_MBUS_FMT_YUV8_1X24)
+ hdev->output_fmt = HDMI_OUTPUT_YUV444;
+ else
+ hdev->output_fmt = HDMI_OUTPUT_RGB888;
+
+ return 0;
+}
+
+static int hdmi_enum_dv_presets(struct v4l2_subdev *sd,
+ struct v4l2_dv_enum_preset *preset)
+{
+ if (preset->index >= hdmi_pre_cnt)
+ return -EINVAL;
+ return v4l_fill_dv_preset_info(hdmi_conf[preset->index].preset, preset);
+}
+
+static const struct v4l2_subdev_core_ops hdmi_sd_core_ops = {
+ .s_power = hdmi_s_power,
+ .s_ctrl = hdmi_s_ctrl,
+ .g_ctrl = hdmi_g_ctrl,
+};
+
+static const struct v4l2_subdev_video_ops hdmi_sd_video_ops = {
+ .s_dv_preset = hdmi_s_dv_preset,
+ .g_dv_preset = hdmi_g_dv_preset,
+ .enum_dv_presets = hdmi_enum_dv_presets,
+ .g_mbus_fmt = hdmi_g_mbus_fmt,
+ .s_mbus_fmt = hdmi_s_mbus_fmt,
+ .s_stream = hdmi_s_stream,
+};
+
+static const struct v4l2_subdev_ops hdmi_sd_ops = {
+ .core = &hdmi_sd_core_ops,
+ .video = &hdmi_sd_video_ops,
+};
+
+static int hdmi_runtime_suspend(struct device *dev)
+{
+ struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ struct hdmi_device *hdev = sd_to_hdmi_dev(sd);
+ struct hdmi_resources *res = &hdev->res;
+
+ dev_dbg(dev, "%s\n", __func__);
+
+ /* HDMI PHY off sequence
+ * LINK off -> PHY off -> HDMI_PHY_CONTROL disable */
+
+ /* turn clocks off */
+ clk_disable(res->sclk_hdmi);
+
+ v4l2_subdev_call(hdev->phy_sd, core, s_power, 0);
+
+ /* power-off hdmiphy */
+ clk_disable(res->hdmiphy);
+
+ return 0;
+}
+
+static int hdmi_runtime_resume(struct device *dev)
+{
+ struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ struct hdmi_device *hdev = sd_to_hdmi_dev(sd);
+ struct hdmi_resources *res = &hdev->res;
+ int ret = 0;
+
+ dev_dbg(dev, "%s\n", __func__);
+
+ hdmi_resource_poweron(&hdev->res);
+
+ hdmi_phy_sw_reset(hdev);
+ ret = v4l2_subdev_call(hdev->phy_sd, core, s_power, 1);
+ if (ret) {
+ dev_err(dev, "failed to turn on hdmiphy\n");
+ goto fail;
+ }
+
+ ret = hdmi_conf_apply(hdev);
+ if (ret)
+ goto fail;
+
+ dev_dbg(dev, "poweron succeed\n");
+
+ return 0;
+
+fail:
+ clk_disable(res->sclk_hdmi);
+ v4l2_subdev_call(hdev->phy_sd, core, s_power, 0);
+ clk_disable(res->hdmiphy);
+ dev_err(dev, "poweron failed\n");
+
+ return ret;
+}
+
+static const struct dev_pm_ops hdmi_pm_ops = {
+ .runtime_suspend = hdmi_runtime_suspend,
+ .runtime_resume = hdmi_runtime_resume,
+};
+
+static void hdmi_resources_cleanup(struct hdmi_device *hdev)
+{
+ struct hdmi_resources *res = &hdev->res;
+
+ dev_dbg(hdev->dev, "HDMI resource cleanup\n");
+ /* put clocks */
+ if (!IS_ERR_OR_NULL(res->hdmiphy))
+ clk_put(res->hdmiphy);
+ if (!IS_ERR_OR_NULL(res->sclk_hdmiphy))
+ clk_put(res->sclk_hdmiphy);
+ if (!IS_ERR_OR_NULL(res->sclk_pixel))
+ clk_put(res->sclk_pixel);
+ if (!IS_ERR_OR_NULL(res->sclk_hdmi))
+ clk_put(res->sclk_hdmi);
+ if (!IS_ERR_OR_NULL(res->hdmi))
+ clk_put(res->hdmi);
+ memset(res, 0, sizeof *res);
+}
+
+static int hdmi_resources_init(struct hdmi_device *hdev)
+{
+ struct device *dev = hdev->dev;
+ struct hdmi_resources *res = &hdev->res;
+
+ dev_dbg(dev, "HDMI resource init\n");
+
+ memset(res, 0, sizeof *res);
+ /* get clocks, power */
+
+ res->hdmi = clk_get(dev, "hdmi");
+ if (IS_ERR_OR_NULL(res->hdmi)) {
+ dev_err(dev, "failed to get clock 'hdmi'\n");
+ goto fail;
+ }
+ res->sclk_hdmi = clk_get(dev, "sclk_hdmi");
+ if (IS_ERR_OR_NULL(res->sclk_hdmi)) {
+ dev_err(dev, "failed to get clock 'sclk_hdmi'\n");
+ goto fail;
+ }
+ res->sclk_pixel = clk_get(dev, "sclk_pixel");
+ if (IS_ERR_OR_NULL(res->sclk_pixel)) {
+ dev_err(dev, "failed to get clock 'sclk_pixel'\n");
+ goto fail;
+ }
+ res->sclk_hdmiphy = clk_get(dev, "sclk_hdmiphy");
+ if (IS_ERR_OR_NULL(res->sclk_hdmiphy)) {
+ dev_err(dev, "failed to get clock 'sclk_hdmiphy'\n");
+ goto fail;
+ }
+ res->hdmiphy = clk_get(dev, "hdmiphy");
+ if (IS_ERR_OR_NULL(res->hdmiphy)) {
+ dev_err(dev, "failed to get clock 'hdmiphy'\n");
+ goto fail;
+ }
+
+ return 0;
+fail:
+ dev_err(dev, "HDMI resource init - failed\n");
+ hdmi_resources_cleanup(hdev);
+ return -ENODEV;
+}
+
+static int hdmi_link_setup(struct media_entity *entity,
+ const struct media_pad *local,
+ const struct media_pad *remote, u32 flags)
+{
+ return 0;
+}
+
+/* hdmi entity operations */
+static const struct media_entity_operations hdmi_entity_ops = {
+ .link_setup = hdmi_link_setup,
+};
+
+static int hdmi_register_entity(struct hdmi_device *hdev)
+{
+ struct v4l2_subdev *sd = &hdev->sd;
+ struct v4l2_device *v4l2_dev;
+ struct media_pad *pads = &hdev->pad;
+ struct media_entity *me = &sd->entity;
+ struct device *dev = hdev->dev;
+ struct exynos_md *md;
+ int ret;
+
+ dev_dbg(dev, "HDMI entity init\n");
+
+ /* init hdmi subdev */
+ v4l2_subdev_init(sd, &hdmi_sd_ops);
+ sd->owner = THIS_MODULE;
+ strlcpy(sd->name, "exynos5-hdmi", sizeof(sd->name));
+
+ dev_set_drvdata(dev, sd);
+
+ /* init hdmi sub-device as entity */
+ pads[HDMI_PAD_SINK].flags = MEDIA_PAD_FL_SINK;
+ me->ops = &hdmi_entity_ops;
+ ret = media_entity_init(me, HDMI_PADS_NUM, pads, 0);
+ if (ret) {
+ dev_err(dev, "failed to initialize media entity\n");
+ return ret;
+ }
+
+ /* get output media ptr for registering hdmi's sd */
+ md = (struct exynos_md *)module_name_to_driver_data(MDEV_MODULE_NAME);
+ if (!md) {
+ dev_err(dev, "failed to get output media device\n");
+ return -ENODEV;
+ }
+
+ v4l2_dev = &md->v4l2_dev;
+
+	/* register the HDMI subdev as an entity with the v4l2_dev of the
+	 * output media device
+	 */
+ ret = v4l2_device_register_subdev(v4l2_dev, sd);
+ if (ret) {
+ dev_err(dev, "failed to register HDMI subdev\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static void hdmi_entity_info_print(struct hdmi_device *hdev)
+{
+ struct v4l2_subdev *sd = &hdev->sd;
+ struct media_entity *me = &sd->entity;
+
+ dev_dbg(hdev->dev, "\n************* HDMI entity info **************\n");
+ dev_dbg(hdev->dev, "[SUB DEVICE INFO]\n");
+ entity_info_print(me, hdev->dev);
+ dev_dbg(hdev->dev, "*********************************************\n\n");
+}
+
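+/*
+ * Work handler for hot-plug events: re-route the HPD interrupt according
+ * to the current power state and report the cable state to user space
+ * through a change uevent.
+ */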
+static void s5p_hpd_kobject_uevent(struct work_struct *work)
+{
+ struct hdmi_device *hdev = container_of(work, struct hdmi_device,
+ hpd_work);
+ char *disconnected[2] = { "HDMI_STATE=offline", NULL };
+ char *connected[2] = { "HDMI_STATE=online", NULL };
+ char **envp = NULL;
+ int state = atomic_read(&hdev->hpd_state);
+
+ /* irq setting by TV power on/off status */
+ if (!pm_runtime_suspended(hdev->dev))
+ set_internal_hpd_int(hdev);
+ else
+ set_external_hpd_int(hdev);
+
+ if (state)
+ envp = connected;
+ else
+ envp = disconnected;
+
+ hdev->hpd_user_checked = true;
+
+ kobject_uevent_env(&hdev->dev->kobj, KOBJ_CHANGE, envp);
+ pr_info("%s: sent uevent %s\n", __func__, envp[0]);
+}
+
+static int __devinit hdmi_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct resource *res;
+ struct i2c_adapter *phy_adapter;
+ struct hdmi_device *hdmi_dev = NULL;
+ struct hdmi_driver_data *drv_data;
+ int ret;
+ unsigned int irq_type;
+
+ dev_dbg(dev, "probe start\n");
+
+ hdmi_dev = kzalloc(sizeof(*hdmi_dev), GFP_KERNEL);
+ if (!hdmi_dev) {
+ dev_err(dev, "out of memory\n");
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ hdmi_dev->dev = dev;
+ ret = hdmi_resources_init(hdmi_dev);
+ if (ret)
+ goto fail_hdev;
+
+ /* mapping HDMI registers */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (res == NULL) {
+ dev_err(dev, "get memory resource failed.\n");
+ ret = -ENXIO;
+ goto fail_init;
+ }
+
+	hdmi_dev->regs = ioremap(res->start, resource_size(res));
+	if (hdmi_dev->regs == NULL) {
+		dev_err(dev, "register mapping failed.\n");
+		ret = -ENXIO;
+		goto fail_init;
+	}
+
+ /* External hpd */
+ res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ if (res == NULL) {
+ dev_err(dev, "get external interrupt resource failed.\n");
+ ret = -ENXIO;
+ goto fail_regs;
+ }
+ hdmi_dev->ext_irq = res->start;
+
+ /* Internal hpd */
+ res = platform_get_resource(pdev, IORESOURCE_IRQ, 1);
+ if (res == NULL) {
+ dev_err(dev, "get internal interrupt resource failed.\n");
+ ret = -ENXIO;
+ goto fail_regs;
+ }
+ hdmi_dev->int_irq = res->start;
+
+ /* workqueue for HPD */
+	hdmi_dev->hpd_wq = create_workqueue("hdmi-hpd");
+	if (hdmi_dev->hpd_wq == NULL) {
+		ret = -ENXIO;
+		goto fail_regs;
+	}
+	INIT_WORK(&hdmi_dev->hpd_work, s5p_hpd_kobject_uevent);
+
+ /* setting v4l2 name to prevent WARN_ON in v4l2_device_register */
+ strlcpy(hdmi_dev->v4l2_dev.name, dev_name(dev),
+ sizeof(hdmi_dev->v4l2_dev.name));
+ /* passing NULL owner prevents driver from erasing drvdata */
+ ret = v4l2_device_register(NULL, &hdmi_dev->v4l2_dev);
+ if (ret) {
+ dev_err(dev, "could not register v4l2 device.\n");
+		goto fail_wq;
+ }
+
+ drv_data = (struct hdmi_driver_data *)get_drvdata(pdev);
+ dev_info(dev, "hdmiphy i2c bus number = %d\n", drv_data->hdmiphy_bus);
+
+ phy_adapter = i2c_get_adapter(drv_data->hdmiphy_bus);
+ if (phy_adapter == NULL) {
+ dev_err(dev, "adapter request failed\n");
+ ret = -ENXIO;
+ goto fail_vdev;
+ }
+
+ hdmi_dev->phy_sd = v4l2_i2c_new_subdev_board(&hdmi_dev->v4l2_dev,
+ phy_adapter, &hdmiphy_info, NULL);
+	/* whether or not this succeeded, the adapter is no longer needed */
+ i2c_put_adapter(phy_adapter);
+ if (hdmi_dev->phy_sd == NULL) {
+ dev_err(dev, "missing subdev for hdmiphy\n");
+ ret = -ENODEV;
+ goto fail_vdev;
+ }
+
+	/* The HDMI PHY is powered on by default,
+	 * so turn it off until it is actually needed. */
+ clk_enable(hdmi_dev->res.hdmiphy);
+ v4l2_subdev_call(hdmi_dev->phy_sd, core, s_power, 0);
+ clk_disable(hdmi_dev->res.hdmiphy);
+
+ pm_runtime_enable(dev);
+
+ /* irq setting by TV power on/off status */
+ if (!pm_runtime_suspended(hdmi_dev->dev)) {
+ hdmi_dev->curr_irq = hdmi_dev->int_irq;
+ irq_type = 0;
+ s5p_v4l2_int_src_hdmi_hpd();
+ } else {
+ if (s5p_v4l2_hpd_read_gpio())
+ atomic_set(&hdmi_dev->hpd_state, HPD_HIGH);
+ else
+ atomic_set(&hdmi_dev->hpd_state, HPD_LOW);
+ hdmi_dev->curr_irq = hdmi_dev->ext_irq;
+ irq_type = IRQ_TYPE_EDGE_BOTH;
+ s5p_v4l2_int_src_ext_hpd();
+ }
+
+ hdmi_dev->hpd_user_checked = false;
+
+ ret = request_irq(hdmi_dev->curr_irq, hdmi_irq_handler,
+ irq_type, "hdmi", hdmi_dev);
+
+ if (ret) {
+ dev_err(dev, "request interrupt failed.\n");
+ goto fail_vdev;
+ }
+
+ hdmi_dev->cur_preset = HDMI_DEFAULT_PRESET;
+	/* FIXME: handle failure when the default preset is not supported */
+ hdmi_dev->cur_conf = hdmi_preset2conf(hdmi_dev->cur_preset);
+
+ /* default audio configuration : enable audio */
+ hdmi_dev->audio_enable = 1;
+ hdmi_dev->sample_rate = DEFAULT_SAMPLE_RATE;
+ hdmi_dev->bits_per_sample = DEFAULT_BITS_PER_SAMPLE;
+ hdmi_dev->audio_codec = DEFAULT_AUDIO_CODEC;
+
+ /* register hdmi subdev as entity */
+ ret = hdmi_register_entity(hdmi_dev);
+ if (ret)
+ goto fail_irq;
+
+ hdmi_entity_info_print(hdmi_dev);
+
+ /* initialize hdcp resource */
+ ret = hdcp_prepare(hdmi_dev);
+ if (ret)
+ goto fail_irq;
+
+ dev_info(dev, "probe sucessful\n");
+
+ return 0;
+
+fail_irq:
+ free_irq(hdmi_dev->curr_irq, hdmi_dev);
+
+fail_vdev:
+ v4l2_device_unregister(&hdmi_dev->v4l2_dev);
+
+fail_wq:
+	destroy_workqueue(hdmi_dev->hpd_wq);
+
+fail_regs:
+ iounmap(hdmi_dev->regs);
+
+fail_init:
+ hdmi_resources_cleanup(hdmi_dev);
+
+fail_hdev:
+ kfree(hdmi_dev);
+
+fail:
+ dev_err(dev, "probe failed\n");
+ return ret;
+}
+
+static int __devexit hdmi_remove(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ struct hdmi_device *hdmi_dev = sd_to_hdmi_dev(sd);
+
+ pm_runtime_disable(dev);
+ clk_disable(hdmi_dev->res.hdmi);
+ v4l2_device_unregister(&hdmi_dev->v4l2_dev);
+ disable_irq(hdmi_dev->curr_irq);
+ free_irq(hdmi_dev->curr_irq, hdmi_dev);
+ iounmap(hdmi_dev->regs);
+ hdmi_resources_cleanup(hdmi_dev);
+	flush_workqueue(hdmi_dev->hdcp_wq);
+	destroy_workqueue(hdmi_dev->hdcp_wq);
+	destroy_workqueue(hdmi_dev->hpd_wq);
+ kfree(hdmi_dev);
+ dev_info(dev, "remove sucessful\n");
+
+ return 0;
+}
+
+static struct platform_driver hdmi_driver __refdata = {
+ .probe = hdmi_probe,
+ .remove = __devexit_p(hdmi_remove),
+ .id_table = hdmi_driver_types,
+ .driver = {
+ .name = "exynos5-hdmi",
+ .owner = THIS_MODULE,
+ .pm = &hdmi_pm_ops,
+ .of_match_table = of_match_ptr(exynos_hdmi_match),
+ }
+};
+
+/* D R I V E R I N I T I A L I Z A T I O N */
+
+static int __init hdmi_init(void)
+{
+ int ret;
+	static const char banner[] __initdata = KERN_INFO
+ "Samsung HDMI output driver, "
+ "(c) 2010-2011 Samsung Electronics Co., Ltd.\n";
+ printk(banner);
+
+ ret = platform_driver_register(&hdmi_driver);
+ if (ret)
+ printk(KERN_ERR "HDMI platform driver register failed\n");
+
+ return ret;
+}
+module_init(hdmi_init);
+
+static void __exit hdmi_exit(void)
+{
+ platform_driver_unregister(&hdmi_driver);
+}
+module_exit(hdmi_exit);
--- /dev/null
+/*
+ * Samsung HDMI driver
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ *
+ * Jiun Yu <jiun.yu@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation, either version 2 of the License,
+ * or (at your option) any later version.
+ */
+
+#include <linux/delay.h>
+
+#include "hdmi.h"
+#include "regs-hdmi-4210.h"
+
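+/*
+ * Timing configurations for the supported presets. Multi-byte register
+ * values are stored as {low byte, high byte, ...} to match the per-byte
+ * register layout.
+ */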
+static const struct hdmi_preset_conf hdmi_conf_480p = {
+ .core = {
+ .h_blank = {0x8a, 0x00},
+ .v_blank = {0x0d, 0x6a, 0x01},
+ .h_v_line = {0x0d, 0xa2, 0x35},
+ .vsync_pol = {0x01},
+ .int_pro_mode = {0x00},
+ .v_blank_f = {0x00, 0x00, 0x00},
+ .h_sync_gen = {0x0e, 0x30, 0x11},
+ .v_sync_gen1 = {0x0f, 0x90, 0x00},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x5a, 0x03, /* h_fsz */
+ 0x8a, 0x00, 0xd0, 0x02, /* hact */
+ 0x0d, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0xe0, 0x01, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x49, 0x02, /* vact_st2 */
+ 0x01, 0x00, 0x33, 0x02, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ },
+ .mbus_fmt = {
+ .width = 720,
+ .height = 480,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_720p60 = {
+ .core = {
+ .h_blank = {0x72, 0x01},
+ .v_blank = {0xee, 0xf2, 0x00},
+ .h_v_line = {0xee, 0x22, 0x67},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f = {0x00, 0x00, 0x00}, /* don't care */
+ .h_sync_gen = {0x6c, 0x50, 0x02},
+ .v_sync_gen1 = {0x0a, 0x50, 0x00},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x72, 0x06, /* h_fsz */
+ 0x72, 0x01, 0x00, 0x05, /* hact */
+ 0xee, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x1e, 0x00, 0xd0, 0x02, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x49, 0x02, /* vact_st2 */
+ 0x01, 0x00, 0x33, 0x02, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ },
+ .mbus_fmt = {
+ .width = 1280,
+ .height = 720,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p50 = {
+ .core = {
+ .h_blank = {0xd0, 0x02},
+ .v_blank = {0x65, 0x6c, 0x01},
+ .h_v_line = {0x65, 0x04, 0xa5},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f = {0x00, 0x00, 0x00}, /* don't care */
+ .h_sync_gen = {0x0e, 0xea, 0x08},
+ .v_sync_gen1 = {0x09, 0x40, 0x00},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x18, 0x01, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x49, 0x02, /* vact_st2 */
+ 0x01, 0x00, 0x33, 0x02, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p60 = {
+ .core = {
+ .h_blank = {0x18, 0x01},
+ .v_blank = {0x65, 0x6c, 0x01},
+ .h_v_line = {0x65, 0x84, 0x89},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f = {0x00, 0x00, 0x00}, /* don't care */
+ .h_sync_gen = {0x56, 0x08, 0x02},
+ .v_sync_gen1 = {0x09, 0x40, 0x00},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x18, 0x01, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080i60 = {
+ .core = {
+ .h_blank = {0x18, 0x01},
+ .v_blank = {0x32, 0xb2, 0x00},
+ .h_v_line = {0x65, 0x84, 0x89},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x01},
+ .v_blank_f = {0x49, 0x2a, 0x23},
+ .h_sync_gen = {0x56, 0x08, 0x02},
+ .v_sync_gen1 = {0x07, 0x20, 0x00},
+ .v_sync_gen2 = {0x39, 0x42, 0x23},
+ .v_sync_gen3 = {0xa4, 0x44, 0x4a},
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x17, 0x01, 0x81, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x16, 0x00, 0x1c, 0x02, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x49, 0x02, /* vact_st2 */
+ 0x01, 0x00, 0x33, 0x02, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_INTERLACED,
+ },
+};
+
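+/* map of V4L2 DV presets to their timing configurations */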
+const struct hdmi_conf hdmi_conf[] = {
+ { V4L2_DV_480P59_94, &hdmi_conf_480p },
+ { V4L2_DV_720P59_94, &hdmi_conf_720p60 },
+ { V4L2_DV_1080P50, &hdmi_conf_1080p50 },
+ { V4L2_DV_1080P30, &hdmi_conf_1080p60 },
+ { V4L2_DV_1080P60, &hdmi_conf_1080p60 },
+ { V4L2_DV_1080I60, &hdmi_conf_1080i60 },
+};
+
+const int hdmi_pre_cnt = ARRAY_SIZE(hdmi_conf);
+
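+/*
+ * HPD interrupt handler: acknowledge plug/unplug events by clearing the
+ * corresponding interrupt flags.
+ */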
+irqreturn_t hdmi_irq_handler(int irq, void *dev_data)
+{
+ struct hdmi_device *hdev = dev_data;
+ u32 intc_flag;
+
+ (void)irq;
+ intc_flag = hdmi_read(hdev, HDMI_INTC_FLAG);
+ /* clearing flags for HPD plug/unplug */
+ if (intc_flag & HDMI_INTC_FLAG_HPD_UNPLUG) {
+ printk(KERN_INFO "unplugged\n");
+ hdmi_write_mask(hdev, HDMI_INTC_FLAG, ~0,
+ HDMI_INTC_FLAG_HPD_UNPLUG);
+ }
+ if (intc_flag & HDMI_INTC_FLAG_HPD_PLUG) {
+ printk(KERN_INFO "plugged\n");
+ hdmi_write_mask(hdev, HDMI_INTC_FLAG, ~0,
+ HDMI_INTC_FLAG_HPD_PLUG);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static void hdmi_reg_init(struct hdmi_device *hdev)
+{
+ /* enable HPD interrupts */
+ hdmi_write_mask(hdev, HDMI_INTC_CON, ~0, HDMI_INTC_EN_GLOBAL |
+ HDMI_INTC_EN_HPD_PLUG | HDMI_INTC_EN_HPD_UNPLUG);
+ /* choose HDMI mode */
+ hdmi_write_mask(hdev, HDMI_MODE_SEL,
+ HDMI_MODE_HDMI_EN, HDMI_MODE_MASK);
+ /* disable bluescreen */
+ hdmi_write_mask(hdev, HDMI_CON_0, 0, HDMI_BLUE_SCR_EN);
+	/* choose the blue-screen color */
+ hdmi_writeb(hdev, HDMI_BLUE_SCREEN_0, 0x12);
+ hdmi_writeb(hdev, HDMI_BLUE_SCREEN_1, 0x34);
+ hdmi_writeb(hdev, HDMI_BLUE_SCREEN_2, 0x56);
+ /* enable AVI packet every vsync, fixes purple line problem */
+ hdmi_writeb(hdev, HDMI_AVI_CON, 0x02);
+ /* force YUV444, look to CEA-861-D, table 7 for more detail */
+ hdmi_writeb(hdev, HDMI_AVI_BYTE(0), 2 << 5);
+ hdmi_write_mask(hdev, HDMI_CON_1, 2, 3 << 5);
+}
+
+static void hdmi_timing_apply(struct hdmi_device *hdev,
+ const struct hdmi_preset_conf *conf)
+{
+ const struct hdmi_core_regs *core = &conf->core;
+ const struct hdmi_tg_regs *tg = &conf->tg;
+
+ /* setting core registers */
+ hdmi_writeb(hdev, HDMI_H_BLANK_0, core->h_blank[0]);
+ hdmi_writeb(hdev, HDMI_H_BLANK_1, core->h_blank[1]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_0, core->v_blank[0]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_1, core->v_blank[1]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_2, core->v_blank[2]);
+ hdmi_writeb(hdev, HDMI_H_V_LINE_0, core->h_v_line[0]);
+ hdmi_writeb(hdev, HDMI_H_V_LINE_1, core->h_v_line[1]);
+ hdmi_writeb(hdev, HDMI_H_V_LINE_2, core->h_v_line[2]);
+ hdmi_writeb(hdev, HDMI_VSYNC_POL, core->vsync_pol[0]);
+ hdmi_writeb(hdev, HDMI_INT_PRO_MODE, core->int_pro_mode[0]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F_0, core->v_blank_f[0]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F_1, core->v_blank_f[1]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F_2, core->v_blank_f[2]);
+ hdmi_writeb(hdev, HDMI_H_SYNC_GEN_0, core->h_sync_gen[0]);
+ hdmi_writeb(hdev, HDMI_H_SYNC_GEN_1, core->h_sync_gen[1]);
+ hdmi_writeb(hdev, HDMI_H_SYNC_GEN_2, core->h_sync_gen[2]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_GEN_1_0, core->v_sync_gen1[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_GEN_1_1, core->v_sync_gen1[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_GEN_1_2, core->v_sync_gen1[2]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_GEN_2_0, core->v_sync_gen2[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_GEN_2_1, core->v_sync_gen2[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_GEN_2_2, core->v_sync_gen2[2]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_GEN_3_0, core->v_sync_gen3[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_GEN_3_1, core->v_sync_gen3[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_GEN_3_2, core->v_sync_gen3[2]);
+ /* Timing generator registers */
+ hdmi_writeb(hdev, HDMI_TG_H_FSZ_L, tg->h_fsz_l);
+ hdmi_writeb(hdev, HDMI_TG_H_FSZ_H, tg->h_fsz_h);
+ hdmi_writeb(hdev, HDMI_TG_HACT_ST_L, tg->hact_st_l);
+ hdmi_writeb(hdev, HDMI_TG_HACT_ST_H, tg->hact_st_h);
+ hdmi_writeb(hdev, HDMI_TG_HACT_SZ_L, tg->hact_sz_l);
+ hdmi_writeb(hdev, HDMI_TG_HACT_SZ_H, tg->hact_sz_h);
+ hdmi_writeb(hdev, HDMI_TG_V_FSZ_L, tg->v_fsz_l);
+ hdmi_writeb(hdev, HDMI_TG_V_FSZ_H, tg->v_fsz_h);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC_L, tg->vsync_l);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC_H, tg->vsync_h);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC2_L, tg->vsync2_l);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC2_H, tg->vsync2_h);
+ hdmi_writeb(hdev, HDMI_TG_VACT_ST_L, tg->vact_st_l);
+ hdmi_writeb(hdev, HDMI_TG_VACT_ST_H, tg->vact_st_h);
+ hdmi_writeb(hdev, HDMI_TG_VACT_SZ_L, tg->vact_sz_l);
+ hdmi_writeb(hdev, HDMI_TG_VACT_SZ_H, tg->vact_sz_h);
+ hdmi_writeb(hdev, HDMI_TG_FIELD_CHG_L, tg->field_chg_l);
+ hdmi_writeb(hdev, HDMI_TG_FIELD_CHG_H, tg->field_chg_h);
+ hdmi_writeb(hdev, HDMI_TG_VACT_ST2_L, tg->vact_st2_l);
+ hdmi_writeb(hdev, HDMI_TG_VACT_ST2_H, tg->vact_st2_h);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC_TOP_HDMI_L, tg->vsync_top_hdmi_l);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC_TOP_HDMI_H, tg->vsync_top_hdmi_h);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC_BOT_HDMI_L, tg->vsync_bot_hdmi_l);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC_BOT_HDMI_H, tg->vsync_bot_hdmi_h);
+ hdmi_writeb(hdev, HDMI_TG_FIELD_TOP_HDMI_L, tg->field_top_hdmi_l);
+ hdmi_writeb(hdev, HDMI_TG_FIELD_TOP_HDMI_H, tg->field_top_hdmi_h);
+ hdmi_writeb(hdev, HDMI_TG_FIELD_BOT_HDMI_L, tg->field_bot_hdmi_l);
+ hdmi_writeb(hdev, HDMI_TG_FIELD_BOT_HDMI_H, tg->field_bot_hdmi_h);
+}
+
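+/*
+ * Apply the current configuration: reset the PHY, program the preset into
+ * the PHY, reset the HDMI core, then write the core and timing generator
+ * registers.
+ */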
+int hdmi_conf_apply(struct hdmi_device *hdmi_dev)
+{
+ struct device *dev = hdmi_dev->dev;
+ const struct hdmi_preset_conf *conf = hdmi_dev->cur_conf;
+ struct v4l2_dv_preset preset;
+ int ret;
+
+ dev_dbg(dev, "%s\n", __func__);
+
+ /* reset hdmiphy */
+ hdmi_write_mask(hdmi_dev, HDMI_PHY_RSTOUT, ~0, HDMI_PHY_SW_RSTOUT);
+ mdelay(10);
+ hdmi_write_mask(hdmi_dev, HDMI_PHY_RSTOUT, 0, HDMI_PHY_SW_RSTOUT);
+ mdelay(10);
+
+ /* configure presets */
+ preset.preset = hdmi_dev->cur_preset;
+ ret = v4l2_subdev_call(hdmi_dev->phy_sd, video, s_dv_preset, &preset);
+ if (ret) {
+ dev_err(dev, "failed to set preset (%u)\n", preset.preset);
+ return ret;
+ }
+
+ /* resetting HDMI core */
+ hdmi_write_mask(hdmi_dev, HDMI_CORE_RSTOUT, 0, HDMI_CORE_SW_RSTOUT);
+ mdelay(10);
+ hdmi_write_mask(hdmi_dev, HDMI_CORE_RSTOUT, ~0, HDMI_CORE_SW_RSTOUT);
+ mdelay(10);
+
+ hdmi_reg_init(hdmi_dev);
+
+ /* setting core registers */
+ hdmi_timing_apply(hdmi_dev, conf);
+
+ return 0;
+}
+
+int is_hdmiphy_ready(struct hdmi_device *hdev)
+{
+ u32 val = hdmi_read(hdev, HDMI_PHY_STATUS);
+ if (val & HDMI_PHY_STATUS_READY)
+ return 1;
+
+ return 0;
+}
+
+void hdmi_enable(struct hdmi_device *hdev, int on)
+{
+ if (on)
+ hdmi_write_mask(hdev, HDMI_CON_0, ~0, HDMI_EN);
+ else
+ hdmi_write_mask(hdev, HDMI_CON_0, 0, HDMI_EN);
+}
+
+void hdmi_tg_enable(struct hdmi_device *hdev, int on)
+{
+ u32 mask;
+
+ mask = (hdev->cur_conf->mbus_fmt.field == V4L2_FIELD_INTERLACED) ?
+ HDMI_TG_EN | HDMI_FIELD_EN : HDMI_TG_EN;
+
+ if (on)
+ hdmi_write_mask(hdev, HDMI_TG_CMD, ~0, mask);
+ else
+ hdmi_write_mask(hdev, HDMI_TG_CMD, 0, mask);
+}
+
+void hdmi_dumpregs(struct hdmi_device *hdev, char *prefix)
+{
+#define DUMPREG(reg_id) \
+ dev_dbg(hdev->dev, "%s:" #reg_id " = %08x\n", prefix, \
+ readl(hdev->regs + reg_id))
+
+ dev_dbg(hdev->dev, "%s: ---- CONTROL REGISTERS ----\n", prefix);
+ DUMPREG(HDMI_INTC_FLAG);
+ DUMPREG(HDMI_INTC_CON);
+ DUMPREG(HDMI_HPD_STATUS);
+ DUMPREG(HDMI_PHY_RSTOUT);
+ DUMPREG(HDMI_PHY_VPLL);
+ DUMPREG(HDMI_PHY_CMU);
+ DUMPREG(HDMI_CORE_RSTOUT);
+
+ dev_dbg(hdev->dev, "%s: ---- CORE REGISTERS ----\n", prefix);
+ DUMPREG(HDMI_CON_0);
+ DUMPREG(HDMI_CON_1);
+ DUMPREG(HDMI_CON_2);
+ DUMPREG(HDMI_SYS_STATUS);
+ DUMPREG(HDMI_PHY_STATUS);
+ DUMPREG(HDMI_STATUS_EN);
+ DUMPREG(HDMI_HPD);
+ DUMPREG(HDMI_MODE_SEL);
+ DUMPREG(HDMI_HPD_GEN);
+ DUMPREG(HDMI_DC_CONTROL);
+ DUMPREG(HDMI_VIDEO_PATTERN_GEN);
+
+ dev_dbg(hdev->dev, "%s: ---- CORE SYNC REGISTERS ----\n", prefix);
+ DUMPREG(HDMI_H_BLANK_0);
+ DUMPREG(HDMI_H_BLANK_1);
+ DUMPREG(HDMI_V_BLANK_0);
+ DUMPREG(HDMI_V_BLANK_1);
+ DUMPREG(HDMI_V_BLANK_2);
+ DUMPREG(HDMI_H_V_LINE_0);
+ DUMPREG(HDMI_H_V_LINE_1);
+ DUMPREG(HDMI_H_V_LINE_2);
+ DUMPREG(HDMI_VSYNC_POL);
+ DUMPREG(HDMI_INT_PRO_MODE);
+ DUMPREG(HDMI_V_BLANK_F_0);
+ DUMPREG(HDMI_V_BLANK_F_1);
+ DUMPREG(HDMI_V_BLANK_F_2);
+ DUMPREG(HDMI_H_SYNC_GEN_0);
+ DUMPREG(HDMI_H_SYNC_GEN_1);
+ DUMPREG(HDMI_H_SYNC_GEN_2);
+ DUMPREG(HDMI_V_SYNC_GEN_1_0);
+ DUMPREG(HDMI_V_SYNC_GEN_1_1);
+ DUMPREG(HDMI_V_SYNC_GEN_1_2);
+ DUMPREG(HDMI_V_SYNC_GEN_2_0);
+ DUMPREG(HDMI_V_SYNC_GEN_2_1);
+ DUMPREG(HDMI_V_SYNC_GEN_2_2);
+ DUMPREG(HDMI_V_SYNC_GEN_3_0);
+ DUMPREG(HDMI_V_SYNC_GEN_3_1);
+ DUMPREG(HDMI_V_SYNC_GEN_3_2);
+
+ dev_dbg(hdev->dev, "%s: ---- TG REGISTERS ----\n", prefix);
+ DUMPREG(HDMI_TG_CMD);
+ DUMPREG(HDMI_TG_H_FSZ_L);
+ DUMPREG(HDMI_TG_H_FSZ_H);
+ DUMPREG(HDMI_TG_HACT_ST_L);
+ DUMPREG(HDMI_TG_HACT_ST_H);
+ DUMPREG(HDMI_TG_HACT_SZ_L);
+ DUMPREG(HDMI_TG_HACT_SZ_H);
+ DUMPREG(HDMI_TG_V_FSZ_L);
+ DUMPREG(HDMI_TG_V_FSZ_H);
+ DUMPREG(HDMI_TG_VSYNC_L);
+ DUMPREG(HDMI_TG_VSYNC_H);
+ DUMPREG(HDMI_TG_VSYNC2_L);
+ DUMPREG(HDMI_TG_VSYNC2_H);
+ DUMPREG(HDMI_TG_VACT_ST_L);
+ DUMPREG(HDMI_TG_VACT_ST_H);
+ DUMPREG(HDMI_TG_VACT_SZ_L);
+ DUMPREG(HDMI_TG_VACT_SZ_H);
+ DUMPREG(HDMI_TG_FIELD_CHG_L);
+ DUMPREG(HDMI_TG_FIELD_CHG_H);
+ DUMPREG(HDMI_TG_VACT_ST2_L);
+ DUMPREG(HDMI_TG_VACT_ST2_H);
+ DUMPREG(HDMI_TG_VSYNC_TOP_HDMI_L);
+ DUMPREG(HDMI_TG_VSYNC_TOP_HDMI_H);
+ DUMPREG(HDMI_TG_VSYNC_BOT_HDMI_L);
+ DUMPREG(HDMI_TG_VSYNC_BOT_HDMI_H);
+ DUMPREG(HDMI_TG_FIELD_TOP_HDMI_L);
+ DUMPREG(HDMI_TG_FIELD_TOP_HDMI_H);
+ DUMPREG(HDMI_TG_FIELD_BOT_HDMI_L);
+ DUMPREG(HDMI_TG_FIELD_BOT_HDMI_H);
+#undef DUMPREG
+}
--- /dev/null
+/*
+ * Samsung HDMI driver
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ *
+ * Jiun Yu <jiun.yu@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation, either version 2 of the License,
+ * or (at your option) any later version.
+ */
+
+#include <linux/delay.h>
+#include <linux/pm_runtime.h>
+#include <plat/devs.h>
+#include <plat/tv-core.h>
+
+#include "hdmi.h"
+#include "regs-hdmi-5250.h"
+
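+/*
+ * Timing configurations for the supported presets. Multi-byte register
+ * values are stored as {low byte, high byte} pairs; {0xff, 0xff} marks
+ * fields that the preset does not use.
+ */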
+static const struct hdmi_preset_conf hdmi_conf_480p60 = {
+ .core = {
+ .h_blank = {0x8a, 0x00},
+ .v2_blank = {0x0d, 0x02},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x0d, 0x02},
+ .h_line = {0x5a, 0x03},
+ .hsync_pol = {0x01},
+ .vsync_pol = {0x01},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x0e, 0x00},
+ .h_sync_end = {0x4c, 0x00},
+ .v_sync_line_bef_2 = {0x0f, 0x00},
+ .v_sync_line_bef_1 = {0x09, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x5a, 0x03, /* h_fsz */
+ 0x8a, 0x00, 0xd0, 0x02, /* hact */
+ 0x0d, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0xe0, 0x01, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 720,
+ .height = 480,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_720p60 = {
+ .core = {
+ .h_blank = {0x72, 0x01},
+ .v2_blank = {0xee, 0x02},
+ .v1_blank = {0x1e, 0x00},
+ .v_line = {0xee, 0x02},
+ .h_line = {0x72, 0x06},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x6c, 0x00},
+ .h_sync_end = {0x94, 0x00},
+ .v_sync_line_bef_2 = {0x0a, 0x00},
+ .v_sync_line_bef_1 = {0x05, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x72, 0x06, /* h_fsz */
+ 0x72, 0x01, 0x00, 0x05, /* hact */
+ 0xee, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x1e, 0x00, 0xd0, 0x02, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1280,
+ .height = 720,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080i60 = {
+ .core = {
+ .h_blank = {0x18, 0x01},
+ .v2_blank = {0x32, 0x02},
+ .v1_blank = {0x16, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x98, 0x08},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x01},
+ .v_blank_f0 = {0x49, 0x02},
+ .v_blank_f1 = {0x65, 0x04},
+ .h_sync_start = {0x56, 0x00},
+ .h_sync_end = {0x82, 0x00},
+ .v_sync_line_bef_2 = {0x07, 0x00},
+ .v_sync_line_bef_1 = {0x02, 0x00},
+ .v_sync_line_aft_2 = {0x39, 0x02},
+ .v_sync_line_aft_1 = {0x34, 0x02},
+ .v_sync_line_aft_pxl_2 = {0xa4, 0x04},
+ .v_sync_line_aft_pxl_1 = {0xa4, 0x04},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x18, 0x01, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x16, 0x00, 0x1c, 0x02, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x49, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x33, 0x02, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+		.field = V4L2_FIELD_INTERLACED,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p60 = {
+ .core = {
+ .h_blank = {0x18, 0x01},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x98, 0x08},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x56, 0x00},
+ .h_sync_end = {0x82, 0x00},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x18, 0x01, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_576p50 = {
+ .core = {
+ .h_blank = {0x90, 0x00},
+ .v2_blank = {0x71, 0x02},
+ .v1_blank = {0x31, 0x00},
+ .v_line = {0x71, 0x02},
+ .h_line = {0x60, 0x03},
+ .hsync_pol = {0x01},
+ .vsync_pol = {0x01},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x0a, 0x00},
+ .h_sync_end = {0x4a, 0x00},
+ .v_sync_line_bef_2 = {0x0a, 0x00},
+ .v_sync_line_bef_1 = {0x05, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x60, 0x03, /* h_fsz */
+ 0x90, 0x00, 0xd0, 0x02, /* hact */
+ 0x71, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x31, 0x00, 0x40, 0x02, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 720,
+ .height = 576,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_720p50 = {
+ .core = {
+ .h_blank = {0xbc, 0x02},
+ .v2_blank = {0xee, 0x02},
+ .v1_blank = {0x1e, 0x00},
+ .v_line = {0xee, 0x02},
+ .h_line = {0xbc, 0x07},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0xb6, 0x01},
+ .h_sync_end = {0xde, 0x01},
+ .v_sync_line_bef_2 = {0x0a, 0x00},
+ .v_sync_line_bef_1 = {0x05, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0xbc, 0x07, /* h_fsz */
+ 0xbc, 0x02, 0x00, 0x05, /* hact */
+ 0xee, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x1e, 0x00, 0xd0, 0x02, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1280,
+ .height = 720,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080i50 = {
+ .core = {
+ .h_blank = {0xd0, 0x02},
+ .v2_blank = {0x32, 0x02},
+ .v1_blank = {0x16, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x50, 0x0a},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x01},
+ .v_blank_f0 = {0x49, 0x02},
+ .v_blank_f1 = {0x65, 0x04},
+ .h_sync_start = {0x0e, 0x02},
+ .h_sync_end = {0x3a, 0x02},
+ .v_sync_line_bef_2 = {0x07, 0x00},
+ .v_sync_line_bef_1 = {0x02, 0x00},
+ .v_sync_line_aft_2 = {0x39, 0x02},
+ .v_sync_line_aft_1 = {0x34, 0x02},
+ .v_sync_line_aft_pxl_2 = {0x38, 0x07},
+ .v_sync_line_aft_pxl_1 = {0x38, 0x07},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x50, 0x0a, /* h_fsz */
+ 0xd0, 0x02, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x16, 0x00, 0x1c, 0x02, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x49, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x33, 0x02, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+		.field = V4L2_FIELD_INTERLACED,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p50 = {
+ .core = {
+ .h_blank = {0xd0, 0x02},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x50, 0x0a},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x0e, 0x02},
+ .h_sync_end = {0x3a, 0x02},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x50, 0x0a, /* h_fsz */
+ 0xd0, 0x02, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p30 = {
+ .core = {
+ .h_blank = {0x18, 0x01},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x98, 0x08},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x56, 0x00},
+ .h_sync_end = {0x82, 0x00},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x18, 0x01, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p24 = {
+ .core = {
+ .h_blank = {0x3e, 0x03},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0xbe, 0x0a},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x7c, 0x02},
+ .h_sync_end = {0xa8, 0x02},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0xbe, 0x0a, /* h_fsz */
+ 0x3e, 0x03, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p25 = {
+ .core = {
+ .h_blank = {0xd0, 0x02},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x50, 0x0a},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x0e, 0x02},
+ .h_sync_end = {0x3a, 0x02},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x50, 0x0a, /* h_fsz */
+ 0xd0, 0x02, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_480p59_94 = {
+ .core = {
+ .h_blank = {0x8a, 0x00},
+ .v2_blank = {0x0d, 0x02},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x0d, 0x02},
+ .h_line = {0x5a, 0x03},
+ .hsync_pol = {0x01},
+ .vsync_pol = {0x01},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x0e, 0x00},
+ .h_sync_end = {0x4c, 0x00},
+ .v_sync_line_bef_2 = {0x0f, 0x00},
+ .v_sync_line_bef_1 = {0x09, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x5a, 0x03, /* h_fsz */
+ 0x8a, 0x00, 0xd0, 0x02, /* hact */
+ 0x0d, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0xe0, 0x01, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 720,
+ .height = 480,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_720p59_94 = {
+ .core = {
+ .h_blank = {0x72, 0x01},
+ .v2_blank = {0xee, 0x02},
+ .v1_blank = {0x1e, 0x00},
+ .v_line = {0xee, 0x02},
+ .h_line = {0x72, 0x06},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x6c, 0x00},
+ .h_sync_end = {0x94, 0x00},
+ .v_sync_line_bef_2 = {0x0a, 0x00},
+ .v_sync_line_bef_1 = {0x05, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x72, 0x06, /* h_fsz */
+ 0x72, 0x01, 0x00, 0x05, /* hact */
+ 0xee, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x1e, 0x00, 0xd0, 0x02, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1280,
+ .height = 720,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080i59_94 = {
+ .core = {
+ .h_blank = {0x18, 0x01},
+ .v2_blank = {0x32, 0x02},
+ .v1_blank = {0x16, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x98, 0x08},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x01},
+ .v_blank_f0 = {0x49, 0x02},
+ .v_blank_f1 = {0x65, 0x04},
+ .h_sync_start = {0x56, 0x00},
+ .h_sync_end = {0x82, 0x00},
+ .v_sync_line_bef_2 = {0x07, 0x00},
+ .v_sync_line_bef_1 = {0x02, 0x00},
+ .v_sync_line_aft_2 = {0x39, 0x02},
+ .v_sync_line_aft_1 = {0x34, 0x02},
+ .v_sync_line_aft_pxl_2 = {0xa4, 0x04},
+ .v_sync_line_aft_pxl_1 = {0xa4, 0x04},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x18, 0x01, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x16, 0x00, 0x1c, 0x02, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x49, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x33, 0x02, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p59_94 = {
+ .core = {
+ .h_blank = {0x18, 0x01},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x98, 0x08},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x56, 0x00},
+ .h_sync_end = {0x82, 0x00},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x18, 0x01, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_720p60_sb_half = {
+ .core = {
+ .h_blank = {0x72, 0x01},
+ .v2_blank = {0xee, 0x02},
+ .v1_blank = {0x1e, 0x00},
+ .v_line = {0xee, 0x02},
+ .h_line = {0x72, 0x06},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x6c, 0x00},
+ .h_sync_end = {0x94, 0x00},
+ .v_sync_line_bef_2 = {0x0a, 0x00},
+ .v_sync_line_bef_1 = {0x05, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x72, 0x06, /* h_fsz */
+ 0x72, 0x01, 0x00, 0x05, /* hact */
+ 0xee, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x1e, 0x00, 0xd0, 0x02, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x0c, 0x03, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1280,
+ .height = 720,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_720p60_tb = {
+ .core = {
+ .h_blank = {0x72, 0x01},
+ .v2_blank = {0xee, 0x02},
+ .v1_blank = {0x1e, 0x00},
+ .v_line = {0xee, 0x02},
+ .h_line = {0x72, 0x06},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x6c, 0x00},
+ .h_sync_end = {0x94, 0x00},
+ .v_sync_line_bef_2 = {0x0a, 0x00},
+ .v_sync_line_bef_1 = {0x05, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x72, 0x06, /* h_fsz */
+ 0x72, 0x01, 0x00, 0x05, /* hact */
+ 0xee, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x1e, 0x00, 0xd0, 0x02, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x0c, 0x03, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1280,
+ .height = 720,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_720p59_94_sb_half = {
+ .core = {
+ .h_blank = {0x72, 0x01},
+ .v2_blank = {0xee, 0x02},
+ .v1_blank = {0x1e, 0x00},
+ .v_line = {0xee, 0x02},
+ .h_line = {0x72, 0x06},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x6c, 0x00},
+ .h_sync_end = {0x94, 0x00},
+ .v_sync_line_bef_2 = {0x0a, 0x00},
+ .v_sync_line_bef_1 = {0x05, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x72, 0x06, /* h_fsz */
+ 0x72, 0x01, 0x00, 0x05, /* hact */
+ 0xee, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x1e, 0x00, 0xd0, 0x02, /* vact */
+ 0x00, 0x00, /* field_chg */
+ 0x0c, 0x03, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1280,
+ .height = 720,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_720p59_94_tb = {
+ .core = {
+ .h_blank = {0x72, 0x01},
+ .v2_blank = {0xee, 0x02},
+ .v1_blank = {0x1e, 0x00},
+ .v_line = {0xee, 0x02},
+ .h_line = {0x72, 0x06},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x6c, 0x00},
+ .h_sync_end = {0x94, 0x00},
+ .v_sync_line_bef_2 = {0x0a, 0x00},
+ .v_sync_line_bef_1 = {0x05, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x72, 0x06, /* h_fsz */
+ 0x72, 0x01, 0x00, 0x05, /* hact */
+ 0xee, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x1e, 0x00, 0xd0, 0x02, /* vact */
+ 0x00, 0x00, /* field_chg */
+ 0x0c, 0x03, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1280,
+ .height = 720,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_720p50_sb_half = {
+ .core = {
+ .h_blank = {0xbc, 0x02},
+ .v2_blank = {0xdc, 0x05},
+ .v1_blank = {0x1e, 0x00},
+ .v_line = {0xdc, 0x05},
+ .h_line = {0xbc, 0x07},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0xb6, 0x01},
+ .h_sync_end = {0xde, 0x01},
+ .v_sync_line_bef_2 = {0x0a, 0x00},
+ .v_sync_line_bef_1 = {0x05, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xee, 0x02},
+ .vact_space_2 = {0x0c, 0x03},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0xbc, 0x07, /* h_fsz */
+ 0xbc, 0x02, 0x00, 0x05, /* hact */
+ 0xee, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x1e, 0x00, 0xd0, 0x02, /* vact */
+ 0x00, 0x00, /* field_chg */
+ 0x0c, 0x03, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1280,
+ .height = 720,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_720p50_tb = {
+ .core = {
+ .h_blank = {0xbc, 0x02},
+ .v2_blank = {0xdc, 0x05},
+ .v1_blank = {0x1e, 0x00},
+ .v_line = {0xdc, 0x05},
+ .h_line = {0xbc, 0x07},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0xb6, 0x01},
+ .h_sync_end = {0xde, 0x01},
+ .v_sync_line_bef_2 = {0x0a, 0x00},
+ .v_sync_line_bef_1 = {0x05, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xee, 0x02},
+ .vact_space_2 = {0x0c, 0x03},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0xbc, 0x07, /* h_fsz */
+ 0xbc, 0x02, 0x00, 0x05, /* hact */
+ 0xee, 0x02, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x1e, 0x00, 0xd0, 0x02, /* vact */
+ 0x00, 0x00, /* field_chg */
+ 0x0c, 0x03, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1280,
+ .height = 720,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p24_fp = {
+ .core = {
+ .h_blank = {0x3e, 0x03},
+ .v2_blank = {0xca, 0x08},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0xca, 0x08},
+ .h_line = {0xbe, 0x0a},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x7c, 0x02},
+ .h_sync_end = {0xa8, 0x02},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0x65, 0x04},
+ .vact_space_2 = {0x92, 0x04},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0xbe, 0x0a, /* h_fsz */
+ 0x3e, 0x03, 0x80, 0x07, /* hact */
+ 0xca, 0x08, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x92, 0x04, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x01, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p24_sb_half = {
+ .core = {
+ .h_blank = {0x3e, 0x03},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0xbe, 0x0a},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x7c, 0x02},
+ .h_sync_end = {0xa8, 0x02},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0xbe, 0x0a, /* h_fsz */
+ 0x3e, 0x03, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p24_tb = {
+ .core = {
+ .h_blank = {0x3e, 0x03},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0xbe, 0x0a},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x7c, 0x02},
+ .h_sync_end = {0xa8, 0x02},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0xbe, 0x0a, /* h_fsz */
+ 0x3e, 0x03, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p23_98_fp = {
+ .core = {
+ .h_blank = {0x3e, 0x03},
+ .v2_blank = {0xca, 0x08},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0xca, 0x08},
+ .h_line = {0xbe, 0x0a},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x7c, 0x02},
+ .h_sync_end = {0xa8, 0x02},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0x65, 0x04},
+ .vact_space_2 = {0x92, 0x04},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0xbe, 0x0a, /* h_fsz */
+ 0x3e, 0x03, 0x80, 0x07, /* hact */
+ 0xca, 0x08, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x92, 0x04, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x01, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p23_98_sb_half = {
+ .core = {
+ .h_blank = {0x3e, 0x03},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0xbe, 0x0a},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x7c, 0x02},
+ .h_sync_end = {0xa8, 0x02},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0xbe, 0x0a, /* h_fsz */
+ 0x3e, 0x03, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p23_98_tb = {
+ .core = {
+ .h_blank = {0x3e, 0x03},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0xbe, 0x0a},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x7c, 0x02},
+ .h_sync_end = {0xa8, 0x02},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0xbe, 0x0a, /* h_fsz */
+ 0x3e, 0x03, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080i60_sb_half = {
+ .core = {
+ .h_blank = {0x18, 0x01},
+ .v2_blank = {0x32, 0x02},
+ .v1_blank = {0x16, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x98, 0x08},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x01},
+ .v_blank_f0 = {0x49, 0x02},
+ .v_blank_f1 = {0x65, 0x04},
+ .h_sync_start = {0x56, 0x00},
+ .h_sync_end = {0x82, 0x00},
+ .v_sync_line_bef_2 = {0x07, 0x00},
+ .v_sync_line_bef_1 = {0x02, 0x00},
+ .v_sync_line_aft_2 = {0x39, 0x02},
+ .v_sync_line_aft_1 = {0x34, 0x02},
+ .v_sync_line_aft_pxl_2 = {0xa4, 0x04},
+ .v_sync_line_aft_pxl_1 = {0xa4, 0x04},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x18, 0x01, 0x80, 0x07, /* hact */
+ 0x64, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x16, 0x00, 0x1c, 0x02, /* vact */
+ 0x65, 0x04, /* field_chg */
+ 0x49, 0x02, /* vact_st2 */
+ 0x7b, 0x04, /* vact_st3 */
+ 0xae, 0x06, /* vact_st4 */
+ 0x01, 0x00, 0x33, 0x02, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080i59_94_sb_half = {
+ .core = {
+ .h_blank = {0x18, 0x01},
+ .v2_blank = {0x32, 0x02},
+ .v1_blank = {0x16, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x98, 0x08},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x01},
+ .v_blank_f0 = {0x49, 0x02},
+ .v_blank_f1 = {0x65, 0x04},
+ .h_sync_start = {0x56, 0x00},
+ .h_sync_end = {0x82, 0x00},
+ .v_sync_line_bef_2 = {0x07, 0x00},
+ .v_sync_line_bef_1 = {0x02, 0x00},
+ .v_sync_line_aft_2 = {0x39, 0x02},
+ .v_sync_line_aft_1 = {0x34, 0x02},
+ .v_sync_line_aft_pxl_2 = {0xa4, 0x04},
+ .v_sync_line_aft_pxl_1 = {0xa4, 0x04},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x18, 0x01, 0x80, 0x07, /* hact */
+ 0x64, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x16, 0x00, 0x1c, 0x02, /* vact */
+ 0x65, 0x04, /* field_chg */
+ 0x49, 0x02, /* vact_st2 */
+ 0x7b, 0x04, /* vact_st3 */
+ 0xae, 0x06, /* vact_st4 */
+ 0x01, 0x00, 0x33, 0x02, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080i50_sb_half = {
+ .core = {
+ .h_blank = {0xd0, 0x02},
+ .v2_blank = {0x32, 0x02},
+ .v1_blank = {0x16, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x50, 0x0a},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x01},
+ .v_blank_f0 = {0x49, 0x02},
+ .v_blank_f1 = {0x65, 0x04},
+ .h_sync_start = {0x0e, 0x02},
+ .h_sync_end = {0x3a, 0x02},
+ .v_sync_line_bef_2 = {0x07, 0x00},
+ .v_sync_line_bef_1 = {0x02, 0x00},
+ .v_sync_line_aft_2 = {0x39, 0x02},
+ .v_sync_line_aft_1 = {0x34, 0x02},
+ .v_sync_line_aft_pxl_2 = {0x38, 0x07},
+ .v_sync_line_aft_pxl_1 = {0x38, 0x07},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x50, 0x0a, /* h_fsz */
+ 0xd0, 0x02, 0x80, 0x07, /* hact */
+ 0x64, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x16, 0x00, 0x1c, 0x02, /* vact */
+ 0x65, 0x04, /* field_chg */
+ 0x49, 0x02, /* vact_st2 */
+ 0x7b, 0x04, /* vact_st3 */
+ 0xae, 0x06, /* vact_st4 */
+ 0x01, 0x00, 0x33, 0x02, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p60_sb_half = {
+ .core = {
+ .h_blank = {0x18, 0x01},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x98, 0x08},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x56, 0x00},
+ .h_sync_end = {0x82, 0x00},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x18, 0x01, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p60_tb = {
+ .core = {
+ .h_blank = {0x18, 0x01},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x98, 0x08},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x56, 0x00},
+ .h_sync_end = {0x82, 0x00},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x18, 0x01, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p30_sb_half = {
+ .core = {
+ .h_blank = {0x18, 0x01},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x98, 0x08},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x56, 0x00},
+ .h_sync_end = {0x82, 0x00},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x18, 0x01, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_preset_conf hdmi_conf_1080p30_tb = {
+ .core = {
+ .h_blank = {0x18, 0x01},
+ .v2_blank = {0x65, 0x04},
+ .v1_blank = {0x2d, 0x00},
+ .v_line = {0x65, 0x04},
+ .h_line = {0x98, 0x08},
+ .hsync_pol = {0x00},
+ .vsync_pol = {0x00},
+ .int_pro_mode = {0x00},
+ .v_blank_f0 = {0xff, 0xff},
+ .v_blank_f1 = {0xff, 0xff},
+ .h_sync_start = {0x56, 0x00},
+ .h_sync_end = {0x82, 0x00},
+ .v_sync_line_bef_2 = {0x09, 0x00},
+ .v_sync_line_bef_1 = {0x04, 0x00},
+ .v_sync_line_aft_2 = {0xff, 0xff},
+ .v_sync_line_aft_1 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_2 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_1 = {0xff, 0xff},
+ .v_blank_f2 = {0xff, 0xff},
+ .v_blank_f3 = {0xff, 0xff},
+ .v_blank_f4 = {0xff, 0xff},
+ .v_blank_f5 = {0xff, 0xff},
+ .v_sync_line_aft_3 = {0xff, 0xff},
+ .v_sync_line_aft_4 = {0xff, 0xff},
+ .v_sync_line_aft_5 = {0xff, 0xff},
+ .v_sync_line_aft_6 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_3 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_4 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_5 = {0xff, 0xff},
+ .v_sync_line_aft_pxl_6 = {0xff, 0xff},
+ .vact_space_1 = {0xff, 0xff},
+ .vact_space_2 = {0xff, 0xff},
+ .vact_space_3 = {0xff, 0xff},
+ .vact_space_4 = {0xff, 0xff},
+ .vact_space_5 = {0xff, 0xff},
+ .vact_space_6 = {0xff, 0xff},
+ /* other don't care */
+ },
+ .tg = {
+ 0x00, /* cmd */
+ 0x98, 0x08, /* h_fsz */
+ 0x18, 0x01, 0x80, 0x07, /* hact */
+ 0x65, 0x04, /* v_fsz */
+ 0x01, 0x00, 0x33, 0x02, /* vsync */
+ 0x2d, 0x00, 0x38, 0x04, /* vact */
+ 0x33, 0x02, /* field_chg */
+ 0x48, 0x02, /* vact_st2 */
+ 0x00, 0x00, /* vact_st3 */
+ 0x00, 0x00, /* vact_st4 */
+ 0x01, 0x00, 0x01, 0x00, /* vsync top/bot */
+ 0x01, 0x00, 0x33, 0x02, /* field top/bot */
+ 0x00, /* 3d FP */
+ },
+ .mbus_fmt = {
+ .width = 1920,
+ .height = 1080,
+ .code = V4L2_MBUS_FMT_FIXED, /* means RGB888 */
+ .field = V4L2_FIELD_NONE,
+ },
+};
+
+static const struct hdmi_3d_info info_2d = {
+ .is_3d = HDMI_VIDEO_FORMAT_2D,
+};
+
+static const struct hdmi_3d_info info_3d_sb_h = {
+ .is_3d = HDMI_VIDEO_FORMAT_3D,
+ .fmt_3d = HDMI_3D_FORMAT_SB_HALF,
+};
+
+static const struct hdmi_3d_info info_3d_tb = {
+ .is_3d = HDMI_VIDEO_FORMAT_3D,
+ .fmt_3d = HDMI_3D_FORMAT_TB,
+};
+
+static const struct hdmi_3d_info info_3d_fp = {
+ .is_3d = HDMI_VIDEO_FORMAT_3D,
+ .fmt_3d = HDMI_3D_FORMAT_FP,
+};
+
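+/*
+ * Lookup table tying each supported V4L2 DV preset to its timing
+ * configuration and to the 3D mode signalled in the HDMI VSI InfoFrame.
+ */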
+const struct hdmi_conf hdmi_conf[] = {
+ { V4L2_DV_480P59_94, &hdmi_conf_480p59_94, &info_2d },
+ { V4L2_DV_480P60, &hdmi_conf_480p60, &info_2d },
+ { V4L2_DV_576P50, &hdmi_conf_576p50, &info_2d },
+ { V4L2_DV_720P50, &hdmi_conf_720p50, &info_2d },
+ { V4L2_DV_720P59_94, &hdmi_conf_720p59_94, &info_2d },
+ { V4L2_DV_720P60, &hdmi_conf_720p60, &info_2d },
+ { V4L2_DV_1080I50, &hdmi_conf_1080i50, &info_2d },
+ { V4L2_DV_1080I59_94, &hdmi_conf_1080i59_94, &info_2d },
+ { V4L2_DV_1080I60, &hdmi_conf_1080i60, &info_2d },
+ { V4L2_DV_1080P24, &hdmi_conf_1080p24, &info_2d },
+ { V4L2_DV_1080P25, &hdmi_conf_1080p25, &info_2d },
+ { V4L2_DV_1080P30, &hdmi_conf_1080p30, &info_2d },
+ { V4L2_DV_1080P50, &hdmi_conf_1080p50, &info_2d },
+ { V4L2_DV_1080P59_94, &hdmi_conf_1080p59_94, &info_2d },
+ { V4L2_DV_1080P60, &hdmi_conf_1080p60, &info_2d },
+ { V4L2_DV_720P60_SB_HALF, &hdmi_conf_720p60_sb_half, &info_3d_sb_h },
+ { V4L2_DV_720P60_TB, &hdmi_conf_720p60_tb, &info_3d_tb },
+ { V4L2_DV_720P59_94_SB_HALF, &hdmi_conf_720p59_94_sb_half,
+ &info_3d_sb_h },
+ { V4L2_DV_720P59_94_TB, &hdmi_conf_720p59_94_tb, &info_3d_tb },
+ { V4L2_DV_720P50_SB_HALF, &hdmi_conf_720p50_sb_half, &info_3d_sb_h },
+ { V4L2_DV_720P50_TB, &hdmi_conf_720p50_tb, &info_3d_tb },
+ { V4L2_DV_1080P24_FP, &hdmi_conf_1080p24_fp, &info_3d_fp },
+ { V4L2_DV_1080P24_SB_HALF, &hdmi_conf_1080p24_sb_half, &info_3d_sb_h },
+ { V4L2_DV_1080P24_TB, &hdmi_conf_1080p24_tb, &info_3d_tb },
+ { V4L2_DV_1080P23_98_FP, &hdmi_conf_1080p23_98_fp, &info_3d_fp },
+ { V4L2_DV_1080P23_98_SB_HALF, &hdmi_conf_1080p23_98_sb_half,
+ &info_3d_sb_h },
+ { V4L2_DV_1080P23_98_TB, &hdmi_conf_1080p23_98_tb, &info_3d_tb },
+ { V4L2_DV_1080I60_SB_HALF, &hdmi_conf_1080i60_sb_half, &info_3d_sb_h },
+ { V4L2_DV_1080I59_94_SB_HALF, &hdmi_conf_1080i59_94_sb_half,
+ &info_3d_sb_h },
+ { V4L2_DV_1080I50_SB_HALF, &hdmi_conf_1080i50_sb_half, &info_3d_sb_h },
+ { V4L2_DV_1080P60_SB_HALF, &hdmi_conf_1080p60_sb_half, &info_3d_sb_h },
+ { V4L2_DV_1080P60_TB, &hdmi_conf_1080p60_tb, &info_3d_tb },
+ { V4L2_DV_1080P30_SB_HALF, &hdmi_conf_1080p30_sb_half, &info_3d_sb_h },
+ { V4L2_DV_1080P30_TB, &hdmi_conf_1080p30_tb, &info_3d_tb },
+};
+
+const int hdmi_pre_cnt = ARRAY_SIZE(hdmi_conf);
+
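+/*
+ * Top-half interrupt handler: acknowledges HPD plug/unplug and HDCP
+ * interrupts and caches the HPD state. While the device is runtime
+ * suspended its registers are not accessible, so the HPD state is read
+ * back from the GPIO line instead. Further processing is deferred to the
+ * HPD workqueue.
+ */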
+irqreturn_t hdmi_irq_handler(int irq, void *dev_data)
+{
+ struct hdmi_device *hdev = dev_data;
+ u32 intc_flag;
+
+ if (!pm_runtime_suspended(hdev->dev)) {
+ intc_flag = hdmi_read(hdev, HDMI_INTC_FLAG_0);
+ /* clearing flags for HPD plug/unplug */
+ if (intc_flag & HDMI_INTC_FLAG_HPD_UNPLUG) {
+ printk(KERN_INFO "unplugged\n");
+ if (hdev->hdcp_info.hdcp_enable)
+ hdcp_stop(hdev);
+ hdmi_write_mask(hdev, HDMI_INTC_FLAG_0, ~0,
+ HDMI_INTC_FLAG_HPD_UNPLUG);
+ atomic_set(&hdev->hpd_state, HPD_LOW);
+ }
+ if (intc_flag & HDMI_INTC_FLAG_HPD_PLUG) {
+ printk(KERN_INFO "plugged\n");
+ hdmi_write_mask(hdev, HDMI_INTC_FLAG_0, ~0,
+ HDMI_INTC_FLAG_HPD_PLUG);
+ atomic_set(&hdev->hpd_state, HPD_HIGH);
+ }
+ if (intc_flag & HDMI_INTC_FLAG_HDCP) {
+			printk(KERN_INFO "HDCP interrupt occurred\n");
+ hdcp_irq_handler(hdev);
+ hdmi_write_mask(hdev, HDMI_INTC_FLAG_0, ~0,
+ HDMI_INTC_FLAG_HDCP);
+ }
+	} else {
+ if (s5p_v4l2_hpd_read_gpio())
+ atomic_set(&hdev->hpd_state, HPD_HIGH);
+ else
+ atomic_set(&hdev->hpd_state, HPD_LOW);
+ }
+
+ queue_work(hdev->hpd_wq, &hdev->hpd_work);
+
+ return IRQ_HANDLED;
+}
+
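+/*
+ * Basic static setup of the HDMI block: HPD interrupts, HDMI (as opposed
+ * to DVI) mode, bluescreen off, AVI InfoFrame transmission on every VSYNC
+ * and RGB888 output.
+ */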
+void hdmi_reg_init(struct hdmi_device *hdev)
+{
+ /* enable HPD interrupts */
+ hdmi_write_mask(hdev, HDMI_INTC_CON_0, ~0, HDMI_INTC_EN_GLOBAL |
+ HDMI_INTC_EN_HPD_PLUG | HDMI_INTC_EN_HPD_UNPLUG);
+ /* choose HDMI mode */
+ hdmi_write_mask(hdev, HDMI_MODE_SEL,
+ HDMI_MODE_HDMI_EN, HDMI_MODE_MASK);
+ /* disable bluescreen */
+ hdmi_write_mask(hdev, HDMI_CON_0, 0, HDMI_BLUE_SCR_EN);
+ /* enable AVI packet every vsync, fixes purple line problem */
+ hdmi_writeb(hdev, HDMI_AVI_CON, 0x02);
+	/* RGB888 is the default output format of HDMI,
+	 * see CEA-861-D, table 7 for more detail */
+ hdmi_writeb(hdev, HDMI_AVI_BYTE(1), 0 << 5);
+ hdmi_write_mask(hdev, HDMI_CON_1, 2, 3 << 5);
+}
+
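+/*
+ * Program a complete preset: the core timing registers first, then the
+ * timing generator. The tg values come from the positionally initialized
+ * byte array in struct hdmi_tg_regs, matching the per-field writes below.
+ */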
+void hdmi_timing_apply(struct hdmi_device *hdev,
+ const struct hdmi_preset_conf *conf)
+{
+ const struct hdmi_core_regs *core = &conf->core;
+ const struct hdmi_tg_regs *tg = &conf->tg;
+
+ /* setting core registers */
+ hdmi_writeb(hdev, HDMI_H_BLANK_0, core->h_blank[0]);
+ hdmi_writeb(hdev, HDMI_H_BLANK_1, core->h_blank[1]);
+ hdmi_writeb(hdev, HDMI_V2_BLANK_0, core->v2_blank[0]);
+ hdmi_writeb(hdev, HDMI_V2_BLANK_1, core->v2_blank[1]);
+ hdmi_writeb(hdev, HDMI_V1_BLANK_0, core->v1_blank[0]);
+ hdmi_writeb(hdev, HDMI_V1_BLANK_1, core->v1_blank[1]);
+ hdmi_writeb(hdev, HDMI_V_LINE_0, core->v_line[0]);
+ hdmi_writeb(hdev, HDMI_V_LINE_1, core->v_line[1]);
+ hdmi_writeb(hdev, HDMI_H_LINE_0, core->h_line[0]);
+ hdmi_writeb(hdev, HDMI_H_LINE_1, core->h_line[1]);
+ hdmi_writeb(hdev, HDMI_HSYNC_POL, core->hsync_pol[0]);
+ hdmi_writeb(hdev, HDMI_VSYNC_POL, core->vsync_pol[0]);
+ hdmi_writeb(hdev, HDMI_INT_PRO_MODE, core->int_pro_mode[0]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F0_0, core->v_blank_f0[0]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F0_1, core->v_blank_f0[1]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F1_0, core->v_blank_f1[0]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F1_1, core->v_blank_f1[1]);
+ hdmi_writeb(hdev, HDMI_H_SYNC_START_0, core->h_sync_start[0]);
+ hdmi_writeb(hdev, HDMI_H_SYNC_START_1, core->h_sync_start[1]);
+ hdmi_writeb(hdev, HDMI_H_SYNC_END_0, core->h_sync_end[0]);
+ hdmi_writeb(hdev, HDMI_H_SYNC_END_1, core->h_sync_end[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_BEF_2_0, core->v_sync_line_bef_2[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_BEF_2_1, core->v_sync_line_bef_2[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_BEF_1_0, core->v_sync_line_bef_1[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_BEF_1_1, core->v_sync_line_bef_1[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_2_0, core->v_sync_line_aft_2[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_2_1, core->v_sync_line_aft_2[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_1_0, core->v_sync_line_aft_1[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_1_1, core->v_sync_line_aft_1[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_PXL_2_0,
+ core->v_sync_line_aft_pxl_2[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_PXL_2_1,
+ core->v_sync_line_aft_pxl_2[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_PXL_1_0,
+ core->v_sync_line_aft_pxl_1[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_PXL_1_1,
+ core->v_sync_line_aft_pxl_1[1]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F2_0, core->v_blank_f2[0]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F2_1, core->v_blank_f2[1]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F3_0, core->v_blank_f3[0]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F3_1, core->v_blank_f3[1]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F4_0, core->v_blank_f4[0]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F4_1, core->v_blank_f4[1]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F5_0, core->v_blank_f5[0]);
+ hdmi_writeb(hdev, HDMI_V_BLANK_F5_1, core->v_blank_f5[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_3_0, core->v_sync_line_aft_3[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_3_1, core->v_sync_line_aft_3[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_4_0, core->v_sync_line_aft_4[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_4_1, core->v_sync_line_aft_4[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_5_0, core->v_sync_line_aft_5[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_5_1, core->v_sync_line_aft_5[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_6_0, core->v_sync_line_aft_6[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_6_1, core->v_sync_line_aft_6[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_PXL_3_0,
+ core->v_sync_line_aft_pxl_3[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_PXL_3_1,
+ core->v_sync_line_aft_pxl_3[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_PXL_4_0,
+ core->v_sync_line_aft_pxl_4[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_PXL_4_1,
+ core->v_sync_line_aft_pxl_4[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_PXL_5_0,
+ core->v_sync_line_aft_pxl_5[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_PXL_5_1,
+ core->v_sync_line_aft_pxl_5[1]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_PXL_6_0,
+ core->v_sync_line_aft_pxl_6[0]);
+ hdmi_writeb(hdev, HDMI_V_SYNC_LINE_AFT_PXL_6_1,
+ core->v_sync_line_aft_pxl_6[1]);
+ hdmi_writeb(hdev, HDMI_VACT_SPACE_1_0, core->vact_space_1[0]);
+ hdmi_writeb(hdev, HDMI_VACT_SPACE_1_1, core->vact_space_1[1]);
+ hdmi_writeb(hdev, HDMI_VACT_SPACE_2_0, core->vact_space_2[0]);
+ hdmi_writeb(hdev, HDMI_VACT_SPACE_2_1, core->vact_space_2[1]);
+ hdmi_writeb(hdev, HDMI_VACT_SPACE_3_0, core->vact_space_3[0]);
+ hdmi_writeb(hdev, HDMI_VACT_SPACE_3_1, core->vact_space_3[1]);
+ hdmi_writeb(hdev, HDMI_VACT_SPACE_4_0, core->vact_space_4[0]);
+ hdmi_writeb(hdev, HDMI_VACT_SPACE_4_1, core->vact_space_4[1]);
+ hdmi_writeb(hdev, HDMI_VACT_SPACE_5_0, core->vact_space_5[0]);
+ hdmi_writeb(hdev, HDMI_VACT_SPACE_5_1, core->vact_space_5[1]);
+ hdmi_writeb(hdev, HDMI_VACT_SPACE_6_0, core->vact_space_6[0]);
+ hdmi_writeb(hdev, HDMI_VACT_SPACE_6_1, core->vact_space_6[1]);
+
+ /* Timing generator registers */
+ hdmi_writeb(hdev, HDMI_TG_H_FSZ_L, tg->h_fsz_l);
+ hdmi_writeb(hdev, HDMI_TG_H_FSZ_H, tg->h_fsz_h);
+ hdmi_writeb(hdev, HDMI_TG_HACT_ST_L, tg->hact_st_l);
+ hdmi_writeb(hdev, HDMI_TG_HACT_ST_H, tg->hact_st_h);
+ hdmi_writeb(hdev, HDMI_TG_HACT_SZ_L, tg->hact_sz_l);
+ hdmi_writeb(hdev, HDMI_TG_HACT_SZ_H, tg->hact_sz_h);
+ hdmi_writeb(hdev, HDMI_TG_V_FSZ_L, tg->v_fsz_l);
+ hdmi_writeb(hdev, HDMI_TG_V_FSZ_H, tg->v_fsz_h);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC_L, tg->vsync_l);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC_H, tg->vsync_h);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC2_L, tg->vsync2_l);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC2_H, tg->vsync2_h);
+ hdmi_writeb(hdev, HDMI_TG_VACT_ST_L, tg->vact_st_l);
+ hdmi_writeb(hdev, HDMI_TG_VACT_ST_H, tg->vact_st_h);
+ hdmi_writeb(hdev, HDMI_TG_VACT_SZ_L, tg->vact_sz_l);
+ hdmi_writeb(hdev, HDMI_TG_VACT_SZ_H, tg->vact_sz_h);
+ hdmi_writeb(hdev, HDMI_TG_FIELD_CHG_L, tg->field_chg_l);
+ hdmi_writeb(hdev, HDMI_TG_FIELD_CHG_H, tg->field_chg_h);
+ hdmi_writeb(hdev, HDMI_TG_VACT_ST2_L, tg->vact_st2_l);
+ hdmi_writeb(hdev, HDMI_TG_VACT_ST2_H, tg->vact_st2_h);
+ hdmi_writeb(hdev, HDMI_TG_VACT_ST3_L, tg->vact_st3_l);
+ hdmi_writeb(hdev, HDMI_TG_VACT_ST3_H, tg->vact_st3_h);
+ hdmi_writeb(hdev, HDMI_TG_VACT_ST4_L, tg->vact_st4_l);
+ hdmi_writeb(hdev, HDMI_TG_VACT_ST4_H, tg->vact_st4_h);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC_TOP_HDMI_L, tg->vsync_top_hdmi_l);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC_TOP_HDMI_H, tg->vsync_top_hdmi_h);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC_BOT_HDMI_L, tg->vsync_bot_hdmi_l);
+ hdmi_writeb(hdev, HDMI_TG_VSYNC_BOT_HDMI_H, tg->vsync_bot_hdmi_h);
+ hdmi_writeb(hdev, HDMI_TG_FIELD_TOP_HDMI_L, tg->field_top_hdmi_l);
+ hdmi_writeb(hdev, HDMI_TG_FIELD_TOP_HDMI_H, tg->field_top_hdmi_h);
+ hdmi_writeb(hdev, HDMI_TG_FIELD_BOT_HDMI_L, tg->field_bot_hdmi_l);
+ hdmi_writeb(hdev, HDMI_TG_FIELD_BOT_HDMI_H, tg->field_bot_hdmi_h);
+ hdmi_writeb(hdev, HDMI_TG_3D, tg->tg_3d);
+}
+
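+/*
+ * Apply the currently selected preset: configure the HDMI PHY first via
+ * its subdev, then reinitialize the HDMI block and program the timings.
+ */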
+int hdmi_conf_apply(struct hdmi_device *hdmi_dev)
+{
+ struct device *dev = hdmi_dev->dev;
+ const struct hdmi_preset_conf *conf = hdmi_dev->cur_conf;
+ struct v4l2_dv_preset preset;
+ int ret;
+
+ dev_dbg(dev, "%s\n", __func__);
+
+ /* configure presets */
+ preset.preset = hdmi_dev->cur_preset;
+ ret = v4l2_subdev_call(hdmi_dev->phy_sd, video, s_dv_preset, &preset);
+ if (ret) {
+ dev_err(dev, "failed to set preset (%u)\n", preset.preset);
+ return ret;
+ }
+
+ hdmi_reg_init(hdmi_dev);
+
+ /* setting core registers */
+ hdmi_timing_apply(hdmi_dev, conf);
+
+ return 0;
+}
+
+int is_hdmiphy_ready(struct hdmi_device *hdev)
+{
+ u32 val = hdmi_read(hdev, HDMI_PHY_STATUS);
+ if (val & HDMI_PHY_STATUS_READY)
+ return 1;
+
+ return 0;
+}
+
+void hdmi_enable(struct hdmi_device *hdev, int on)
+{
+ if (on)
+ hdmi_write_mask(hdev, HDMI_CON_0, ~0, HDMI_EN);
+ else
+ hdmi_write_mask(hdev, HDMI_CON_0, 0, HDMI_EN);
+}
+
+void hdmi_hpd_enable(struct hdmi_device *hdev, int on)
+{
+	/* enable/disable HPD interrupts */
+	if (on)
+		hdmi_write_mask(hdev, HDMI_INTC_CON_0, ~0, HDMI_INTC_EN_GLOBAL |
+			HDMI_INTC_EN_HPD_PLUG | HDMI_INTC_EN_HPD_UNPLUG);
+	else
+		hdmi_write_mask(hdev, HDMI_INTC_CON_0, 0, HDMI_INTC_EN_GLOBAL |
+			HDMI_INTC_EN_HPD_PLUG | HDMI_INTC_EN_HPD_UNPLUG);
+}
+
+void hdmi_tg_enable(struct hdmi_device *hdev, int on)
+{
+ u32 mask;
+
+ mask = (hdev->cur_conf->mbus_fmt.field == V4L2_FIELD_INTERLACED) ?
+ HDMI_TG_EN | HDMI_FIELD_EN : HDMI_TG_EN;
+
+ if (on)
+ hdmi_write_mask(hdev, HDMI_TG_CMD, ~0, mask);
+ else
+ hdmi_write_mask(hdev, HDMI_TG_CMD, 0, mask);
+}
+
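+/*
+ * InfoFrame checksum as defined by CEA-861: the checksum byte is chosen so
+ * that header, payload and checksum sum to zero modulo 256. Payload bytes
+ * live in registers spaced 4 bytes apart, hence the i * 4 stride.
+ */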
+static u8 hdmi_chksum(struct hdmi_device *hdev, u32 start, u8 len, u32 hdr_sum)
+{
+ int i;
+
+ /* hdr_sum : header0 + header1 + header2
+ * start : start address of packet byte1
+ * len : packet bytes - 1 */
+ for (i = 0; i < len; ++i)
+ hdr_sum += hdmi_read(hdev, start + i * 4);
+
+ return (u8)(0x100 - (hdr_sum & 0xff));
+}
+
+void hdmi_reg_stop_vsi(struct hdmi_device *hdev)
+{
+ hdmi_writeb(hdev, HDMI_VSI_CON, HDMI_VSI_CON_DO_NOT_TRANSMIT);
+}
+
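+/*
+ * Write a VSI or AVI InfoFrame. The Vendor Specific InfoFrame carries the
+ * HDMI 1.4 3D signalling (IEEE OUI 0x000c03 plus the 3D structure field);
+ * side-by-side half additionally needs the 3D_Ext_Data byte, which extends
+ * the payload length by one.
+ */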
+void hdmi_reg_infoframe(struct hdmi_device *hdev,
+ struct hdmi_infoframe *infoframe)
+{
+ struct device *dev = hdev->dev;
+ const struct hdmi_3d_info *info = hdmi_preset2info(hdev->cur_preset);
+ u32 hdr_sum;
+	u8 chksum;
+
+	dev_dbg(dev, "%s: InfoFrame type = 0x%x\n", __func__, infoframe->type);
+
+ switch (infoframe->type) {
+ case HDMI_PACKET_TYPE_VSI:
+ hdmi_writeb(hdev, HDMI_VSI_CON, HDMI_VSI_CON_EVERY_VSYNC);
+ hdmi_writeb(hdev, HDMI_VSI_HEADER0, infoframe->type);
+ hdmi_writeb(hdev, HDMI_VSI_HEADER1, infoframe->ver);
+ /* 0x000C03 : 24-bit IEEE Registration Identifier */
+ hdmi_writeb(hdev, HDMI_VSI_DATA(1), 0x03);
+ hdmi_writeb(hdev, HDMI_VSI_DATA(2), 0x0c);
+ hdmi_writeb(hdev, HDMI_VSI_DATA(3), 0x00);
+ hdmi_writeb(hdev, HDMI_VSI_DATA(4),
+ HDMI_VSI_DATA04_VIDEO_FORMAT(info->is_3d));
+ hdmi_writeb(hdev, HDMI_VSI_DATA(5),
+ HDMI_VSI_DATA05_3D_STRUCTURE(info->fmt_3d));
+ if (info->fmt_3d == HDMI_3D_FORMAT_SB_HALF) {
+ infoframe->len += 1;
+ hdmi_writeb(hdev, HDMI_VSI_DATA(6),
+ (u8)HDMI_VSI_DATA06_3D_EXT_DATA(HDMI_H_SUB_SAMPLE));
+ }
+ hdmi_writeb(hdev, HDMI_VSI_HEADER2, infoframe->len);
+ hdr_sum = infoframe->type + infoframe->ver + infoframe->len;
+ chksum = hdmi_chksum(hdev, HDMI_VSI_DATA(1), infoframe->len, hdr_sum);
+ dev_dbg(dev, "VSI checksum = 0x%x\n", chksum);
+ hdmi_writeb(hdev, HDMI_VSI_DATA(0), chksum);
+ break;
+ case HDMI_PACKET_TYPE_AVI:
+ hdmi_writeb(hdev, HDMI_AVI_CON, HDMI_AVI_CON_EVERY_VSYNC);
+ hdmi_writeb(hdev, HDMI_AVI_HEADER0, infoframe->type);
+ hdmi_writeb(hdev, HDMI_AVI_HEADER1, infoframe->ver);
+ hdmi_writeb(hdev, HDMI_AVI_HEADER2, infoframe->len);
+ hdmi_writeb(hdev, HDMI_AVI_BYTE(1), hdev->output_fmt << 5);
+ hdr_sum = infoframe->type + infoframe->ver + infoframe->len;
+ chksum = hdmi_chksum(hdev, HDMI_AVI_BYTE(1), infoframe->len, hdr_sum);
+ dev_dbg(dev, "AVI checksum = 0x%x\n", chksum);
+ hdmi_writeb(hdev, HDMI_AVI_CHECK_SUM, chksum);
+ break;
+ default:
+ break;
+ }
+}
+
+void hdmi_reg_set_acr(struct hdmi_device *hdev)
+{
+ u32 n, cts;
+ int sample_rate = hdev->sample_rate;
+
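+	/*
+	 * Recommended N values for each sampling rate, per the HDMI
+	 * specification. The cts values correspond to a 27 MHz TMDS clock,
+	 * but since the ACR packet is sent in measured-CTS mode below, the
+	 * hardware measures CTS itself and only N needs to be programmed.
+	 */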
+ if (sample_rate == 32000) {
+ n = 4096;
+ cts = 27000;
+ } else if (sample_rate == 44100) {
+ n = 6272;
+ cts = 30000;
+ } else if (sample_rate == 48000) {
+ n = 6144;
+ cts = 27000;
+ } else if (sample_rate == 88200) {
+ n = 12544;
+ cts = 30000;
+ } else if (sample_rate == 96000) {
+ n = 12288;
+ cts = 27000;
+ } else if (sample_rate == 176400) {
+ n = 25088;
+ cts = 30000;
+ } else if (sample_rate == 192000) {
+ n = 24576;
+ cts = 27000;
+ } else {
+ n = 0;
+ cts = 0;
+ }
+
+ hdmi_write(hdev, HDMI_ACR_N0, HDMI_ACR_N0_VAL(n));
+ hdmi_write(hdev, HDMI_ACR_N1, HDMI_ACR_N1_VAL(n));
+ hdmi_write(hdev, HDMI_ACR_N2, HDMI_ACR_N2_VAL(n));
+
+ /* transfer ACR packet */
+ hdmi_write(hdev, HDMI_ACR_CON, HDMI_ACR_CON_TX_MODE_MESURED_CTS);
+}
+
+void hdmi_reg_spdif_audio_init(struct hdmi_device *hdev)
+{
+ u32 val;
+ int bps, rep_time;
+
+ hdmi_write(hdev, HDMI_I2S_CLK_CON, HDMI_I2S_CLK_ENABLE);
+
+ val = HDMI_SPDIFIN_CFG_NOISE_FILTER_2_SAMPLE |
+ HDMI_SPDIFIN_CFG_PCPD_MANUAL |
+ HDMI_SPDIFIN_CFG_WORD_LENGTH_MANUAL |
+ HDMI_SPDIFIN_CFG_UVCP_REPORT |
+ HDMI_SPDIFIN_CFG_HDMI_2_BURST |
+ HDMI_SPDIFIN_CFG_DATA_ALIGN_32;
+ hdmi_write(hdev, HDMI_SPDIFIN_CONFIG_1, val);
+ hdmi_write(hdev, HDMI_SPDIFIN_CONFIG_2, 0);
+
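+ /*
+ * Non-PCM streams are carried as 16-bit IEC 61937 bursts; for AC-3
+ * the burst repetition period is 1536 frames (the register value of
+ * 1536 * 2 - 1 is assumed to encode that period).
+ */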
+ bps = hdev->audio_codec == HDMI_AUDIO_PCM ? hdev->bits_per_sample : 16;
+ rep_time = hdev->audio_codec == HDMI_AUDIO_AC3 ? 1536 * 2 - 1 : 0;
+ val = HDMI_SPDIFIN_USER_VAL_REPETITION_TIME_LOW(rep_time) |
+ HDMI_SPDIFIN_USER_VAL_WORD_LENGTH_24;
+ hdmi_write(hdev, HDMI_SPDIFIN_USER_VALUE_1, val);
+ val = HDMI_SPDIFIN_USER_VAL_REPETITION_TIME_HIGH(rep_time);
+ hdmi_write(hdev, HDMI_SPDIFIN_USER_VALUE_2, val);
+ hdmi_write(hdev, HDMI_SPDIFIN_USER_VALUE_3, 0);
+ hdmi_write(hdev, HDMI_SPDIFIN_USER_VALUE_4, 0);
+
+ val = HDMI_I2S_IN_ENABLE | HDMI_I2S_AUD_SPDIF | HDMI_I2S_MUX_ENABLE;
+ hdmi_write(hdev, HDMI_I2S_IN_MUX_CON, val);
+
+ hdmi_write(hdev, HDMI_I2S_MUX_CH, HDMI_I2S_CH_ALL_EN);
+ hdmi_write(hdev, HDMI_I2S_MUX_CUV, HDMI_I2S_CUV_RL_EN);
+
+ hdmi_write_mask(hdev, HDMI_SPDIFIN_CLK_CTRL, 0, HDMI_SPDIFIN_CLK_ON);
+ hdmi_write_mask(hdev, HDMI_SPDIFIN_CLK_CTRL, ~0, HDMI_SPDIFIN_CLK_ON);
+
+ hdmi_write(hdev, HDMI_SPDIFIN_OP_CTRL, HDMI_SPDIFIN_STATUS_CHECK_MODE);
+ hdmi_write(hdev, HDMI_SPDIFIN_OP_CTRL,
+ HDMI_SPDIFIN_STATUS_CHECK_MODE_HDMI);
+}
+
+void hdmi_reg_i2s_audio_init(struct hdmi_device *hdev)
+{
+ u32 data_num, bit_ch, sample_frq, val;
+ int sample_rate = hdev->sample_rate;
+ int bits_per_sample = hdev->bits_per_sample;
+
+ if (bits_per_sample == 16) {
+ data_num = 1;
+ bit_ch = 0;
+ } else if (bits_per_sample == 20) {
+ data_num = 2;
+ bit_ch = 1;
+ } else if (bits_per_sample == 24) {
+ data_num = 3;
+ bit_ch = 1;
+ } else if (bits_per_sample == 32) {
+ data_num = 1;
+ bit_ch = 2;
+ } else {
+ data_num = 1;
+ bit_ch = 0;
+ }
+
+ /* reset I2S */
+ hdmi_write(hdev, HDMI_I2S_CLK_CON, HDMI_I2S_CLK_DISABLE);
+ hdmi_write(hdev, HDMI_I2S_CLK_CON, HDMI_I2S_CLK_ENABLE);
+
+ hdmi_write_mask(hdev, HDMI_I2S_DSD_CON, 0, HDMI_I2S_DSD_ENABLE);
+
+ /* configure the I2S input ports (I2S_PIN_SEL_0~3) */
+ val = HDMI_I2S_SEL_SCLK(5) | HDMI_I2S_SEL_LRCK(6);
+ hdmi_write(hdev, HDMI_I2S_PIN_SEL_0, val);
+ val = HDMI_I2S_SEL_SDATA1(3) | HDMI_I2S_SEL_SDATA0(4);
+ hdmi_write(hdev, HDMI_I2S_PIN_SEL_1, val);
+ val = HDMI_I2S_SEL_SDATA3(1) | HDMI_I2S_SEL_SDATA2(2);
+ hdmi_write(hdev, HDMI_I2S_PIN_SEL_2, val);
+ hdmi_write(hdev, HDMI_I2S_PIN_SEL_3, HDMI_I2S_SEL_DSD(0));
+
+ /* I2S_CON_1 & 2 */
+ val = HDMI_I2S_SCLK_FALLING_EDGE | HDMI_I2S_L_CH_LOW_POL;
+ hdmi_write(hdev, HDMI_I2S_CON_1, val);
+ val = HDMI_I2S_MSB_FIRST_MODE | HDMI_I2S_SET_BIT_CH(bit_ch) |
+ HDMI_I2S_SET_SDATA_BIT(data_num) | HDMI_I2S_BASIC_FORMAT;
+ hdmi_write(hdev, HDMI_I2S_CON_2, val);
+
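+ /* IEC 60958 channel-status sampling-frequency codes */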
+ if (sample_rate == 32000)
+ sample_frq = 0x3;
+ else if (sample_rate == 44100)
+ sample_frq = 0x0;
+ else if (sample_rate == 48000)
+ sample_frq = 0x2;
+ else if (sample_rate == 96000)
+ sample_frq = 0xa;
+ else
+ sample_frq = 0;
+
+ /* configure registers related to CUV (channel status) information */
+ val = HDMI_I2S_CH_STATUS_MODE_0 | HDMI_I2S_2AUD_CH_WITHOUT_PREEMPH |
+ HDMI_I2S_COPYRIGHT | HDMI_I2S_LINEAR_PCM |
+ HDMI_I2S_CONSUMER_FORMAT;
+ hdmi_write(hdev, HDMI_I2S_CH_ST_0, val);
+ hdmi_write(hdev, HDMI_I2S_CH_ST_1, HDMI_I2S_CD_PLAYER);
+ hdmi_write(hdev, HDMI_I2S_CH_ST_2, HDMI_I2S_SET_SOURCE_NUM(0));
+ val = HDMI_I2S_CLK_ACCUR_LEVEL_1 |
+ HDMI_I2S_SET_SAMPLING_FREQ(sample_frq);
+ hdmi_write(hdev, HDMI_I2S_CH_ST_3, val);
+ val = HDMI_I2S_ORG_SAMPLING_FREQ_44_1 |
+ HDMI_I2S_WORD_LENGTH_MAX24_20BITS |
+ HDMI_I2S_WORD_LENGTH_MAX_20BITS;
+ hdmi_write(hdev, HDMI_I2S_CH_ST_4, val);
+
+ hdmi_write(hdev, HDMI_I2S_CH_ST_CON, HDMI_I2S_CH_STATUS_RELOAD);
+
+ val = HDMI_I2S_IN_ENABLE | HDMI_I2S_AUD_I2S | HDMI_I2S_CUV_I2S_ENABLE
+ | HDMI_I2S_MUX_ENABLE;
+ hdmi_write(hdev, HDMI_I2S_IN_MUX_CON, val);
+
+ val = HDMI_I2S_CH0_L_EN | HDMI_I2S_CH0_R_EN | HDMI_I2S_CH1_L_EN |
+ HDMI_I2S_CH1_R_EN | HDMI_I2S_CH2_L_EN | HDMI_I2S_CH2_R_EN |
+ HDMI_I2S_CH3_L_EN | HDMI_I2S_CH3_R_EN;
+ hdmi_write(hdev, HDMI_I2S_MUX_CH, val);
+
+ val = HDMI_I2S_CUV_L_EN | HDMI_I2S_CUV_R_EN;
+ hdmi_write(hdev, HDMI_I2S_MUX_CUV, val);
+}
+
+void hdmi_audio_enable(struct hdmi_device *hdev, int on)
+{
+ if (on) {
+ hdmi_write(hdev, HDMI_AUI_CON, HDMI_AUI_CON_TRANS_EVERY_VSYNC);
+ hdmi_write_mask(hdev, HDMI_CON_0, ~0, HDMI_ASP_ENABLE);
+ } else {
+ hdmi_write(hdev, HDMI_AUI_CON, HDMI_AUI_CON_NO_TRAN);
+ hdmi_write_mask(hdev, HDMI_CON_0, 0, HDMI_ASP_ENABLE);
+ }
+}
+
+void hdmi_bluescreen_enable(struct hdmi_device *hdev, int on)
+{
+ if (on)
+ hdmi_write_mask(hdev, HDMI_CON_0, ~0, HDMI_BLUE_SCR_EN);
+ else
+ hdmi_write_mask(hdev, HDMI_CON_0, 0, HDMI_BLUE_SCR_EN);
+}
+
+void hdmi_reg_mute(struct hdmi_device *hdev, int on)
+{
+ hdmi_bluescreen_enable(hdev, on);
+ hdmi_audio_enable(hdev, !on);
+}
+
+int hdmi_hpd_status(struct hdmi_device *hdev)
+{
+ return hdmi_read(hdev, HDMI_HPD_STATUS);
+}
+
+int is_hdmi_streaming(struct hdmi_device *hdev)
+{
+ if (hdmi_hpd_status(hdev) && hdev->streaming)
+ return 1;
+ return 0;
+}
+
+u8 hdmi_get_int_mask(struct hdmi_device *hdev)
+{
+ return hdmi_readb(hdev, HDMI_INTC_CON_0);
+}
+
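+/* enabling unmasks the given sources plus the global bit; disabling clears only the global enable */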
+void hdmi_set_int_mask(struct hdmi_device *hdev, u8 mask, int en)
+{
+ if (en) {
+ mask |= HDMI_INTC_EN_GLOBAL;
+ hdmi_write_mask(hdev, HDMI_INTC_CON_0, ~0, mask);
+ } else
+ hdmi_write_mask(hdev, HDMI_INTC_CON_0, 0,
+ HDMI_INTC_EN_GLOBAL);
+}
+
+void hdmi_sw_hpd_enable(struct hdmi_device *hdev, int en)
+{
+ if (en)
+ hdmi_write_mask(hdev, HDMI_HPD, ~0, HDMI_HPD_SEL_I_HPD);
+ else
+ hdmi_write_mask(hdev, HDMI_HPD, 0, HDMI_HPD_SEL_I_HPD);
+}
+
+void hdmi_sw_hpd_plug(struct hdmi_device *hdev, int en)
+{
+ if (en)
+ hdmi_write_mask(hdev, HDMI_HPD, ~0, HDMI_SW_HPD_PLUGGED);
+ else
+ hdmi_write_mask(hdev, HDMI_HPD, 0, HDMI_SW_HPD_PLUGGED);
+}
+
+void hdmi_phy_sw_reset(struct hdmi_device *hdev)
+{
+ hdmi_write_mask(hdev, HDMI_PHY_RSTOUT, ~0, HDMI_PHY_SW_RSTOUT);
+ mdelay(10);
+ hdmi_write_mask(hdev, HDMI_PHY_RSTOUT, 0, HDMI_PHY_SW_RSTOUT);
+}
+
+void hdmi_dumpregs(struct hdmi_device *hdev, char *prefix)
+{
+#define DUMPREG(reg_id) \
+ dev_dbg(hdev->dev, "%s:" #reg_id " = %08x\n", prefix, \
+ readl(hdev->regs + reg_id))
+
+ int i;
+
+ dev_dbg(hdev->dev, "%s: ---- CONTROL REGISTERS ----\n", prefix);
+ DUMPREG(HDMI_INTC_CON_0);
+ DUMPREG(HDMI_INTC_FLAG_0);
+ DUMPREG(HDMI_HPD_STATUS);
+ DUMPREG(HDMI_INTC_CON_1);
+ DUMPREG(HDMI_INTC_FLAG_1);
+ DUMPREG(HDMI_PHY_STATUS_0);
+ DUMPREG(HDMI_PHY_STATUS_PLL);
+ DUMPREG(HDMI_PHY_CON_0);
+ DUMPREG(HDMI_PHY_RSTOUT);
+ DUMPREG(HDMI_PHY_VPLL);
+ DUMPREG(HDMI_PHY_CMU);
+ DUMPREG(HDMI_CORE_RSTOUT);
+
+ dev_dbg(hdev->dev, "%s: ---- CORE REGISTERS ----\n", prefix);
+ DUMPREG(HDMI_CON_0);
+ DUMPREG(HDMI_CON_1);
+ DUMPREG(HDMI_CON_2);
+ DUMPREG(HDMI_STATUS);
+ DUMPREG(HDMI_PHY_STATUS);
+ DUMPREG(HDMI_STATUS_EN);
+ DUMPREG(HDMI_HPD);
+ DUMPREG(HDMI_MODE_SEL);
+ DUMPREG(HDMI_ENC_EN);
+ DUMPREG(HDMI_DC_CONTROL);
+ DUMPREG(HDMI_VIDEO_PATTERN_GEN);
+
+ dev_dbg(hdev->dev, "%s: ---- CORE SYNC REGISTERS ----\n", prefix);
+ DUMPREG(HDMI_H_BLANK_0);
+ DUMPREG(HDMI_H_BLANK_1);
+ DUMPREG(HDMI_V2_BLANK_0);
+ DUMPREG(HDMI_V2_BLANK_1);
+ DUMPREG(HDMI_V1_BLANK_0);
+ DUMPREG(HDMI_V1_BLANK_1);
+ DUMPREG(HDMI_V_LINE_0);
+ DUMPREG(HDMI_V_LINE_1);
+ DUMPREG(HDMI_H_LINE_0);
+ DUMPREG(HDMI_H_LINE_1);
+ DUMPREG(HDMI_HSYNC_POL);
+
+ DUMPREG(HDMI_VSYNC_POL);
+ DUMPREG(HDMI_INT_PRO_MODE);
+ DUMPREG(HDMI_V_BLANK_F0_0);
+ DUMPREG(HDMI_V_BLANK_F0_1);
+ DUMPREG(HDMI_V_BLANK_F1_0);
+ DUMPREG(HDMI_V_BLANK_F1_1);
+
+ DUMPREG(HDMI_H_SYNC_START_0);
+ DUMPREG(HDMI_H_SYNC_START_1);
+ DUMPREG(HDMI_H_SYNC_END_0);
+ DUMPREG(HDMI_H_SYNC_END_1);
+
+ DUMPREG(HDMI_V_SYNC_LINE_BEF_2_0);
+ DUMPREG(HDMI_V_SYNC_LINE_BEF_2_1);
+ DUMPREG(HDMI_V_SYNC_LINE_BEF_1_0);
+ DUMPREG(HDMI_V_SYNC_LINE_BEF_1_1);
+
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_2_0);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_2_1);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_1_0);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_1_1);
+
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_PXL_2_0);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_PXL_2_1);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_PXL_1_0);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_PXL_1_1);
+
+ DUMPREG(HDMI_V_BLANK_F2_0);
+ DUMPREG(HDMI_V_BLANK_F2_1);
+ DUMPREG(HDMI_V_BLANK_F3_0);
+ DUMPREG(HDMI_V_BLANK_F3_1);
+ DUMPREG(HDMI_V_BLANK_F4_0);
+ DUMPREG(HDMI_V_BLANK_F4_1);
+ DUMPREG(HDMI_V_BLANK_F5_0);
+ DUMPREG(HDMI_V_BLANK_F5_1);
+
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_3_0);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_3_1);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_4_0);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_4_1);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_5_0);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_5_1);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_6_0);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_6_1);
+
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_PXL_3_0);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_PXL_3_1);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_PXL_4_0);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_PXL_4_1);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_PXL_5_0);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_PXL_5_1);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_PXL_6_0);
+ DUMPREG(HDMI_V_SYNC_LINE_AFT_PXL_6_1);
+
+ DUMPREG(HDMI_VACT_SPACE_1_0);
+ DUMPREG(HDMI_VACT_SPACE_1_1);
+ DUMPREG(HDMI_VACT_SPACE_2_0);
+ DUMPREG(HDMI_VACT_SPACE_2_1);
+ DUMPREG(HDMI_VACT_SPACE_3_0);
+ DUMPREG(HDMI_VACT_SPACE_3_1);
+ DUMPREG(HDMI_VACT_SPACE_4_0);
+ DUMPREG(HDMI_VACT_SPACE_4_1);
+ DUMPREG(HDMI_VACT_SPACE_5_0);
+ DUMPREG(HDMI_VACT_SPACE_5_1);
+ DUMPREG(HDMI_VACT_SPACE_6_0);
+ DUMPREG(HDMI_VACT_SPACE_6_1);
+
+ dev_dbg(hdev->dev, "%s: ---- TG REGISTERS ----\n", prefix);
+ DUMPREG(HDMI_TG_CMD);
+ DUMPREG(HDMI_TG_H_FSZ_L);
+ DUMPREG(HDMI_TG_H_FSZ_H);
+ DUMPREG(HDMI_TG_HACT_ST_L);
+ DUMPREG(HDMI_TG_HACT_ST_H);
+ DUMPREG(HDMI_TG_HACT_SZ_L);
+ DUMPREG(HDMI_TG_HACT_SZ_H);
+ DUMPREG(HDMI_TG_V_FSZ_L);
+ DUMPREG(HDMI_TG_V_FSZ_H);
+ DUMPREG(HDMI_TG_VSYNC_L);
+ DUMPREG(HDMI_TG_VSYNC_H);
+ DUMPREG(HDMI_TG_VSYNC2_L);
+ DUMPREG(HDMI_TG_VSYNC2_H);
+ DUMPREG(HDMI_TG_VACT_ST_L);
+ DUMPREG(HDMI_TG_VACT_ST_H);
+ DUMPREG(HDMI_TG_VACT_SZ_L);
+ DUMPREG(HDMI_TG_VACT_SZ_H);
+ DUMPREG(HDMI_TG_FIELD_CHG_L);
+ DUMPREG(HDMI_TG_FIELD_CHG_H);
+ DUMPREG(HDMI_TG_VACT_ST2_L);
+ DUMPREG(HDMI_TG_VACT_ST2_H);
+ DUMPREG(HDMI_TG_VACT_ST3_L);
+ DUMPREG(HDMI_TG_VACT_ST3_H);
+ DUMPREG(HDMI_TG_VACT_ST4_L);
+ DUMPREG(HDMI_TG_VACT_ST4_H);
+ DUMPREG(HDMI_TG_VSYNC_TOP_HDMI_L);
+ DUMPREG(HDMI_TG_VSYNC_TOP_HDMI_H);
+ DUMPREG(HDMI_TG_VSYNC_BOT_HDMI_L);
+ DUMPREG(HDMI_TG_VSYNC_BOT_HDMI_H);
+ DUMPREG(HDMI_TG_FIELD_TOP_HDMI_L);
+ DUMPREG(HDMI_TG_FIELD_TOP_HDMI_H);
+ DUMPREG(HDMI_TG_FIELD_BOT_HDMI_L);
+ DUMPREG(HDMI_TG_FIELD_BOT_HDMI_H);
+ DUMPREG(HDMI_TG_3D);
+
+ dev_dbg(hdev->dev, "%s: ---- PACKET REGISTERS ----\n", prefix);
+ DUMPREG(HDMI_AVI_CON);
+ DUMPREG(HDMI_AVI_HEADER0);
+ DUMPREG(HDMI_AVI_HEADER1);
+ DUMPREG(HDMI_AVI_HEADER2);
+ DUMPREG(HDMI_AVI_CHECK_SUM);
+ DUMPREG(HDMI_AVI_BYTE(1));
+ DUMPREG(HDMI_VSI_CON);
+ DUMPREG(HDMI_VSI_HEADER0);
+ DUMPREG(HDMI_VSI_HEADER1);
+ DUMPREG(HDMI_VSI_HEADER2);
+ for (i = 0; i < 7; ++i)
+ DUMPREG(HDMI_VSI_DATA(i));
+
+#undef DUMPREG
+}
--- /dev/null
+/*
+ * Samsung HDMI Physical interface driver
+ *
+ * Copyright (C) 2010-2011 Samsung Electronics Co.Ltd
+ * Author: Jiun Yu <jiun.yu@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include "hdmi.h"
+
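+/*
+ * 32-byte HDMIPHY configuration blobs, one per pixel clock; the suffix
+ * gives the clock in MHz (27, 74.175, 74.25, 148.5). The bytes are
+ * opaque PLL/driver settings (presumably vendor-provided) written to
+ * the PHY in a single I2C transfer.
+ */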
+static const u8 hdmiphy_conf27[32] = {
+ 0x01, 0x05, 0x00, 0xD8, 0x10, 0x1C, 0x30, 0x40,
+ 0x6B, 0x10, 0x02, 0x51, 0xDf, 0xF2, 0x54, 0x87,
+ 0x84, 0x00, 0x30, 0x38, 0x00, 0x08, 0x10, 0xE0,
+ 0x22, 0x40, 0xe3, 0x26, 0x00, 0x00, 0x00, 0x80,
+};
+
+static const u8 hdmiphy_conf74_175[32] = {
+ 0x01, 0x05, 0x00, 0xD8, 0x10, 0x9C, 0xef, 0x5B,
+ 0x6D, 0x10, 0x01, 0x51, 0xef, 0xF3, 0x54, 0xb9,
+ 0x84, 0x00, 0x30, 0x38, 0x00, 0x08, 0x10, 0xE0,
+ 0x22, 0x40, 0xa5, 0x26, 0x01, 0x00, 0x00, 0x80,
+};
+
+static const u8 hdmiphy_conf74_25[32] = {
+ 0x01, 0x05, 0x00, 0xd8, 0x10, 0x9c, 0xf8, 0x40,
+ 0x6a, 0x10, 0x01, 0x51, 0xff, 0xf1, 0x54, 0xba,
+ 0x84, 0x00, 0x10, 0x38, 0x00, 0x08, 0x10, 0xe0,
+ 0x22, 0x40, 0xa4, 0x26, 0x01, 0x00, 0x00, 0x80,
+};
+
+static const u8 hdmiphy_conf148_5[32] = {
+ 0x01, 0x05, 0x00, 0xD8, 0x10, 0x9C, 0xf8, 0x40,
+ 0x6A, 0x18, 0x00, 0x51, 0xff, 0xF1, 0x54, 0xba,
+ 0x84, 0x00, 0x10, 0x38, 0x00, 0x08, 0x10, 0xE0,
+ 0x22, 0x40, 0xa4, 0x26, 0x02, 0x00, 0x00, 0x80,
+};
+
+const struct hdmiphy_conf hdmiphy_conf[] = {
+ { V4L2_DV_480P59_94, hdmiphy_conf27 },
+ { V4L2_DV_1080P30, hdmiphy_conf74_175 },
+ { V4L2_DV_720P59_94, hdmiphy_conf74_175 },
+ { V4L2_DV_720P60, hdmiphy_conf74_25 },
+ { V4L2_DV_1080P50, hdmiphy_conf148_5 },
+ { V4L2_DV_1080P60, hdmiphy_conf148_5 },
+ { V4L2_DV_1080I60, hdmiphy_conf74_25 },
+};
+
+const int hdmiphy_conf_cnt = ARRAY_SIZE(hdmiphy_conf);
--- /dev/null
+/*
+ * Samsung HDMI Physical interface driver
+ *
+ * Copyright (C) 2010-2011 Samsung Electronics Co.Ltd
+ * Author: Jiun Yu <jiun.yu@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include "hdmi.h"
+
+static const u8 hdmiphy_conf27[32] = {
+ 0x01, 0x51, 0x2d, 0x75, 0x40, 0x01, 0x00, 0x08,
+ 0x82, 0xa0, 0x0e, 0xd9, 0x45, 0xa0, 0xac, 0x80,
+ 0x08, 0x80, 0x11, 0x04, 0x02, 0x22, 0x44, 0x86,
+ 0x54, 0xe3, 0x24, 0x00, 0x00, 0x00, 0x01, 0x00,
+};
+
+static const u8 hdmiphy_conf27_027[32] = {
+ 0x01, 0xd1, 0x2d, 0x72, 0x40, 0x64, 0x12, 0x08,
+ 0x43, 0xa0, 0x0e, 0xd9, 0x45, 0xa0, 0xac, 0x80,
+ 0x08, 0x80, 0x11, 0x04, 0x02, 0x22, 0x44, 0x86,
+ 0x54, 0xe3, 0x24, 0x00, 0x00, 0x00, 0x01, 0x00,
+};
+
+static const u8 hdmiphy_conf74_175[32] = {
+ 0x01, 0xd1, 0x1f, 0x10, 0x40, 0x5b, 0xef, 0x08,
+ 0x81, 0xa0, 0xb9, 0xd8, 0x45, 0xa0, 0xac, 0x80,
+ 0x5a, 0x80, 0x11, 0x04, 0x02, 0x22, 0x44, 0x86,
+ 0x54, 0xa6, 0x24, 0x01, 0x00, 0x00, 0x01, 0x00,
+};
+
+static const u8 hdmiphy_conf74_25[32] = {
+ 0x01, 0xd1, 0x1f, 0x10, 0x40, 0x40, 0xf8, 0x08,
+ 0x81, 0xa0, 0xba, 0xd8, 0x45, 0xa0, 0xac, 0x80,
+ 0x3c, 0x80, 0x11, 0x04, 0x02, 0x22, 0x44, 0x86,
+ 0x54, 0xa5, 0x24, 0x01, 0x00, 0x00, 0x01, 0x00,
+};
+
+static const u8 hdmiphy_conf148_352[32] = {
+ 0x01, 0xd2, 0x3e, 0x00, 0x40, 0x5b, 0xef, 0x08,
+ 0x81, 0xa0, 0xb9, 0xd8, 0x45, 0xa0, 0xac, 0x80,
+ 0x3c, 0x80, 0x11, 0x04, 0x02, 0x22, 0x44, 0x86,
+ 0x54, 0x4b, 0x25, 0x03, 0x00, 0x00, 0x01, 0x00,
+};
+
+static const u8 hdmiphy_conf148_5[32] = {
+ 0x01, 0xd1, 0x1f, 0x00, 0x40, 0x40, 0xf8, 0x08,
+ 0x81, 0xa0, 0xba, 0xd8, 0x45, 0xa0, 0xac, 0x80,
+ 0x3c, 0x80, 0x11, 0x04, 0x02, 0x22, 0x44, 0x86,
+ 0x54, 0x4b, 0x25, 0x03, 0x00, 0x00, 0x01, 0x00,
+};
+
+const struct hdmiphy_conf hdmiphy_conf[] = {
+ { V4L2_DV_480P59_94, hdmiphy_conf27 },
+ { V4L2_DV_480P60, hdmiphy_conf27_027 },
+ { V4L2_DV_576P50, hdmiphy_conf27 },
+ { V4L2_DV_720P50, hdmiphy_conf74_25 },
+ { V4L2_DV_720P59_94, hdmiphy_conf74_175 },
+ { V4L2_DV_720P60, hdmiphy_conf74_25 },
+ { V4L2_DV_1080I50, hdmiphy_conf74_25 },
+ { V4L2_DV_1080I59_94, hdmiphy_conf74_175 },
+ { V4L2_DV_1080I60, hdmiphy_conf74_25 },
+ { V4L2_DV_1080P24, hdmiphy_conf74_25 },
+ { V4L2_DV_1080P25, hdmiphy_conf74_25 },
+ { V4L2_DV_1080P30, hdmiphy_conf74_175 },
+ { V4L2_DV_1080P50, hdmiphy_conf148_5 },
+ { V4L2_DV_1080P59_94, hdmiphy_conf148_352 },
+ { V4L2_DV_1080P60, hdmiphy_conf148_5 },
+ { V4L2_DV_720P60_SB_HALF, hdmiphy_conf74_25 },
+ { V4L2_DV_720P60_TB, hdmiphy_conf74_25 },
+ { V4L2_DV_720P59_94_SB_HALF, hdmiphy_conf74_25 },
+ { V4L2_DV_720P59_94_TB, hdmiphy_conf74_25 },
+ { V4L2_DV_720P50_SB_HALF, hdmiphy_conf74_25 },
+ { V4L2_DV_720P50_TB, hdmiphy_conf74_25 },
+ { V4L2_DV_1080P24_FP, hdmiphy_conf148_5 },
+ { V4L2_DV_1080P24_SB_HALF, hdmiphy_conf74_25 },
+ { V4L2_DV_1080P24_TB, hdmiphy_conf74_25 },
+ { V4L2_DV_1080P23_98_FP, hdmiphy_conf148_5 },
+ { V4L2_DV_1080P23_98_SB_HALF, hdmiphy_conf74_25 },
+ { V4L2_DV_1080P23_98_TB, hdmiphy_conf74_25 },
+ { V4L2_DV_1080I60_SB_HALF, hdmiphy_conf74_25 },
+ { V4L2_DV_1080I59_94_SB_HALF, hdmiphy_conf74_25 },
+ { V4L2_DV_1080I50_SB_HALF, hdmiphy_conf74_25 },
+ { V4L2_DV_1080P60_SB_HALF, hdmiphy_conf148_5 },
+ { V4L2_DV_1080P60_TB, hdmiphy_conf148_5 },
+ { V4L2_DV_1080P30_SB_HALF, hdmiphy_conf74_25 },
+ { V4L2_DV_1080P30_TB, hdmiphy_conf74_25 },
+};
+
+const int hdmiphy_conf_cnt = ARRAY_SIZE(hdmiphy_conf);
--- /dev/null
+/*
+ * Samsung HDMI Physical interface driver
+ *
+ * Copyright (C) 2010-2011 Samsung Electronics Co.Ltd
+ * Author: Tomasz Stanislawski <t.stanislaws@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+#include "hdmi.h"
+
+#include <linux/module.h>
+#include <linux/i2c.h>
+#include <linux/slab.h>
+#include <linux/clk.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/err.h>
+
+#include <media/v4l2-subdev.h>
+
+MODULE_AUTHOR("Tomasz Stanislawski <t.stanislaws@samsung.com>");
+MODULE_DESCRIPTION("Samsung HDMI Physical interface driver");
+MODULE_LICENSE("GPL");
+
+#ifdef DEBUG
+static void hdmiphy_print_reg(u8 *recv_buffer)
+{
+ int i;
+
+ for (i = 1; i <= 32; i++) {
+ printk("[%2x]", recv_buffer[i - 1]);
+ if (!(i % 8))
+ printk("\n");
+ }
+ printk("\n");
+}
+#endif
+
+const u8 *hdmiphy_preset2conf(u32 preset)
+{
+ int i;
+ for (i = 0; i < hdmiphy_conf_cnt; ++i)
+ if (hdmiphy_conf[i].preset == preset)
+ return hdmiphy_conf[i].data;
+ return NULL;
+}
+
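+/*
+ * Read-modify-write of a single HDMIPHY register bit. The bits appear
+ * to be active-low power-down controls: enabling clears the bit,
+ * disabling sets it.
+ */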
+static int hdmiphy_ctrl(struct i2c_client *client, u8 reg, u8 bit,
+ u8 *recv_buffer, int en)
+{
+ int ret;
+ u8 buffer[2];
+ struct device *dev = &client->dev;
+
+ buffer[0] = reg;
+ buffer[1] = en ? (recv_buffer[reg] & (~(1 << bit))) :
+ (recv_buffer[reg] | (1 << bit));
+ recv_buffer[reg] = buffer[1];
+
+ ret = i2c_master_send(client, buffer, 2);
+ if (ret != 2) {
+ dev_err(dev, "failed to turn %s HDMIPHY via I2C\n",
+ en ? "on" : "off");
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int hdmiphy_enable_oscpad(struct i2c_client *client, int on,
+ u8 *recv_buffer)
+{
+ int ret;
+ u8 buffer[2];
+ struct device *dev = &client->dev;
+
+ buffer[0] = 0x0b;
+ if (on)
+ buffer[1] = 0xd8;
+ else
+ buffer[1] = 0x18;
+ recv_buffer[0x0b] = buffer[1];
+
+ ret = i2c_master_send(client, buffer, 2);
+ if (ret != 2) {
+ dev_err(dev, "failed to %s osc pad\n",
+ on ? "enable" : "disable");
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int hdmiphy_s_power(struct v4l2_subdev *sd, int on)
+{
+ u8 recv_buffer[32];
+ u8 buffer[2];
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ struct device *dev = &client->dev;
+
+ memset(recv_buffer, 0, sizeof(recv_buffer));
+
+ dev_dbg(dev, "%s: hdmiphy is %s\n", __func__, on ? "on" : "off");
+
+ buffer[0] = 0x1;
+ i2c_master_send(client, buffer, 1);
+ i2c_master_recv(client, recv_buffer, 32);
+
+#ifdef DEBUG
+ hdmiphy_print_reg(recv_buffer);
+#endif
+
+ if (!on)
+ hdmiphy_enable_oscpad(client, 0, recv_buffer);
+
+ hdmiphy_ctrl(client, 0x1d, 0x7, recv_buffer, on);
+ hdmiphy_ctrl(client, 0x1d, 0x0, recv_buffer, on);
+ hdmiphy_ctrl(client, 0x1d, 0x1, recv_buffer, on);
+ hdmiphy_ctrl(client, 0x1d, 0x2, recv_buffer, on);
+ hdmiphy_ctrl(client, 0x1d, 0x4, recv_buffer, on);
+ hdmiphy_ctrl(client, 0x1d, 0x5, recv_buffer, on);
+ hdmiphy_ctrl(client, 0x1d, 0x6, recv_buffer, on);
+
+ if (!on)
+ hdmiphy_ctrl(client, 0x4, 0x3, recv_buffer, 0);
+
+#ifdef DEBUG
+ buffer[0] = 0x1;
+ i2c_master_send(client, buffer, 1);
+ i2c_master_recv(client, recv_buffer, 32);
+
+ hdmiphy_print_reg(recv_buffer);
+#endif
+ return 0;
+}
+
+static int hdmiphy_s_dv_preset(struct v4l2_subdev *sd,
+ struct v4l2_dv_preset *preset)
+{
+ const u8 *data;
+ u8 buffer[32];
+ u8 recv_buffer[32];
+ int ret;
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ struct device *dev = &client->dev;
+
+ dev_dbg(dev, "s_dv_preset(preset = %d)\n", preset->preset);
+ data = hdmiphy_preset2conf(preset->preset);
+ if (!data) {
+ dev_err(dev, "format not supported\n");
+ return -EINVAL;
+ }
+
+ memset(recv_buffer, 0, 32);
+
+#ifdef DEBUG
+ i2c_master_recv(client, recv_buffer, 32);
+ hdmiphy_print_reg(recv_buffer);
+#endif
+
+ /* store the configuration in the device */
+ memcpy(buffer, data, 32);
+ ret = i2c_master_send(client, buffer, 32);
+ if (ret != 32) {
+ dev_err(dev, "failed to configure HDMIPHY via I2C\n");
+ return -EIO;
+ }
+
+#ifdef DEBUG
+ i2c_master_recv(client, recv_buffer, 32);
+ hdmiphy_print_reg(recv_buffer);
+#endif
+
+ return 0;
+}
+
+static int hdmiphy_s_stream(struct v4l2_subdev *sd, int enable)
+{
+ struct i2c_client *client = v4l2_get_subdevdata(sd);
+ struct device *dev = &client->dev;
+ u8 buffer[2];
+ int ret;
+
+ dev_dbg(dev, "s_stream(%d)\n", enable);
+ /* switch between configuration and operation mode */
+ buffer[0] = 0x1f;
+ buffer[1] = enable ? 0x80 : 0x00;
+
+ ret = i2c_master_send(client, buffer, 2);
+ if (ret != 2) {
+ dev_err(dev, "stream (%d) failed\n", enable);
+ return -EIO;
+ }
+ return 0;
+}
+
+static const struct v4l2_subdev_core_ops hdmiphy_core_ops = {
+ .s_power = hdmiphy_s_power,
+};
+
+static const struct v4l2_subdev_video_ops hdmiphy_video_ops = {
+ .s_dv_preset = hdmiphy_s_dv_preset,
+ .s_stream = hdmiphy_s_stream,
+};
+
+static const struct v4l2_subdev_ops hdmiphy_ops = {
+ .core = &hdmiphy_core_ops,
+ .video = &hdmiphy_video_ops,
+};
+
+static int __devinit hdmiphy_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
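+ /* a single static subdev: this driver supports at most one HDMIPHY */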
+ static struct v4l2_subdev sd;
+
+ dev_info(&client->dev, "hdmiphy_probe start\n");
+
+ v4l2_i2c_subdev_init(&sd, client, &hdmiphy_ops);
+ dev_info(&client->dev, "probe successful\n");
+ return 0;
+}
+
+static int __devexit hdmiphy_remove(struct i2c_client *client)
+{
+ dev_info(&client->dev, "remove successful\n");
+ return 0;
+}
+
+static const struct i2c_device_id hdmiphy_id[] = {
+ { "hdmiphy", 0 },
+ { },
+};
+MODULE_DEVICE_TABLE(i2c, hdmiphy_id);
+
+static struct i2c_driver hdmiphy_driver = {
+ .driver = {
+ .name = "s5p-hdmiphy",
+ .owner = THIS_MODULE,
+ },
+ .probe = hdmiphy_probe,
+ .remove = __devexit_p(hdmiphy_remove),
+ .id_table = hdmiphy_id,
+};
+
+static int __init hdmiphy_init(void)
+{
+ return i2c_add_driver(&hdmiphy_driver);
+}
+module_init(hdmiphy_init);
+
+static void __exit hdmiphy_exit(void)
+{
+ i2c_del_driver(&hdmiphy_driver);
+}
+module_exit(hdmiphy_exit);
--- /dev/null
+/*
+ * Samsung TV Mixer driver
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ *
+ * Tomasz Stanislawski, <t.stanislaws@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation, either version 2 of the License,
+ * or (at your option) any later version
+ */
+
+#ifndef SAMSUNG_MIXER_H
+#define SAMSUNG_MIXER_H
+
+#ifdef CONFIG_VIDEO_EXYNOS_MIXER_DEBUG
+ #define DEBUG
+#endif
+
+#include <linux/fb.h>
+#include <linux/kernel.h>
+#include <linux/spinlock.h>
+#include <linux/wait.h>
+#include <media/v4l2-device.h>
+#include <media/videobuf2-core.h>
+#include <media/exynos_mc.h>
+
+#include "regs-mixer.h"
+
+/** maximum number of output interfaces */
+#define MXR_MAX_OUTPUTS 2
+
+/** There are 2 mixers after EXYNOS5250 */
+#define MXR_SUB_MIXER0 0
+#define MXR_SUB_MIXER1 1
+/** maximum number of sub-mixers */
+#if defined(CONFIG_ARCH_EXYNOS4)
+#define MXR_MAX_SUB_MIXERS 1
+#else
+#define MXR_MAX_SUB_MIXERS 2
+#endif
+
+/** each sub-mixer supports 1 video layer and 2 graphic layers */
+#define MXR_LAYER_VIDEO 0
+#define MXR_LAYER_GRP0 1
+#define MXR_LAYER_GRP1 2
+
+#define EXYNOS_VIDEONODE_MXR_GRP(x) (16 + (x))
+#define EXYNOS_VIDEONODE_MXR_VIDEO 20
+
+/** maximum number of input interfaces (layers) */
+#define MXR_MAX_LAYERS 3
+#define MXR_DRIVER_NAME "s5p-mixer"
+/** maximum number of planes per layer */
+#define MXR_MAX_PLANES 2
+
+#define MXR_ENABLE 1
+#define MXR_DISABLE 0
+
+/* mixer pad definitions */
+#define MXR_PAD_SINK_GSCALER 0
+#define MXR_PAD_SINK_GRP0 1
+#define MXR_PAD_SINK_GRP1 2
+#define MXR_PAD_SOURCE_GSCALER 3
+#define MXR_PAD_SOURCE_GRP0 4
+#define MXR_PAD_SOURCE_GRP1 5
+#define MXR_PADS_NUM 6
+
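+/* driver-private TV controls, allocated just above V4L2_CID_LASTP1 */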
+#define V4L2_CID_TV_LAYER_BLEND_ENABLE (V4L2_CID_LASTP1 + 2)
+#define V4L2_CID_TV_LAYER_BLEND_ALPHA (V4L2_CID_LASTP1 + 3)
+#define V4L2_CID_TV_PIXEL_BLEND_ENABLE (V4L2_CID_LASTP1 + 4)
+#define V4L2_CID_TV_CHROMA_ENABLE (V4L2_CID_LASTP1 + 5)
+#define V4L2_CID_TV_CHROMA_VALUE (V4L2_CID_LASTP1 + 6)
+
+/** description of a macroblock for packed formats */
+struct mxr_block {
+ /** horizontal number of pixels in macroblock */
+ unsigned int width;
+ /** vertical number of pixels in macroblock */
+ unsigned int height;
+ /** size of block in bytes */
+ unsigned int size;
+};
+
+/** description of supported format */
+struct mxr_format {
+ /** format name/mnemonic */
+ const char *name;
+ /** fourcc identifier */
+ u32 fourcc;
+ /** colorspace identifier */
+ enum v4l2_colorspace colorspace;
+ /** number of planes in image data */
+ int num_planes;
+ /** description of block for each plane */
+ struct mxr_block plane[MXR_MAX_PLANES];
+ /** number of subframes in image data */
+ int num_subframes;
+ /** specifies to which subframe a given plane belongs */
+ int plane2subframe[MXR_MAX_PLANES];
+ /** internal code, driver dependent */
+ unsigned long cookie;
+};
+
+/** description of crop configuration for image */
+struct mxr_crop {
+ /** width of layer in pixels */
+ unsigned int full_width;
+ /** height of layer in pixels */
+ unsigned int full_height;
+ /** horizontal offset of first pixel to be displayed */
+ unsigned int x_offset;
+ /** vertical offset of first pixel to be displayed */
+ unsigned int y_offset;
+ /** width of displayed data in pixels */
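+/*
+ * Compute the InfoFrame checksum byte: header, payload and checksum
+ * must sum to 0 modulo 256.
+ */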
+ unsigned int width;
+ /** height of displayed data in pixels */
+ unsigned int height;
+ /** indicate which fields are present in buffer */
+ unsigned int field;
+};
+
+/** stages of geometry operations */
+enum mxr_geometry_stage {
+ MXR_GEOMETRY_SINK,
+ MXR_GEOMETRY_COMPOSE,
+ MXR_GEOMETRY_CROP,
+ MXR_GEOMETRY_SOURCE,
+};
+
+/** description of transformation from source to destination image */
+struct mxr_geometry {
+ /** cropping for source image */
+ struct mxr_crop src;
+ /** cropping for destination image */
+ struct mxr_crop dst;
+ /** layer-dependent description of horizontal scaling */
+ unsigned int x_ratio;
+ /** layer-dependent description of vertical scaling */
+ unsigned int y_ratio;
+};
+
+/** instance of a buffer */
+struct mxr_buffer {
+ /** common v4l buffer stuff -- must be first */
+ struct vb2_buffer vb;
+ /** node for layer's lists */
+ struct list_head list;
+};
+
+/** TV graphic layer pipeline state */
+enum tv_graph_pipeline_state {
+ /** graphic layer is not shown */
+ TV_GRAPH_PIPELINE_IDLE = 0,
+ /** state between STREAMON and hardware start */
+ TV_GRAPH_PIPELINE_STREAMING_START,
+ /** graphic layer is shown */
+ TV_GRAPH_PIPELINE_STREAMING,
+ /** state before STREAMOFF is finished */
+ TV_GRAPH_PIPELINE_STREAMING_FINISH,
+};
+
+/** TV graphic layer pipeline structure for streaming media data */
+struct tv_graph_pipeline {
+ struct media_pipeline pipe;
+ enum tv_graph_pipeline_state state;
+
+ /** starting point on pipeline */
+ struct mxr_layer *layer;
+};
+
+/** forward declarations */
+struct mxr_device;
+struct mxr_layer;
+
+/** callback for layers operation */
+struct mxr_layer_ops {
+ /* TODO: try to port it to subdev API */
+ /** handler for resource release function */
+ void (*release)(struct mxr_layer *);
+ /** setting buffer to HW */
+ void (*buffer_set)(struct mxr_layer *, struct mxr_buffer *);
+ /** setting format and geometry in HW */
+ void (*format_set)(struct mxr_layer *);
+ /** streaming stop/start */
+ void (*stream_set)(struct mxr_layer *, int);
+ /** adjusting geometry */
+ void (*fix_geometry)(struct mxr_layer *);
+};
+
+enum mxr_layer_type {
+ MXR_LAYER_TYPE_VIDEO = 0,
+ MXR_LAYER_TYPE_GRP = 1,
+};
+
+struct mxr_layer_en {
+ int graph0;
+ int graph1;
+ int graph2;
+ int graph3;
+};
+/** layer instance, a single window and content displayed on output */
+struct mxr_layer {
+ /** parent mixer device */
+ struct mxr_device *mdev;
+ /** layer index (unique identifier) */
+ int idx;
+ /** layer type */
+ enum mxr_layer_type type;
+ /** minor number of mixer layer as video device */
+ int minor;
+ /** callbacks for layer methods */
+ struct mxr_layer_ops ops;
+ /** format array */
+ const struct mxr_format **fmt_array;
+ /** size of format array */
+ unsigned long fmt_array_size;
+ /** frame buffer emulator */
+ void *fb;
+
+ /** lock for protection of list and state fields */
+ spinlock_t enq_slock;
+ /** list for enqueued buffers */
+ struct list_head enq_list;
+ /** buffer currently owned by hardware in temporary registers */
+ struct mxr_buffer *update_buf;
+ /** buffer currently owned by hardware in shadow registers */
+ struct mxr_buffer *shadow_buf;
+
+ /** mutex for protection of fields below */
+ struct mutex mutex;
+ /** handler for video node */
+ struct video_device vfd;
+ /** queue for output buffers */
+ struct vb2_queue vb_queue;
+ /** current image format */
+ const struct mxr_format *fmt;
+ /** current geometry of image */
+ struct mxr_geometry geo;
+
+ /** index of current mixer path : MXR_SUB_MIXERx */
+ int cur_mxr;
+ /** source pad of mixer input */
+ struct media_pad pad;
+ /** pipeline structure for streaming TV graphic layer */
+ struct tv_graph_pipeline pipe;
+
+ /** enable per layer blending for each layer */
+ int layer_blend_en;
+ /** alpha value for per layer blending */
+ u32 layer_alpha;
+ /** enable per pixel blending */
+ int pixel_blend_en;
+ /** enable chromakey */
+ int chroma_en;
+ /** value for chromakey */
+ u32 chroma_val;
+};
+
+/** description of mixers output interface */
+struct mxr_output {
+ /** name of output */
+ char name[32];
+ /** output subdev */
+ struct v4l2_subdev *sd;
+ /** cookie used for configuration of registers */
+ int cookie;
+};
+
+/** specify source of output subdevs */
+struct mxr_output_conf {
+ /** name of output (connector) */
+ char *output_name;
+ /** name of module that generates output subdev */
+ char *module_name;
+ /** cookie needed by the mixer HW */
+ int cookie;
+};
+
+struct clk;
+struct regulator;
+
+/** auxiliary resources used by the mixer */
+struct mxr_resources {
+ /** interrupt index */
+ int irq;
+ /** pointer to Mixer registers */
+ void __iomem *mxr_regs;
+#if defined(CONFIG_ARCH_EXYNOS4)
+ /** pointer to Video Processor registers */
+ void __iomem *vp_regs;
+ /** other resources; must be used under mxr_device.mutex */
+ struct clk *vp;
+#endif
+#if defined(CONFIG_CPU_EXYNOS4210)
+ struct clk *sclk_dac;
+#endif
+ struct clk *sclk_mixer;
+ struct clk *mixer;
+ struct clk *sclk_hdmi;
+};
+
+/* event flags used */
+enum mxr_devide_flags {
+ MXR_EVENT_VSYNC = 0,
+};
+
+/** videobuf2 context of mixer */
+struct mxr_vb2 {
+ const struct vb2_mem_ops *ops;
+ void *(*init)(struct mxr_device *mdev);
+ void (*cleanup)(void *alloc_ctx);
+
+ dma_addr_t (*plane_addr)(struct vb2_buffer *vb, u32 plane_no);
+
+ void (*resume)(void *alloc_ctx);
+ void (*suspend)(void *alloc_ctx);
+
+ int (*cache_flush)(void *alloc_ctx, struct vb2_buffer *vb,
+ u32 num_planes);
+ void (*set_cacheable)(void *alloc_ctx, bool cacheable);
+ void *(*attach_dmabuf)(void *alloc_ctx, struct dma_buf *dbuf,
+ unsigned long size, int write);
+ void (*detach_dmabuf)(void *buf_priv);
+ int (*map_dmabuf)(void *buf_priv);
+ void (*unmap_dmabuf)(void *buf_priv);
+};
+
+/** sub-mixer 0,1 drivers instance */
+struct sub_mxr_device {
+ /** state of each layer */
+ struct mxr_layer *layer[MXR_MAX_LAYERS];
+
+ /** use of each sub mixer */
+ int use;
+ /** use of local path gscaler to mixer */
+ int local;
+ /** for mixer as sub-device */
+ struct v4l2_subdev sd;
+ /** mixer's pads: 3 sink pads, 3 source pads */
+ struct media_pad pads[MXR_PADS_NUM];
+ /** format info of mixer's pads */
+ struct v4l2_mbus_framefmt mbus_fmt[MXR_PADS_NUM];
+ /** crop info of mixer's pads */
+ struct v4l2_rect crop[MXR_PADS_NUM];
+};
+
+/** drivers instance */
+struct mxr_device {
+ /** master device */
+ struct device *dev;
+ /** state of each output */
+ struct mxr_output *output[MXR_MAX_OUTPUTS];
+ /** number of registered outputs */
+ int output_cnt;
+
+ /* video resources */
+
+ /** videobuf2 context */
+ const struct mxr_vb2 *vb2;
+ /** context of allocator */
+ void *alloc_ctx;
+ /** event wait queue */
+ wait_queue_head_t event_queue;
+ /** state flags */
+ unsigned long event_flags;
+
+ /** spinlock for protection of registers */
+ spinlock_t reg_slock;
+
+ /** mutex for protection of fields below */
+ struct mutex mutex;
+ /** mutex for protection of streamer */
+ struct mutex s_mutex;
+
+ /** number of entities dependent on the output configuration */
+ int n_output;
+ /** number of users that do streaming */
+ int n_streamer;
+ /** index of current output */
+ int current_output;
+ /** auxiliary resources used by the mixer */
+ struct mxr_resources res;
+
+ /** number of G-Scaler linked to mixer0 */
+ int mxr0_gsc;
+ /** number of G-Scaler linked to mixer1 */
+ int mxr1_gsc;
+ /** media entity link setup flags */
+ unsigned long flags;
+
+ /** entity info which transfers media data to mixer subdev */
+ enum mxr_data_from mxr_data_from;
+
+ /** count of sub-mixers */
+ struct sub_mxr_device sub_mxr[MXR_MAX_SUB_MIXERS];
+
+ /** enabled layer number */
+ struct mxr_layer_en layer_en;
+ /** frame packing flag */
+ int frame_packing;
+};
+
+#if defined(CONFIG_VIDEOBUF2_CMA_PHYS)
+extern const struct mxr_vb2 mxr_vb2_cma;
+#elif defined(CONFIG_VIDEOBUF2_DMA_CONTIG)
+extern const struct mxr_vb2 mxr_vb2_dma_contig;
+#endif
+
+extern struct mxr_device *my_gbl_mdev;
+/** transform device structure into mixer device */
+static inline struct mxr_device *to_mdev(struct device *dev)
+{
+ return dev_get_drvdata(dev);
+}
+
+/** transform subdev structure into mixer device */
+static inline struct mxr_device *sd_to_mdev(struct v4l2_subdev *sd)
+{
+ struct sub_mxr_device *sub_mxr =
+ container_of(sd, struct sub_mxr_device, sd);
+ return sub_mxr->layer[MXR_LAYER_GRP0]->mdev;
+}
+
+/** transform subdev structure into sub mixer device */
+static inline struct sub_mxr_device *sd_to_sub_mxr(struct v4l2_subdev *sd)
+{
+ return container_of(sd, struct sub_mxr_device, sd);
+}
+
+/** transform entity structure into sub mixer device */
+static inline struct sub_mxr_device *entity_to_sub_mxr(struct media_entity *me)
+{
+ struct v4l2_subdev *sd;
+
+ sd = container_of(me, struct v4l2_subdev, entity);
+ return container_of(sd, struct sub_mxr_device, sd);
+}
+
+/** transform entity structure into sub mixer device */
+static inline struct mxr_device *sub_mxr_to_mdev(struct sub_mxr_device *sub_mxr)
+{
+ int idx;
+
+ if (!strcmp(sub_mxr->sd.name, "s5p-mixer0"))
+ idx = MXR_SUB_MIXER0;
+ else
+ idx = MXR_SUB_MIXER1;
+
+ return container_of(sub_mxr, struct mxr_device, sub_mxr[idx]);
+}
+
+/** get current output data, should be called under mdev's mutex */
+static inline struct mxr_output *to_output(struct mxr_device *mdev)
+{
+ return mdev->output[mdev->current_output];
+}
+
+/** get current output subdev, should be called under mdev's mutex */
+static inline struct v4l2_subdev *to_outsd(struct mxr_device *mdev)
+{
+ struct mxr_output *out = to_output(mdev);
+ return out ? out->sd : NULL;
+}
+
+/** forward declaration for mixer platform data */
+struct mxr_platform_data;
+
+/** acquiring common video resources */
+int __devinit mxr_acquire_video(struct mxr_device *mdev,
+ struct mxr_output_conf *output_cont, int output_count);
+
+/** releasing common video resources */
+void __devexit mxr_release_video(struct mxr_device *mdev);
+
+struct mxr_layer *mxr_graph_layer_create(struct mxr_device *mdev, int cur_mxr,
+ int idx, int nr);
+struct mxr_layer *mxr_vp_layer_create(struct mxr_device *mdev, int cur_mxr,
+ int idx, int nr);
+struct mxr_layer *mxr_video_layer_create(struct mxr_device *mdev, int cur_mxr,
+ int idx);
+struct mxr_layer *mxr_base_layer_create(struct mxr_device *mdev,
+ int idx, char *name, struct mxr_layer_ops *ops);
+
+const struct mxr_format *find_format_by_fourcc(
+ struct mxr_layer *layer, unsigned long fourcc);
+
+void mxr_base_layer_release(struct mxr_layer *layer);
+void mxr_layer_release(struct mxr_layer *layer);
+void mxr_layer_geo_fix(struct mxr_layer *layer);
+void mxr_layer_default_geo(struct mxr_layer *layer);
+
+int mxr_base_layer_register(struct mxr_layer *layer);
+void mxr_base_layer_unregister(struct mxr_layer *layer);
+
+unsigned long mxr_get_plane_size(const struct mxr_block *blk,
+ unsigned int width, unsigned int height);
+
+/** adds new consumer for mixer's power */
+int __must_check mxr_power_get(struct mxr_device *mdev);
+/** removes consumer for mixer's power */
+void mxr_power_put(struct mxr_device *mdev);
+/** add new client for output configuration */
+void mxr_output_get(struct mxr_device *mdev);
+/** removes a client for output configuration */
+void mxr_output_put(struct mxr_device *mdev);
+/** returns format of data delivered to current output */
+void mxr_get_mbus_fmt(struct mxr_device *mdev,
+ struct v4l2_mbus_framefmt *mbus_fmt);
+
+/* Debug */
+
+#define mxr_err(mdev, fmt, ...) dev_err(mdev->dev, fmt, ##__VA_ARGS__)
+#define mxr_warn(mdev, fmt, ...) dev_warn(mdev->dev, fmt, ##__VA_ARGS__)
+#define mxr_info(mdev, fmt, ...) dev_info(mdev->dev, fmt, ##__VA_ARGS__)
+
+#ifdef CONFIG_VIDEO_EXYNOS_MIXER_DEBUG
+ #define mxr_dbg(mdev, fmt, ...) dev_dbg(mdev->dev, fmt, ##__VA_ARGS__)
+#else
+ #define mxr_dbg(mdev, fmt, ...) do { (void) mdev; } while (0)
+#endif
+
+/* accessing Mixer's and Video Processor's registers */
+
+void mxr_layer_sync(struct mxr_device *mdev, int en);
+void mxr_vsync_set_update(struct mxr_device *mdev, int en);
+void mxr_reg_reset(struct mxr_device *mdev);
+void mxr_reg_set_layer_blend(struct mxr_device *mdev, int sub_mxr, int num,
+ int en);
+void mxr_reg_layer_alpha(struct mxr_device *mdev, int sub_mxr, int num, u32 a);
+void mxr_reg_set_pixel_blend(struct mxr_device *mdev, int sub_mxr, int num,
+ int en);
+void mxr_reg_set_colorkey(struct mxr_device *mdev, int sub_mxr,
+ int num, int en);
+void mxr_reg_colorkey_val(struct mxr_device *mdev, int sub_mxr, int num, u32 v);
+irqreturn_t mxr_irq_handler(int irq, void *dev_data);
+void mxr_reg_s_output(struct mxr_device *mdev, int cookie);
+void mxr_reg_streamon(struct mxr_device *mdev);
+void mxr_reg_streamoff(struct mxr_device *mdev);
+int mxr_reg_wait4vsync(struct mxr_device *mdev);
+void mxr_reg_set_mbus_fmt(struct mxr_device *mdev,
+ struct v4l2_mbus_framefmt *fmt);
+void mxr_reg_local_path_clear(struct mxr_device *mdev);
+void mxr_reg_local_path_set(struct mxr_device *mdev, int mxr0_gsc, int mxr1_gsc,
+ u32 flags);
+void mxr_reg_graph_layer_stream(struct mxr_device *mdev, int idx, int en);
+void mxr_reg_graph_buffer(struct mxr_device *mdev, int idx, dma_addr_t addr);
+void mxr_reg_graph_format(struct mxr_device *mdev, int idx,
+ const struct mxr_format *fmt, const struct mxr_geometry *geo);
+
+void mxr_reg_video_layer_stream(struct mxr_device *mdev, int idx, int en);
+void mxr_reg_video_geo(struct mxr_device *mdev, int cur_mxr, int idx,
+ const struct mxr_geometry *geo);
+
+#if defined(CONFIG_ARCH_EXYNOS4)
+void mxr_reg_vp_layer_stream(struct mxr_device *mdev, int en);
+void mxr_reg_vp_buffer(struct mxr_device *mdev,
+ dma_addr_t luma_addr[2], dma_addr_t chroma_addr[2]);
+void mxr_reg_vp_format(struct mxr_device *mdev,
+ const struct mxr_format *fmt, const struct mxr_geometry *geo);
+#endif
+void mxr_reg_dump(struct mxr_device *mdev);
+
+#endif /* SAMSUNG_MIXER_H */
--- /dev/null
+/*
+ * Samsung TV Mixer driver
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ *
+ * Tomasz Stanislawski, <t.stanislaws@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation, either version 2 of the License,
+ * or (at your option) any later version
+ */
+#include "mixer.h"
+
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/fb.h>
+#include <linux/delay.h>
+#include <linux/pm_runtime.h>
+#include <linux/clk.h>
+#include <linux/kernel.h>
+
+#if defined(CONFIG_VIDEOBUF2_DMA_CONTIG)
+#include <media/videobuf2-dma-contig.h>
+#ifdef CONFIG_EXYNOS_IOMMU
+#include <mach/sysmmu.h>
+#include <linux/of_platform.h>
+#endif
+#endif
+#include <media/exynos_mc.h>
+
+struct mxr_device *my_gbl_mdev;
+MODULE_AUTHOR("Tomasz Stanislawski, <t.stanislaws@samsung.com>");
+MODULE_DESCRIPTION("Samsung MIXER");
+MODULE_LICENSE("GPL");
+
+/* --------- DRIVER PARAMETERS ---------- */
+
+static struct mxr_output_conf mxr_output_conf[] = {
+ {
+ .output_name = "S5P HDMI connector",
+ .module_name = "exynos5-hdmi",
+ .cookie = 1,
+ },
+ {
+ .output_name = "S5P SDO connector",
+ .module_name = "s5p-sdo",
+ .cookie = 0,
+ },
+};
+
+void mxr_get_mbus_fmt(struct mxr_device *mdev,
+ struct v4l2_mbus_framefmt *mbus_fmt)
+{
+ struct v4l2_subdev *sd;
+ int ret;
+
+ mutex_lock(&mdev->mutex);
+ sd = to_outsd(mdev);
+ ret = v4l2_subdev_call(sd, video, g_mbus_fmt, mbus_fmt);
+ WARN(ret, "failed to get mbus_fmt for output %s\n", sd->name);
+ mutex_unlock(&mdev->mutex);
+}
+
+static void mxr_set_alpha_blend(struct mxr_device *mdev)
+{
+ int i, j;
+ int layer_en, pixel_en, chroma_en;
+ u32 a, v;
+
+ for (i = 0; i < MXR_MAX_SUB_MIXERS; ++i) {
+ for (j = 0; j < MXR_MAX_LAYERS; ++j) {
+ layer_en = mdev->sub_mxr[i].layer[j]->layer_blend_en;
+ a = mdev->sub_mxr[i].layer[j]->layer_alpha;
+ pixel_en = mdev->sub_mxr[i].layer[j]->pixel_blend_en;
+ chroma_en = mdev->sub_mxr[i].layer[j]->chroma_en;
+ v = mdev->sub_mxr[i].layer[j]->chroma_val;
+
+ mxr_dbg(mdev, "mixer%d: layer%d\n", i, j);
+ mxr_dbg(mdev, "layer blend is %s, alpha = %d\n",
+ layer_en ? "enabled" : "disabled", a);
+ mxr_dbg(mdev, "pixel blend is %s\n",
+ pixel_en ? "enabled" : "disabled");
+ mxr_dbg(mdev, "chromakey is %s, value = %d\n",
+ chroma_en ? "enabled" : "disabled", v);
+
+ mxr_reg_set_layer_blend(mdev, i, j, layer_en);
+ mxr_reg_layer_alpha(mdev, i, j, a);
+ mxr_reg_set_pixel_blend(mdev, i, j, pixel_en);
+ mxr_reg_set_colorkey(mdev, i, j, chroma_en);
+ mxr_reg_colorkey_val(mdev, i, j, v);
+ }
+ }
+}
+
+static int mxr_streamer_get(struct mxr_device *mdev, struct v4l2_subdev *sd)
+{
+ int i, ret;
+ int local = 1;
+ struct sub_mxr_device *sub_mxr;
+ struct mxr_layer *layer;
+ struct media_pad *pad;
+ struct v4l2_mbus_framefmt mbus_fmt;
+#if defined(CONFIG_CPU_EXYNOS4210)
+ struct mxr_resources *res = &mdev->res;
+#endif
+
+ mutex_lock(&mdev->s_mutex);
+ ++mdev->n_streamer;
+ mxr_dbg(mdev, "%s(%d)\n", __func__, mdev->n_streamer);
+ /*
+ * If the pipeline is started from the GScaler input video device,
+ * the basic TV configuration must be set before running the mixer.
+ */
+ if (mdev->mxr_data_from == FROM_GSC_SD) {
+ mxr_dbg(mdev, "%s: from gscaler\n", __func__);
+ local = 0;
+ /* enable mixer clock */
+ ret = mxr_power_get(mdev);
+ if (ret) {
+ mxr_err(mdev, "power on failed\n");
+ mutex_unlock(&mdev->s_mutex);
+ return -ENODEV;
+ }
+ /* turn on connected output device through link
+ * with mixer */
+ mxr_output_get(mdev);
+
+ for (i = 0; i < MXR_MAX_SUB_MIXERS; ++i) {
+ sub_mxr = &mdev->sub_mxr[i];
+ if (sub_mxr->local) {
+ layer = sub_mxr->layer[MXR_LAYER_VIDEO];
+ layer->pipe.state = TV_GRAPH_PIPELINE_STREAMING;
+ mxr_layer_geo_fix(layer);
+ layer->ops.format_set(layer);
+ layer->ops.stream_set(layer, 1);
+ local += sub_mxr->local;
+ }
+ }
+ if (local == 2)
+ mxr_layer_sync(mdev, MXR_ENABLE);
+ /* Set the TVOUT register about gsc-mixer local path */
+ mxr_reg_local_path_set(mdev, mdev->mxr0_gsc, mdev->mxr1_gsc,
+ mdev->flags);
+ }
+
+ /*
+ * The alpha blending configuration can be changed at any time
+ * while streaming.
+ */
+ mxr_set_alpha_blend(mdev);
+
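+ /*
+ * Configure and start the output path only once: when the single
+ * non-local streamer arrives, or when both local (GScaler) paths
+ * have been started.
+ */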
+ if ((mdev->n_streamer == 1 && local == 1) ||
+ (mdev->n_streamer == 2 && local == 2)) {
+ for (i = MXR_PAD_SOURCE_GSCALER; i < MXR_PADS_NUM; ++i) {
+ pad = &sd->entity.pads[i];
+
+ /* find the sink pad of the output via an enabled link */
+ pad = media_entity_remote_source(pad);
+ if (pad && media_entity_type(pad->entity)
+ == MEDIA_ENT_T_V4L2_SUBDEV)
+ break;
+
+ if (i == MXR_PAD_SOURCE_GRP1) {
+ mutex_unlock(&mdev->s_mutex);
+ return -ENODEV;
+ }
+ }
+
+ sd = media_entity_to_v4l2_subdev(pad->entity);
+
+ mxr_dbg(mdev, "cookie of current output = (%d)\n",
+ to_output(mdev)->cookie);
+
+#if defined(CONFIG_CPU_EXYNOS4210)
+ if (to_output(mdev)->cookie == 0)
+ clk_set_parent(res->sclk_mixer, res->sclk_dac);
+ else
+ clk_set_parent(res->sclk_mixer, res->sclk_hdmi);
+#endif
+ mxr_reg_s_output(mdev, to_output(mdev)->cookie);
+
+ ret = v4l2_subdev_call(sd, video, g_mbus_fmt, &mbus_fmt);
+ if (ret) {
+ mxr_err(mdev, "failed to get mbus_fmt for output %s\n",
+ sd->name);
+ mutex_unlock(&mdev->s_mutex);
+ return ret;
+ }
+
+ mxr_reg_set_mbus_fmt(mdev, &mbus_fmt);
+ ret = v4l2_subdev_call(sd, video, s_mbus_fmt, &mbus_fmt);
+ if (ret) {
+ mxr_err(mdev, "failed to set mbus_fmt for output %s\n",
+ sd->name);
+ mutex_unlock(&mdev->s_mutex);
+ return ret;
+ }
+ mxr_reg_streamon(mdev);
+
+ ret = v4l2_subdev_call(sd, video, s_stream, 1);
+ if (ret) {
+ mxr_err(mdev, "starting stream failed for output %s\n",
+ sd->name);
+ mutex_unlock(&mdev->s_mutex);
+ return ret;
+ }
+
+ ret = mxr_reg_wait4vsync(mdev);
+ if (ret) {
+ mxr_err(mdev, "failed to get vsync (%d) from output\n",
+ ret);
+ mutex_unlock(&mdev->s_mutex);
+ return ret;
+ }
+ }
+
+ mutex_unlock(&mdev->s_mutex);
+ mxr_reg_dump(mdev);
+
+ return 0;
+}
+
+/*
+ * When using the local path between the GScaler and the mixer, the
+ * stop sequence below must be followed.
+ */
+static void mxr_streamer_off(struct mxr_device *mdev, struct v4l2_subdev *sd)
+{
+ struct media_pad *pad;
+ struct v4l2_subdev *gsc_sd;
+ struct exynos_entity_data *md_data;
+
+ if (mdev->mxr_data_from == FROM_GSC_SD) {
+ pad = &sd->entity.pads[MXR_PAD_SINK_GSCALER];
+ pad = media_entity_remote_source(pad);
+ if (pad) {
+ gsc_sd = media_entity_to_v4l2_subdev(
+ pad->entity);
+ mxr_dbg(mdev, "stop from %s\n", gsc_sd->name);
+ md_data = (struct exynos_entity_data *)
+ gsc_sd->dev_priv;
+ md_data->media_ops->power_off(gsc_sd);
+ }
+ }
+}
+
+static int mxr_streamer_put(struct mxr_device *mdev, struct v4l2_subdev *sd)
+{
+ int ret, i;
+ int local = 1;
+ struct media_pad *pad;
+ struct sub_mxr_device *sub_mxr;
+ struct mxr_layer *layer;
+ struct v4l2_subdev *hdmi_sd;
+
+ mutex_lock(&mdev->s_mutex);
+ --mdev->n_streamer;
+ mxr_dbg(mdev, "%s(%d)\n", __func__, mdev->n_streamer);
+
+ /* determine the number of active local paths */
+ if (mdev->mxr_data_from == FROM_GSC_SD) {
+ local = 0;
+ for (i = 0; i < MXR_MAX_SUB_MIXERS; ++i) {
+ sub_mxr = &mdev->sub_mxr[i];
+ if (sub_mxr->local)
+ local += sub_mxr->local;
+ }
+ if (local == 2)
+ mxr_layer_sync(mdev, MXR_DISABLE);
+ }
+
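+ /* tear down the output path only when the last streamer leaves */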
+ if ((mdev->n_streamer == 0 && local == 1) ||
+ (mdev->n_streamer == 1 && local == 2)) {
+ for (i = MXR_PAD_SOURCE_GSCALER; i < MXR_PADS_NUM; ++i) {
+ pad = &sd->entity.pads[i];
+
+ /* find the sink pad of the output via an enabled link */
+ pad = media_entity_remote_source(pad);
+ if (pad && media_entity_type(pad->entity)
+ == MEDIA_ENT_T_V4L2_SUBDEV)
+ break;
+
+ if (i == MXR_PAD_SOURCE_GRP1) {
+ mutex_unlock(&mdev->s_mutex);
+ return -ENODEV;
+ }
+ }
+
+ hdmi_sd = media_entity_to_v4l2_subdev(pad->entity);
+
+ mxr_reg_streamoff(mdev);
+ /* vsync applies Mixer setup */
+ ret = mxr_reg_wait4vsync(mdev);
+ if (ret) {
+ mxr_err(mdev, "failed to get vsync (%d) from output\n",
+ ret);
+ mutex_unlock(&mdev->s_mutex);
+ return ret;
+ }
+
+ mxr_streamer_off(mdev, sd);
+
+ ret = v4l2_subdev_call(hdmi_sd, video, s_stream, 0);
+ if (ret) {
+ mxr_err(mdev, "stopping stream failed for output %s\n",
+ hdmi_sd->name);
+ mutex_unlock(&mdev->s_mutex);
+ return ret;
+ }
+ } else {
+ mxr_streamer_off(mdev, sd);
+ }
+
+ /* turn off connected output device through link
+ * with mixer */
+ if (mdev->mxr_data_from == FROM_GSC_SD) {
+ for (i = 0; i < MXR_MAX_SUB_MIXERS; ++i) {
+ sub_mxr = &mdev->sub_mxr[i];
+ if (sub_mxr->local) {
+ layer = sub_mxr->layer[MXR_LAYER_VIDEO];
+ layer->ops.stream_set(layer, 0);
+ layer->pipe.state = TV_GRAPH_PIPELINE_IDLE;
+ }
+ }
+ mxr_reg_local_path_clear(mdev);
+ mxr_output_put(mdev);
+
+ /* disable mixer clock */
+ mxr_power_put(mdev);
+ }
+ WARN(mdev->n_streamer < 0, "negative number of streamers (%d)\n",
+ mdev->n_streamer);
+ mutex_unlock(&mdev->s_mutex);
+ mxr_reg_dump(mdev);
+
+ return 0;
+}
+
+void mxr_output_get(struct mxr_device *mdev)
+{
+ mutex_lock(&mdev->mutex);
+ ++mdev->n_output;
+ mxr_dbg(mdev, "%s(%d)\n", __func__, mdev->n_output);
+ /* turn on auxiliary driver */
+ if (mdev->n_output == 1)
+ v4l2_subdev_call(to_outsd(mdev), core, s_power, 1);
+ mutex_unlock(&mdev->mutex);
+}
+
+void mxr_output_put(struct mxr_device *mdev)
+{
+ mutex_lock(&mdev->mutex);
+ --mdev->n_output;
+ mxr_dbg(mdev, "%s(%d)\n", __func__, mdev->n_output);
+ /* turn off auxiliary driver */
+ if (mdev->n_output == 0)
+ v4l2_subdev_call(to_outsd(mdev), core, s_power, 0);
+ WARN(mdev->n_output < 0, "negative number of output users (%d)\n",
+ mdev->n_output);
+ mutex_unlock(&mdev->mutex);
+}
+
+static int mxr_runtime_resume(struct device *dev);
+static int mxr_runtime_suspend(struct device *dev);
+
+int mxr_power_get(struct mxr_device *mdev)
+{
+ /* If runtime PM is not implemented, mxr_runtime_resume
+ * function is directly called.
+ */
+#ifdef CONFIG_PM_RUNTIME
+ int ret = pm_runtime_get_sync(mdev->dev);
+ /*
+ * A return value of 1 means power was already enabled, so map it
+ * to zero (success).
+ */
+ if (IS_ERR_VALUE(ret))
+ return ret;
+ return 0;
+#else
+ mxr_runtime_resume(mdev->dev);
+ return 0;
+#endif
+}
+
+void mxr_power_put(struct mxr_device *mdev)
+{
+ /* If runtime PM is not implemented, mxr_runtime_suspend
+ * function is directly called.
+ */
+#ifdef CONFIG_PM_RUNTIME
+ pm_runtime_put_sync(mdev->dev);
+#else
+ mxr_runtime_suspend(mdev->dev);
+#endif
+}
+
+/*--------- RESOURCE MANAGEMENT -------------*/
+
+static int __devinit mxr_acquire_plat_resources(struct mxr_device *mdev,
+ struct platform_device *pdev)
+{
+ struct resource *res;
+ int ret;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (res == NULL) {
+ mxr_err(mdev, "get memory resource failed.\n");
+ ret = -ENXIO;
+ goto fail;
+ }
+
+ mdev->res.mxr_regs = ioremap(res->start, resource_size(res));
+ if (mdev->res.mxr_regs == NULL) {
+ mxr_err(mdev, "register mapping failed.\n");
+ ret = -ENXIO;
+ goto fail;
+ }
+
+#if defined(CONFIG_ARCH_EXYNOS4)
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "vp");
+ if (res == NULL) {
+ mxr_err(mdev, "get memory resource failed.\n");
+ ret = -ENXIO;
+ goto fail_mxr_regs;
+ }
+
+ mdev->res.vp_regs = ioremap(res->start, resource_size(res));
+ if (mdev->res.vp_regs == NULL) {
+ mxr_err(mdev, "register mapping failed.\n");
+ ret = -ENXIO;
+ goto fail_mxr_regs;
+ }
+#endif
+
+ res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ if (res == NULL) {
+ mxr_err(mdev, "get interrupt resource failed.\n");
+ ret = -ENXIO;
+ goto fail_vp_regs;
+ }
+
+ ret = request_irq(res->start, mxr_irq_handler, 0, "s5p-mixer", mdev);
+ if (ret) {
+ mxr_err(mdev, "request interrupt failed.\n");
+ goto fail_vp_regs;
+ }
+ mdev->res.irq = res->start;
+
+ return 0;
+
+fail_vp_regs:
+#if defined(CONFIG_ARCH_EXYNOS4)
+ iounmap(mdev->res.vp_regs);
+
+fail_mxr_regs:
+#endif
+ iounmap(mdev->res.mxr_regs);
+
+fail:
+ return ret;
+}
+
+static void mxr_release_plat_resources(struct mxr_device *mdev)
+{
+ free_irq(mdev->res.irq, mdev);
+#if defined(CONFIG_ARCH_EXYNOS4)
+ iounmap(mdev->res.vp_regs);
+#endif
+ iounmap(mdev->res.mxr_regs);
+}
+
+static void mxr_release_clocks(struct mxr_device *mdev)
+{
+ struct mxr_resources *res = &mdev->res;
+
+#if defined(CONFIG_ARCH_EXYNOS4)
+ if (!IS_ERR_OR_NULL(res->vp))
+ clk_put(res->vp);
+#endif
+#if defined(CONFIG_CPU_EXYNOS4210)
+ if (!IS_ERR_OR_NULL(res->sclk_mixer))
+ clk_put(res->sclk_mixer);
+ if (!IS_ERR_OR_NULL(res->sclk_dac))
+ clk_put(res->sclk_dac);
+#endif
+ if (!IS_ERR_OR_NULL(res->mixer))
+ clk_put(res->mixer);
+ if (!IS_ERR_OR_NULL(res->sclk_hdmi))
+ clk_put(res->sclk_hdmi);
+}
+
+static int mxr_acquire_clocks(struct mxr_device *mdev)
+{
+ struct mxr_resources *res = &mdev->res;
+ struct device *dev = mdev->dev;
+
+#if defined(CONFIG_ARCH_EXYNOS4)
+ res->vp = clk_get(dev, "vp");
+ if (IS_ERR_OR_NULL(res->vp)) {
+ mxr_err(mdev, "failed to get clock 'vp'\n");
+ goto fail;
+ }
+ res->sclk_mixer = clk_get(dev, "sclk_mixer");
+ if (IS_ERR_OR_NULL(res->sclk_mixer)) {
+ mxr_err(mdev, "failed to get clock 'sclk_mixer'\n");
+ goto fail;
+ }
+#endif
+#if defined(CONFIG_CPU_EXYNOS4210)
+
+ res->sclk_dac = clk_get(dev, "sclk_dac");
+ if (IS_ERR_OR_NULL(res->sclk_dac)) {
+ mxr_err(mdev, "failed to get clock 'sclk_dac'\n");
+ goto fail;
+ }
+#endif
+ res->mixer = clk_get(dev, "mixer");
+ if (IS_ERR_OR_NULL(res->mixer)) {
+ mxr_err(mdev, "failed to get clock 'mixer'\n");
+ goto fail;
+ }
+ res->sclk_hdmi = clk_get(dev, "sclk_hdmi");
+ if (IS_ERR_OR_NULL(res->sclk_hdmi)) {
+ mxr_err(mdev, "failed to get clock 'sclk_hdmi'\n");
+ goto fail;
+ }
+
+ return 0;
+fail:
+ mxr_release_clocks(mdev);
+ return -ENODEV;
+}
+
+static int __devinit mxr_acquire_resources(struct mxr_device *mdev,
+ struct platform_device *pdev)
+{
+ int ret;
+
+ ret = mxr_acquire_plat_resources(mdev, pdev);
+ if (ret)
+ goto fail;
+
+ ret = mxr_acquire_clocks(mdev);
+ if (ret)
+ goto fail_plat;
+
+ mxr_info(mdev, "resources acquired\n");
+ return 0;
+
+fail_plat:
+ mxr_release_plat_resources(mdev);
+fail:
+ mxr_err(mdev, "resources acquire failed\n");
+ return ret;
+}
+
+static void mxr_release_resources(struct mxr_device *mdev)
+{
+ mxr_release_clocks(mdev);
+ mxr_release_plat_resources(mdev);
+ memset(&mdev->res, 0, sizeof(mdev->res));
+}
+
+static void mxr_release_layers(struct mxr_device *mdev)
+{
+ int i, j;
+
+ for (i = 0; i < MXR_MAX_SUB_MIXERS; ++i) {
+ for (j = 0; j < MXR_MAX_LAYERS; ++j)
+ if (mdev->sub_mxr[i].layer[j])
+ mxr_layer_release(mdev->sub_mxr[i].layer[j]);
+ }
+}
+
+static int __devinit mxr_acquire_layers(struct mxr_device *mdev,
+ struct mxr_platform_data *pdata)
+{
+ struct sub_mxr_device *sub_mxr;
+
+ sub_mxr = &mdev->sub_mxr[MXR_SUB_MIXER0];
+#if defined(CONFIG_ARCH_EXYNOS4)
+ sub_mxr->layer[MXR_LAYER_VIDEO] = mxr_vp_layer_create(mdev,
+ MXR_SUB_MIXER0, 0, EXYNOS_VIDEONODE_MXR_VIDEO);
+#else
+ sub_mxr->layer[MXR_LAYER_VIDEO] =
+ mxr_video_layer_create(mdev, MXR_SUB_MIXER0, 0);
+#endif
+ sub_mxr->layer[MXR_LAYER_GRP0] = mxr_graph_layer_create(mdev,
+ MXR_SUB_MIXER0, 0, EXYNOS_VIDEONODE_MXR_GRP(0));
+ sub_mxr->layer[MXR_LAYER_GRP1] = mxr_graph_layer_create(mdev,
+ MXR_SUB_MIXER0, 1, EXYNOS_VIDEONODE_MXR_GRP(1));
+ if (!sub_mxr->layer[MXR_LAYER_VIDEO] || !sub_mxr->layer[MXR_LAYER_GRP0]
+ || !sub_mxr->layer[MXR_LAYER_GRP1]) {
+ mxr_err(mdev, "failed to acquire layers\n");
+ goto fail;
+ }
+
+ /* Exynos5250 supports 2 sub-mixers */
+ if (MXR_MAX_SUB_MIXERS == 2) {
+ sub_mxr = &mdev->sub_mxr[MXR_SUB_MIXER1];
+ sub_mxr->layer[MXR_LAYER_VIDEO] =
+ mxr_video_layer_create(mdev, MXR_SUB_MIXER1, 1);
+ sub_mxr->layer[MXR_LAYER_GRP0] = mxr_graph_layer_create(mdev,
+ MXR_SUB_MIXER1, 2, EXYNOS_VIDEONODE_MXR_GRP(2));
+ sub_mxr->layer[MXR_LAYER_GRP1] = mxr_graph_layer_create(mdev,
+ MXR_SUB_MIXER1, 3, EXYNOS_VIDEONODE_MXR_GRP(3));
+ if (!sub_mxr->layer[MXR_LAYER_VIDEO] ||
+ !sub_mxr->layer[MXR_LAYER_GRP0] ||
+ !sub_mxr->layer[MXR_LAYER_GRP1]) {
+ mxr_err(mdev, "failed to acquire layers\n");
+ goto fail;
+ }
+ }
+
+ return 0;
+
+fail:
+ mxr_release_layers(mdev);
+ return -ENODEV;
+}
+
+/* ---------- POWER MANAGEMENT ----------- */
+
+static int mxr_runtime_resume(struct device *dev)
+{
+ struct mxr_device *mdev = to_mdev(dev);
+ struct mxr_resources *res = &mdev->res;
+
+ mxr_dbg(mdev, "resume - start\n");
+ mutex_lock(&mdev->mutex);
+ /* turn clocks on */
+ clk_enable(res->mixer);
+#if defined(CONFIG_ARCH_EXYNOS4)
+ clk_enable(res->vp);
+#endif
+#if defined(CONFIG_CPU_EXYNOS4210)
+ clk_enable(res->sclk_mixer);
+#endif
+ /*
+ * Enable the system MMU for TV. It must be enabled after the
+ * mixer clock because of a system MMU limitation.
+ */
+ /*mdev->vb2->resume(mdev->alloc_ctx);*/
+ /* apply default configuration */
+ mxr_reg_reset(mdev);
+ mxr_dbg(mdev, "resume - finished\n");
+
+ mutex_unlock(&mdev->mutex);
+ return 0;
+}
+
+static int mxr_runtime_suspend(struct device *dev)
+{
+ struct mxr_device *mdev = to_mdev(dev);
+ struct mxr_resources *res = &mdev->res;
+ mxr_dbg(mdev, "suspend - start\n");
+ mutex_lock(&mdev->mutex);
+ /*
+ * Disable the system MMU for TV. It must be disabled before the
+ * mixer clock because of a system MMU limitation.
+ */
+ /*mdev->vb2->suspend(mdev->alloc_ctx);*/
+ /* turn clocks off */
+#if defined(CONFIG_CPU_EXYNOS4210)
+ clk_disable(res->sclk_mixer);
+#endif
+#if defined(CONFIG_ARCH_EXYNOS4)
+ clk_disable(res->vp);
+#endif
+ clk_disable(res->mixer);
+ mutex_unlock(&mdev->mutex);
+ mxr_dbg(mdev, "suspend - finished\n");
+ return 0;
+}
+
+/* ---------- SUB-DEVICE CALLBACKS ----------- */
+
+static const struct dev_pm_ops mxr_pm_ops = {
+ .runtime_suspend = mxr_runtime_suspend,
+ .runtime_resume = mxr_runtime_resume,
+};
+
+static int mxr_s_power(struct v4l2_subdev *sd, int on)
+{
+ return 0;
+}
+
+/* When the mixer is connected to G-Scaler through the local path, only
+ * G-Scaler's video device can control the mixer's alpha blending. */
+static int mxr_s_ctrl(struct v4l2_subdev *sd, struct v4l2_control *ctrl)
+{
+ struct mxr_device *mdev = sd_to_mdev(sd);
+ int v = ctrl->value;
+ int num = 0;
+
+ mxr_dbg(mdev, "%s start\n", __func__);
+ mxr_dbg(mdev, "id = %d, value = %d\n", ctrl->id, ctrl->value);
+
+ if (!strcmp(sd->name, "s5p-mixer0"))
+ num = MXR_SUB_MIXER0;
+ else if (!strcmp(sd->name, "s5p-mixer1"))
+ num = MXR_SUB_MIXER1;
+
+ switch (ctrl->id) {
+ case V4L2_CID_TV_LAYER_BLEND_ENABLE:
+ mdev->sub_mxr[num].layer[MXR_LAYER_VIDEO]->layer_blend_en = v;
+ break;
+ case V4L2_CID_TV_LAYER_BLEND_ALPHA:
+ mdev->sub_mxr[num].layer[MXR_LAYER_VIDEO]->layer_alpha = (u32)v;
+ break;
+ case V4L2_CID_TV_PIXEL_BLEND_ENABLE:
+ mdev->sub_mxr[num].layer[MXR_LAYER_VIDEO]->pixel_blend_en = v;
+ break;
+ case V4L2_CID_TV_CHROMA_ENABLE:
+ mdev->sub_mxr[num].layer[MXR_LAYER_VIDEO]->chroma_en = v;
+ break;
+ case V4L2_CID_TV_CHROMA_VALUE:
+ mdev->sub_mxr[num].layer[MXR_LAYER_VIDEO]->chroma_val = (u32)v;
+ break;
+ default:
+ mxr_err(mdev, "invalid control id\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int mxr_s_stream(struct v4l2_subdev *sd, int enable)
+{
+ struct mxr_device *mdev = sd_to_mdev(sd);
+ struct exynos_entity_data *md_data;
+ int ret;
+
+	/* the subdev data identifies which entity called this function */
+ md_data = v4l2_get_subdevdata(sd);
+ mdev->mxr_data_from = md_data->mxr_data_from;
+
+	if (enable)
+		ret = mxr_streamer_get(mdev, sd);
+	else
+		ret = mxr_streamer_put(mdev, sd);
+
+	return ret;
+}
+
+static struct v4l2_mbus_framefmt *
+__mxr_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ unsigned int pad, enum v4l2_subdev_format_whence which)
+{
+ struct sub_mxr_device *sub_mxr = sd_to_sub_mxr(sd);
+
+ if (which == V4L2_SUBDEV_FORMAT_TRY)
+ return v4l2_subdev_get_try_format(fh, pad);
+ else
+ return &sub_mxr->mbus_fmt[pad];
+}
+
+static struct v4l2_rect *
+__mxr_get_crop(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ unsigned int pad, enum v4l2_subdev_format_whence which)
+{
+ struct sub_mxr_device *sub_mxr = sd_to_sub_mxr(sd);
+
+ if (which == V4L2_SUBDEV_FORMAT_TRY)
+ return v4l2_subdev_get_try_crop(fh, pad);
+ else
+ return &sub_mxr->crop[pad];
+}
+
+static unsigned int mxr_adjust_graph_format(unsigned int code)
+{
+ switch (code) {
+ case V4L2_MBUS_FMT_RGB444_2X8_PADHI_BE:
+ case V4L2_MBUS_FMT_RGB444_2X8_PADHI_LE:
+ case V4L2_MBUS_FMT_RGB555_2X8_PADHI_BE:
+ case V4L2_MBUS_FMT_RGB555_2X8_PADHI_LE:
+ case V4L2_MBUS_FMT_RGB565_2X8_BE:
+ case V4L2_MBUS_FMT_RGB565_2X8_LE:
+ case V4L2_MBUS_FMT_XRGB8888_4X8_LE:
+ return code;
+ default:
+ return V4L2_MBUS_FMT_XRGB8888_4X8_LE; /* default format */
+ }
+}
+
+/* This can be moved to graphic layer's callback function */
+static void mxr_set_layer_src_fmt(struct sub_mxr_device *sub_mxr, u32 pad)
+{
+	/* the sink pad number and the layer's array index are the same */
+ struct mxr_layer *layer = sub_mxr->layer[pad];
+ struct v4l2_mbus_framefmt *fmt = &sub_mxr->mbus_fmt[pad];
+ u32 fourcc;
+
+ switch (fmt->code) {
+ case V4L2_MBUS_FMT_RGB444_2X8_PADHI_BE:
+ case V4L2_MBUS_FMT_RGB444_2X8_PADHI_LE:
+ fourcc = V4L2_PIX_FMT_RGB444;
+ break;
+ case V4L2_MBUS_FMT_RGB555_2X8_PADHI_BE:
+ case V4L2_MBUS_FMT_RGB555_2X8_PADHI_LE:
+ fourcc = V4L2_PIX_FMT_RGB555;
+ break;
+ case V4L2_MBUS_FMT_RGB565_2X8_BE:
+ case V4L2_MBUS_FMT_RGB565_2X8_LE:
+ fourcc = V4L2_PIX_FMT_RGB565;
+ break;
+ case V4L2_MBUS_FMT_XRGB8888_4X8_LE:
+ fourcc = V4L2_PIX_FMT_BGR32;
+ break;
+ default:
+ fourcc = V4L2_PIX_FMT_RGB565;
+ dev_warn(layer->mdev->dev, "unknown source format - falling back to RGB565\n");
+ break;
+ }
+ /* This will be applied to hardware right after streamon */
+ layer->fmt = find_format_by_fourcc(layer, fourcc);
+}
+
+static int mxr_try_format(struct mxr_device *mdev,
+ struct v4l2_subdev_fh *fh, u32 pad,
+ struct v4l2_mbus_framefmt *fmt,
+ enum v4l2_subdev_format_whence which)
+{
+ struct v4l2_mbus_framefmt mbus_fmt;
+
+ fmt->width = clamp_val(fmt->width, 1, 32767);
+ fmt->height = clamp_val(fmt->height, 1, 2047);
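+	/* the limits correspond to 15-bit width and 11-bit height fields */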
+
+ switch (pad) {
+ case MXR_PAD_SINK_GSCALER:
+ fmt->code = V4L2_MBUS_FMT_YUV8_1X24;
+ break;
+ case MXR_PAD_SINK_GRP0:
+ case MXR_PAD_SINK_GRP1:
+ fmt->code = mxr_adjust_graph_format(fmt->code);
+ break;
+ case MXR_PAD_SOURCE_GSCALER:
+ case MXR_PAD_SOURCE_GRP0:
+ case MXR_PAD_SOURCE_GRP1:
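+		/* source pads emit either YUV444 or RGB888 and always use
+		 * the current output resolution */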
+ mxr_get_mbus_fmt(mdev, &mbus_fmt);
+ fmt->code = (fmt->code == V4L2_MBUS_FMT_YUV8_1X24) ?
+ V4L2_MBUS_FMT_YUV8_1X24 : V4L2_MBUS_FMT_XRGB8888_4X8_LE;
+ fmt->width = mbus_fmt.width;
+ fmt->height = mbus_fmt.height;
+ break;
+ }
+
+ return 0;
+}
+
+static void mxr_apply_format(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh, u32 pad,
+ struct v4l2_mbus_framefmt *fmt,
+ enum v4l2_subdev_format_whence which)
+{
+ struct sub_mxr_device *sub_mxr;
+ struct mxr_device *mdev;
+ int i, j;
+ sub_mxr = sd_to_sub_mxr(sd);
+ mdev = sd_to_mdev(sd);
+
+ if (which == V4L2_SUBDEV_FORMAT_ACTIVE) {
+ if (pad == MXR_PAD_SINK_GRP0 || pad == MXR_PAD_SINK_GRP1) {
+ struct mxr_layer *layer = sub_mxr->layer[pad];
+
+ mxr_set_layer_src_fmt(sub_mxr, pad);
+ layer->geo.src.full_width = fmt->width;
+ layer->geo.src.full_height = fmt->height;
+ layer->ops.fix_geometry(layer);
+ } else if (pad == MXR_PAD_SOURCE_GSCALER
+ || pad == MXR_PAD_SOURCE_GRP0
+ || pad == MXR_PAD_SOURCE_GRP1) {
+ for (i = 0; i < MXR_MAX_LAYERS; ++i) {
+ struct mxr_layer *layer = sub_mxr->layer[i];
+ layer->geo.dst.full_width = fmt->width;
+ layer->geo.dst.full_height = fmt->height;
+ layer->ops.fix_geometry(layer);
+ }
+ for (i = 0; i < MXR_MAX_SUB_MIXERS; ++i) {
+ sub_mxr = &mdev->sub_mxr[i];
+ for (j = MXR_PAD_SOURCE_GSCALER;
+ j < MXR_PADS_NUM; ++j)
+ sub_mxr->mbus_fmt[j].code = fmt->code;
+ }
+ }
+ }
+}
+
+static int mxr_try_crop(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_rect *r, enum v4l2_subdev_format_whence which)
+{
+ struct v4l2_mbus_framefmt *fmt;
+
+ fmt = __mxr_get_fmt(sd, fh, pad, which);
+ if (fmt == NULL)
+ return -EINVAL;
+
+ r->left = clamp_val(r->left, 0, fmt->width);
+ r->top = clamp_val(r->top, 0, fmt->height);
+ r->width = clamp_val(r->width, 1, fmt->width - r->left);
+ r->height = clamp_val(r->height, 1, fmt->height - r->top);
+
+ /* need to align size with G-Scaler */
+ if (pad == MXR_PAD_SINK_GSCALER || pad == MXR_PAD_SOURCE_GSCALER)
+ if (r->width % 2)
+ r->width -= 1;
+
+ return 0;
+}
+
+static void mxr_apply_crop(struct v4l2_subdev *sd,
+ struct v4l2_subdev_fh *fh, unsigned int pad,
+ struct v4l2_rect *r, enum v4l2_subdev_format_whence which)
+{
+ struct sub_mxr_device *sub_mxr = sd_to_sub_mxr(sd);
+ struct mxr_layer *layer;
+
+ if (which == V4L2_SUBDEV_FORMAT_ACTIVE) {
+ if (pad == MXR_PAD_SINK_GRP0 || pad == MXR_PAD_SINK_GRP1) {
+ layer = sub_mxr->layer[pad];
+
+ layer->geo.src.width = r->width;
+ layer->geo.src.height = r->height;
+ layer->geo.src.x_offset = r->left;
+ layer->geo.src.y_offset = r->top;
+ layer->ops.fix_geometry(layer);
+ } else if (pad == MXR_PAD_SOURCE_GSCALER
+ || pad == MXR_PAD_SOURCE_GRP0
+ || pad == MXR_PAD_SOURCE_GRP1) {
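+			/* map the source pad back to its layer index; sink
+			 * pads occupy the first half of the pad array */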
+ layer = sub_mxr->layer[pad - (MXR_PADS_NUM >> 1)];
+
+ layer->geo.dst.width = r->width;
+ layer->geo.dst.height = r->height;
+ layer->geo.dst.x_offset = r->left;
+ layer->geo.dst.y_offset = r->top;
+ layer->ops.fix_geometry(layer);
+ }
+ }
+}
+
+static int mxr_get_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_format *format)
+{
+ struct v4l2_mbus_framefmt *fmt;
+
+ fmt = __mxr_get_fmt(sd, fh, format->pad, format->which);
+ if (fmt == NULL)
+ return -EINVAL;
+
+ format->format = *fmt;
+
+ return 0;
+}
+
+static int mxr_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_format *format)
+{
+ struct mxr_device *mdev = sd_to_mdev(sd);
+ struct v4l2_mbus_framefmt *fmt;
+ int ret;
+ u32 pad;
+
+ fmt = __mxr_get_fmt(sd, fh, format->pad, format->which);
+ if (fmt == NULL)
+ return -EINVAL;
+
+ ret = mxr_try_format(mdev, fh, format->pad, &format->format,
+ format->which);
+ if (ret)
+ return ret;
+
+ *fmt = format->format;
+
+ mxr_apply_format(sd, fh, format->pad, &format->format, format->which);
+
+ if (format->pad == MXR_PAD_SINK_GSCALER ||
+ format->pad == MXR_PAD_SINK_GRP0 ||
+ format->pad == MXR_PAD_SINK_GRP1) {
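+		/* propagate the format from the sink pad to the matching
+		 * source pad (source pad = sink pad + number of sink pads) */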
+ pad = format->pad + (MXR_PADS_NUM >> 1);
+ fmt = __mxr_get_fmt(sd, fh, pad, format->which);
+ if (fmt == NULL)
+ return -EINVAL;
+
+ *fmt = format->format;
+
+ ret = mxr_try_format(mdev, fh, pad, fmt, format->which);
+ if (ret)
+ return ret;
+
+ mxr_apply_format(sd, fh, pad, fmt, format->which);
+ }
+
+ return 0;
+}
+
+static int mxr_set_crop(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_crop *crop)
+{
+ struct v4l2_rect *r;
+ int ret;
+ u32 pad;
+
+ r = __mxr_get_crop(sd, fh, crop->pad, crop->which);
+ if (r == NULL)
+ return -EINVAL;
+
+ ret = mxr_try_crop(sd, fh, crop->pad, &crop->rect, crop->which);
+ if (ret)
+ return ret;
+
+ /* transfer adjusted crop information to user space */
+ *r = crop->rect;
+
+	/* reserved[0] is used for the sink pad number temporarily */
+ mxr_apply_crop(sd, fh, crop->pad, r, crop->which);
+
+ /* In case of sink pad, crop info will be propagated to source pad */
+ if (crop->pad == MXR_PAD_SINK_GSCALER ||
+ crop->pad == MXR_PAD_SINK_GRP0 ||
+ crop->pad == MXR_PAD_SINK_GRP1) {
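+		/* source pads sit in the second half of the pad array */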
+ pad = crop->pad + (MXR_PADS_NUM >> 1);
+ r = __mxr_get_crop(sd, fh, pad, crop->which);
+ if (r == NULL)
+ return -EINVAL;
+ /* store propagated crop info to source pad */
+ *r = crop->rect;
+
+ ret = mxr_try_crop(sd, fh, pad, r, crop->which);
+ if (ret)
+ return ret;
+
+ mxr_apply_crop(sd, fh, pad, r, crop->which);
+ }
+
+ return 0;
+}
+
+static int mxr_get_crop(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh,
+ struct v4l2_subdev_crop *crop)
+{
+ struct v4l2_rect *r;
+
+ r = __mxr_get_crop(sd, fh, crop->pad, crop->which);
+ if (r == NULL)
+ return -EINVAL;
+
+ crop->rect = *r;
+
+ return 0;
+}
+
+static const struct v4l2_subdev_core_ops mxr_sd_core_ops = {
+ .s_power = mxr_s_power,
+ .s_ctrl = mxr_s_ctrl,
+};
+
+static const struct v4l2_subdev_video_ops mxr_sd_video_ops = {
+ .s_stream = mxr_s_stream,
+};
+
+static const struct v4l2_subdev_pad_ops mxr_sd_pad_ops = {
+ .get_fmt = mxr_get_fmt,
+ .set_fmt = mxr_set_fmt,
+ .get_crop = mxr_get_crop,
+ .set_crop = mxr_set_crop
+};
+
+static const struct v4l2_subdev_ops mxr_sd_ops = {
+ .core = &mxr_sd_core_ops,
+ .video = &mxr_sd_video_ops,
+ .pad = &mxr_sd_pad_ops,
+};
+
+static int mxr_link_setup(struct media_entity *entity,
+ const struct media_pad *local,
+ const struct media_pad *remote, u32 flags)
+{
+ struct media_pad *pad;
+ struct sub_mxr_device *sub_mxr = entity_to_sub_mxr(entity);
+ struct mxr_device *mdev = sub_mxr_to_mdev(sub_mxr);
+ int i;
+ int gsc_num = 0;
+
+ /* difficult to get dev ptr */
+ printk(KERN_DEBUG "%s start\n", __func__);
+
+ if (flags & MEDIA_LNK_FL_ENABLED) {
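+		/* an enabled link marks the sub-mixer as used; a G-Scaler
+		 * sink link additionally selects the local path */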
+ sub_mxr->use = 1;
+ if (local->index == MXR_PAD_SINK_GSCALER)
+ sub_mxr->local = 1;
+		/* find a remote pad by iterating over all links until an
+		 * enabled link is found. This will be removed because
+		 * Exynos5250 only supports HDMI output. */
+ pad = media_entity_remote_source((struct media_pad *)local);
+ if (pad) {
+ printk(KERN_ERR "%s is already connected to %s\n",
+ entity->name, pad->entity->name);
+ return -EBUSY;
+ }
+ } else {
+ if (local->index == MXR_PAD_SINK_GSCALER)
+ sub_mxr->local = 0;
+ sub_mxr->use = 0;
+ for (i = 0; i < entity->num_links; ++i)
+ if (entity->links[i].flags & MEDIA_LNK_FL_ENABLED)
+ sub_mxr->use = 1;
+ }
+
+ if (!strcmp(remote->entity->name, "exynos-gsc-sd.0"))
+ gsc_num = 0;
+ else if (!strcmp(remote->entity->name, "exynos-gsc-sd.1"))
+ gsc_num = 1;
+ else if (!strcmp(remote->entity->name, "exynos-gsc-sd.2"))
+ gsc_num = 2;
+ else if (!strcmp(remote->entity->name, "exynos-gsc-sd.3"))
+ gsc_num = 3;
+
+ if (!strcmp(local->entity->name, "s5p-mixer0"))
+ mdev->mxr0_gsc = gsc_num;
+ else if (!strcmp(local->entity->name, "s5p-mixer1"))
+ mdev->mxr1_gsc = gsc_num;
+
+ /* deliver those variables to mxr_streamer_get() */
+ mdev->flags = flags;
+ return 0;
+}
+
+/* mixer entity operations */
+static const struct media_entity_operations mxr_entity_ops = {
+ .link_setup = mxr_link_setup,
+};
+
+/* ---------- MEDIA CONTROLLER MANAGEMENT ----------- */
+
+static int mxr_register_entity(struct mxr_device *mdev, int mxr_num)
+{
+ struct v4l2_subdev *sd = &mdev->sub_mxr[mxr_num].sd;
+ struct media_pad *pads = mdev->sub_mxr[mxr_num].pads;
+ struct media_entity *me = &sd->entity;
+ struct exynos_md *md;
+ int ret;
+
+ mxr_dbg(mdev, "mixer%d entity init\n", mxr_num);
+
+ /* init mixer sub-device */
+ v4l2_subdev_init(sd, &mxr_sd_ops);
+ sd->owner = THIS_MODULE;
+ sprintf(sd->name, "s5p-mixer%d", mxr_num);
+
+ /* mixer sub-device can be opened in user space */
+ sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+
+ /* init mixer sub-device as entity */
+ pads[MXR_PAD_SINK_GSCALER].flags = MEDIA_PAD_FL_SINK;
+ pads[MXR_PAD_SINK_GRP0].flags = MEDIA_PAD_FL_SINK;
+ pads[MXR_PAD_SINK_GRP1].flags = MEDIA_PAD_FL_SINK;
+ pads[MXR_PAD_SOURCE_GSCALER].flags = MEDIA_PAD_FL_SOURCE;
+ pads[MXR_PAD_SOURCE_GRP0].flags = MEDIA_PAD_FL_SOURCE;
+ pads[MXR_PAD_SOURCE_GRP1].flags = MEDIA_PAD_FL_SOURCE;
+ me->ops = &mxr_entity_ops;
+ ret = media_entity_init(me, MXR_PADS_NUM, pads, 0);
+ if (ret) {
+ mxr_err(mdev, "failed to initialize media entity\n");
+ return ret;
+ }
+
+ md = (struct exynos_md *)module_name_to_driver_data(MDEV_MODULE_NAME);
+ if (!md) {
+ mxr_err(mdev, "failed to get output media device\n");
+ return -ENODEV;
+ }
+
+ ret = v4l2_device_register_subdev(&md->v4l2_dev, sd);
+ if (ret) {
+ mxr_err(mdev, "failed to register mixer subdev\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int mxr_register_entities(struct mxr_device *mdev)
+{
+ int ret, i;
+
+ for (i = 0; i < MXR_MAX_SUB_MIXERS; ++i) {
+ ret = mxr_register_entity(mdev, i);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static void mxr_unregister_entity(struct mxr_device *mdev, int mxr_num)
+{
+ v4l2_device_unregister_subdev(&mdev->sub_mxr[mxr_num].sd);
+}
+
+static void mxr_unregister_entities(struct mxr_device *mdev)
+{
+ int i;
+
+ for (i = 0; i < MXR_MAX_SUB_MIXERS; ++i)
+ mxr_unregister_entity(mdev, i);
+}
+
+static void mxr_entities_info_print(struct mxr_device *mdev)
+{
+ struct v4l2_subdev *sd;
+ struct media_entity *sd_me;
+ struct media_entity *vd_me;
+ int num_layers;
+ int i, j;
+
+#if defined(CONFIG_ARCH_EXYNOS4)
+ num_layers = 3;
+#else
+ num_layers = 2;
+#endif
+ mxr_dbg(mdev, "\n************ MIXER entities info ***********\n");
+
+ for (i = 0; i < MXR_MAX_SUB_MIXERS; ++i) {
+ mxr_dbg(mdev, "[SUB DEVICE INFO]\n");
+ sd = &mdev->sub_mxr[i].sd;
+ sd_me = &sd->entity;
+ entity_info_print(sd_me, mdev->dev);
+
+ for (j = 0; j < num_layers; ++j) {
+ vd_me = &mdev->sub_mxr[i].layer[j]->vfd.entity;
+
+ mxr_dbg(mdev, "\n[VIDEO DEVICE %d INFO]\n", j);
+ entity_info_print(vd_me, mdev->dev);
+ }
+ }
+
+ mxr_dbg(mdev, "**************************************************\n\n");
+}
+
+static int mxr_create_links_sub_mxr(struct mxr_device *mdev, int mxr_num,
+ int flags)
+{
+ struct exynos_md *md;
+ struct mxr_layer *layer;
+ int ret;
+ int i, j;
+ char err[80];
+
+ mxr_info(mdev, "mixer%d create links\n", mxr_num);
+
+ memset(err, 0, sizeof(err));
+
+ /* link creation : gscaler0~3[1] -> mixer[0] */
+ md = (struct exynos_md *)module_name_to_driver_data(MDEV_MODULE_NAME);
+ for (i = 0; i < MAX_GSC_SUBDEV; ++i) {
+ if (md->gsc_sd[i] != NULL) {
+ ret = media_entity_create_link(&md->gsc_sd[i]->entity,
+ GSC_OUT_PAD_SOURCE,
+ &mdev->sub_mxr[mxr_num].sd.entity,
+ MXR_PAD_SINK_GSCALER, 0);
+ if (ret) {
+				snprintf(err, sizeof(err), "%s --> %s",
+					md->gsc_sd[i]->entity.name,
+					mdev->sub_mxr[mxr_num].sd.entity.name);
+ goto fail;
+ }
+ }
+ }
+
+ /* link creation : mixer input0[0] -> mixer[1] */
+ layer = mdev->sub_mxr[mxr_num].layer[MXR_LAYER_GRP0];
+ ret = media_entity_create_link(&layer->vfd.entity, 0,
+ &mdev->sub_mxr[mxr_num].sd.entity, MXR_PAD_SINK_GRP0, flags);
+ if (ret) {
+		snprintf(err, sizeof(err), "%s --> %s", layer->vfd.entity.name,
+			mdev->sub_mxr[mxr_num].sd.entity.name);
+ goto fail;
+ }
+
+ /* link creation : mixer input1[0] -> mixer[2] */
+ layer = mdev->sub_mxr[mxr_num].layer[MXR_LAYER_GRP1];
+ ret = media_entity_create_link(&layer->vfd.entity, 0,
+ &mdev->sub_mxr[mxr_num].sd.entity, MXR_PAD_SINK_GRP1, flags);
+ if (ret) {
+		snprintf(err, sizeof(err), "%s --> %s", layer->vfd.entity.name,
+			mdev->sub_mxr[mxr_num].sd.entity.name);
+ goto fail;
+ }
+
+ /* link creation : mixer[3,4,5] -> output device(hdmi or sdo)[0] */
+ mxr_dbg(mdev, "output device count = %d\n", mdev->output_cnt);
+ for (i = 0; i < mdev->output_cnt; ++i) { /* sink pad of hdmi/sdo is 0 */
+ flags = 0;
+ /* default output device link is HDMI */
+ if (!strcmp(mdev->output[i]->sd->name, "exynos5-hdmi"))
+ flags = MEDIA_LNK_FL_ENABLED;
+
+ for (j = MXR_PAD_SOURCE_GSCALER; j < MXR_PADS_NUM; ++j) {
+ ret = media_entity_create_link(
+ &mdev->sub_mxr[mxr_num].sd.entity,
+ j, &mdev->output[i]->sd->entity,
+ 0, flags);
+ if (ret) {
+				snprintf(err, sizeof(err), "%s --> %s",
+					mdev->sub_mxr[mxr_num].sd.entity.name,
+					mdev->output[i]->sd->entity.name);
+ goto fail;
+ }
+ }
+ }
+
+ return 0;
+
+fail:
+ mxr_err(mdev, "failed to create link : %s\n", err);
+ return ret;
+}
+
+static int mxr_create_links(struct mxr_device *mdev)
+{
+ int ret, i;
+ int flags;
+
+#if defined(CONFIG_ARCH_EXYNOS4)
+ struct mxr_layer *layer;
+ struct media_entity *source, *sink;
+
+ layer = mdev->sub_mxr[MXR_SUB_MIXER0].layer[MXR_LAYER_VIDEO];
+ source = &layer->vfd.entity;
+ sink = &mdev->sub_mxr[MXR_SUB_MIXER0].sd.entity;
+ ret = media_entity_create_link(source, 0, sink, MXR_PAD_SINK_GSCALER,
+ MEDIA_LNK_FL_ENABLED);
+#endif
+ for (i = 0; i < MXR_MAX_SUB_MIXERS; ++i) {
+ if (mdev->sub_mxr[i].use)
+ flags = MEDIA_LNK_FL_ENABLED;
+ else
+ flags = 0;
+
+ ret = mxr_create_links_sub_mxr(mdev, i, flags);
+ if (ret)
+ return ret;
+ }
+
+ mxr_info(mdev, "mixer links are created successfully\n");
+
+ return 0;
+}
+
+#ifdef CONFIG_EXYNOS_IOMMU
+static int iommu_init(struct platform_device *pdev)
+{
+ struct platform_device *pds;
+
+ pds = find_sysmmu_dt(pdev, "sysmmu");
+	if (pds == NULL) {
+		printk(KERN_ERR "No sysmmu found\n");
+		return -ENODEV;
+	}
+
+ platform_set_sysmmu(&pds->dev, &pdev->dev);
+ if (!s5p_create_iommu_mapping(&pdev->dev, 0x20000000,
+ SZ_128M, 4, NULL)) {
+ printk(KERN_ERR "IOMMU mapping not created\n");
+		return -ENOMEM;
+ }
+
+ return 0;
+}
+#endif
+/* --------- DRIVER INITIALIZATION ---------- */
+
+static int __devinit mxr_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct mxr_platform_data *pdata = dev->platform_data;
+ struct mxr_device *mdev;
+ int ret;
+
+ /* mdev does not exist yet so no mxr_dbg is used */
+ dev_info(dev, "probe start\n");
+
+	mdev = kzalloc(sizeof(*mdev), GFP_KERNEL);
+	if (!mdev) {
+		/* mdev is NULL here, so mxr_err() cannot be used */
+		dev_err(dev, "not enough memory\n");
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+#ifdef CONFIG_EXYNOS_IOMMU
+	if (iommu_init(pdev)) {
+		mxr_err(mdev, "failed to initialize IOMMU\n");
+		ret = -EINVAL;
+		goto fail_mem;
+	}
+#endif
+ /* setup pointer to master device */
+ mdev->dev = dev;
+ /* use only sub mixer0 as default */
+ mdev->sub_mxr[MXR_SUB_MIXER0].use = 1;
+
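+	/* pick the videobuf2 allocator backend at compile time */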
+#if defined(CONFIG_VIDEOBUF2_CMA_PHYS)
+ mdev->vb2 = &mxr_vb2_cma;
+#elif defined(CONFIG_VIDEOBUF2_DMA_CONTIG)
+ mdev->vb2 = &mxr_vb2_dma_contig;
+#endif
+
+ mutex_init(&mdev->mutex);
+ mutex_init(&mdev->s_mutex);
+ spin_lock_init(&mdev->reg_slock);
+ init_waitqueue_head(&mdev->event_queue);
+
+ /* acquire resources: regs, irqs, clocks, regulators */
+ ret = mxr_acquire_resources(mdev, pdev);
+ if (ret)
+ goto fail_mem;
+
+ /* configure resources for video output */
+ ret = mxr_acquire_video(mdev, mxr_output_conf,
+ ARRAY_SIZE(mxr_output_conf));
+ if (ret)
+ goto fail_resources;
+
+ /* register mixer subdev as entity */
+ ret = mxr_register_entities(mdev);
+ if (ret)
+ goto fail_video;
+
+ /* configure layers */
+ ret = mxr_acquire_layers(mdev, pdata);
+ if (ret)
+ goto fail_entity;
+
+ /* create links connected to gscaler, mixer inputs and hdmi */
+ ret = mxr_create_links(mdev);
+ if (ret)
+ goto fail_entity;
+
+ dev_set_drvdata(dev, mdev);
+
+ pm_runtime_enable(dev);
+
+ mxr_entities_info_print(mdev);
+
+ mxr_info(mdev, "probe successful\n");
+ my_gbl_mdev = mdev;
+ return 0;
+
+fail_entity:
+ mxr_unregister_entities(mdev);
+
+fail_video:
+ mxr_release_video(mdev);
+
+fail_resources:
+ mxr_release_resources(mdev);
+
+fail_mem:
+ kfree(mdev);
+
+fail:
+ dev_info(dev, "probe failed\n");
+ return ret;
+}
+
+static int __devexit mxr_remove(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct mxr_device *mdev = to_mdev(dev);
+
+ pm_runtime_disable(dev);
+
+ mxr_release_layers(mdev);
+ mxr_release_video(mdev);
+ mxr_release_resources(mdev);
+
+ kfree(mdev);
+
+	dev_info(dev, "remove successful\n");
+ return 0;
+}
+
+#ifdef CONFIG_OF
+static const struct of_device_id exynos_mixer_match[] = {
+ { .compatible = "samsung,s5p-mixer" },
+ {},
+};
+MODULE_DEVICE_TABLE(of, exynos_mixer_match);
+#endif
+
+static struct platform_driver mxr_driver __refdata = {
+ .probe = mxr_probe,
+ .remove = __devexit_p(mxr_remove),
+ .driver = {
+ .name = MXR_DRIVER_NAME,
+ .owner = THIS_MODULE,
+ .pm = &mxr_pm_ops,
+ .of_match_table = of_match_ptr(exynos_mixer_match),
+ }
+};
+
+static int __init mxr_init(void)
+{
+ int i, ret;
+ static const char banner[] __initdata = KERN_INFO
+ "Samsung TV Mixer driver, "
+ "(c) 2010-2011 Samsung Electronics Co., Ltd.\n";
+ printk(banner);
+
+ /* Loading auxiliary modules */
+ for (i = 0; i < ARRAY_SIZE(mxr_output_conf); ++i)
+ request_module(mxr_output_conf[i].module_name);
+
+	ret = platform_driver_register(&mxr_driver);
+	if (ret != 0) {
+		printk(KERN_ERR "registration of MIXER driver failed\n");
+		return ret;
+	}
+
+ return 0;
+}
+module_init(mxr_init);
+/*late_initcall(mxr_init);*/
+
+static void __exit mxr_exit(void)
+{
+ platform_driver_unregister(&mxr_driver);
+}
+module_exit(mxr_exit);
--- /dev/null
+/*
+ * Samsung TV Mixer driver
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ *
+ * Tomasz Stanislawski, <t.stanislaws@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation, either version 2 of the License,
+ * or (at your option) any later version.
+ */
+
+#include "mixer.h"
+
+#if defined(CONFIG_VIDEOBUF2_CMA_PHYS)
+#include <media/videobuf2-cma-phys.h>
+#elif defined(CONFIG_VIDEOBUF2_ION)
+#include <media/videobuf2-ion.h>
+#endif
+
+/* FORMAT DEFINITIONS */
+
+static const struct mxr_format mxr_fb_fmt_rgb565 = {
+ .name = "RGB565",
+ .fourcc = V4L2_PIX_FMT_RGB565,
+ .colorspace = V4L2_COLORSPACE_SRGB,
+ .num_planes = 1,
+ .plane = {
+ { .width = 1, .height = 1, .size = 2 },
+ },
+ .num_subframes = 1,
+ .cookie = 4,
+};
+
+static const struct mxr_format mxr_fb_fmt_argb1555 = {
+ .name = "ARGB1555",
+ .num_planes = 1,
+ .fourcc = V4L2_PIX_FMT_RGB555,
+ .colorspace = V4L2_COLORSPACE_SRGB,
+ .plane = {
+ { .width = 1, .height = 1, .size = 2 },
+ },
+ .num_subframes = 1,
+ .cookie = 5,
+};
+
+static const struct mxr_format mxr_fb_fmt_argb4444 = {
+ .name = "ARGB4444",
+ .num_planes = 1,
+ .fourcc = V4L2_PIX_FMT_RGB444,
+ .colorspace = V4L2_COLORSPACE_SRGB,
+ .plane = {
+ { .width = 1, .height = 1, .size = 2 },
+ },
+ .num_subframes = 1,
+ .cookie = 6,
+};
+
+static const struct mxr_format mxr_fb_fmt_argb8888 = {
+ .name = "ARGB8888",
+ .fourcc = V4L2_PIX_FMT_BGR32,
+ .colorspace = V4L2_COLORSPACE_SRGB,
+ .num_planes = 1,
+ .plane = {
+ { .width = 1, .height = 1, .size = 4 },
+ },
+ .num_subframes = 1,
+ .cookie = 7,
+};
+
+static const struct mxr_format *mxr_graph_format[] = {
+ &mxr_fb_fmt_rgb565,
+ &mxr_fb_fmt_argb1555,
+ &mxr_fb_fmt_argb4444,
+ &mxr_fb_fmt_argb8888,
+};
+
+/* AUXILIARY CALLBACKS */
+
+static void mxr_graph_layer_release(struct mxr_layer *layer)
+{
+ mxr_base_layer_unregister(layer);
+ mxr_base_layer_release(layer);
+}
+
+static void mxr_graph_buffer_set(struct mxr_layer *layer,
+ struct mxr_buffer *buf)
+{
+ struct mxr_device *mdev = layer->mdev;
+ dma_addr_t addr = 0;
+
+ if (buf)
+ addr = mdev->vb2->plane_addr(&buf->vb, 0);
+ mxr_reg_graph_buffer(layer->mdev, layer->idx, addr);
+}
+
+static void mxr_graph_stream_set(struct mxr_layer *layer, int en)
+{
+ mxr_reg_graph_layer_stream(layer->mdev, layer->idx, en);
+}
+
+static void mxr_graph_format_set(struct mxr_layer *layer)
+{
+ mxr_reg_graph_format(layer->mdev, layer->idx,
+ layer->fmt, &layer->geo);
+}
+
+static void mxr_graph_fix_geometry(struct mxr_layer *layer)
+{
+ struct mxr_geometry *geo = &layer->geo;
+
+ mxr_dbg(layer->mdev, "%s start\n", __func__);
+ /* limit to boundary size */
+ geo->src.full_width = clamp_val(geo->src.full_width, 1, 32767);
+ geo->src.full_height = clamp_val(geo->src.full_height, 1, 2047);
+
+ /* limit to coordinate of source x, y */
+ geo->src.x_offset = clamp_val(geo->src.x_offset, 0,
+ geo->src.full_width - 1);
+ geo->src.y_offset = clamp_val(geo->src.y_offset, 0,
+ geo->src.full_height - 1);
+
+ /* limit to boundary size of crop width, height */
+ geo->src.width = clamp_val(geo->src.width, 1,
+ geo->src.full_width - geo->src.x_offset);
+ geo->src.height = clamp_val(geo->src.height, 1,
+ geo->src.full_height - geo->src.y_offset);
+
+	/* the destination full resolution and the TV display size are the same */
+
+ geo->dst.x_offset = clamp_val(geo->dst.x_offset, 0,
+ geo->dst.full_width - 1);
+ geo->dst.y_offset = clamp_val(geo->dst.y_offset, 0,
+ geo->dst.full_height - 1);
+
+	/* the mixer's scale-up is not useful, so it is not used */
+ geo->dst.width = clamp_val(geo->src.width, 1,
+ geo->dst.full_width - geo->dst.x_offset);
+ geo->dst.height = clamp_val(geo->src.height, 1,
+ geo->dst.full_height - geo->dst.y_offset);
+}
+
+/* PUBLIC API */
+
+struct mxr_layer *mxr_graph_layer_create(struct mxr_device *mdev, int cur_mxr,
+ int idx, int nr)
+{
+ struct mxr_layer *layer;
+ int ret;
+ struct mxr_layer_ops ops = {
+ .release = mxr_graph_layer_release,
+ .buffer_set = mxr_graph_buffer_set,
+ .stream_set = mxr_graph_stream_set,
+ .format_set = mxr_graph_format_set,
+ .fix_geometry = mxr_graph_fix_geometry,
+ };
+ char name[32];
+
+ sprintf(name, "mxr%d_graph%d", cur_mxr, idx);
+
+ layer = mxr_base_layer_create(mdev, idx, name, &ops);
+ if (layer == NULL) {
+ mxr_err(mdev, "failed to initialize layer(%d) base\n", idx);
+ goto fail;
+ }
+
+ layer->fmt_array = mxr_graph_format;
+ layer->fmt_array_size = ARRAY_SIZE(mxr_graph_format);
+ layer->minor = nr;
+ layer->type = MXR_LAYER_TYPE_GRP;
+
+ ret = mxr_base_layer_register(layer);
+ if (ret)
+ goto fail_layer;
+
+ layer->cur_mxr = cur_mxr;
+ return layer;
+
+fail_layer:
+ mxr_base_layer_release(layer);
+
+fail:
+ return NULL;
+}
--- /dev/null
+/*
+ * Samsung TV Mixer driver
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ *
+ * Tomasz Stanislawski, <t.stanislaws@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation, either version 2 of the License,
+ * or (at your option) any later version.
+ */
+
+#include "mixer.h"
+#include "regs-mixer.h"
+#include "regs-vp.h"
+
+#include <linux/delay.h>
+
+/* Register access subroutines */
+static inline u32 vp_read(struct mxr_device *mdev, u32 reg_id)
+{
+#if defined(CONFIG_ARCH_EXYNOS4)
+ return readl(mdev->res.vp_regs + reg_id);
+#else
+ return 0;
+#endif
+}
+
+static inline void vp_write(struct mxr_device *mdev, u32 reg_id, u32 val)
+{
+#if defined(CONFIG_ARCH_EXYNOS4)
+ writel(val, mdev->res.vp_regs + reg_id);
+#endif
+}
+
+static inline void vp_write_mask(struct mxr_device *mdev, u32 reg_id,
+ u32 val, u32 mask)
+{
+#if defined(CONFIG_ARCH_EXYNOS4)
+ u32 old = vp_read(mdev, reg_id);
+
+ val = (val & mask) | (old & ~mask);
+ writel(val, mdev->res.vp_regs + reg_id);
+#endif
+}
+
+static inline u32 mxr_read(struct mxr_device *mdev, u32 reg_id)
+{
+ return readl(mdev->res.mxr_regs + reg_id);
+}
+
+static inline void mxr_write(struct mxr_device *mdev, u32 reg_id, u32 val)
+{
+ if (reg_id == MXR_GRAPHIC_BASE(0)) {
+ if (val != 0)
+ writel(val, mdev->res.mxr_regs + reg_id);
+ } else
+ writel(val, mdev->res.mxr_regs + reg_id);
+}
+
+static inline void mxr_write_mask(struct mxr_device *mdev, u32 reg_id,
+ u32 val, u32 mask)
+{
+ u32 old = mxr_read(mdev, reg_id);
+
+ val = (val & mask) | (old & ~mask);
+ writel(val, mdev->res.mxr_regs + reg_id);
+}
+
+void mxr_layer_sync(struct mxr_device *mdev, int en)
+{
+ mxr_write_mask(mdev, MXR_STATUS, en ? MXR_STATUS_LAYER_SYNC : 0,
+ MXR_STATUS_LAYER_SYNC);
+}
+
+void mxr_vsync_set_update(struct mxr_device *mdev, int en)
+{
+ /* block update on vsync */
+ mxr_write_mask(mdev, MXR_STATUS, en ? MXR_STATUS_SYNC_ENABLE : 0,
+ MXR_STATUS_SYNC_ENABLE);
+#if defined(CONFIG_ARCH_EXYNOS4)
+ vp_write(mdev, VP_SHADOW_UPDATE, en ? VP_SHADOW_UPDATE_ENABLE : 0);
+#endif
+}
+
+static void __mxr_reg_vp_reset(struct mxr_device *mdev)
+{
+#if defined(CONFIG_ARCH_EXYNOS4)
+	int tries;
+
+	vp_write(mdev, VP_SRESET, VP_SRESET_PROCESSING);
+	for (tries = 100; tries; --tries) {
+ /* waiting until VP_SRESET_PROCESSING is 0 */
+ if (~vp_read(mdev, VP_SRESET) & VP_SRESET_PROCESSING)
+ break;
+ mdelay(10);
+ }
+ WARN(tries == 0, "failed to reset Video Processor\n");
+#endif
+}
+
+static void mxr_reg_sub_mxr_reset(struct mxr_device *mdev, int mxr_num)
+{
+ u32 val; /* value stored to register */
+
+ if (mxr_num == MXR_SUB_MIXER0) {
+		/* set the default layer priority: layer1 > layer0 > video,
+		 * because in the typical usage scenario
+		 * layer0,1 - UI overlay
+		 * video - video playback
+		 */
+ val = MXR_LAYER_CFG_GRP1_VAL(3);
+ val |= MXR_LAYER_CFG_GRP0_VAL(2);
+ val |= MXR_LAYER_CFG_VP_VAL(1);
+ mxr_write(mdev, MXR_LAYER_CFG, val);
+
+ /* use dark gray background color */
+ mxr_write(mdev, MXR_BG_COLOR0, 0x008080);
+ mxr_write(mdev, MXR_BG_COLOR1, 0x008080);
+ mxr_write(mdev, MXR_BG_COLOR2, 0x008080);
+
+ /* setting graphical layers */
+
+ val = MXR_GRP_CFG_BLANK_KEY_OFF; /* no blank key */
+ val |= MXR_GRP_CFG_ALPHA_VAL(0xff); /* non-transparent alpha */
+
+ /* the same configuration for both layers */
+ mxr_write(mdev, MXR_GRAPHIC_CFG(0), val);
+ mxr_write(mdev, MXR_GRAPHIC_CFG(1), val);
+ } else if (mxr_num == MXR_SUB_MIXER1) {
+ val = MXR_LAYER_CFG_GRP1_VAL(3);
+ val |= MXR_LAYER_CFG_GRP0_VAL(2);
+ val |= MXR_LAYER_CFG_VP_VAL(1);
+ mxr_write(mdev, MXR1_LAYER_CFG, val);
+
+ /* use dark gray background color */
+ mxr_write(mdev, MXR1_BG_COLOR0, 0x008080);
+ mxr_write(mdev, MXR1_BG_COLOR1, 0x008080);
+ mxr_write(mdev, MXR1_BG_COLOR2, 0x008080);
+
+ /* setting graphical layers */
+
+ val = MXR_GRP_CFG_BLANK_KEY_OFF; /* no blank key */
+ val |= MXR_GRP_CFG_ALPHA_VAL(0xff); /* non-transparent alpha */
+
+ /* the same configuration for both layers */
+ mxr_write(mdev, MXR1_GRAPHIC_CFG(0), val);
+ mxr_write(mdev, MXR1_GRAPHIC_CFG(1), val);
+ }
+}
+
+static void mxr_reg_vp_default_filter(struct mxr_device *mdev);
+
+void mxr_reg_reset(struct mxr_device *mdev)
+{
+ int i;
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+
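+	/* soft-reset the mixer and hold off shadow updates while the
+	 * default configuration is applied */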
+ mxr_write_mask(mdev, MXR_STATUS, ~0, MXR_STATUS_SOFT_RESET);
+ mxr_vsync_set_update(mdev, MXR_DISABLE);
+
+ /* set output in RGB888 mode */
+ mxr_write(mdev, MXR_CFG, MXR_CFG_OUT_RGB888);
+
+ /* 16 beat burst in DMA */
+ mxr_write_mask(mdev, MXR_STATUS, MXR_STATUS_16_BURST,
+ MXR_STATUS_BURST_MASK);
+
+ for (i = 0; i < MXR_MAX_SUB_MIXERS; ++i)
+ mxr_reg_sub_mxr_reset(mdev, i);
+
+ /* configuration of Video Processor Registers */
+ __mxr_reg_vp_reset(mdev);
+ mxr_reg_vp_default_filter(mdev);
+
+ /* enable all interrupts */
+ mxr_write_mask(mdev, MXR_INT_EN, ~0, MXR_INT_EN_ALL);
+
+ mxr_vsync_set_update(mdev, MXR_ENABLE);
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+}
+
+void mxr_reg_graph_format(struct mxr_device *mdev, int idx,
+ const struct mxr_format *fmt, const struct mxr_geometry *geo)
+{
+ u32 wh, sxy, dxy;
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ mxr_vsync_set_update(mdev, MXR_DISABLE);
+
+	/* The source and destination width/height are usually the same, but
+	 * they differ when source or destination cropping is used, so the
+	 * destination width and height must be written to the
+	 * MXR_GRAPHIC_WH register.
+	 */
+ wh = MXR_GRP_WH_WIDTH(geo->dst.width);
+ wh |= MXR_GRP_WH_HEIGHT(geo->dst.height);
+ wh |= MXR_GRP_WH_H_SCALE(geo->x_ratio);
+ wh |= MXR_GRP_WH_V_SCALE(geo->y_ratio);
+
+ /* setup offsets in source image */
+ sxy = MXR_GRP_SXY_SX(geo->src.x_offset);
+ sxy |= MXR_GRP_SXY_SY(geo->src.y_offset);
+
+ /* setup offsets in display image */
+ dxy = MXR_GRP_DXY_DX(geo->dst.x_offset);
+ dxy |= MXR_GRP_DXY_DY(geo->dst.y_offset);
+
+ if (idx == 0) {
+ mxr_write_mask(mdev, MXR_GRAPHIC_CFG(0),
+ MXR_GRP_CFG_FORMAT_VAL(fmt->cookie),
+ MXR_GRP_CFG_FORMAT_MASK);
+ mxr_write(mdev, MXR_GRAPHIC_SPAN(0), geo->src.full_width);
+ mxr_write(mdev, MXR_GRAPHIC_WH(0), wh);
+ mxr_write(mdev, MXR_GRAPHIC_SXY(0), sxy);
+ mxr_write(mdev, MXR_GRAPHIC_DXY(0), dxy);
+ } else if (idx == 1) {
+ mxr_write_mask(mdev, MXR_GRAPHIC_CFG(1),
+ MXR_GRP_CFG_FORMAT_VAL(fmt->cookie),
+ MXR_GRP_CFG_FORMAT_MASK);
+ mxr_write(mdev, MXR_GRAPHIC_SPAN(1), geo->src.full_width);
+ mxr_write(mdev, MXR_GRAPHIC_WH(1), wh);
+ mxr_write(mdev, MXR_GRAPHIC_SXY(1), sxy);
+ mxr_write(mdev, MXR_GRAPHIC_DXY(1), dxy);
+ } else if (idx == 2) {
+ mxr_write_mask(mdev, MXR1_GRAPHIC_CFG(0),
+ MXR_GRP_CFG_FORMAT_VAL(fmt->cookie),
+ MXR_GRP_CFG_FORMAT_MASK);
+ mxr_write(mdev, MXR1_GRAPHIC_SPAN(0), geo->src.full_width);
+ mxr_write(mdev, MXR1_GRAPHIC_WH(0), wh);
+ mxr_write(mdev, MXR1_GRAPHIC_SXY(0), sxy);
+ mxr_write(mdev, MXR1_GRAPHIC_DXY(0), dxy);
+ } else if (idx == 3) {
+ mxr_write_mask(mdev, MXR1_GRAPHIC_CFG(1),
+ MXR_GRP_CFG_FORMAT_VAL(fmt->cookie),
+ MXR_GRP_CFG_FORMAT_MASK);
+ mxr_write(mdev, MXR1_GRAPHIC_SPAN(1), geo->src.full_width);
+ mxr_write(mdev, MXR1_GRAPHIC_WH(1), wh);
+ mxr_write(mdev, MXR1_GRAPHIC_SXY(1), sxy);
+ mxr_write(mdev, MXR1_GRAPHIC_DXY(1), dxy);
+ }
+
+ mxr_vsync_set_update(mdev, MXR_ENABLE);
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+}
+
+void mxr_reg_video_geo(struct mxr_device *mdev, int cur_mxr, int idx,
+ const struct mxr_geometry *geo)
+{
+ u32 lt, rb;
+ unsigned long flags;
+
+ mxr_dbg(mdev, "%s\n", __func__);
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ mxr_vsync_set_update(mdev, MXR_DISABLE);
+
+ lt = MXR_VIDEO_LT_LEFT_VAL(geo->dst.x_offset);
+ lt |= MXR_VIDEO_LT_TOP_VAL(geo->dst.y_offset);
+ rb = MXR_VIDEO_RB_RIGHT_VAL(geo->dst.x_offset + geo->dst.width - 1);
+ rb |= MXR_VIDEO_RB_BOTTOM_VAL(geo->dst.y_offset + geo->dst.height - 1);
+
+ if (cur_mxr == MXR_SUB_MIXER0) {
+ mxr_write(mdev, MXR_VIDEO_LT, lt);
+ mxr_write(mdev, MXR_VIDEO_RB, rb);
+ } else if (cur_mxr == MXR_SUB_MIXER1) {
+ mxr_write(mdev, MXR1_VIDEO_LT, lt);
+ mxr_write(mdev, MXR1_VIDEO_RB, rb);
+ }
+
+ mxr_vsync_set_update(mdev, MXR_ENABLE);
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+
+ mxr_dbg(mdev, "destination x = %d, y = %d, width = %d, height = %d\n",
+ geo->dst.x_offset, geo->dst.y_offset,
+ geo->dst.width, geo->dst.height);
+}
+
+void mxr_reg_vp_format(struct mxr_device *mdev,
+ const struct mxr_format *fmt, const struct mxr_geometry *geo)
+{
+#if defined(CONFIG_ARCH_EXYNOS4)
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ mxr_vsync_set_update(mdev, MXR_DISABLE);
+
+ vp_write_mask(mdev, VP_MODE, fmt->cookie, VP_MODE_FMT_MASK);
+
+ /* setting size of input image */
+ vp_write(mdev, VP_IMG_SIZE_Y, VP_IMG_HSIZE(geo->src.full_width) |
+ VP_IMG_VSIZE(geo->src.full_height));
+	/* the chroma height has to be halved to avoid chroma distortions */
+ vp_write(mdev, VP_IMG_SIZE_C, VP_IMG_HSIZE(geo->src.full_width) |
+ VP_IMG_VSIZE(geo->src.full_height / 2));
+
+ vp_write(mdev, VP_SRC_WIDTH, geo->src.width);
+ vp_write(mdev, VP_SRC_HEIGHT, geo->src.height);
+ vp_write(mdev, VP_SRC_H_POSITION,
+ VP_SRC_H_POSITION_VAL(geo->src.x_offset));
+ vp_write(mdev, VP_SRC_V_POSITION, geo->src.y_offset);
+
+ vp_write(mdev, VP_DST_WIDTH, geo->dst.width);
+ vp_write(mdev, VP_DST_H_POSITION, geo->dst.x_offset);
+ if (geo->dst.field == V4L2_FIELD_INTERLACED) {
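+		/* each field carries half the destination height in
+		 * interlaced mode */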
+ vp_write(mdev, VP_DST_HEIGHT, geo->dst.height / 2);
+ vp_write(mdev, VP_DST_V_POSITION, geo->dst.y_offset / 2);
+ } else {
+ vp_write(mdev, VP_DST_HEIGHT, geo->dst.height);
+ vp_write(mdev, VP_DST_V_POSITION, geo->dst.y_offset);
+ }
+
+ vp_write(mdev, VP_H_RATIO, geo->x_ratio);
+ vp_write(mdev, VP_V_RATIO, geo->y_ratio);
+
+ vp_write(mdev, VP_ENDIAN_MODE, VP_ENDIAN_MODE_LITTLE);
+
+ mxr_vsync_set_update(mdev, MXR_ENABLE);
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+#endif
+}
+
+void mxr_reg_graph_buffer(struct mxr_device *mdev, int idx, dma_addr_t addr)
+{
+ u32 val = addr ? ~0 : 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ mxr_vsync_set_update(mdev, MXR_DISABLE);
+
+ if (idx == 0) {
+		mxr_write_mask(mdev, MXR_CFG, val, MXR_CFG_GRP0_ENABLE);
+ mxr_write(mdev, MXR_GRAPHIC_BASE(0), addr);
+ } else if (idx == 1) {
+ mxr_write_mask(mdev, MXR_CFG, val, MXR_CFG_GRP1_ENABLE);
+ mxr_write(mdev, MXR_GRAPHIC_BASE(1), addr);
+ } else if (idx == 2) {
+ mxr_write_mask(mdev, MXR_CFG, val, MXR_CFG_MX1_GRP0_ENABLE);
+ mxr_write(mdev, MXR1_GRAPHIC_BASE(0), addr);
+ } else if (idx == 3) {
+ mxr_write_mask(mdev, MXR_CFG, val, MXR_CFG_MX1_GRP1_ENABLE);
+ mxr_write(mdev, MXR1_GRAPHIC_BASE(1), addr);
+ }
+
+ mxr_vsync_set_update(mdev, MXR_ENABLE);
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+}
+
+void mxr_reg_vp_buffer(struct mxr_device *mdev,
+ dma_addr_t luma_addr[2], dma_addr_t chroma_addr[2])
+{
+#if defined(CONFIG_ARCH_EXYNOS4)
+ u32 val = luma_addr[0] ? ~0 : 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ mxr_vsync_set_update(mdev, MXR_DISABLE);
+
+ mxr_write_mask(mdev, MXR_CFG, val, MXR_CFG_VIDEO_ENABLE);
+ vp_write_mask(mdev, VP_ENABLE, val, VP_ENABLE_ON);
+ /* TODO: fix tiled mode */
+ vp_write(mdev, VP_TOP_Y_PTR, luma_addr[0]);
+ vp_write(mdev, VP_TOP_C_PTR, chroma_addr[0]);
+ vp_write(mdev, VP_BOT_Y_PTR, luma_addr[1]);
+ vp_write(mdev, VP_BOT_C_PTR, chroma_addr[1]);
+
+ mxr_vsync_set_update(mdev, MXR_ENABLE);
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+#endif
+}
+
+void mxr_reg_set_layer_blend(struct mxr_device *mdev, int sub_mxr, int num,
+ int en)
+{
+ u32 val = en ? ~0 : 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ mxr_vsync_set_update(mdev, MXR_DISABLE);
+
+ if (sub_mxr == MXR_SUB_MIXER0 && num == MXR_LAYER_VIDEO)
+ mxr_write_mask(mdev, MXR_VIDEO_CFG, val,
+ MXR_VIDEO_CFG_BLEND_EN);
+ else if (sub_mxr == MXR_SUB_MIXER0 && num == MXR_LAYER_GRP0)
+ mxr_write_mask(mdev, MXR_GRAPHIC_CFG(0), val,
+ MXR_GRP_CFG_LAYER_BLEND_EN);
+ else if (sub_mxr == MXR_SUB_MIXER0 && num == MXR_LAYER_GRP1)
+ mxr_write_mask(mdev, MXR_GRAPHIC_CFG(1), val,
+ MXR_GRP_CFG_LAYER_BLEND_EN);
+#if defined(CONFIG_ARCH_EXYNOS5)
+ else if (sub_mxr == MXR_SUB_MIXER1 && num == MXR_LAYER_VIDEO)
+ mxr_write_mask(mdev, MXR1_VIDEO_CFG, val,
+ MXR_VIDEO_CFG_BLEND_EN);
+ else if (sub_mxr == MXR_SUB_MIXER1 && num == MXR_LAYER_GRP0)
+ mxr_write_mask(mdev, MXR1_GRAPHIC_CFG(0), val,
+ MXR_GRP_CFG_LAYER_BLEND_EN);
+ else if (sub_mxr == MXR_SUB_MIXER1 && num == MXR_LAYER_GRP1)
+ mxr_write_mask(mdev, MXR1_GRAPHIC_CFG(1), val,
+ MXR_GRP_CFG_LAYER_BLEND_EN);
+#endif
+
+ mxr_vsync_set_update(mdev, MXR_ENABLE);
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+}
+
+void mxr_reg_layer_alpha(struct mxr_device *mdev, int sub_mxr, int num, u32 a)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ mxr_vsync_set_update(mdev, MXR_DISABLE);
+
+ if (sub_mxr == MXR_SUB_MIXER0 && num == MXR_LAYER_VIDEO)
+ mxr_write_mask(mdev, MXR_VIDEO_CFG, MXR_VIDEO_CFG_ALPHA(a),
+ 0xff);
+ else if (sub_mxr == MXR_SUB_MIXER0 && num == MXR_LAYER_GRP0)
+ mxr_write_mask(mdev, MXR_GRAPHIC_CFG(0), MXR_GRP_CFG_ALPHA_VAL(a),
+ 0xff);
+ else if (sub_mxr == MXR_SUB_MIXER0 && num == MXR_LAYER_GRP1)
+ mxr_write_mask(mdev, MXR_GRAPHIC_CFG(1), MXR_GRP_CFG_ALPHA_VAL(a),
+ 0xff);
+#if defined(CONFIG_ARCH_EXYNOS5)
+ else if (sub_mxr == MXR_SUB_MIXER1 && num == MXR_LAYER_VIDEO)
+ mxr_write_mask(mdev, MXR1_VIDEO_CFG, MXR_VIDEO_CFG_ALPHA(a),
+ 0xff);
+ else if (sub_mxr == MXR_SUB_MIXER1 && num == MXR_LAYER_GRP0)
+ mxr_write_mask(mdev, MXR1_GRAPHIC_CFG(0), MXR_GRP_CFG_ALPHA_VAL(a),
+ 0xff);
+ else if (sub_mxr == MXR_SUB_MIXER1 && num == MXR_LAYER_GRP1)
+ mxr_write_mask(mdev, MXR1_GRAPHIC_CFG(1), MXR_GRP_CFG_ALPHA_VAL(a),
+ 0xff);
+#endif
+
+ mxr_vsync_set_update(mdev, MXR_ENABLE);
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+}
+
+void mxr_reg_set_pixel_blend(struct mxr_device *mdev, int sub_mxr, int num,
+ int en)
+{
+ u32 val = en ? ~0 : 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ mxr_vsync_set_update(mdev, MXR_DISABLE);
+
+ if (sub_mxr == MXR_SUB_MIXER0 && num == MXR_LAYER_GRP0)
+ mxr_write_mask(mdev, MXR_GRAPHIC_CFG(0), val,
+ MXR_GRP_CFG_PIXEL_BLEND_EN);
+ else if (sub_mxr == MXR_SUB_MIXER0 && num == MXR_LAYER_GRP1)
+ mxr_write_mask(mdev, MXR_GRAPHIC_CFG(1), val,
+ MXR_GRP_CFG_PIXEL_BLEND_EN);
+#if defined(CONFIG_ARCH_EXYNOS5)
+ else if (sub_mxr == MXR_SUB_MIXER1 && num == MXR_LAYER_GRP0)
+ mxr_write_mask(mdev, MXR1_GRAPHIC_CFG(0), val,
+ MXR_GRP_CFG_PIXEL_BLEND_EN);
+ else if (sub_mxr == MXR_SUB_MIXER1 && num == MXR_LAYER_GRP1)
+ mxr_write_mask(mdev, MXR1_GRAPHIC_CFG(1), val,
+ MXR_GRP_CFG_PIXEL_BLEND_EN);
+#endif
+
+ mxr_vsync_set_update(mdev, MXR_ENABLE);
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+}
+
+void mxr_reg_set_colorkey(struct mxr_device *mdev, int sub_mxr, int num, int en)
+{
+ u32 val = en ? ~0 : 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ mxr_vsync_set_update(mdev, MXR_DISABLE);
+
+ if (sub_mxr == MXR_SUB_MIXER0 && num == MXR_LAYER_GRP0)
+ mxr_write_mask(mdev, MXR_GRAPHIC_CFG(0), val,
+ MXR_GRP_CFG_BLANK_KEY_OFF);
+ else if (sub_mxr == MXR_SUB_MIXER0 && num == MXR_LAYER_GRP1)
+ mxr_write_mask(mdev, MXR_GRAPHIC_CFG(1), val,
+ MXR_GRP_CFG_BLANK_KEY_OFF);
+#if defined(CONFIG_ARCH_EXYNOS5)
+ else if (sub_mxr == MXR_SUB_MIXER1 && num == MXR_LAYER_GRP0)
+ mxr_write_mask(mdev, MXR1_GRAPHIC_CFG(0), val,
+ MXR_GRP_CFG_BLANK_KEY_OFF);
+ else if (sub_mxr == MXR_SUB_MIXER1 && num == MXR_LAYER_GRP1)
+ mxr_write_mask(mdev, MXR1_GRAPHIC_CFG(1), val,
+ MXR_GRP_CFG_BLANK_KEY_OFF);
+#endif
+
+ mxr_vsync_set_update(mdev, MXR_ENABLE);
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+}
+
+void mxr_reg_colorkey_val(struct mxr_device *mdev, int sub_mxr, int num, u32 v)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ mxr_vsync_set_update(mdev, MXR_DISABLE);
+
+ if (sub_mxr == MXR_SUB_MIXER0 && num == MXR_LAYER_GRP0)
+ mxr_write(mdev, MXR_GRAPHIC_BLANK(0), v);
+ else if (sub_mxr == MXR_SUB_MIXER0 && num == MXR_LAYER_GRP1)
+ mxr_write(mdev, MXR_GRAPHIC_BLANK(1), v);
+#if defined(CONFIG_ARCH_EXYNOS5)
+ else if (sub_mxr == MXR_SUB_MIXER1 && num == MXR_LAYER_GRP0)
+ mxr_write(mdev, MXR1_GRAPHIC_BLANK(0), v);
+ else if (sub_mxr == MXR_SUB_MIXER1 && num == MXR_LAYER_GRP1)
+ mxr_write(mdev, MXR1_GRAPHIC_BLANK(1), v);
+#endif
+
+ mxr_vsync_set_update(mdev, MXR_ENABLE);
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+}
+
+static void mxr_irq_layer_handle(struct mxr_layer *layer)
+{
+	struct list_head *head;
+	struct tv_graph_pipeline *pipe;
+	struct mxr_buffer *done;
+
+	/* skip a non-existing layer; check before taking the addresses
+	 * of its members */
+	if (layer == NULL)
+		return;
+
+	head = &layer->enq_list;
+	pipe = &layer->pipe;
+
+ spin_lock(&layer->enq_slock);
+ if (pipe->state == TV_GRAPH_PIPELINE_IDLE)
+ goto done;
+
+ done = layer->shadow_buf;
+ layer->shadow_buf = layer->update_buf;
+
+ if (list_empty(head)) {
+ if (pipe->state != TV_GRAPH_PIPELINE_STREAMING)
+ layer->update_buf = NULL;
+ } else {
+ struct mxr_buffer *next;
+ next = list_first_entry(head, struct mxr_buffer, list);
+ list_del(&next->list);
+ layer->update_buf = next;
+ }
+
+ layer->ops.buffer_set(layer, layer->update_buf);
+
+ if (done && done != layer->shadow_buf)
+ vb2_buffer_done(&done->vb, VB2_BUF_STATE_DONE);
+
+done:
+ spin_unlock(&layer->enq_slock);
+}
+
+u32 mxr_irq_underrun_handle(struct mxr_device *mdev, u32 val)
+{
+	if (val & MXR_INT_STATUS_MX0_VIDEO) {
+		mxr_warn(mdev, "mixer0 video layer underrun occurred\n");
+		val |= MXR_INT_STATUS_MX0_VIDEO;
+	} else if (val & MXR_INT_STATUS_MX0_GRP0) {
+		mxr_warn(mdev, "mixer0 graphic0 layer underrun occurred\n");
+		val |= MXR_INT_STATUS_MX0_GRP0;
+	} else if (val & MXR_INT_STATUS_MX0_GRP1) {
+		mxr_warn(mdev, "mixer0 graphic1 layer underrun occurred\n");
+		val |= MXR_INT_STATUS_MX0_GRP1;
+	} else if (val & MXR_INT_STATUS_MX1_VIDEO) {
+		mxr_warn(mdev, "mixer1 video layer underrun occurred\n");
+		val |= MXR_INT_STATUS_MX1_VIDEO;
+	} else if (val & MXR_INT_STATUS_MX1_GRP0) {
+		mxr_warn(mdev, "mixer1 graphic0 layer underrun occurred\n");
+		val |= MXR_INT_STATUS_MX1_GRP0;
+	} else if (val & MXR_INT_STATUS_MX1_GRP1) {
+		mxr_warn(mdev, "mixer1 graphic1 layer underrun occurred\n");
+		val |= MXR_INT_STATUS_MX1_GRP1;
+	}
+
+ return val;
+}
+
+irqreturn_t mxr_irq_handler(int irq, void *dev_data)
+{
+ struct mxr_device *mdev = dev_data;
+ u32 i, val;
+
+ spin_lock(&mdev->reg_slock);
+ val = mxr_read(mdev, MXR_INT_STATUS);
+
+ /* wake up process waiting for VSYNC */
+ if (val & MXR_INT_STATUS_VSYNC) {
+ set_bit(MXR_EVENT_VSYNC, &mdev->event_flags);
+ wake_up(&mdev->event_queue);
+ }
+
+	/* clear the interrupts; the vsync status is updated after the
+	 * MXR_CFG_LAYER_UPDATE bit is written */
+ if (val & MXR_INT_CLEAR_VSYNC)
+ mxr_write_mask(mdev, MXR_INT_STATUS, ~0, MXR_INT_CLEAR_VSYNC);
+
+ val = mxr_irq_underrun_handle(mdev, val);
+ mxr_write(mdev, MXR_INT_STATUS, val);
+
+ spin_unlock(&mdev->reg_slock);
+ /* leave on non-vsync event */
+ if (~val & MXR_INT_CLEAR_VSYNC)
+ return IRQ_HANDLED;
+
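+	/* advance the shadow/update buffers of every layer on vsync */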
+ for (i = 0; i < MXR_MAX_SUB_MIXERS; ++i) {
+#if defined(CONFIG_ARCH_EXYNOS4)
+ mxr_irq_layer_handle(mdev->sub_mxr[i].layer[MXR_LAYER_VIDEO]);
+#endif
+ mxr_irq_layer_handle(mdev->sub_mxr[i].layer[MXR_LAYER_GRP0]);
+ mxr_irq_layer_handle(mdev->sub_mxr[i].layer[MXR_LAYER_GRP1]);
+ }
+
+ if (test_bit(MXR_EVENT_VSYNC, &mdev->event_flags)) {
+ spin_lock(&mdev->reg_slock);
+ mxr_write_mask(mdev, MXR_CFG, ~0, MXR_CFG_LAYER_UPDATE);
+ spin_unlock(&mdev->reg_slock);
+ }
+
+ return IRQ_HANDLED;
+}
+
+void mxr_reg_s_output(struct mxr_device *mdev, int cookie)
+{
+ u32 val;
+
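+	/* cookie 0 selects the SDO output path, any other value HDMI */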
+ val = cookie == 0 ? MXR_CFG_DST_SDO : MXR_CFG_DST_HDMI;
+ mxr_write_mask(mdev, MXR_CFG, val, MXR_CFG_DST_MASK);
+}
+
+void mxr_reg_streamon(struct mxr_device *mdev)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ /* single write -> no need to block vsync update */
+
+ /* start MIXER */
+ mxr_write_mask(mdev, MXR_STATUS, ~0, MXR_STATUS_REG_RUN);
+
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+}
+
+void mxr_reg_streamoff(struct mxr_device *mdev)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ /* single write -> no need to block vsync update */
+
+ /* stop MIXER */
+ mxr_write_mask(mdev, MXR_STATUS, 0, MXR_STATUS_REG_RUN);
+
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+}
+
+int mxr_reg_wait4vsync(struct mxr_device *mdev)
+{
+ int ret;
+
+ clear_bit(MXR_EVENT_VSYNC, &mdev->event_flags);
+ /* TODO: consider adding interruptible */
+ ret = wait_event_timeout(mdev->event_queue,
+ test_bit(MXR_EVENT_VSYNC, &mdev->event_flags),
+ msecs_to_jiffies(1000));
+ if (ret > 0)
+ return 0;
+ if (ret < 0)
+ return ret;
+ mxr_warn(mdev, "no vsync detected - timeout\n");
+ return -ETIME;
+}
+
+void mxr_reg_set_mbus_fmt(struct mxr_device *mdev,
+ struct v4l2_mbus_framefmt *fmt)
+{
+ u32 val = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ mxr_vsync_set_update(mdev, MXR_DISABLE);
+
+ /* choosing between YUV444 and RGB888 as mixer output type */
+ if (mdev->sub_mxr[MXR_SUB_MIXER0].mbus_fmt[MXR_PAD_SOURCE_GRP0].code ==
+ V4L2_MBUS_FMT_YUV8_1X24) {
+ val = MXR_CFG_OUT_YUV444;
+ fmt->code = V4L2_MBUS_FMT_YUV8_1X24;
+ } else {
+ val = MXR_CFG_OUT_RGB888;
+ fmt->code = V4L2_MBUS_FMT_XRGB8888_4X8_LE;
+ }
+
+ /* choosing between interlace and progressive mode */
+ if (fmt->field == V4L2_FIELD_INTERLACED)
+ val |= MXR_CFG_SCAN_INTERLACE;
+ else
+ val |= MXR_CFG_SCAN_PROGRASSIVE;
+
+	/* choosing the proper HD or SD mode */
+ if (fmt->height == 480)
+ val |= MXR_CFG_SCAN_NTSC | MXR_CFG_SCAN_SD;
+ else if (fmt->height == 576)
+ val |= MXR_CFG_SCAN_PAL | MXR_CFG_SCAN_SD;
+ else if (fmt->height == 720)
+ val |= MXR_CFG_SCAN_HD_720 | MXR_CFG_SCAN_HD;
+ else if (fmt->height == 1080)
+ val |= MXR_CFG_SCAN_HD_1080 | MXR_CFG_SCAN_HD;
+ else
+ WARN(1, "unrecognized mbus height %u!\n", fmt->height);
+
+ mxr_write_mask(mdev, MXR_CFG, val, MXR_CFG_SCAN_MASK |
+ MXR_CFG_OUT_MASK);
+
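+	/* enable VP line skipping and field ID auto-toggling only for
+	 * interlaced output */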
+ val = (fmt->field == V4L2_FIELD_INTERLACED) ? ~0 : 0;
+ vp_write_mask(mdev, VP_MODE, val,
+ VP_MODE_LINE_SKIP | VP_MODE_FIELD_ID_AUTO_TOGGLING);
+
+ mxr_vsync_set_update(mdev, MXR_ENABLE);
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+}
+
+void mxr_reg_local_path_clear(struct mxr_device *mdev)
+{
+ u32 val;
+
+ val = readl(SYSREG_DISP1BLK_CFG);
+ val &= ~(DISP1BLK_CFG_MIXER0_VALID | DISP1BLK_CFG_MIXER1_VALID);
+ writel(val, SYSREG_DISP1BLK_CFG);
+ mxr_dbg(mdev, "SYSREG_DISP1BLK_CFG = 0x%x\n", readl(SYSREG_DISP1BLK_CFG));
+}
+
+void mxr_reg_local_path_set(struct mxr_device *mdev, int mxr0_gsc, int mxr1_gsc,
+ u32 flags)
+{
+ u32 val = 0;
+ int mxr0_local = mdev->sub_mxr[MXR_SUB_MIXER0].local;
+ int mxr1_local = mdev->sub_mxr[MXR_SUB_MIXER1].local;
+
+ if (mxr0_local && !mxr1_local) { /* 1-path : sub-mixer0 */
+ val = MXR_TVOUT_CFG_ONE_PATH;
+ val |= MXR_TVOUT_CFG_PATH_MIXER0;
+ } else if (!mxr0_local && mxr1_local) { /* 1-path : sub-mixer1 */
+ val = MXR_TVOUT_CFG_ONE_PATH;
+ val |= MXR_TVOUT_CFG_PATH_MIXER1;
+ } else if (mxr0_local && mxr1_local) { /* 2-path */
+ val = MXR_TVOUT_CFG_TWO_PATH;
+ val |= MXR_TVOUT_CFG_STEREO_SCOPIC;
+ }
+
+ mxr_write(mdev, MXR_TVOUT_CFG, val);
+
+ /* set local path gscaler to mixer */
+ val = readl(SYSREG_DISP1BLK_CFG);
+ val |= DISP1BLK_CFG_FIFORST_DISP1;
+ val &= ~DISP1BLK_CFG_MIXER_MASK;
+ if (flags & MEDIA_LNK_FL_ENABLED) {
+ if (mxr0_local) {
+ val |= DISP1BLK_CFG_MIXER0_VALID;
+ val |= DISP1BLK_CFG_MIXER0_SRC_GSC(mxr0_gsc);
+ }
+ if (mxr1_local) {
+ val |= DISP1BLK_CFG_MIXER1_VALID;
+ val |= DISP1BLK_CFG_MIXER1_SRC_GSC(mxr1_gsc);
+ }
+ }
+ mxr_dbg(mdev, "%s: SYSREG value = 0x%x\n", __func__, val);
+ writel(val, SYSREG_DISP1BLK_CFG);
+}
+
+void mxr_reg_graph_layer_stream(struct mxr_device *mdev, int idx, int en)
+{
+ u32 val = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&mdev->reg_slock, flags);
+ mxr_vsync_set_update(mdev, MXR_DISABLE);
+
+ if (mdev->frame_packing) {
+ val = MXR_TVOUT_CFG_TWO_PATH;
+ val |= MXR_TVOUT_CFG_STEREO_SCOPIC;
+ } else {
+ val = MXR_TVOUT_CFG_ONE_PATH;
+ val |= MXR_TVOUT_CFG_PATH_MIXER0;
+ }
+
+ mxr_write(mdev, MXR_TVOUT_CFG, val);
+
+ mxr_vsync_set_update(mdev, MXR_ENABLE);
+ spin_unlock_irqrestore(&mdev->reg_slock, flags);
+}
+
+void mxr_reg_vp_layer_stream(struct mxr_device *mdev, int en)
+{
+ /* no extra actions need to be done */
+}
+
+void mxr_reg_video_layer_stream(struct mxr_device *mdev, int idx, int en)
+{
+ u32 val = en ? ~0 : 0;
+
+ if (idx == 0)
+ mxr_write_mask(mdev, MXR_CFG, val, MXR_CFG_VIDEO_ENABLE);
+ else if (idx == 1)
+ mxr_write_mask(mdev, MXR_CFG, val, MXR_CFG_MX1_VIDEO_ENABLE);
+}
+
+static const u8 filter_y_horiz_tap8[] = {
+ 0, -1, -1, -1, -1, -1, -1, -1,
+ -1, -1, -1, -1, -1, 0, 0, 0,
+ 0, 2, 4, 5, 6, 6, 6, 6,
+ 6, 5, 5, 4, 3, 2, 1, 1,
+ 0, -6, -12, -16, -18, -20, -21, -20,
+ -20, -18, -16, -13, -10, -8, -5, -2,
+ 127, 126, 125, 121, 114, 107, 99, 89,
+ 79, 68, 57, 46, 35, 25, 16, 8,
+};
+
+static const u8 filter_y_vert_tap4[] = {
+ 0, -3, -6, -8, -8, -8, -8, -7,
+ -6, -5, -4, -3, -2, -1, -1, 0,
+ 127, 126, 124, 118, 111, 102, 92, 81,
+ 70, 59, 48, 37, 27, 19, 11, 5,
+ 0, 5, 11, 19, 27, 37, 48, 59,
+ 70, 81, 92, 102, 111, 118, 124, 126,
+ 0, 0, -1, -1, -2, -3, -4, -5,
+ -6, -7, -8, -8, -8, -8, -6, -3,
+};
+
+static const u8 filter_cr_horiz_tap4[] = {
+ 0, -3, -6, -8, -8, -8, -8, -7,
+ -6, -5, -4, -3, -2, -1, -1, 0,
+ 127, 126, 124, 118, 111, 102, 92, 81,
+ 70, 59, 48, 37, 27, 19, 11, 5,
+};
+
+static inline void mxr_reg_vp_filter_set(struct mxr_device *mdev,
+ int reg_id, const u8 *data, unsigned int size)
+{
+	/* ensure the size is 4-byte aligned */
+ BUG_ON(size & 3);
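+	/* pack four 8-bit coefficients into one 32-bit register word,
+	 * most significant byte first */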
+ for (; size; size -= 4, reg_id += 4, data += 4) {
+ u32 val = (data[0] << 24) | (data[1] << 16) |
+ (data[2] << 8) | data[3];
+ vp_write(mdev, reg_id, val);
+ }
+}
+
+static void mxr_reg_vp_default_filter(struct mxr_device *mdev)
+{
+#if defined(CONFIG_ARCH_EXYNOS4)
+ mxr_reg_vp_filter_set(mdev, VP_POLY8_Y0_LL,
+ filter_y_horiz_tap8, sizeof filter_y_horiz_tap8);
+ mxr_reg_vp_filter_set(mdev, VP_POLY4_Y0_LL,
+ filter_y_vert_tap4, sizeof filter_y_vert_tap4);
+ mxr_reg_vp_filter_set(mdev, VP_POLY4_C0_LL,
+ filter_cr_horiz_tap4, sizeof filter_cr_horiz_tap4);
+#endif
+}
+
+static void mxr_reg_mxr_dump(struct mxr_device *mdev)
+{
+#define DUMPREG(reg_id) \
+do { \
+ mxr_dbg(mdev, #reg_id " = %08x\n", \
+ (u32)readl(mdev->res.mxr_regs + reg_id)); \
+} while (0)
+
+ DUMPREG(MXR_STATUS);
+ DUMPREG(MXR_CFG);
+ DUMPREG(MXR_INT_EN);
+ DUMPREG(MXR_INT_STATUS);
+
+ DUMPREG(MXR_LAYER_CFG);
+ DUMPREG(MXR_VIDEO_CFG);
+
+ DUMPREG(MXR_GRAPHIC0_CFG);
+ DUMPREG(MXR_GRAPHIC0_BASE);
+ DUMPREG(MXR_GRAPHIC0_SPAN);
+ DUMPREG(MXR_GRAPHIC0_WH);
+ DUMPREG(MXR_GRAPHIC0_SXY);
+ DUMPREG(MXR_GRAPHIC0_DXY);
+
+ DUMPREG(MXR_GRAPHIC1_CFG);
+ DUMPREG(MXR_GRAPHIC1_BASE);
+ DUMPREG(MXR_GRAPHIC1_SPAN);
+ DUMPREG(MXR_GRAPHIC1_WH);
+ DUMPREG(MXR_GRAPHIC1_SXY);
+ DUMPREG(MXR_GRAPHIC1_DXY);
+#undef DUMPREG
+}
+
+static void mxr_reg_vp_dump(struct mxr_device *mdev)
+{
+#define DUMPREG(reg_id) \
+do { \
+ mxr_dbg(mdev, #reg_id " = %08x\n", \
+ (u32) readl(mdev->res.vp_regs + reg_id)); \
+} while (0)
+
+#if defined(CONFIG_ARCH_EXYNOS4)
+ DUMPREG(VP_ENABLE);
+ DUMPREG(VP_SRESET);
+ DUMPREG(VP_SHADOW_UPDATE);
+ DUMPREG(VP_FIELD_ID);
+ DUMPREG(VP_MODE);
+ DUMPREG(VP_IMG_SIZE_Y);
+ DUMPREG(VP_IMG_SIZE_C);
+ DUMPREG(VP_PER_RATE_CTRL);
+ DUMPREG(VP_TOP_Y_PTR);
+ DUMPREG(VP_BOT_Y_PTR);
+ DUMPREG(VP_TOP_C_PTR);
+ DUMPREG(VP_BOT_C_PTR);
+ DUMPREG(VP_ENDIAN_MODE);
+ DUMPREG(VP_SRC_H_POSITION);
+ DUMPREG(VP_SRC_V_POSITION);
+ DUMPREG(VP_SRC_WIDTH);
+ DUMPREG(VP_SRC_HEIGHT);
+ DUMPREG(VP_DST_H_POSITION);
+ DUMPREG(VP_DST_V_POSITION);
+ DUMPREG(VP_DST_WIDTH);
+ DUMPREG(VP_DST_HEIGHT);
+ DUMPREG(VP_H_RATIO);
+ DUMPREG(VP_V_RATIO);
+#endif
+
+#undef DUMPREG
+}
+
+void mxr_reg_dump(struct mxr_device *mdev)
+{
+ mxr_reg_mxr_dump(mdev);
+ mxr_reg_vp_dump(mdev);
+}
--- /dev/null
+/* linux/drivers/media/video/exynos/tv/mixer_vb2.c
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
+ *
+ * Videobuf2 allocator operations file
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+#include <linux/platform_device.h>
+#if defined(CONFIG_VIDEOBUF2_CMA_PHYS)
+#include <media/videobuf2-cma-phys.h>
+#elif defined(CONFIG_VIDEOBUF2_DMA_CONTIG)
+#include <media/videobuf2-dma-contig.h>
+#endif
+
+#include "mixer.h"
+
+#if defined(CONFIG_VIDEOBUF2_CMA_PHYS)
+void *mxr_cma_init(struct mxr_device *mdev)
+{
+ return vb2_cma_phys_init(mdev->dev, NULL, 0, false);
+}
+
+void mxr_cma_resume(void *alloc_ctx) {}
+void mxr_cma_suspend(void *alloc_ctx) {}
+void mxr_cma_set_cacheable(void *alloc_ctx, bool cacheable) {}
+
+int mxr_cma_cache_flush(void *alloc_ctx, struct vb2_buffer *vb, u32 plane_no)
+{
+ return 0;
+}
+
+const struct mxr_vb2 mxr_vb2_cma = {
+ .ops = &vb2_cma_phys_memops,
+ .init = mxr_cma_init,
+ .cleanup = vb2_cma_phys_cleanup,
+ .plane_addr = vb2_cma_phys_plane_paddr,
+ .resume = mxr_cma_resume,
+ .suspend = mxr_cma_suspend,
+ .cache_flush = mxr_cma_cache_flush,
+ .set_cacheable = mxr_cma_set_cacheable,
+};
+#elif defined(CONFIG_VIDEOBUF2_DMA_CONTIG)
+void *mxr_dma_contig_init(struct mxr_device *mdev)
+{
+ return vb2_dma_contig_init_ctx(mdev->dev);
+}
+
+const struct mxr_vb2 mxr_vb2_dma_contig = {
+ .ops = &vb2_dma_contig_memops,
+ .init = mxr_dma_contig_init,
+ .cleanup = vb2_dma_contig_cleanup_ctx,
+ .plane_addr = vb2_dma_contig_plane_dma_addr,
+};
+#endif
--- /dev/null
+/*
+ * Samsung TV Mixer driver
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ *
+ * Tomasz Stanislawski, <t.stanislaws@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License,
+ * or (at your option) any later version.
+ */
+#include "mixer.h"
+
+#include <linux/videodev2.h>
+#include <linux/mm.h>
+#include <linux/version.h>
+#include <linux/timer.h>
+#include <linux/export.h>
+
+#include <media/exynos_mc.h>
+#include <media/v4l2-ioctl.h>
+#include <media/videobuf2-fb.h>
+#if defined(CONFIG_VIDEOBUF2_CMA_PHYS)
+#include <media/videobuf2-cma-phys.h>
+#elif defined(CONFIG_VIDEOBUF2_ION)
+#include <media/videobuf2-ion.h>
+#else
+#include <media/videobuf2-dma-contig.h>
+#endif
+
+
+int __devinit mxr_acquire_video(struct mxr_device *mdev,
+ struct mxr_output_conf *output_conf, int output_count)
+{
+ int i;
+ int ret = 0;
+ struct v4l2_subdev *sd;
+
+ mdev->alloc_ctx = mdev->vb2->init(mdev);
+ if (IS_ERR_OR_NULL(mdev->alloc_ctx)) {
+ mxr_err(mdev, "could not acquire vb2 allocator\n");
+ ret = mdev->alloc_ctx ? PTR_ERR(mdev->alloc_ctx) : -ENOMEM;
+ goto fail;
+ }
+
+ /* registering outputs */
+ mdev->output_cnt = 0;
+ for (i = 0; i < output_count; ++i) {
+ struct mxr_output_conf *conf = &output_conf[i];
+ struct mxr_output *out;
+
+ /* find subdev of output devices */
+ sd = (struct v4l2_subdev *)
+ module_name_to_driver_data(conf->module_name);
+ /* trying to register next output */
+ if (sd == NULL)
+ continue;
+ out = kzalloc(sizeof *out, GFP_KERNEL);
+ if (out == NULL) {
+ mxr_err(mdev, "no memory for '%s'\n",
+ conf->output_name);
+ ret = -ENOMEM;
+ /* registered subdevs are removed in fail_v4l2_dev */
+ goto fail_output;
+ }
+ strlcpy(out->name, conf->output_name, sizeof(out->name));
+ out->sd = sd;
+ out->cookie = conf->cookie;
+ mdev->output[mdev->output_cnt++] = out;
+ mxr_info(mdev, "added output '%s' from module '%s'\n",
+ conf->output_name, conf->module_name);
+ /* checking if maximal number of outputs is reached */
+ if (mdev->output_cnt >= MXR_MAX_OUTPUTS)
+ break;
+ }
+
+ if (mdev->output_cnt == 0) {
+ mxr_err(mdev, "failed to register any output\n");
+ ret = -ENODEV;
+ /* skipping fail_output because there is nothing to free */
+ goto fail_vb2_allocator;
+ }
+
+ return 0;
+
+fail_output:
+ /* kfree is NULL-safe */
+ for (i = 0; i < mdev->output_cnt; ++i)
+ kfree(mdev->output[i]);
+ memset(mdev->output, 0, sizeof mdev->output);
+
+fail_vb2_allocator:
+ /* freeing allocator context */
+ mdev->vb2->cleanup(mdev->alloc_ctx);
+
+fail:
+ return ret;
+}
+
+void __devexit mxr_release_video(struct mxr_device *mdev)
+{
+ int i;
+
+ /* kfree is NULL-safe */
+ for (i = 0; i < mdev->output_cnt; ++i)
+ kfree(mdev->output[i]);
+
+ mdev->vb2->cleanup(mdev->alloc_ctx);
+}
+
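+/* start/stop the sub-device connected to the graphic layer entity by
+ * walking the media graph to the remote pad and calling its s_stream op
+ */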
+static void tv_graph_pipeline_stream(struct tv_graph_pipeline *pipe, int on)
+{
+ struct mxr_device *mdev = pipe->layer->mdev;
+ struct media_entity *me = &pipe->layer->vfd.entity;
+ /* source pad of graphic layer entity */
+ struct media_pad *pad = &me->pads[0];
+ struct v4l2_subdev *sd;
+ struct exynos_entity_data md_data;
+
+ mxr_dbg(mdev, "%s TV graphic layer pipeline\n", on ? "start" : "stop");
+
+ /* find remote pad through enabled link */
+ pad = media_entity_remote_source(pad);
+ if (pad == NULL
+ || media_entity_type(pad->entity) != MEDIA_ENT_T_V4L2_SUBDEV) {
+ mxr_warn(mdev, "cannot find remote pad\n");
+ return;
+ }
+
+ sd = media_entity_to_v4l2_subdev(pad->entity);
+ mxr_dbg(mdev, "s_stream of %s sub-device is called\n", sd->name);
+
+ md_data.mxr_data_from = FROM_MXR_VD;
+ v4l2_set_subdevdata(sd, &md_data);
+ v4l2_subdev_call(sd, video, s_stream, on);
+}
+
+static int mxr_querycap(struct file *file, void *priv,
+ struct v4l2_capability *cap)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+
+ strlcpy(cap->driver, MXR_DRIVER_NAME, sizeof cap->driver);
+ strlcpy(cap->card, layer->vfd.name, sizeof cap->card);
+ snprintf(cap->bus_info, sizeof(cap->bus_info), "%d", layer->idx);
+ cap->version = KERNEL_VERSION(0, 1, 0);
+ cap->capabilities = V4L2_CAP_STREAMING |
+ V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_VIDEO_OUTPUT_MPLANE;
+
+ return 0;
+}
+
+/* Geometry handling */
+void mxr_layer_geo_fix(struct mxr_layer *layer)
+{
+ struct mxr_device *mdev = layer->mdev;
+ struct v4l2_mbus_framefmt mbus_fmt;
+
+ /* TODO: add some dirty flag to avoid unnecessary adjustments */
+ mxr_get_mbus_fmt(mdev, &mbus_fmt);
+ layer->geo.dst.full_width = mbus_fmt.width;
+ layer->geo.dst.full_height = mbus_fmt.height;
+ layer->geo.dst.field = mbus_fmt.field;
+ layer->ops.fix_geometry(layer);
+}
+
+void mxr_layer_default_geo(struct mxr_layer *layer)
+{
+ struct mxr_device *mdev = layer->mdev;
+ struct v4l2_mbus_framefmt mbus_fmt;
+
+ mxr_dbg(layer->mdev, "%s start\n", __func__);
+ memset(&layer->geo, 0, sizeof layer->geo);
+
+ mxr_get_mbus_fmt(mdev, &mbus_fmt);
+
+ layer->geo.dst.full_width = mbus_fmt.width;
+ layer->geo.dst.full_height = mbus_fmt.height;
+ layer->geo.dst.width = layer->geo.dst.full_width;
+ layer->geo.dst.height = layer->geo.dst.full_height;
+ layer->geo.dst.field = mbus_fmt.field;
+
+ layer->geo.src.full_width = mbus_fmt.width;
+ layer->geo.src.full_height = mbus_fmt.height;
+ layer->geo.src.width = layer->geo.src.full_width;
+ layer->geo.src.height = layer->geo.src.full_height;
+
+ layer->ops.fix_geometry(layer);
+}
+
+static void mxr_geometry_dump(struct mxr_device *mdev, struct mxr_geometry *geo)
+{
+ mxr_dbg(mdev, "src.full_size = (%u, %u)\n",
+ geo->src.full_width, geo->src.full_height);
+ mxr_dbg(mdev, "src.size = (%u, %u)\n",
+ geo->src.width, geo->src.height);
+ mxr_dbg(mdev, "src.offset = (%u, %u)\n",
+ geo->src.x_offset, geo->src.y_offset);
+ mxr_dbg(mdev, "dst.full_size = (%u, %u)\n",
+ geo->dst.full_width, geo->dst.full_height);
+ mxr_dbg(mdev, "dst.size = (%u, %u)\n",
+ geo->dst.width, geo->dst.height);
+ mxr_dbg(mdev, "dst.offset = (%u, %u)\n",
+ geo->dst.x_offset, geo->dst.y_offset);
+ mxr_dbg(mdev, "ratio = (%u, %u)\n",
+ geo->x_ratio, geo->y_ratio);
+}
+
+static const struct mxr_format *find_format_by_index(
+ struct mxr_layer *layer, unsigned long index);
+
+static int mxr_enum_fmt(struct file *file, void *priv,
+ struct v4l2_fmtdesc *f)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+ const struct mxr_format *fmt;
+
+ mxr_dbg(mdev, "%s\n", __func__);
+ fmt = find_format_by_index(layer, f->index);
+ if (fmt == NULL)
+ return -EINVAL;
+
+ strlcpy(f->description, fmt->name, sizeof(f->description));
+ f->pixelformat = fmt->fourcc;
+
+ return 0;
+}
+
+static int mxr_s_fmt(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ const struct mxr_format *fmt;
+ struct v4l2_pix_format_mplane *pix;
+ struct mxr_device *mdev = layer->mdev;
+ struct mxr_geometry *geo = &layer->geo;
+
+ mxr_dbg(mdev, "%s:%d\n", __func__, __LINE__);
+
+ pix = &f->fmt.pix_mp;
+ fmt = find_format_by_fourcc(layer, pix->pixelformat);
+ if (fmt == NULL) {
+ mxr_warn(mdev, "not recognized fourcc: %08x\n",
+ pix->pixelformat);
+ return -EINVAL;
+ }
+ layer->fmt = fmt;
+ geo->src.full_width = pix->width;
+ geo->src.width = pix->width;
+ geo->src.full_height = pix->height;
+ geo->src.height = pix->height;
+ /* assure consistency of geometry */
+ mxr_layer_geo_fix(layer);
+ mxr_dbg(mdev, "width=%u height=%u span=%u\n",
+ geo->src.width, geo->src.height, geo->src.full_width);
+
+ return 0;
+}
+
+/* integer division rounded up */
+static unsigned int divup(unsigned int dividend, unsigned int divisor)
+{
+ return (dividend + divisor - 1) / divisor;
+}
+
+unsigned long mxr_get_plane_size(const struct mxr_block *blk,
+ unsigned int width, unsigned int height)
+{
+ unsigned int bl_width = divup(width, blk->width);
+ unsigned int bl_height = divup(height, blk->height);
+
+ return bl_width * bl_height * blk->size;
+}
+
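+/* compute sizeimage and bytesperline for each subframe by rounding the
+ * image up to an integral number of basic blocks of the format; planes
+ * that share a subframe have their sizes accumulated
+ */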
+static void mxr_mplane_fill(struct v4l2_plane_pix_format *planes,
+ const struct mxr_format *fmt, u32 width, u32 height)
+{
+ int i;
+
+ memset(planes, 0, sizeof(*planes) * fmt->num_subframes);
+ for (i = 0; i < fmt->num_planes; ++i) {
+ struct v4l2_plane_pix_format *plane = planes
+ + fmt->plane2subframe[i];
+ const struct mxr_block *blk = &fmt->plane[i];
+ u32 bl_width = divup(width, blk->width);
+ u32 bl_height = divup(height, blk->height);
+ u32 sizeimage = bl_width * bl_height * blk->size;
+ u16 bytesperline = bl_width * blk->size / blk->height;
+
+ plane->sizeimage += sizeimage;
+ plane->bytesperline = max(plane->bytesperline, bytesperline);
+ }
+}
+
+static int mxr_g_fmt(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct v4l2_pix_format_mplane *pix = &f->fmt.pix_mp;
+
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+
+ pix->width = layer->geo.src.full_width;
+ pix->height = layer->geo.src.full_height;
+ pix->field = V4L2_FIELD_NONE;
+ pix->pixelformat = layer->fmt->fourcc;
+ pix->colorspace = layer->fmt->colorspace;
+ mxr_mplane_fill(pix->plane_fmt, layer->fmt, pix->width, pix->height);
+
+ return 0;
+}
+
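+/* select the crop rectangle matching the buffer type: output buffers
+ * address the destination window, overlay buffers the source one
+ */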
+static inline struct mxr_crop *choose_crop_by_type(struct mxr_geometry *geo,
+ enum v4l2_buf_type type)
+{
+ switch (type) {
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
+ return &geo->dst;
+ case V4L2_BUF_TYPE_VIDEO_OVERLAY:
+ return &geo->src;
+ default:
+ return NULL;
+ }
+}
+
+static int mxr_g_crop(struct file *file, void *fh, struct v4l2_crop *a)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_crop *crop;
+
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+ crop = choose_crop_by_type(&layer->geo, a->type);
+ if (crop == NULL)
+ return -EINVAL;
+ mxr_layer_geo_fix(layer);
+ a->c.left = crop->x_offset;
+ a->c.top = crop->y_offset;
+ a->c.width = crop->width;
+ a->c.height = crop->height;
+ return 0;
+}
+
+static int mxr_s_crop(struct file *file, void *fh, struct v4l2_crop *a)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_crop *crop;
+
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+ crop = choose_crop_by_type(&layer->geo, a->type);
+ if (crop == NULL)
+ return -EINVAL;
+ crop->x_offset = a->c.left;
+ crop->y_offset = a->c.top;
+ crop->width = a->c.width;
+ crop->height = a->c.height;
+ mxr_layer_geo_fix(layer);
+ return 0;
+}
+
+static int mxr_cropcap(struct file *file, void *fh, struct v4l2_cropcap *a)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_crop *crop;
+
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+ crop = choose_crop_by_type(&layer->geo, a->type);
+ if (crop == NULL)
+ return -EINVAL;
+ mxr_layer_geo_fix(layer);
+ a->bounds.left = 0;
+ a->bounds.top = 0;
+ a->bounds.width = crop->full_width;
+ a->bounds.height = crop->full_height;
+ a->defrect = a->bounds;
+ /* setting pixel aspect to 1/1 */
+ a->pixelaspect.numerator = 1;
+ a->pixelaspect.denominator = 1;
+ return 0;
+}
+
+static int mxr_check_ctrl_val(struct v4l2_control *ctrl)
+{
+ int ret = 0;
+
+ switch (ctrl->id) {
+ case V4L2_CID_TV_LAYER_BLEND_ALPHA:
+ case V4L2_CID_TV_CHROMA_VALUE:
+ if (ctrl->value < 0 || ctrl->value > 256)
+ ret = -ERANGE;
+ break;
+ }
+
+ return ret;
+}
+
+static int mxr_s_ctrl(struct file *file, void *fh, struct v4l2_control *ctrl)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+ int cur_mxr = layer->cur_mxr;
+ int v = ctrl->value;
+ int num = 0;
+ int ret;
+
+ mxr_dbg(mdev, "%s start\n", __func__);
+
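+ /* map the layer to its control slot: the video layer is slot 0,
+ * graphic layers 0 and 1 are slots 1 and 2 respectively */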
+ if (layer->type == MXR_LAYER_TYPE_VIDEO)
+ num = 0;
+ else if (layer->type == MXR_LAYER_TYPE_GRP && layer->idx == 0)
+ num = 1;
+ else if (layer->type == MXR_LAYER_TYPE_GRP && layer->idx == 1)
+ num = 2;
+
+ ret = mxr_check_ctrl_val(ctrl);
+ if (ret) {
+ mxr_err(mdev, "alpha value is out of range\n");
+ return ret;
+ }
+
+ switch (ctrl->id) {
+ case V4L2_CID_TV_LAYER_BLEND_ENABLE:
+ mdev->sub_mxr[cur_mxr].layer[num]->layer_blend_en = v;
+ break;
+ case V4L2_CID_TV_LAYER_BLEND_ALPHA:
+ mdev->sub_mxr[cur_mxr].layer[num]->layer_alpha = (u32)v;
+ break;
+ case V4L2_CID_TV_PIXEL_BLEND_ENABLE:
+ mdev->sub_mxr[cur_mxr].layer[num]->pixel_blend_en = v;
+ break;
+ case V4L2_CID_TV_CHROMA_ENABLE:
+ mdev->sub_mxr[cur_mxr].layer[num]->chroma_en = v;
+ break;
+ case V4L2_CID_TV_CHROMA_VALUE:
+ mdev->sub_mxr[cur_mxr].layer[num]->chroma_val = (u32)v;
+ break;
+ case V4L2_CID_TV_HPD_STATUS:
+ v4l2_subdev_call(to_outsd(mdev), core, s_ctrl, ctrl);
+ break;
+ default:
+ mxr_err(mdev, "invalid control id\n");
+ ret = -EINVAL;
+ break;
+ }
+
+ return ret;
+}
+
+static int mxr_g_ctrl(struct file *file, void *fh, struct v4l2_control *ctrl)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+ int num = 0;
+ int ret = 0;
+
+ mxr_dbg(mdev, "%s start\n", __func__);
+
+ if (layer->type == MXR_LAYER_TYPE_VIDEO)
+ num = 0;
+ else if (layer->type == MXR_LAYER_TYPE_GRP && layer->idx == 0)
+ num = 1;
+ else if (layer->type == MXR_LAYER_TYPE_GRP && layer->idx == 1)
+ num = 2;
+
+ ret = mxr_check_ctrl_val(ctrl);
+
+ switch (ctrl->id) {
+ case V4L2_CID_TV_HPD_STATUS:
+ v4l2_subdev_call(to_outsd(mdev), core, g_ctrl, ctrl);
+ break;
+ default:
+ mxr_err(mdev, "invalid control id\n");
+ ret = -EINVAL;
+ break;
+ }
+ return ret;
+}
+
+static int mxr_enum_dv_presets(struct file *file, void *fh,
+ struct v4l2_dv_enum_preset *preset)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+ int ret;
+
+ /* the lock protects sd_out from being changed */
+ mutex_lock(&mdev->mutex);
+ ret = v4l2_subdev_call(to_outsd(mdev), video, enum_dv_presets, preset);
+ mutex_unlock(&mdev->mutex);
+
+ return ret ? -EINVAL : 0;
+}
+
+static int mxr_s_dv_preset(struct file *file, void *fh,
+ struct v4l2_dv_preset *preset)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+ int ret;
+
+ /* the lock protects sd_out from being changed */
+ mutex_lock(&mdev->mutex);
+
+ /* preset change cannot be done while there is an entity
+ * dependent on output configuration
+ */
+ if (mdev->n_output > 0) {
+ mutex_unlock(&mdev->mutex);
+ return -EBUSY;
+ }
+
+ ret = v4l2_subdev_call(to_outsd(mdev), video, s_dv_preset, preset);
+
+ mutex_unlock(&mdev->mutex);
+
+ /* any failure should return EINVAL according to V4L2 doc */
+ return ret ? -EINVAL : 0;
+}
+
+static int mxr_g_dv_preset(struct file *file, void *fh,
+ struct v4l2_dv_preset *preset)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+ int ret;
+
+ /* the lock protects sd_out from being changed */
+ mutex_lock(&mdev->mutex);
+ ret = v4l2_subdev_call(to_outsd(mdev), video, g_dv_preset, preset);
+ mutex_unlock(&mdev->mutex);
+
+ return ret ? -EINVAL : 0;
+}
+
+static int mxr_s_std(struct file *file, void *fh, v4l2_std_id *norm)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+ int ret;
+
+ /* the lock protects sd_out from being changed */
+ mutex_lock(&mdev->mutex);
+
+ /* standard change cannot be done while there is an entity
+ * dependent on output configuration
+ */
+ if (mdev->n_output > 0) {
+ mutex_unlock(&mdev->mutex);
+ return -EBUSY;
+ }
+
+ ret = v4l2_subdev_call(to_outsd(mdev), video, s_std_output, *norm);
+
+ mutex_unlock(&mdev->mutex);
+
+ return ret ? -EINVAL : 0;
+}
+
+static int mxr_g_std(struct file *file, void *fh, v4l2_std_id *norm)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+ int ret;
+
+ /* the lock protects sd_out from being changed */
+ mutex_lock(&mdev->mutex);
+ ret = v4l2_subdev_call(to_outsd(mdev), video, g_std_output, norm);
+ mutex_unlock(&mdev->mutex);
+
+ return ret ? -EINVAL : 0;
+}
+
+static int mxr_enum_output(struct file *file, void *fh, struct v4l2_output *a)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+ struct mxr_output *out;
+ struct v4l2_subdev *sd;
+
+ if (a->index >= mdev->output_cnt)
+ return -EINVAL;
+ out = mdev->output[a->index];
+ BUG_ON(out == NULL);
+ sd = out->sd;
+ strlcpy(a->name, out->name, sizeof(a->name));
+
+ /* try to obtain supported tv norms */
+ v4l2_subdev_call(sd, video, g_tvnorms_output, &a->std);
+ a->capabilities = 0;
+ if (sd->ops->video && sd->ops->video->s_dv_preset)
+ a->capabilities |= V4L2_OUT_CAP_PRESETS;
+ if (sd->ops->video && sd->ops->video->s_std_output)
+ a->capabilities |= V4L2_OUT_CAP_STD;
+ a->type = V4L2_OUTPUT_TYPE_ANALOG;
+
+ return 0;
+}
+
+static int mxr_s_output(struct file *file, void *fh, unsigned int i)
+{
+ struct video_device *vfd = video_devdata(file);
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+ int ret = 0;
+
+ if (i >= mdev->output_cnt || mdev->output[i] == NULL)
+ return -EINVAL;
+
+ mutex_lock(&mdev->mutex);
+ if (mdev->n_output > 0) {
+ ret = -EBUSY;
+ goto done;
+ }
+ mdev->current_output = i;
+ vfd->tvnorms = 0;
+ v4l2_subdev_call(to_outsd(mdev), video, g_tvnorms_output,
+ &vfd->tvnorms);
+ mxr_dbg(mdev, "tvnorms = %08llx\n", vfd->tvnorms);
+
+done:
+ mutex_unlock(&mdev->mutex);
+ return ret;
+}
+
+static int mxr_g_output(struct file *file, void *fh, unsigned int *p)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+
+ mutex_lock(&mdev->mutex);
+ *p = mdev->current_output;
+ mutex_unlock(&mdev->mutex);
+
+ return 0;
+}
+
+static int mxr_reqbufs(struct file *file, void *priv,
+ struct v4l2_requestbuffers *p)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+ return vb2_reqbufs(&layer->vb_queue, p);
+}
+
+static int mxr_querybuf(struct file *file, void *priv, struct v4l2_buffer *p)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+ return vb2_querybuf(&layer->vb_queue, p);
+}
+
+static int mxr_qbuf(struct file *file, void *priv, struct v4l2_buffer *p)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+
+ mxr_dbg(layer->mdev, "%s:%d(%d)\n", __func__, __LINE__, p->index);
+ return vb2_qbuf(&layer->vb_queue, p);
+}
+
+static int mxr_dqbuf(struct file *file, void *priv, struct v4l2_buffer *p)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+ return vb2_dqbuf(&layer->vb_queue, p, file->f_flags & O_NONBLOCK);
+}
+
+static int mxr_streamon(struct file *file, void *priv, enum v4l2_buf_type i)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+
+ switch (layer->idx) {
+ case 0:
+ mdev->layer_en.graph0 = 1;
+ break;
+ case 1:
+ mdev->layer_en.graph1 = 1;
+ break;
+ case 2:
+ mdev->layer_en.graph2 = 1;
+ break;
+ case 3:
+ mdev->layer_en.graph3 = 1;
+ break;
+ default:
+ mxr_err(mdev, "invalid layer number\n");
+ return -EINVAL;
+ }
+
+ if ((mdev->layer_en.graph0 && mdev->layer_en.graph2) ||
+ (mdev->layer_en.graph1 && mdev->layer_en.graph3)) {
+ mdev->frame_packing = 1;
+ mxr_dbg(mdev, "frame packing mode\n");
+ }
+
+ layer->ops.stream_set(layer, MXR_ENABLE);
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+ return vb2_streamon(&layer->vb_queue, i);
+}
+
+static int mxr_streamoff(struct file *file, void *priv, enum v4l2_buf_type i)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+
+ switch (layer->idx) {
+ case 0:
+ mdev->layer_en.graph0 = 0;
+ break;
+ case 1:
+ mdev->layer_en.graph1 = 0;
+ break;
+ case 2:
+ mdev->layer_en.graph2 = 0;
+ break;
+ case 3:
+ mdev->layer_en.graph3 = 0;
+ break;
+ default:
+ mxr_err(mdev, "invalid layer number\n");
+ return -EINVAL;
+ }
+
+ mdev->frame_packing = 0;
+ if ((mdev->layer_en.graph0 && mdev->layer_en.graph2) ||
+ (mdev->layer_en.graph1 && mdev->layer_en.graph3)) {
+ mdev->frame_packing = 1;
+ mxr_dbg(mdev, "frame packing mode\n");
+ }
+
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+ return vb2_streamoff(&layer->vb_queue, i);
+}
+
+static const struct v4l2_ioctl_ops mxr_ioctl_ops = {
+ .vidioc_querycap = mxr_querycap,
+ /* format handling */
+ .vidioc_enum_fmt_vid_out = mxr_enum_fmt,
+ .vidioc_s_fmt_vid_out_mplane = mxr_s_fmt,
+ .vidioc_g_fmt_vid_out_mplane = mxr_g_fmt,
+ /* buffer control */
+ .vidioc_reqbufs = mxr_reqbufs,
+ .vidioc_querybuf = mxr_querybuf,
+ .vidioc_qbuf = mxr_qbuf,
+ .vidioc_dqbuf = mxr_dqbuf,
+ /* Streaming control */
+ .vidioc_streamon = mxr_streamon,
+ .vidioc_streamoff = mxr_streamoff,
+ /* Preset functions */
+ .vidioc_enum_dv_presets = mxr_enum_dv_presets,
+ .vidioc_s_dv_preset = mxr_s_dv_preset,
+ .vidioc_g_dv_preset = mxr_g_dv_preset,
+ /* analog TV standard functions */
+ .vidioc_s_std = mxr_s_std,
+ .vidioc_g_std = mxr_g_std,
+ /* Output handling */
+ .vidioc_enum_output = mxr_enum_output,
+ .vidioc_s_output = mxr_s_output,
+ .vidioc_g_output = mxr_g_output,
+ /* Crop ioctls */
+ .vidioc_g_crop = mxr_g_crop,
+ .vidioc_s_crop = mxr_s_crop,
+ .vidioc_cropcap = mxr_cropcap,
+ /* Alpha blending functions */
+ .vidioc_s_ctrl = mxr_s_ctrl,
+ .vidioc_g_ctrl = mxr_g_ctrl,
+};
+
+static int mxr_video_open(struct file *file)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+ struct mxr_device *mdev = layer->mdev;
+ int ret = 0;
+
+ mxr_dbg(mdev, "%s:%d\n", __func__, __LINE__);
+ /* assure device probe is finished */
+ wait_for_device_probe();
+ /* creating context for file descriptor */
+ ret = v4l2_fh_open(file);
+ if (ret) {
+ mxr_err(mdev, "v4l2_fh_open failed\n");
+ return ret;
+ }
+
+ /* leaving if layer is already initialized */
+ if (!v4l2_fh_is_singular_file(file))
+ return 0;
+
+ ret = vb2_queue_init(&layer->vb_queue);
+ if (ret != 0) {
+ mxr_err(mdev, "failed to initialize vb2 queue\n");
+ goto fail_fh_open;
+ }
+ /* set default format, first on the list */
+ layer->fmt = layer->fmt_array[0];
+ /* setup default geometry */
+ mxr_layer_default_geo(layer);
+
+ return 0;
+
+fail_fh_open:
+ v4l2_fh_release(file);
+
+ return ret;
+}
+
+static unsigned int
+mxr_video_poll(struct file *file, struct poll_table_struct *wait)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+
+ return vb2_poll(&layer->vb_queue, file, wait);
+}
+
+static int mxr_video_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+
+ return vb2_mmap(&layer->vb_queue, vma);
+}
+
+static int mxr_video_release(struct file *file)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+
+ /* initialize alpha blending variables */
+ layer->layer_blend_en = 0;
+ layer->layer_alpha = 0;
+ layer->pixel_blend_en = 0;
+ layer->chroma_en = 0;
+ layer->chroma_val = 0;
+
+ if (v4l2_fh_is_singular_file(file))
+ vb2_queue_release(&layer->vb_queue);
+
+ v4l2_fh_release(file);
+ return 0;
+}
+
+static const struct v4l2_file_operations mxr_fops = {
+ .owner = THIS_MODULE,
+ .open = mxr_video_open,
+ .poll = mxr_video_poll,
+ .mmap = mxr_video_mmap,
+ .release = mxr_video_release,
+ .unlocked_ioctl = video_ioctl2,
+};
+
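+/* vb2 queue_setup callback: report the number of planes and the
+ * page-aligned size of each one for the currently selected format
+ */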
+static int queue_setup(struct vb2_queue *vq, const struct v4l2_format *pfmt,
+ unsigned int *nbuffers, unsigned int *nplanes,
+ unsigned int sizes[], void *alloc_ctxs[])
+{
+ struct mxr_layer *layer = vb2_get_drv_priv(vq);
+ const struct mxr_format *fmt = layer->fmt;
+ int i;
+ struct mxr_device *mdev = layer->mdev;
+ struct v4l2_plane_pix_format planes[3];
+
+ mxr_dbg(mdev, "%s fmt->num_planes=%d\n", __func__,fmt->num_planes);
+ /* checking if format was configured */
+ if (fmt == NULL)
+ return -EINVAL;
+ mxr_dbg(mdev, "fmt = %s\n", fmt->name);
+ mxr_mplane_fill(planes, fmt, layer->geo.src.full_width,
+ layer->geo.src.full_height);
+
+ *nplanes = fmt->num_subframes;
+ for (i = 0; i < fmt->num_subframes; ++i) {
+ alloc_ctxs[i] = layer->mdev->alloc_ctx;
+ sizes[i] = PAGE_ALIGN(planes[i].sizeimage);
+ mxr_dbg(mdev, "size[%d] = %08x\n", i, sizes[i]);
+ }
+
+ if (*nbuffers == 0)
+ *nbuffers = 1;
+
+ return 0;
+}
+
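+/* vb2 buf_queue callback: append the buffer to the layer's enqueue list
+ * and move a starting pipeline to the streaming state
+ */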
+static void buf_queue(struct vb2_buffer *vb)
+{
+ struct mxr_buffer *buffer = container_of(vb, struct mxr_buffer, vb);
+ struct mxr_layer *layer = vb2_get_drv_priv(vb->vb2_queue);
+ struct mxr_device *mdev = layer->mdev;
+ struct tv_graph_pipeline *pipe = &layer->pipe;
+ unsigned long flags;
+
+ spin_lock_irqsave(&layer->enq_slock, flags);
+ list_add_tail(&buffer->list, &layer->enq_list);
+ if (pipe->state == TV_GRAPH_PIPELINE_STREAMING_START)
+ pipe->state = TV_GRAPH_PIPELINE_STREAMING;
+ spin_unlock_irqrestore(&layer->enq_slock, flags);
+
+ mxr_dbg(mdev, "queuing buffer\n");
+}
+
+static void wait_lock(struct vb2_queue *vq)
+{
+ struct mxr_layer *layer = vb2_get_drv_priv(vq);
+
+ mxr_dbg(layer->mdev, "%s\n", __func__);
+ mutex_lock(&layer->mutex);
+}
+
+static void wait_unlock(struct vb2_queue *vq)
+{
+ struct mxr_layer *layer = vb2_get_drv_priv(vq);
+
+ mxr_dbg(layer->mdev, "%s\n", __func__);
+ mutex_unlock(&layer->mutex);
+}
+
+static int buf_prepare(struct vb2_buffer *vb)
+{
+ struct mxr_layer *layer = vb2_get_drv_priv(vb->vb2_queue);
+ struct mxr_device *mdev = layer->mdev;
+ struct v4l2_subdev *sd;
+ struct media_pad *pad;
+ int i, j;
+ int enable = 0;
+
+ mxr_dbg(layer->mdev, "%s\n", __func__);
+
+ for (i = 0; i < MXR_MAX_SUB_MIXERS; ++i) {
+ sd = &mdev->sub_mxr[i].sd;
+
+ for (j = MXR_PAD_SOURCE_GSCALER; j < MXR_PADS_NUM; ++j) {
+ pad = &sd->entity.pads[j];
+
+ /* find the sink pad of hdmi or sdo through an enabled link */
+ pad = media_entity_remote_source(pad);
+ if (media_entity_type(pad->entity)
+ == MEDIA_ENT_T_V4L2_SUBDEV) {
+ enable = 1;
+ break;
+ }
+ }
+ if (enable)
+ break;
+ }
+ if (!enable)
+ return -ENODEV;
+
+ sd = media_entity_to_v4l2_subdev(pad->entity);
+
+ /* the current output device must match the terminal entity
+ * which represents the HDMI or SDO sub-device
+ */
+ if (strcmp(sd->name, to_output(mdev)->sd->name)) {
+ mxr_err(mdev, "subdev name : %s, output device name : %s\n",
+ sd->name, to_output(mdev)->sd->name);
+ mxr_err(mdev, "output device is not matched\n");
+ return -ERANGE;
+ }
+
+ return 0;
+}
+
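+/* vb2 start_streaming callback: power up the mixer, lock the output
+ * configuration, program geometry and format, then start every entity
+ * on the TV graphic pipeline
+ */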
+static int start_streaming(struct vb2_queue *vq, unsigned int count)
+{
+ struct mxr_layer *layer = vb2_get_drv_priv(vq);
+ struct mxr_device *mdev = layer->mdev;
+ struct tv_graph_pipeline *pipe = &layer->pipe;
+ unsigned long flags;
+ int ret;
+
+ mxr_dbg(mdev, "%s\n", __func__);
+
+ if (count == 0) {
+ mxr_dbg(mdev, "no output buffers queued\n");
+ return -EINVAL;
+ }
+
+ /* enable mixer clock */
+ ret = mxr_power_get(mdev);
+ if (ret) {
+ mxr_err(mdev, "power on failed\n");
+ return ret;
+ }
+
+ /* block any changes in output configuration */
+ mxr_output_get(mdev);
+
+ /* update layers geometry */
+ mxr_layer_geo_fix(layer);
+ mxr_geometry_dump(mdev, &layer->geo);
+
+ layer->ops.format_set(layer);
+
+ spin_lock_irqsave(&layer->enq_slock, flags);
+ pipe->state = TV_GRAPH_PIPELINE_STREAMING_START;
+ spin_unlock_irqrestore(&layer->enq_slock, flags);
+
+ /* enabling layer in hardware */
+ layer->ops.stream_set(layer, MXR_ENABLE);
+ /* store starting entity ptr on the tv graphic pipeline */
+ pipe->layer = layer;
+ /* start streaming all entities on the tv graphic pipeline */
+ tv_graph_pipeline_stream(pipe, 1);
+
+ return 0;
+}
+
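+/* fired when buffers do not return in time; release the pending
+ * shadow/update buffers with an error status
+ */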
+static void mxr_watchdog(unsigned long arg)
+{
+ struct mxr_layer *layer = (struct mxr_layer *) arg;
+ struct mxr_device *mdev = layer->mdev;
+ unsigned long flags;
+
+ mxr_err(mdev, "watchdog fired for layer %s\n", layer->vfd.name);
+
+ spin_lock_irqsave(&layer->enq_slock, flags);
+
+ if (layer->update_buf == layer->shadow_buf)
+ layer->update_buf = NULL;
+ if (layer->update_buf) {
+ vb2_buffer_done(&layer->update_buf->vb, VB2_BUF_STATE_ERROR);
+ layer->update_buf = NULL;
+ }
+ if (layer->shadow_buf) {
+ vb2_buffer_done(&layer->shadow_buf->vb, VB2_BUF_STATE_ERROR);
+ layer->shadow_buf = NULL;
+ }
+ spin_unlock_irqrestore(&layer->enq_slock, flags);
+}
+
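+/* vb2 stop_streaming callback: return all queued buffers, wait (with a
+ * watchdog timeout) for the hardware to release the remaining ones and
+ * shut the pipeline down
+ */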
+static int stop_streaming(struct vb2_queue *vq)
+{
+ struct mxr_layer *layer = vb2_get_drv_priv(vq);
+ struct mxr_device *mdev = layer->mdev;
+ unsigned long flags;
+ struct timer_list watchdog;
+ struct mxr_buffer *buf, *buf_tmp;
+ struct tv_graph_pipeline *pipe = &layer->pipe;
+
+ mxr_dbg(mdev, "%s\n", __func__);
+
+ spin_lock_irqsave(&layer->enq_slock, flags);
+
+ /* mark the pipeline as finishing */
+ pipe->state = TV_GRAPH_PIPELINE_STREAMING_FINISH;
+
+ /* return all queued buffers with an error status */
+ list_for_each_entry_safe(buf, buf_tmp, &layer->enq_list, list) {
+ list_del(&buf->list);
+ vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
+ }
+
+ spin_unlock_irqrestore(&layer->enq_slock, flags);
+
+ /* give 1 second for the last buffers to complete */
+ setup_timer_on_stack(&watchdog, mxr_watchdog,
+ (unsigned long)layer);
+ mod_timer(&watchdog, jiffies + msecs_to_jiffies(1000));
+
+ /* wait until all buffers reach the done state */
+ vb2_wait_for_all_buffers(vq);
+
+ /* stop timer if all synchronization is done */
+ del_timer_sync(&watchdog);
+ destroy_timer_on_stack(&watchdog);
+
+ /* stopping hardware */
+ spin_lock_irqsave(&layer->enq_slock, flags);
+
+ pipe->state = TV_GRAPH_PIPELINE_IDLE;
+ spin_unlock_irqrestore(&layer->enq_slock, flags);
+
+ /* disabling layer in hardware */
+ layer->ops.stream_set(layer, MXR_DISABLE);
+
+ /* store starting entity ptr on the pipeline */
+ pipe->layer = layer;
+ /* stop streaming all entities on the pipeline */
+ tv_graph_pipeline_stream(pipe, 0);
+
+ /* allow changes in output configuration */
+ mxr_output_put(mdev);
+
+ /* disable mixer clock */
+ mxr_power_put(mdev);
+
+ return 0;
+}
+
+static struct vb2_ops mxr_video_qops = {
+ .queue_setup = queue_setup,
+ .buf_queue = buf_queue,
+ .wait_prepare = wait_unlock,
+ .wait_finish = wait_lock,
+ .buf_prepare = buf_prepare,
+ .start_streaming = start_streaming,
+ .stop_streaming = stop_streaming,
+};
+
+/* FIXME: try to move this function into mxr_base_layer_create */
+int mxr_base_layer_register(struct mxr_layer *layer)
+{
+ struct mxr_device *mdev = layer->mdev;
+ struct exynos_md *md;
+ int ret;
+
+ md = (struct exynos_md *)module_name_to_driver_data(MDEV_MODULE_NAME);
+ if (!md) {
+ mxr_err(mdev, "failed to get output media device\n");
+ return -ENODEV;
+ }
+
+ layer->vfd.v4l2_dev = &md->v4l2_dev;
+ ret = video_register_device(&layer->vfd, VFL_TYPE_GRABBER,
+ layer->minor);
+ if (ret) {
+ mxr_err(mdev, "failed to register video device\n");
+ return ret;
+ }
+ mxr_info(mdev, "registered layer %s as /dev/video%d\n",
+ layer->vfd.name, layer->vfd.num);
+
+ layer->fb = vb2_fb_register(&layer->vb_queue, &layer->vfd);
+ if (PTR_ERR(layer->fb))
+ layer->fb = NULL;
+
+ return ret;
+}
+
+void mxr_base_layer_unregister(struct mxr_layer *layer)
+{
+ if (layer->fb)
+ vb2_fb_unregister(layer->fb);
+ video_unregister_device(&layer->vfd);
+}
+
+void mxr_layer_release(struct mxr_layer *layer)
+{
+ if (layer->ops.release)
+ layer->ops.release(layer);
+}
+
+void mxr_base_layer_release(struct mxr_layer *layer)
+{
+ kfree(layer);
+}
+
+static void mxr_vfd_release(struct video_device *vdev)
+{
+ printk(KERN_INFO "video device release\n");
+}
+
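+/* allocate a layer and initialize its locks, video_device and vb2
+ * queue templates; the caller still has to register the layer
+ */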
+struct mxr_layer *mxr_base_layer_create(struct mxr_device *mdev,
+ int idx, char *name, struct mxr_layer_ops *ops)
+{
+ struct mxr_layer *layer;
+ int ret;
+
+ layer = kzalloc(sizeof *layer, GFP_KERNEL);
+ if (layer == NULL) {
+ mxr_err(mdev, "not enough memory for layer.\n");
+ goto fail;
+ }
+
+ layer->mdev = mdev;
+ layer->idx = idx;
+ layer->ops = *ops;
+
+ spin_lock_init(&layer->enq_slock);
+ INIT_LIST_HEAD(&layer->enq_list);
+ mutex_init(&layer->mutex);
+
+ layer->vfd = (struct video_device) {
+ .minor = -1,
+ .release = mxr_vfd_release,
+ .fops = &mxr_fops,
+ .ioctl_ops = &mxr_ioctl_ops,
+ };
+
+ /* media_entity_init must be called after initializing layer->vfd
+ * to prevent the entity fields from being overwritten
+ */
+ ret = media_entity_init(&layer->vfd.entity, 1, &layer->pad, 0);
+ if (ret) {
+ mxr_err(mdev, "media entity init failed\n");
+ goto fail_alloc;
+ }
+
+ strlcpy(layer->vfd.name, name, sizeof(layer->vfd.name));
+ layer->vfd.entity.name = layer->vfd.name;
+ /* let framework control PRIORITY */
+ set_bit(V4L2_FL_USE_FH_PRIO, &layer->vfd.flags);
+
+ video_set_drvdata(&layer->vfd, layer);
+ layer->vfd.lock = &layer->mutex;
+
+ layer->vb_queue = (struct vb2_queue) {
+ .type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+ .io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF,
+ .drv_priv = layer,
+ .buf_struct_size = sizeof(struct mxr_buffer),
+ .ops = &mxr_video_qops,
+ .mem_ops = &vb2_dma_contig_memops,
+ };
+ return layer;
+
+fail_alloc:
+ kfree(layer);
+
+fail:
+ return NULL;
+}
+
+const struct mxr_format *find_format_by_fourcc(
+ struct mxr_layer *layer, unsigned long fourcc)
+{
+ int i;
+
+ for (i = 0; i < layer->fmt_array_size; ++i)
+ if (layer->fmt_array[i]->fourcc == fourcc)
+ return layer->fmt_array[i];
+ return NULL;
+}
+
+static const struct mxr_format *find_format_by_index(
+ struct mxr_layer *layer, unsigned long index)
+{
+ if (index >= layer->fmt_array_size)
+ return NULL;
+ return layer->fmt_array[index];
+}
--- /dev/null
+/*
+ * Samsung TV Mixer driver
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ *
+ * Tomasz Stanislawski, <t.stanislaws@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License,
+ * or (at your option) any later version.
+ */
+
+#include "mixer.h"
+
+#if defined(CONFIG_VIDEOBUF2_CMA_PHYS)
+#include <media/videobuf2-cma-phys.h>
+#elif defined(CONFIG_VIDEOBUF2_ION)
+#include <media/videobuf2-ion.h>
+#endif
+
+/* AUXILIARY CALLBACKS */
+
+static void mxr_video_layer_release(struct mxr_layer *layer)
+{
+ mxr_base_layer_release(layer);
+}
+
+static void mxr_video_stream_set(struct mxr_layer *layer, int en)
+{
+ mxr_reg_video_layer_stream(layer->mdev, layer->idx, en);
+}
+
+static void mxr_video_format_set(struct mxr_layer *layer)
+{
+ mxr_reg_video_geo(layer->mdev, layer->cur_mxr, layer->idx, &layer->geo);
+}
+
+static void mxr_video_fix_geometry(struct mxr_layer *layer)
+{
+ struct mxr_geometry *geo = &layer->geo;
+
+ mxr_dbg(layer->mdev, "%s start\n", __func__);
+ geo->dst.x_offset = clamp_val(geo->dst.x_offset, 0,
+ geo->dst.full_width - 1);
+ geo->dst.y_offset = clamp_val(geo->dst.y_offset, 0,
+ geo->dst.full_height - 1);
+
+ /* mixer scale-up is not useful, so it is not used */
+ geo->dst.width = clamp_val(geo->dst.width, 1,
+ geo->dst.full_width - geo->dst.x_offset);
+ geo->dst.height = clamp_val(geo->dst.height, 1,
+ geo->dst.full_height - geo->dst.y_offset);
+}
+
+/* PUBLIC API */
+
+struct mxr_layer *mxr_video_layer_create(struct mxr_device *mdev, int cur_mxr,
+ int idx)
+{
+ struct mxr_layer *layer;
+ struct mxr_layer_ops ops = {
+ .release = mxr_video_layer_release,
+ .stream_set = mxr_video_stream_set,
+ .format_set = mxr_video_format_set,
+ .fix_geometry = mxr_video_fix_geometry,
+ };
+
+ layer = kzalloc(sizeof *layer, GFP_KERNEL);
+ if (layer == NULL) {
+ mxr_err(mdev, "not enough memory for layer.\n");
+ goto fail;
+ }
+
+ layer->mdev = mdev;
+ layer->idx = idx;
+ layer->type = MXR_LAYER_TYPE_VIDEO;
+ layer->ops = ops;
+
+ layer->cur_mxr = cur_mxr;
+
+ mxr_layer_default_geo(layer);
+
+ return layer;
+
+fail:
+ return NULL;
+}
--- /dev/null
+/*
+ * Samsung TV Mixer driver
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ *
+ * Tomasz Stanislawski, <t.stanislaws@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License,
+ * or (at your option) any later version.
+ */
+
+#include "mixer.h"
+
+#include "regs-vp.h"
+
+#if defined(CONFIG_VIDEOBUF2_CMA_PHYS)
+#include <media/videobuf2-cma-phys.h>
+#elif defined(CONFIG_VIDEOBUF2_ION)
+#include <media/videobuf2-ion.h>
+#endif
+
+/* FORMAT DEFINITIONS */
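+/* each plane entry describes the basic block of the format: a block of
+ * .width x .height pixels occupies .size bytes; num_subframes tells how
+ * many separate memory buffers the planes are packed into
+ */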
+static const struct mxr_format mxr_fmt_nv12 = {
+ .name = "NV12",
+ .fourcc = V4L2_PIX_FMT_NV12,
+ .colorspace = V4L2_COLORSPACE_JPEG,
+ .num_planes = 2,
+ .plane = {
+ { .width = 1, .height = 1, .size = 1 },
+ { .width = 2, .height = 2, .size = 2 },
+ },
+ .num_subframes = 1,
+ .cookie = VP_MODE_NV12 | VP_MODE_MEM_LINEAR,
+};
+
+static const struct mxr_format mxr_fmt_nv21 = {
+ .name = "NV21",
+ .fourcc = V4L2_PIX_FMT_NV21,
+ .colorspace = V4L2_COLORSPACE_JPEG,
+ .num_planes = 2,
+ .plane = {
+ { .width = 1, .height = 1, .size = 1 },
+ { .width = 2, .height = 2, .size = 2 },
+ },
+ .num_subframes = 1,
+ .cookie = VP_MODE_NV21 | VP_MODE_MEM_LINEAR,
+};
+
+static const struct mxr_format mxr_fmt_nv12m = {
+ .name = "NV12 (mplane)",
+ .fourcc = V4L2_PIX_FMT_NV12M,
+ .colorspace = V4L2_COLORSPACE_JPEG,
+ .num_planes = 2,
+ .plane = {
+ { .width = 1, .height = 1, .size = 1 },
+ { .width = 2, .height = 2, .size = 2 },
+ },
+ .num_subframes = 2,
+ .plane2subframe = {0, 1},
+ .cookie = VP_MODE_NV12 | VP_MODE_MEM_LINEAR,
+};
+
+static const struct mxr_format mxr_fmt_nv12mt = {
+ .name = "NV12 tiled (mplane)",
+ .fourcc = V4L2_PIX_FMT_NV12MT,
+ .colorspace = V4L2_COLORSPACE_JPEG,
+ .num_planes = 2,
+ .plane = {
+ { .width = 128, .height = 32, .size = 4096 },
+ { .width = 128, .height = 32, .size = 2048 },
+ },
+ .num_subframes = 2,
+ .plane2subframe = {0, 1},
+ .cookie = VP_MODE_NV12 | VP_MODE_MEM_TILED,
+};
+
+static const struct mxr_format *mxr_video_format[] = {
+ &mxr_fmt_nv12,
+ &mxr_fmt_nv21,
+ &mxr_fmt_nv12m,
+ &mxr_fmt_nv12mt,
+};
+
+/* AUXILIARY CALLBACKS */
+
+static void mxr_vp_layer_release(struct mxr_layer *layer)
+{
+ mxr_base_layer_unregister(layer);
+ mxr_base_layer_release(layer);
+}
+
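+/* program luma/chroma base addresses for both fields; for single-plane
+ * formats the chroma plane follows the luma plane in memory, and the
+ * second-field address depends on linear vs tiled layout
+ */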
+static void mxr_vp_buffer_set(struct mxr_layer *layer,
+ struct mxr_buffer *buf)
+{
+ struct mxr_device *mdev = layer->mdev;
+ dma_addr_t luma_addr[2] = {0, 0};
+ dma_addr_t chroma_addr[2] = {0, 0};
+
+ if (buf == NULL) {
+ mxr_reg_vp_buffer(mdev, luma_addr, chroma_addr);
+ return;
+ }
+
+ luma_addr[0] = mdev->vb2->plane_addr(&buf->vb, 0);
+ if (layer->fmt->num_subframes == 2) {
+ chroma_addr[0] = mdev->vb2->plane_addr(&buf->vb, 1);
+ } else {
+ /* FIXME: mxr_get_plane_size computes an integer division,
+ * which is slow and should not be performed in interrupt context */
+ chroma_addr[0] = luma_addr[0] + mxr_get_plane_size(
+ &layer->fmt->plane[0], layer->geo.src.full_width,
+ layer->geo.src.full_height);
+ }
+ if (layer->fmt->cookie & VP_MODE_MEM_TILED) {
+ luma_addr[1] = luma_addr[0] + 0x40;
+ chroma_addr[1] = chroma_addr[0] + 0x40;
+ } else {
+ luma_addr[1] = luma_addr[0] + layer->geo.src.full_width;
+ chroma_addr[1] = chroma_addr[0];
+ }
+ mxr_reg_vp_buffer(layer->mdev, luma_addr, chroma_addr);
+}
+
+static void mxr_vp_stream_set(struct mxr_layer *layer, int en)
+{
+ mxr_reg_vp_layer_stream(layer->mdev, en);
+}
+
+static void mxr_vp_format_set(struct mxr_layer *layer)
+{
+ mxr_reg_vp_format(layer->mdev, layer->fmt, &layer->geo);
+}
+
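+/* clamp source/destination rectangles to hardware limits and keep the
+ * scaling ratio within the supported 1/4x..16x range
+ */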
+static void mxr_vp_fix_geometry(struct mxr_layer *layer)
+{
+ struct mxr_geometry *geo = &layer->geo;
+
+ mxr_dbg(layer->mdev, "%s start\n", __func__);
+ /* align horizontal size to 8 pixels */
+ geo->src.full_width = ALIGN(geo->src.full_width, 8);
+ /* limit to boundary size */
+ geo->src.full_width = clamp_val(geo->src.full_width, 8, 8192);
+ geo->src.full_height = clamp_val(geo->src.full_height, 1, 8192);
+ geo->src.width = clamp_val(geo->src.width, 32, geo->src.full_width);
+ geo->src.width = min(geo->src.width, 2047U);
+ geo->src.height = clamp_val(geo->src.height, 4, geo->src.full_height);
+ geo->src.height = min(geo->src.height, 2047U);
+
+ /* setting size of output window */
+ geo->dst.width = clamp_val(geo->dst.width, 8, geo->dst.full_width);
+ geo->dst.height = clamp_val(geo->dst.height, 1, geo->dst.full_height);
+
+ /* ensure that scaling is in range 1/4x to 16x */
+ if (geo->src.width >= 4 * geo->dst.width)
+ geo->src.width = 4 * geo->dst.width;
+ if (geo->dst.width >= 16 * geo->src.width)
+ geo->dst.width = 16 * geo->src.width;
+ if (geo->src.height >= 4 * geo->dst.height)
+ geo->src.height = 4 * geo->dst.height;
+ if (geo->dst.height >= 16 * geo->src.height)
+ geo->dst.height = 16 * geo->src.height;
+
+ /* setting scaling ratio */
+ geo->x_ratio = (geo->src.width << 16) / geo->dst.width;
+ geo->y_ratio = (geo->src.height << 16) / geo->dst.height;
+
+ /* adjust offsets */
+ geo->src.x_offset = min(geo->src.x_offset,
+ geo->src.full_width - geo->src.width);
+ geo->src.y_offset = min(geo->src.y_offset,
+ geo->src.full_height - geo->src.height);
+ geo->dst.x_offset = min(geo->dst.x_offset,
+ geo->dst.full_width - geo->dst.width);
+ geo->dst.y_offset = min(geo->dst.y_offset,
+ geo->dst.full_height - geo->dst.height);
+}
+
+/* PUBLIC API */
+
+struct mxr_layer *mxr_vp_layer_create(struct mxr_device *mdev, int cur_mxr,
+ int idx, int nr)
+{
+ struct mxr_layer *layer;
+ int ret;
+ struct mxr_layer_ops ops = {
+ .release = mxr_vp_layer_release,
+ .buffer_set = mxr_vp_buffer_set,
+ .stream_set = mxr_vp_stream_set,
+ .format_set = mxr_vp_format_set,
+ .fix_geometry = mxr_vp_fix_geometry,
+ };
+ char name[32];
+
+ snprintf(name, sizeof(name), "mxr%d_video%d", cur_mxr, idx);
+
+ layer = mxr_base_layer_create(mdev, idx, name, &ops);
+ if (layer == NULL) {
+ mxr_err(mdev, "failed to initialize layer(%d) base\n", idx);
+ goto fail;
+ }
+
+ layer->fmt_array = mxr_video_format;
+ layer->fmt_array_size = ARRAY_SIZE(mxr_video_format);
+ layer->minor = nr;
+ layer->type = MXR_LAYER_TYPE_VIDEO;
+
+ ret = mxr_base_layer_register(layer);
+ if (ret)
+ goto fail_layer;
+
+ layer->cur_mxr = cur_mxr;
+ return layer;
+
+fail_layer:
+ mxr_base_layer_release(layer);
+
+fail:
+ return NULL;
+}
--- /dev/null
+/* linux/arch/arm/mach-exynos4/include/mach/regs-hdmi.h
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
+ *
+ * HDMI register header file for Samsung TVOUT driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef SAMSUNG_REGS_HDMI_H
+#define SAMSUNG_REGS_HDMI_H
+
+/*
+ * Register part
+ */
+
+#define HDMI_CTRL_BASE(x) ((x) + 0x00000000)
+#define HDMI_CORE_BASE(x) ((x) + 0x00010000)
+#define HDMI_TG_BASE(x) ((x) + 0x00050000)
+
+/* Control registers */
+#define HDMI_INTC_CON HDMI_CTRL_BASE(0x0000)
+#define HDMI_INTC_FLAG HDMI_CTRL_BASE(0x0004)
+#define HDMI_HPD_STATUS HDMI_CTRL_BASE(0x000C)
+#define HDMI_PHY_RSTOUT HDMI_CTRL_BASE(0x0014)
+#define HDMI_PHY_VPLL HDMI_CTRL_BASE(0x0018)
+#define HDMI_PHY_CMU HDMI_CTRL_BASE(0x001C)
+#define HDMI_CORE_RSTOUT HDMI_CTRL_BASE(0x0020)
+
+/* Core registers */
+#define HDMI_CON_0 HDMI_CORE_BASE(0x0000)
+#define HDMI_CON_1 HDMI_CORE_BASE(0x0004)
+#define HDMI_CON_2 HDMI_CORE_BASE(0x0008)
+#define HDMI_SYS_STATUS HDMI_CORE_BASE(0x0010)
+#define HDMI_PHY_STATUS HDMI_CORE_BASE(0x0014)
+#define HDMI_STATUS_EN HDMI_CORE_BASE(0x0020)
+#define HDMI_HPD HDMI_CORE_BASE(0x0030)
+#define HDMI_MODE_SEL HDMI_CORE_BASE(0x0040)
+#define HDMI_BLUE_SCREEN_0 HDMI_CORE_BASE(0x0050)
+#define HDMI_BLUE_SCREEN_1 HDMI_CORE_BASE(0x0054)
+#define HDMI_BLUE_SCREEN_2 HDMI_CORE_BASE(0x0058)
+#define HDMI_H_BLANK_0 HDMI_CORE_BASE(0x00A0)
+#define HDMI_H_BLANK_1 HDMI_CORE_BASE(0x00A4)
+#define HDMI_V_BLANK_0 HDMI_CORE_BASE(0x00B0)
+#define HDMI_V_BLANK_1 HDMI_CORE_BASE(0x00B4)
+#define HDMI_V_BLANK_2 HDMI_CORE_BASE(0x00B8)
+#define HDMI_H_V_LINE_0 HDMI_CORE_BASE(0x00C0)
+#define HDMI_H_V_LINE_1 HDMI_CORE_BASE(0x00C4)
+#define HDMI_H_V_LINE_2 HDMI_CORE_BASE(0x00C8)
+#define HDMI_VSYNC_POL HDMI_CORE_BASE(0x00E4)
+#define HDMI_INT_PRO_MODE HDMI_CORE_BASE(0x00E8)
+#define HDMI_V_BLANK_F_0 HDMI_CORE_BASE(0x0110)
+#define HDMI_V_BLANK_F_1 HDMI_CORE_BASE(0x0114)
+#define HDMI_V_BLANK_F_2 HDMI_CORE_BASE(0x0118)
+#define HDMI_H_SYNC_GEN_0 HDMI_CORE_BASE(0x0120)
+#define HDMI_H_SYNC_GEN_1 HDMI_CORE_BASE(0x0124)
+#define HDMI_H_SYNC_GEN_2 HDMI_CORE_BASE(0x0128)
+#define HDMI_V_SYNC_GEN_1_0 HDMI_CORE_BASE(0x0130)
+#define HDMI_V_SYNC_GEN_1_1 HDMI_CORE_BASE(0x0134)
+#define HDMI_V_SYNC_GEN_1_2 HDMI_CORE_BASE(0x0138)
+#define HDMI_V_SYNC_GEN_2_0 HDMI_CORE_BASE(0x0140)
+#define HDMI_V_SYNC_GEN_2_1 HDMI_CORE_BASE(0x0144)
+#define HDMI_V_SYNC_GEN_2_2 HDMI_CORE_BASE(0x0148)
+#define HDMI_V_SYNC_GEN_3_0 HDMI_CORE_BASE(0x0150)
+#define HDMI_V_SYNC_GEN_3_1 HDMI_CORE_BASE(0x0154)
+#define HDMI_V_SYNC_GEN_3_2 HDMI_CORE_BASE(0x0158)
+#define HDMI_AVI_CON HDMI_CORE_BASE(0x0300)
+#define HDMI_AVI_BYTE(n) HDMI_CORE_BASE(0x0320 + 4 * (n))
+#define HDMI_DC_CONTROL HDMI_CORE_BASE(0x05C0)
+#define HDMI_VIDEO_PATTERN_GEN HDMI_CORE_BASE(0x05C4)
+#define HDMI_HPD_GEN HDMI_CORE_BASE(0x05C8)
+
+/* Timing generator registers */
+#define HDMI_TG_CMD HDMI_TG_BASE(0x0000)
+#define HDMI_TG_H_FSZ_L HDMI_TG_BASE(0x0018)
+#define HDMI_TG_H_FSZ_H HDMI_TG_BASE(0x001C)
+#define HDMI_TG_HACT_ST_L HDMI_TG_BASE(0x0020)
+#define HDMI_TG_HACT_ST_H HDMI_TG_BASE(0x0024)
+#define HDMI_TG_HACT_SZ_L HDMI_TG_BASE(0x0028)
+#define HDMI_TG_HACT_SZ_H HDMI_TG_BASE(0x002C)
+#define HDMI_TG_V_FSZ_L HDMI_TG_BASE(0x0030)
+#define HDMI_TG_V_FSZ_H HDMI_TG_BASE(0x0034)
+#define HDMI_TG_VSYNC_L HDMI_TG_BASE(0x0038)
+#define HDMI_TG_VSYNC_H HDMI_TG_BASE(0x003C)
+#define HDMI_TG_VSYNC2_L HDMI_TG_BASE(0x0040)
+#define HDMI_TG_VSYNC2_H HDMI_TG_BASE(0x0044)
+#define HDMI_TG_VACT_ST_L HDMI_TG_BASE(0x0048)
+#define HDMI_TG_VACT_ST_H HDMI_TG_BASE(0x004C)
+#define HDMI_TG_VACT_SZ_L HDMI_TG_BASE(0x0050)
+#define HDMI_TG_VACT_SZ_H HDMI_TG_BASE(0x0054)
+#define HDMI_TG_FIELD_CHG_L HDMI_TG_BASE(0x0058)
+#define HDMI_TG_FIELD_CHG_H HDMI_TG_BASE(0x005C)
+#define HDMI_TG_VACT_ST2_L HDMI_TG_BASE(0x0060)
+#define HDMI_TG_VACT_ST2_H HDMI_TG_BASE(0x0064)
+#define HDMI_TG_VSYNC_TOP_HDMI_L HDMI_TG_BASE(0x0078)
+#define HDMI_TG_VSYNC_TOP_HDMI_H HDMI_TG_BASE(0x007C)
+#define HDMI_TG_VSYNC_BOT_HDMI_L HDMI_TG_BASE(0x0080)
+#define HDMI_TG_VSYNC_BOT_HDMI_H HDMI_TG_BASE(0x0084)
+#define HDMI_TG_FIELD_TOP_HDMI_L HDMI_TG_BASE(0x0088)
+#define HDMI_TG_FIELD_TOP_HDMI_H HDMI_TG_BASE(0x008C)
+#define HDMI_TG_FIELD_BOT_HDMI_L HDMI_TG_BASE(0x0090)
+#define HDMI_TG_FIELD_BOT_HDMI_H HDMI_TG_BASE(0x0094)
+
+/*
+ * Bit definition part
+ */
+
+/* HDMI_INTC_CON */
+#define HDMI_INTC_EN_GLOBAL (1 << 6)
+#define HDMI_INTC_EN_HPD_PLUG (1 << 3)
+#define HDMI_INTC_EN_HPD_UNPLUG (1 << 2)
+
+/* HDMI_INTC_FLAG */
+#define HDMI_INTC_FLAG_HPD_PLUG (1 << 3)
+#define HDMI_INTC_FLAG_HPD_UNPLUG (1 << 2)
+
+/* HDMI_PHY_RSTOUT */
+#define HDMI_PHY_SW_RSTOUT (1 << 0)
+
+/* HDMI_CORE_RSTOUT */
+#define HDMI_CORE_SW_RSTOUT (1 << 0)
+
+/* HDMI_CON_0 */
+#define HDMI_BLUE_SCR_EN (1 << 5)
+#define HDMI_EN (1 << 0)
+
+/* HDMI_PHY_STATUS */
+#define HDMI_PHY_STATUS_READY (1 << 0)
+
+/* HDMI_MODE_SEL */
+#define HDMI_MODE_HDMI_EN (1 << 1)
+#define HDMI_MODE_DVI_EN (1 << 0)
+#define HDMI_MODE_MASK (3 << 0)
+
+/* HDMI_TG_CMD */
+#define HDMI_FIELD_EN (1 << 1)
+#define HDMI_TG_EN (1 << 0)
+
+#endif /* SAMSUNG_REGS_HDMI_H */
--- /dev/null
+/* linux/arch/arm/mach-exynos4/include/mach/regs-hdmi_14.h
+ *
+ * Copyright (c) 2010 Samsung Electronics
+ * http://www.samsung.com/
+ *
+ * HDMI register header file for Samsung TVOUT driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __ARCH_ARM_REGS_HDMI_H
+#define __ARCH_ARM_REGS_HDMI_H
+
+/*
+ * Register part
+ */
+
+#define S5P_HDMI_I2C_PHY_BASE(x) (x)
+
+#define HDMI_I2C_CON S5P_HDMI_I2C_PHY_BASE(0x0000)
+#define HDMI_I2C_STAT S5P_HDMI_I2C_PHY_BASE(0x0004)
+#define HDMI_I2C_ADD S5P_HDMI_I2C_PHY_BASE(0x0008)
+#define HDMI_I2C_DS S5P_HDMI_I2C_PHY_BASE(0x000c)
+#define HDMI_I2C_LC S5P_HDMI_I2C_PHY_BASE(0x0010)
+
+#define HDMI_CTRL_BASE(x) ((x) + 0x00000000)
+#define HDMI_CORE_BASE(x) ((x) + 0x00010000)
+#define HDMI_SPDIF_BASE(x) ((x) + 0x00030000)
+#define HDMI_I2S_BASE(x) ((x) + 0x00040000)
+#define HDMI_TG_BASE(x) ((x) + 0x00050000)
+#define HDMI_EFUSE_BASE(x) ((x) + 0x00060000)
+
+/* Control registers */
+#define HDMI_INTC_CON_0 HDMI_CTRL_BASE(0x0000)
+#define HDMI_INTC_FLAG_0 HDMI_CTRL_BASE(0x0004)
+#define HDMI_HDCP_KEY_LOAD HDMI_CTRL_BASE(0x0008)
+#define HDMI_HPD_STATUS HDMI_CTRL_BASE(0x000C)
+
+#define HDMI_INTC_CON_1 HDMI_CTRL_BASE(0x0010)
+#define HDMI_INTC_FLAG_1 HDMI_CTRL_BASE(0x0014)
+#define HDMI_PHY_STATUS_0 HDMI_CTRL_BASE(0x0020)
+#define HDMI_PHY_STATUS_PLL HDMI_CTRL_BASE(0x0028)
+#define HDMI_PHY_CON_0 HDMI_CTRL_BASE(0x0030)
+
+#define HDMI_HPD_CTRL HDMI_CTRL_BASE(0x0040)
+#define HDMI_HPD_TH_(n) HDMI_CTRL_BASE(0x0050 + 4 * (n))
+
+#define HDMI_AUDIO_CLKSEL HDMI_CTRL_BASE(0x0070)
+#define HDMI_PHY_RSTOUT HDMI_CTRL_BASE(0x0074)
+#define HDMI_PHY_VPLL HDMI_CTRL_BASE(0x0078)
+#define HDMI_PHY_CMU HDMI_CTRL_BASE(0x007C)
+#define HDMI_CORE_RSTOUT HDMI_CTRL_BASE(0x080)
+
+/* HDMI core registers */
+#define HDMI_CON_0 HDMI_CORE_BASE(0x000)
+#define HDMI_CON_1 HDMI_CORE_BASE(0x004)
+#define HDMI_CON_2 HDMI_CORE_BASE(0x008)
+#define HDMI_SIM_MODE HDMI_CORE_BASE(0x00C)
+#define HDMI_STATUS HDMI_CORE_BASE(0x010)
+#define HDMI_PHY_STATUS HDMI_CORE_BASE(0x014)
+#define HDMI_STATUS_EN HDMI_CORE_BASE(0x020)
+#define HDMI_HPD HDMI_CORE_BASE(0x030)
+#define HDMI_MODE_SEL HDMI_CORE_BASE(0x040)
+#define HDMI_ENC_EN HDMI_CORE_BASE(0x044)
+
+/* Video related registers */
+#define HDMI_YMAX HDMI_CORE_BASE(0x060)
+#define HDMI_YMIN HDMI_CORE_BASE(0x064)
+#define HDMI_CMAX HDMI_CORE_BASE(0x068)
+#define HDMI_CMIN HDMI_CORE_BASE(0x06c)
+
+#define HDMI_DI_PREFIX HDMI_CORE_BASE(0x078)
+#define HDMI_VBI_ST_MG HDMI_CORE_BASE(0x080)
+#define HDMI_END_MG HDMI_CORE_BASE(0x084)
+
+#define HDMI_AUTH_ST_MG0 HDMI_CORE_BASE(0x090)
+#define HDMI_AUTH_ST_MG1 HDMI_CORE_BASE(0x094)
+#define HDMI_AUTH_END_MG0 HDMI_CORE_BASE(0x098)
+#define HDMI_AUTH_END_MG1 HDMI_CORE_BASE(0x09C)
+
+#define HDMI_H_BLANK_0 HDMI_CORE_BASE(0x0a0)
+#define HDMI_H_BLANK_1 HDMI_CORE_BASE(0x0a4)
+
+#define HDMI_V2_BLANK_0 HDMI_CORE_BASE(0x0b0)
+#define HDMI_V2_BLANK_1 HDMI_CORE_BASE(0x0b4)
+#define HDMI_V1_BLANK_0 HDMI_CORE_BASE(0x0b8)
+#define HDMI_V1_BLANK_1 HDMI_CORE_BASE(0x0bC)
+
+#define HDMI_V_LINE_0 HDMI_CORE_BASE(0x0c0)
+#define HDMI_V_LINE_1 HDMI_CORE_BASE(0x0c4)
+#define HDMI_H_LINE_0 HDMI_CORE_BASE(0x0c8)
+#define HDMI_H_LINE_1 HDMI_CORE_BASE(0x0cC)
+#define HDMI_HSYNC_POL HDMI_CORE_BASE(0x0E0)
+
+#define HDMI_VSYNC_POL HDMI_CORE_BASE(0x0e4)
+#define HDMI_INT_PRO_MODE HDMI_CORE_BASE(0x0e8)
+
+#define HDMI_V_BLANK_F0_0 HDMI_CORE_BASE(0x110)
+#define HDMI_V_BLANK_F0_1 HDMI_CORE_BASE(0x114)
+#define HDMI_V_BLANK_F1_0 HDMI_CORE_BASE(0x118)
+#define HDMI_V_BLANK_F1_1 HDMI_CORE_BASE(0x11C)
+
+#define HDMI_H_SYNC_START_0 HDMI_CORE_BASE(0x120)
+#define HDMI_H_SYNC_START_1 HDMI_CORE_BASE(0x124)
+#define HDMI_H_SYNC_END_0 HDMI_CORE_BASE(0x128)
+#define HDMI_H_SYNC_END_1 HDMI_CORE_BASE(0x12C)
+
+#define HDMI_V_SYNC_LINE_BEF_2_0 HDMI_CORE_BASE(0x130)
+#define HDMI_V_SYNC_LINE_BEF_2_1 HDMI_CORE_BASE(0x134)
+#define HDMI_V_SYNC_LINE_BEF_1_0 HDMI_CORE_BASE(0x138)
+#define HDMI_V_SYNC_LINE_BEF_1_1 HDMI_CORE_BASE(0x13C)
+
+#define HDMI_V_SYNC_LINE_AFT_2_0 HDMI_CORE_BASE(0x140)
+#define HDMI_V_SYNC_LINE_AFT_2_1 HDMI_CORE_BASE(0x144)
+#define HDMI_V_SYNC_LINE_AFT_1_0 HDMI_CORE_BASE(0x148)
+#define HDMI_V_SYNC_LINE_AFT_1_1 HDMI_CORE_BASE(0x14C)
+
+#define HDMI_V_SYNC_LINE_AFT_PXL_2_0 HDMI_CORE_BASE(0x150)
+#define HDMI_V_SYNC_LINE_AFT_PXL_2_1 HDMI_CORE_BASE(0x154)
+#define HDMI_V_SYNC_LINE_AFT_PXL_1_0 HDMI_CORE_BASE(0x158)
+#define HDMI_V_SYNC_LINE_AFT_PXL_1_1 HDMI_CORE_BASE(0x15C)
+
+#define HDMI_V_BLANK_F2_0 HDMI_CORE_BASE(0x160)
+#define HDMI_V_BLANK_F2_1 HDMI_CORE_BASE(0x164)
+#define HDMI_V_BLANK_F3_0 HDMI_CORE_BASE(0x168)
+#define HDMI_V_BLANK_F3_1 HDMI_CORE_BASE(0x16C)
+#define HDMI_V_BLANK_F4_0 HDMI_CORE_BASE(0x170)
+#define HDMI_V_BLANK_F4_1 HDMI_CORE_BASE(0x174)
+#define HDMI_V_BLANK_F5_0 HDMI_CORE_BASE(0x178)
+#define HDMI_V_BLANK_F5_1 HDMI_CORE_BASE(0x17C)
+
+#define HDMI_V_SYNC_LINE_AFT_3_0 HDMI_CORE_BASE(0x180)
+#define HDMI_V_SYNC_LINE_AFT_3_1 HDMI_CORE_BASE(0x184)
+#define HDMI_V_SYNC_LINE_AFT_4_0 HDMI_CORE_BASE(0x188)
+#define HDMI_V_SYNC_LINE_AFT_4_1 HDMI_CORE_BASE(0x18C)
+#define HDMI_V_SYNC_LINE_AFT_5_0 HDMI_CORE_BASE(0x190)
+#define HDMI_V_SYNC_LINE_AFT_5_1 HDMI_CORE_BASE(0x194)
+#define HDMI_V_SYNC_LINE_AFT_6_0 HDMI_CORE_BASE(0x198)
+#define HDMI_V_SYNC_LINE_AFT_6_1 HDMI_CORE_BASE(0x19C)
+
+#define HDMI_V_SYNC_LINE_AFT_PXL_3_0 HDMI_CORE_BASE(0x1A0)
+#define HDMI_V_SYNC_LINE_AFT_PXL_3_1 HDMI_CORE_BASE(0x1A4)
+#define HDMI_V_SYNC_LINE_AFT_PXL_4_0 HDMI_CORE_BASE(0x1A8)
+#define HDMI_V_SYNC_LINE_AFT_PXL_4_1 HDMI_CORE_BASE(0x1AC)
+#define HDMI_V_SYNC_LINE_AFT_PXL_5_0 HDMI_CORE_BASE(0x1B0)
+#define HDMI_V_SYNC_LINE_AFT_PXL_5_1 HDMI_CORE_BASE(0x1B4)
+#define HDMI_V_SYNC_LINE_AFT_PXL_6_0 HDMI_CORE_BASE(0x1B8)
+#define HDMI_V_SYNC_LINE_AFT_PXL_6_1 HDMI_CORE_BASE(0x1BC)
+
+#define HDMI_VACT_SPACE_1_0 HDMI_CORE_BASE(0x1C0)
+#define HDMI_VACT_SPACE_1_1 HDMI_CORE_BASE(0x1C4)
+#define HDMI_VACT_SPACE_2_0 HDMI_CORE_BASE(0x1C8)
+#define HDMI_VACT_SPACE_2_1 HDMI_CORE_BASE(0x1CC)
+#define HDMI_VACT_SPACE_3_0 HDMI_CORE_BASE(0x1D0)
+#define HDMI_VACT_SPACE_3_1 HDMI_CORE_BASE(0x1D4)
+#define HDMI_VACT_SPACE_4_0 HDMI_CORE_BASE(0x1D8)
+#define HDMI_VACT_SPACE_4_1 HDMI_CORE_BASE(0x1DC)
+#define HDMI_VACT_SPACE_5_0 HDMI_CORE_BASE(0x1E0)
+#define HDMI_VACT_SPACE_5_1 HDMI_CORE_BASE(0x1E4)
+#define HDMI_VACT_SPACE_6_0 HDMI_CORE_BASE(0x1E8)
+#define HDMI_VACT_SPACE_6_1 HDMI_CORE_BASE(0x1EC)
+#define HDMI_CSC_MUX HDMI_CORE_BASE(0x1F0)
+#define HDMI_SYNC_GEN_MUX HDMI_CORE_BASE(0x1F4)
+
+#define HDMI_GCP_CON HDMI_CORE_BASE(0x200)
+#define HDMI_GCP_CON_EX HDMI_CORE_BASE(0x204)
+#define HDMI_GCP_BYTE1 HDMI_CORE_BASE(0x210)
+#define HDMI_GCP_BYTE2 HDMI_CORE_BASE(0x214)
+#define HDMI_GCP_BYTE3 HDMI_CORE_BASE(0x218)
+
+/* Audio related registers */
+#define HDMI_ASP_CON HDMI_CORE_BASE(0x300)
+#define HDMI_ASP_SP_FLAT HDMI_CORE_BASE(0x304)
+#define HDMI_ASP_CHCFG0 HDMI_CORE_BASE(0x310)
+#define HDMI_ASP_CHCFG1 HDMI_CORE_BASE(0x314)
+#define HDMI_ASP_CHCFG2 HDMI_CORE_BASE(0x318)
+#define HDMI_ASP_CHCFG3 HDMI_CORE_BASE(0x31c)
+
+#define HDMI_ACR_CON HDMI_CORE_BASE(0x400)
+#define HDMI_ACR_MCTS0 HDMI_CORE_BASE(0x410)
+#define HDMI_ACR_MCTS1 HDMI_CORE_BASE(0x414)
+#define HDMI_ACR_MCTS2 HDMI_CORE_BASE(0x418)
+#define HDMI_ACR_CTS0 HDMI_CORE_BASE(0x420)
+#define HDMI_ACR_CTS1 HDMI_CORE_BASE(0x424)
+#define HDMI_ACR_CTS2 HDMI_CORE_BASE(0x428)
+#define HDMI_ACR_N0 HDMI_CORE_BASE(0x430)
+#define HDMI_ACR_N1 HDMI_CORE_BASE(0x434)
+#define HDMI_ACR_N2 HDMI_CORE_BASE(0x438)
+#define HDMI_ACR_LSB2 HDMI_CORE_BASE(0x440)
+#define HDMI_ACR_TXCNT HDMI_CORE_BASE(0x444)
+#define HDMI_ACR_TXINTERNAL HDMI_CORE_BASE(0x448)
+#define HDMI_ACR_CTS_OFFSET HDMI_CORE_BASE(0x44c)
+
+#define HDMI_ACP_CON HDMI_CORE_BASE(0x500)
+#define HDMI_ACP_TYPE HDMI_CORE_BASE(0x514)
+/* offset of HDMI_ACP_DATA00 ~ 16 : 0x0520 ~ 0x0560 */
+#define HDMI_ACP_DATA(n) HDMI_CORE_BASE(0x520 + 4 * (n))
+
+#define HDMI_ISRC_CON HDMI_CORE_BASE(0x600)
+#define HDMI_ISRC1_HEADER1 HDMI_CORE_BASE(0x614)
+/* offset of HDMI_ISRC1_DATA00 ~ 15 : 0x0620 ~ 0x065C */
+#define HDMI_ISRC1_DATA(n) HDMI_CORE_BASE(0x620 + 4 * (n))
+/* offset of HDMI_ISRC2_DATA00 ~ 15 : 0x06A0 ~ 0x06DC */
+#define HDMI_ISRC2_DATA(n) HDMI_CORE_BASE(0x6A0 + 4 * (n))
+
+#define HDMI_AVI_CON HDMI_CORE_BASE(0x700)
+#define HDMI_AVI_HEADER0 HDMI_CORE_BASE(0x710)
+#define HDMI_AVI_HEADER1 HDMI_CORE_BASE(0x714)
+#define HDMI_AVI_HEADER2 HDMI_CORE_BASE(0x718)
+#define HDMI_AVI_CHECK_SUM HDMI_CORE_BASE(0x71C)
+/* offset of HDMI_AVI_BYTE1 ~ 13 : 0x0720 ~ 0x0750 */
+#define HDMI_AVI_BYTE(n) HDMI_CORE_BASE(0x720 + 4 * (n - 1))
+
+#define HDMI_AUI_CON HDMI_CORE_BASE(0x800)
+#define HDMI_AUI_HEADER0 HDMI_CORE_BASE(0x810)
+#define HDMI_AUI_HEADER1 HDMI_CORE_BASE(0x814)
+#define HDMI_AUI_HEADER2 HDMI_CORE_BASE(0x818)
+#define HDMI_AUI_CHECK_SUM HDMI_CORE_BASE(0x81C)
+/* offset of HDMI_AUI_BYTE1 ~ 12 : 0x0820 ~ 0x084C */
+#define HDMI_AUI_BYTE(n) HDMI_CORE_BASE(0x820 + 4 * (n - 1))
+
+#define HDMI_MPG_CON HDMI_CORE_BASE(0x900)
+#define HDMI_MPG_CHECK_SUM HDMI_CORE_BASE(0x91C)
+/* offset of HDMI_MPG_BYTE1 ~ 6 : 0x0920 ~ 0x0934 */
+#define HDMI_MPG_BYTE(n) HDMI_CORE_BASE(0x920 + 4 * (n - 1))
+
+#define HDMI_SPD_CON HDMI_CORE_BASE(0xA00)
+#define HDMI_SPD_HEADER0 HDMI_CORE_BASE(0xA10)
+#define HDMI_SPD_HEADER1 HDMI_CORE_BASE(0xA14)
+#define HDMI_SPD_HEADER2 HDMI_CORE_BASE(0xA18)
+/* offset of HDMI_SPD_DATA00 ~ 27 : 0x0A20 ~ 0x0A8C */
+#define HDMI_SPD_DATA0(n) HDMI_CORE_BASE(0xA20 + 4 * (n))
+
+#define HDMI_GAMUT_CON HDMI_CORE_BASE(0xB00)
+#define HDMI_GAMUT_HEADER0 HDMI_CORE_BASE(0xB10)
+#define HDMI_GAMUT_HEADER1 HDMI_CORE_BASE(0xB14)
+#define HDMI_GAMUT_HEADER2 HDMI_CORE_BASE(0xB18)
+/* offset of HDMI_GAMUT_METADATA00 ~ 27 : 0x0B20 ~ 0x0B8C */
+#define HDMI_GAMUT_METADATA(n) HDMI_CORE_BASE(0xB20 + 4 * (n))
+
+#define HDMI_VSI_CON HDMI_CORE_BASE(0xC00)
+#define HDMI_VSI_HEADER0 HDMI_CORE_BASE(0xC10)
+#define HDMI_VSI_HEADER1 HDMI_CORE_BASE(0xC14)
+#define HDMI_VSI_HEADER2 HDMI_CORE_BASE(0xC18)
+/* offset of HDMI_VSI_DATA00 ~ 27 : 0x0C20 ~ 0x0C8C */
+#define HDMI_VSI_DATA(n) HDMI_CORE_BASE(0xC20 + 4 * (n))
+
+#define HDMI_DC_CONTROL HDMI_CORE_BASE(0xD00)
+#define HDMI_VIDEO_PATTERN_GEN HDMI_CORE_BASE(0xD04)
+#define HDMI_HPD_GEN0 HDMI_CORE_BASE(0xD08)
+#define HDMI_HPD_GEN1 HDMI_CORE_BASE(0xD0C)
+#define HDMI_HPD_GEN2 HDMI_CORE_BASE(0xD10)
+#define HDMI_HPD_GEN3 HDMI_CORE_BASE(0xD14)
+
+#define HDMI_DIM_CON HDMI_CORE_BASE(0xD30)
+
+/* HDCP related registers */
+/* offset of HDMI_HDCP_SHA1_00 ~ 19 : 0x7000 ~ 0x704C */
+#define HDMI_HDCP_SHA1_(n) HDMI_CORE_BASE(0x7000 + 4 * (n))
+
+/* offset of HDMI_HDCP_KSV_LIST_0 ~ 4 : 0x7050 ~ 0x7060 */
+#define HDMI_HDCP_KSV_LIST_(n) HDMI_CORE_BASE(0x7050 + 4 * (n))
+
+#define HDMI_HDCP_KSV_LIST_CON HDMI_CORE_BASE(0x7064)
+#define HDMI_HDCP_SHA_RESULT HDMI_CORE_BASE(0x7070)
+#define HDMI_HDCP_CTRL1 HDMI_CORE_BASE(0x7080)
+#define HDMI_HDCP_CTRL2 HDMI_CORE_BASE(0x7084)
+#define HDMI_HDCP_CHECK_RESULT HDMI_CORE_BASE(0x7090)
+
+/* offset of HDMI_HDCP_BKSV_0 ~ 4 : 0x70A0 ~ 0x70B0 */
+#define HDMI_HDCP_BKSV_(n) HDMI_CORE_BASE(0x70A0 + 4 * (n))
+/* offset of HDMI_HDCP_AKSV_0 ~ 4 : 0x70C0 ~ 0x70D0 */
+#define HDMI_HDCP_AKSV_(n) HDMI_CORE_BASE(0x70C0 + 4 * (n))
+
+/* offset of HDMI_HDCP_AN_0 ~ 7 : 0x70E0 ~ 0x70FC */
+#define HDMI_HDCP_AN_(n) HDMI_CORE_BASE(0x70E0 + 4 * (n))
+
+#define HDMI_HDCP_BCAPS HDMI_CORE_BASE(0x7100)
+#define HDMI_HDCP_BSTATUS_0 HDMI_CORE_BASE(0x7110)
+#define HDMI_HDCP_BSTATUS_1 HDMI_CORE_BASE(0x7114)
+#define HDMI_HDCP_RI_0 HDMI_CORE_BASE(0x7140)
+#define HDMI_HDCP_RI_1 HDMI_CORE_BASE(0x7144)
+#define HDMI_HDCP_I2C_INT HDMI_CORE_BASE(0x7180)
+#define HDMI_HDCP_AN_INT HDMI_CORE_BASE(0x7190)
+#define HDMI_HDCP_WDT_INT HDMI_CORE_BASE(0x71a0)
+#define HDMI_HDCP_RI_INT HDMI_CORE_BASE(0x71b0)
+
+#define HDMI_HDCP_RI_COMPARE_0 HDMI_CORE_BASE(0x71d0)
+#define HDMI_HDCP_RI_COMPARE_1 HDMI_CORE_BASE(0x71d4)
+#define HDMI_HDCP_FRAME_COUNT HDMI_CORE_BASE(0x71e0)
+
+#define HDMI_RGB_ROUND_EN HDMI_CORE_BASE(0xD500)
+
+#define HDMI_VACT_SPACE_R_0 HDMI_CORE_BASE(0xD504)
+#define HDMI_VACT_SPACE_R_1 HDMI_CORE_BASE(0xD508)
+
+#define HDMI_VACT_SPACE_G_0 HDMI_CORE_BASE(0xD50C)
+#define HDMI_VACT_SPACE_G_1 HDMI_CORE_BASE(0xD510)
+
+#define HDMI_VACT_SPACE_B_0 HDMI_CORE_BASE(0xD514)
+#define HDMI_VACT_SPACE_B_1 HDMI_CORE_BASE(0xD518)
+
+#define HDMI_BLUE_SCREEN_B_0 HDMI_CORE_BASE(0xD520)
+#define HDMI_BLUE_SCREEN_B_1 HDMI_CORE_BASE(0xD524)
+#define HDMI_BLUE_SCREEN_G_0 HDMI_CORE_BASE(0xD528)
+#define HDMI_BLUE_SCREEN_G_1 HDMI_CORE_BASE(0xD52C)
+#define HDMI_BLUE_SCREEN_R_0 HDMI_CORE_BASE(0xD530)
+#define HDMI_BLUE_SCREEN_R_1 HDMI_CORE_BASE(0xD534)
+
+/* SPDIF registers */
+#define HDMI_SPDIFIN_CLK_CTRL HDMI_SPDIF_BASE(0x000)
+#define HDMI_SPDIFIN_OP_CTRL HDMI_SPDIF_BASE(0x004)
+#define HDMI_SPDIFIN_IRQ_MASK HDMI_SPDIF_BASE(0x008)
+#define HDMI_SPDIFIN_IRQ_STATUS HDMI_SPDIF_BASE(0x00c)
+#define HDMI_SPDIFIN_CONFIG_1 HDMI_SPDIF_BASE(0x010)
+#define HDMI_SPDIFIN_CONFIG_2 HDMI_SPDIF_BASE(0x014)
+#define HDMI_SPDIFIN_USER_VALUE_1 HDMI_SPDIF_BASE(0x020)
+#define HDMI_SPDIFIN_USER_VALUE_2 HDMI_SPDIF_BASE(0x024)
+#define HDMI_SPDIFIN_USER_VALUE_3 HDMI_SPDIF_BASE(0x028)
+#define HDMI_SPDIFIN_USER_VALUE_4 HDMI_SPDIF_BASE(0x02c)
+#define HDMI_SPDIFIN_CH_STATUS_0_1 HDMI_SPDIF_BASE(0x030)
+#define HDMI_SPDIFIN_CH_STATUS_0_2 HDMI_SPDIF_BASE(0x034)
+#define HDMI_SPDIFIN_CH_STATUS_0_3 HDMI_SPDIF_BASE(0x038)
+#define HDMI_SPDIFIN_CH_STATUS_0_4 HDMI_SPDIF_BASE(0x03c)
+#define HDMI_SPDIFIN_CH_STATUS_1 HDMI_SPDIF_BASE(0x040)
+#define HDMI_SPDIFIN_FRAME_PERIOD_1 HDMI_SPDIF_BASE(0x048)
+#define HDMI_SPDIFIN_FRAME_PERIOD_2 HDMI_SPDIF_BASE(0x04c)
+#define HDMI_SPDIFIN_PC_INFO_1 HDMI_SPDIF_BASE(0x050)
+#define HDMI_SPDIFIN_PC_INFO_2 HDMI_SPDIF_BASE(0x054)
+#define HDMI_SPDIFIN_PD_INFO_1 HDMI_SPDIF_BASE(0x058)
+#define HDMI_SPDIFIN_PD_INFO_2 HDMI_SPDIF_BASE(0x05c)
+#define HDMI_SPDIFIN_DATA_BUF_0_1 HDMI_SPDIF_BASE(0x060)
+#define HDMI_SPDIFIN_DATA_BUF_0_2 HDMI_SPDIF_BASE(0x064)
+#define HDMI_SPDIFIN_DATA_BUF_0_3 HDMI_SPDIF_BASE(0x068)
+#define HDMI_SPDIFIN_USER_BUF_0 HDMI_SPDIF_BASE(0x06c)
+#define HDMI_SPDIFIN_DATA_BUF_1_1 HDMI_SPDIF_BASE(0x070)
+#define HDMI_SPDIFIN_DATA_BUF_1_2 HDMI_SPDIF_BASE(0x074)
+#define HDMI_SPDIFIN_DATA_BUF_1_3 HDMI_SPDIF_BASE(0x078)
+#define HDMI_SPDIFIN_USER_BUF_1 HDMI_SPDIF_BASE(0x07c)
+
+/* I2S registers */
+#define HDMI_I2S_CLK_CON HDMI_I2S_BASE(0x000)
+#define HDMI_I2S_CON_1 HDMI_I2S_BASE(0x004)
+#define HDMI_I2S_CON_2 HDMI_I2S_BASE(0x008)
+#define HDMI_I2S_PIN_SEL_0 HDMI_I2S_BASE(0x00c)
+#define HDMI_I2S_PIN_SEL_1 HDMI_I2S_BASE(0x010)
+#define HDMI_I2S_PIN_SEL_2 HDMI_I2S_BASE(0x014)
+#define HDMI_I2S_PIN_SEL_3 HDMI_I2S_BASE(0x018)
+#define HDMI_I2S_DSD_CON HDMI_I2S_BASE(0x01c)
+#define HDMI_I2S_IN_MUX_CON HDMI_I2S_BASE(0x020)
+#define HDMI_I2S_CH_ST_CON HDMI_I2S_BASE(0x024)
+#define HDMI_I2S_CH_ST_0 HDMI_I2S_BASE(0x028)
+#define HDMI_I2S_CH_ST_1 HDMI_I2S_BASE(0x02c)
+#define HDMI_I2S_CH_ST_2 HDMI_I2S_BASE(0x030)
+#define HDMI_I2S_CH_ST_3 HDMI_I2S_BASE(0x034)
+#define HDMI_I2S_CH_ST_4 HDMI_I2S_BASE(0x038)
+#define HDMI_I2S_CH_ST_SH_0 HDMI_I2S_BASE(0x03c)
+#define HDMI_I2S_CH_ST_SH_1 HDMI_I2S_BASE(0x040)
+#define HDMI_I2S_CH_ST_SH_2 HDMI_I2S_BASE(0x044)
+#define HDMI_I2S_CH_ST_SH_3 HDMI_I2S_BASE(0x048)
+#define HDMI_I2S_CH_ST_SH_4 HDMI_I2S_BASE(0x04c)
+#define HDMI_I2S_VD_DATA HDMI_I2S_BASE(0x050)
+#define HDMI_I2S_MUX_CH HDMI_I2S_BASE(0x054)
+#define HDMI_I2S_MUX_CUV HDMI_I2S_BASE(0x058)
+#define HDMI_I2S_IRQ_MASK HDMI_I2S_BASE(0x05c)
+#define HDMI_I2S_IRQ_STATUS HDMI_I2S_BASE(0x060)
+
+#define HDMI_I2S_CH0_L_0 HDMI_I2S_BASE(0x0064)
+#define HDMI_I2S_CH0_L_1 HDMI_I2S_BASE(0x0068)
+#define HDMI_I2S_CH0_L_2 HDMI_I2S_BASE(0x006C)
+#define HDMI_I2S_CH0_L_3 HDMI_I2S_BASE(0x0070)
+#define HDMI_I2S_CH0_R_0 HDMI_I2S_BASE(0x0074)
+#define HDMI_I2S_CH0_R_1 HDMI_I2S_BASE(0x0078)
+#define HDMI_I2S_CH0_R_2 HDMI_I2S_BASE(0x007C)
+#define HDMI_I2S_CH0_R_3 HDMI_I2S_BASE(0x0080)
+#define HDMI_I2S_CH1_L_0 HDMI_I2S_BASE(0x0084)
+#define HDMI_I2S_CH1_L_1 HDMI_I2S_BASE(0x0088)
+#define HDMI_I2S_CH1_L_2 HDMI_I2S_BASE(0x008C)
+#define HDMI_I2S_CH1_L_3 HDMI_I2S_BASE(0x0090)
+#define HDMI_I2S_CH1_R_0 HDMI_I2S_BASE(0x0094)
+#define HDMI_I2S_CH1_R_1 HDMI_I2S_BASE(0x0098)
+#define HDMI_I2S_CH1_R_2 HDMI_I2S_BASE(0x009C)
+#define HDMI_I2S_CH1_R_3 HDMI_I2S_BASE(0x00A0)
+#define HDMI_I2S_CH2_L_0 HDMI_I2S_BASE(0x00A4)
+#define HDMI_I2S_CH2_L_1 HDMI_I2S_BASE(0x00A8)
+#define HDMI_I2S_CH2_L_2 HDMI_I2S_BASE(0x00AC)
+#define HDMI_I2S_CH2_L_3 HDMI_I2S_BASE(0x00B0)
+#define HDMI_I2S_CH2_R_0 HDMI_I2S_BASE(0x00B4)
+#define HDMI_I2S_CH2_R_1 HDMI_I2S_BASE(0x00B8)
+#define HDMI_I2S_CH2_R_2 HDMI_I2S_BASE(0x00BC)
+#define HDMI_I2S_CH2_R_3 HDMI_I2S_BASE(0x00C0)
+#define HDMI_I2S_CH3_L_0 HDMI_I2S_BASE(0x00C4)
+#define HDMI_I2S_CH3_L_1 HDMI_I2S_BASE(0x00C8)
+#define HDMI_I2S_CH3_L_2 HDMI_I2S_BASE(0x00CC)
+#define HDMI_I2S_CH3_R_0 HDMI_I2S_BASE(0x00D0)
+#define HDMI_I2S_CH3_R_1 HDMI_I2S_BASE(0x00D4)
+#define HDMI_I2S_CH3_R_2 HDMI_I2S_BASE(0x00D8)
+#define HDMI_I2S_CUV_L_R HDMI_I2S_BASE(0x00DC)
+
+/* Timing Generator registers */
+#define HDMI_TG_CMD HDMI_TG_BASE(0x000)
+#define HDMI_TG_CFG HDMI_TG_BASE(0x004)
+#define HDMI_TG_CB_SZ HDMI_TG_BASE(0x008)
+#define HDMI_TG_INDELAY_L HDMI_TG_BASE(0x00c)
+#define HDMI_TG_INDELAY_H HDMI_TG_BASE(0x010)
+#define HDMI_TG_POL_CTRL HDMI_TG_BASE(0x014)
+#define HDMI_TG_H_FSZ_L HDMI_TG_BASE(0x018)
+#define HDMI_TG_H_FSZ_H HDMI_TG_BASE(0x01c)
+#define HDMI_TG_HACT_ST_L HDMI_TG_BASE(0x020)
+#define HDMI_TG_HACT_ST_H HDMI_TG_BASE(0x024)
+#define HDMI_TG_HACT_SZ_L HDMI_TG_BASE(0x028)
+#define HDMI_TG_HACT_SZ_H HDMI_TG_BASE(0x02c)
+#define HDMI_TG_V_FSZ_L HDMI_TG_BASE(0x030)
+#define HDMI_TG_V_FSZ_H HDMI_TG_BASE(0x034)
+#define HDMI_TG_VSYNC_L HDMI_TG_BASE(0x038)
+#define HDMI_TG_VSYNC_H HDMI_TG_BASE(0x03c)
+#define HDMI_TG_VSYNC2_L HDMI_TG_BASE(0x040)
+#define HDMI_TG_VSYNC2_H HDMI_TG_BASE(0x044)
+#define HDMI_TG_VACT_ST_L HDMI_TG_BASE(0x048)
+#define HDMI_TG_VACT_ST_H HDMI_TG_BASE(0x04c)
+#define HDMI_TG_VACT_SZ_L HDMI_TG_BASE(0x050)
+#define HDMI_TG_VACT_SZ_H HDMI_TG_BASE(0x054)
+#define HDMI_TG_FIELD_CHG_L HDMI_TG_BASE(0x058)
+#define HDMI_TG_FIELD_CHG_H HDMI_TG_BASE(0x05c)
+#define HDMI_TG_VACT_ST2_L HDMI_TG_BASE(0x060)
+#define HDMI_TG_VACT_ST2_H HDMI_TG_BASE(0x064)
+#define HDMI_TG_VACT_ST3_L HDMI_TG_BASE(0x068)
+#define HDMI_TG_VACT_ST3_H HDMI_TG_BASE(0x06c)
+#define HDMI_TG_VACT_ST4_L HDMI_TG_BASE(0x070)
+#define HDMI_TG_VACT_ST4_H HDMI_TG_BASE(0x074)
+
+#define HDMI_TG_VSYNC_TOP_HDMI_L HDMI_TG_BASE(0x078)
+#define HDMI_TG_VSYNC_TOP_HDMI_H HDMI_TG_BASE(0x07c)
+#define HDMI_TG_VSYNC_BOT_HDMI_L HDMI_TG_BASE(0x080)
+#define HDMI_TG_VSYNC_BOT_HDMI_H HDMI_TG_BASE(0x084)
+#define HDMI_TG_FIELD_TOP_HDMI_L HDMI_TG_BASE(0x088)
+#define HDMI_TG_FIELD_TOP_HDMI_H HDMI_TG_BASE(0x08c)
+#define HDMI_TG_FIELD_BOT_HDMI_L HDMI_TG_BASE(0x090)
+#define HDMI_TG_FIELD_BOT_HDMI_H HDMI_TG_BASE(0x094)
+
+#define HDMI_TG_3D HDMI_TG_BASE(0x0F0)
+
+#define HDMI_MHL_HSYNC_WIDTH HDMI_TG_BASE(0x17C)
+#define HDMI_MHL_VSYNC_WIDTH HDMI_TG_BASE(0x180)
+#define HDMI_MHL_CLK_INV HDMI_TG_BASE(0x184)
+
+/* HDMI eFUSE registers */
+#define HDMI_EFUSE_CTRL HDMI_EFUSE_BASE(0x000)
+#define HDMI_EFUSE_STATUS HDMI_EFUSE_BASE(0x004)
+#define HDMI_EFUSE_ADDR_WIDTH HDMI_EFUSE_BASE(0x008)
+#define HDMI_EFUSE_SIGDEV_ASSERT HDMI_EFUSE_BASE(0x00c)
+#define HDMI_EFUSE_SIGDEV_DE_ASSERT HDMI_EFUSE_BASE(0x010)
+#define HDMI_EFUSE_PRCHG_ASSERT HDMI_EFUSE_BASE(0x014)
+#define HDMI_EFUSE_PRCHG_DE_ASSERT HDMI_EFUSE_BASE(0x018)
+#define HDMI_EFUSE_FSET_ASSERT HDMI_EFUSE_BASE(0x01c)
+#define HDMI_EFUSE_FSET_DE_ASSERT HDMI_EFUSE_BASE(0x020)
+#define HDMI_EFUSE_SENSING HDMI_EFUSE_BASE(0x024)
+#define HDMI_EFUSE_SCK_ASSERT HDMI_EFUSE_BASE(0x028)
+#define HDMI_EFUSE_SCK_DE_ASSERT HDMI_EFUSE_BASE(0x02c)
+#define HDMI_EFUSE_SDOUT_OFFSET HDMI_EFUSE_BASE(0x030)
+#define HDMI_EFUSE_READ_OFFSET HDMI_EFUSE_BASE(0x034)
+
+/*
+ * Bit definition part
+ */
+
+/* Control Register */
+
+/* HDMI_INTC_CON_0 */
+#define HDMI_INTC_POL (1 << 7)
+#define HDMI_INTC_EN_GLOBAL (1 << 6)
+#define HDMI_INTC_EN_I2S (1 << 5)
+#define HDMI_INTC_EN_CEC (1 << 4)
+#define HDMI_INTC_EN_HPD_PLUG (1 << 3)
+#define HDMI_INTC_EN_HPD_UNPLUG (1 << 2)
+#define HDMI_INTC_EN_SPDIF (1 << 1)
+#define HDMI_INTC_EN_HDCP (1 << 0)
+
+/* HDMI_INTC_FLAG_0 */
+#define HDMI_INTC_FLAG_I2S (1 << 5)
+#define HDMI_INTC_FLAG_CEC (1 << 4)
+#define HDMI_INTC_FLAG_HPD_PLUG (1 << 3)
+#define HDMI_INTC_FLAG_HPD_UNPLUG (1 << 2)
+#define HDMI_INTC_FLAG_SPDIF (1 << 1)
+#define HDMI_INTC_FLAG_HDCP (1 << 0)
+
+/* HDMI_HDCP_KEY_LOAD */
+#define HDMI_HDCP_KEY_LOAD_DONE (1 << 0)
+
+/* HDMI_HPD_STATUS */
+#define HDMI_HPD_VALUE (1 << 0)
+
+/* AUDIO_CLKSEL */
+#define HDMI_AUDIO_SPDIF_CLK (1 << 0)
+#define HDMI_AUDIO_PCLK (0 << 0)
+
+/* HDMI_PHY_RSTOUT */
+#define HDMI_PHY_SW_RSTOUT (1 << 0)
+
+/* HDMI_PHY_VPLL */
+#define HDMI_PHY_VPLL_LOCK (1 << 7)
+#define HDMI_PHY_VPLL_CODE_MASK (0x7 << 0)
+
+/* HDMI_PHY_CMU */
+#define HDMI_PHY_CMU_LOCK (1 << 7)
+#define HDMI_PHY_CMU_CODE_MASK (0x7 << 0)
+
+/* HDMI_CORE_RSTOUT */
+#define HDMI_CORE_SW_RSTOUT (1 << 0)
+
+/* Core Register */
+
+/* HDMI_CON_0 */
+#define HDMI_BLUE_SCR_EN (1 << 5)
+#define HDMI_BLUE_SCR_DIS (0 << 5)
+#define HDMI_ENC_OPTION (1 << 4)
+#define HDMI_ASP_ENABLE (1 << 2)
+#define HDMI_ASP_DISABLE (0 << 2)
+#define HDMI_PWDN_ENB_NORMAL (1 << 1)
+#define HDMI_PWDN_ENB_PD (0 << 1)
+#define HDMI_EN (1 << 0)
+#define HDMI_DIS (~(1 << 0))
+
+/* HDMI_CON_1 */
+#define HDMI_PX_LMT_CTRL_BYPASS (0 << 5)
+#define HDMI_PX_LMT_CTRL_RGB (1 << 5)
+#define HDMI_PX_LMT_CTRL_YPBPR (2 << 5)
+#define HDMI_PX_LMT_CTRL_RESERVED (3 << 5)
+#define HDMI_CON_PXL_REP_RATIO_MASK (1 << 1 | 1 << 0)
+#define HDMI_DOUBLE_PIXEL_REPETITION (0x01)
+
+/* HDMI_CON_2 */
+#define HDMI_VID_PREAMBLE_EN (0 << 5)
+#define HDMI_VID_PREAMBLE_DIS (1 << 5)
+#define HDMI_GUARD_BAND_EN (0 << 1)
+#define HDMI_GUARD_BAND_DIS (1 << 1)
+
+/* STATUS */
+#define HDMI_AUTHEN_ACK_AUTH (1 << 7)
+#define HDMI_AUTHEN_ACK_NOT (0 << 7)
+#define HDMI_AUD_FIFO_OVF_FULL (1 << 6)
+#define HDMI_AUD_FIFO_OVF_NOT (0 << 6)
+#define HDMI_UPDATE_RI_INT_OCC (1 << 4)
+#define HDMI_UPDATE_RI_INT_NOT (0 << 4)
+#define HDMI_UPDATE_RI_INT_CLEAR (1 << 4)
+#define HDMI_UPDATE_PJ_INT_OCC (1 << 3)
+#define HDMI_UPDATE_PJ_INT_NOT (0 << 3)
+#define HDMI_UPDATE_PJ_INT_CLEAR (1 << 3)
+#define HDMI_WRITE_INT_OCC (1 << 2)
+#define HDMI_WRITE_INT_NOT (0 << 2)
+#define HDMI_WRITE_INT_CLEAR (1 << 2)
+#define HDMI_WATCHDOG_INT_OCC (1 << 1)
+#define HDMI_WATCHDOG_INT_NOT (0 << 1)
+#define HDMI_WATCHDOG_INT_CLEAR (1 << 1)
+#define HDMI_WTFORACTIVERX_INT_OCC (1)
+#define HDMI_WTFORACTIVERX_INT_NOT (0)
+#define HDMI_WTFORACTIVERX_INT_CLEAR (1)
+
+/* PHY_STATUS */
+#define HDMI_PHY_STATUS_READY (1)
+
+/* HDMI_MODE_SEL */
+#define HDMI_MODE_HDMI_EN (1 << 1)
+#define HDMI_MODE_DVI_EN (1 << 0)
+#define HDMI_MODE_MASK (3 << 0)
+
+/* STATUS_EN */
+#define HDMI_AUD_FIFO_OVF_EN (1 << 6)
+#define HDMI_AUD_FIFO_OVF_DIS (0 << 6)
+#define HDMI_UPDATE_RI_INT_EN (1 << 4)
+#define HDMI_UPDATE_RI_INT_DIS (0 << 4)
+#define HDMI_UPDATE_PJ_INT_EN (1 << 3)
+#define HDMI_UPDATE_PJ_INT_DIS (0 << 3)
+#define HDMI_WRITE_INT_EN (1 << 2)
+#define HDMI_WRITE_INT_DIS (0 << 2)
+#define HDMI_WATCHDOG_INT_EN (1 << 1)
+#define HDMI_WATCHDOG_INT_DIS (0 << 1)
+#define HDMI_WTFORACTIVERX_INT_EN (1)
+#define HDMI_WTFORACTIVERX_INT_DIS (0)
+#define HDMI_INT_EN_ALL (HDMI_UPDATE_RI_INT_EN|\
+ HDMI_UPDATE_PJ_INT_DIS|\
+ HDMI_WRITE_INT_EN|\
+ HDMI_WATCHDOG_INT_EN|\
+ HDMI_WTFORACTIVERX_INT_EN)
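+/* note: despite its name, HDMI_INT_EN_ALL leaves the PJ update
+ * interrupt disabled (HDMI_UPDATE_PJ_INT_DIS expands to 0 << 3) */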
+#define HDMI_INT_DIS_ALL (~0x1F)
+
+/* HPD */
+#define HDMI_SW_HPD_PLUGGED (1 << 1)
+#define HDMI_SW_HPD_UNPLUGGED (0 << 1)
+#define HDMI_HPD_SEL_I_HPD (1)
+#define HDMI_HPD_SEL_SW_HPD (0)
+
+/* MODE_SEL */
+#define HDMI_MODE_EN (1 << 1)
+#define HDMI_MODE_DIS (0 << 1)
+#define HDMI_DVI_MODE_EN (1)
+#define HDMI_DVI_MODE_DIS (0)
+
+/* ENC_EN */
+#define HDMI_HDCP_ENC_ENABLE (1)
+#define HDMI_HDCP_ENC_DISABLE (0)
+
+/* Video Related Register */
+
+/* BLUESCREEN_0/1/2 */
+
+/* HDMI_YMAX/YMIN/CMAX/CMIN */
+
+/* H_BLANK_0/1 */
+
+/* V_BLANK_0/1/2 */
+
+/* H_V_LINE_0/1/2 */
+
+/* VSYNC_POL */
+#define HDMI_V_SYNC_POL_ACT_LOW (1)
+#define HDMI_V_SYNC_POL_ACT_HIGH (0)
+
+/* INT_PRO_MODE */
+#define HDMI_INTERLACE_MODE (1)
+#define HDMI_PROGRESSIVE_MODE (0)
+
+/* V_BLANK_F_0/1/2 */
+
+/* H_SYNC_GEN_0/1/2 */
+
+/* V_SYNC_GEN1_0/1/2 */
+
+/* V_SYNC_GEN2_0/1/2 */
+
+/* V_SYNC_GEN3_0/1/2 */
+
+/* Audio Related Packet Register */
+
+/* ASP_CON */
+#define HDMI_AUD_DST_DOUBLE (1 << 7)
+#define HDMI_AUD_NO_DST_DOUBLE (0 << 7)
+#define HDMI_AUD_TYPE_SAMPLE (0 << 5)
+#define HDMI_AUD_TYPE_ONE_BIT (1 << 5)
+#define HDMI_AUD_TYPE_HBR (2 << 5)
+#define HDMI_AUD_TYPE_DST (3 << 5)
+#define HDMI_AUD_MODE_TWO_CH (0 << 4)
+#define HDMI_AUD_MODE_MULTI_CH (1 << 4)
+#define HDMI_AUD_SP_AUD3_EN (1 << 3)
+#define HDMI_AUD_SP_AUD2_EN (1 << 2)
+#define HDMI_AUD_SP_AUD1_EN (1 << 1)
+#define HDMI_AUD_SP_AUD0_EN (1 << 0)
+#define HDMI_AUD_SP_ALL_DIS (0 << 0)
+
+#define HDMI_AUD_SET_SP_PRE(x) ((x) & 0xF)
+
+/* ASP_SP_FLAT */
+#define HDMI_ASP_SP_FLAT_AUD_SAMPLE (0)
+
+/* ASP_CHCFG0/1/2/3 */
+#define HDMI_SPK3R_SEL_I_PCM0L (0 << 27)
+#define HDMI_SPK3R_SEL_I_PCM0R (1 << 27)
+#define HDMI_SPK3R_SEL_I_PCM1L (2 << 27)
+#define HDMI_SPK3R_SEL_I_PCM1R (3 << 27)
+#define HDMI_SPK3R_SEL_I_PCM2L (4 << 27)
+#define HDMI_SPK3R_SEL_I_PCM2R (5 << 27)
+#define HDMI_SPK3R_SEL_I_PCM3L (6 << 27)
+#define HDMI_SPK3R_SEL_I_PCM3R (7 << 27)
+#define HDMI_SPK3L_SEL_I_PCM0L (0 << 24)
+#define HDMI_SPK3L_SEL_I_PCM0R (1 << 24)
+#define HDMI_SPK3L_SEL_I_PCM1L (2 << 24)
+#define HDMI_SPK3L_SEL_I_PCM1R (3 << 24)
+#define HDMI_SPK3L_SEL_I_PCM2L (4 << 24)
+#define HDMI_SPK3L_SEL_I_PCM2R (5 << 24)
+#define HDMI_SPK3L_SEL_I_PCM3L (6 << 24)
+#define HDMI_SPK3L_SEL_I_PCM3R (7 << 24)
+#define HDMI_SPK2R_SEL_I_PCM0L (0 << 19)
+#define HDMI_SPK2R_SEL_I_PCM0R (1 << 19)
+#define HDMI_SPK2R_SEL_I_PCM1L (2 << 19)
+#define HDMI_SPK2R_SEL_I_PCM1R (3 << 19)
+#define HDMI_SPK2R_SEL_I_PCM2L (4 << 19)
+#define HDMI_SPK2R_SEL_I_PCM2R (5 << 19)
+#define HDMI_SPK2R_SEL_I_PCM3L (6 << 19)
+#define HDMI_SPK2R_SEL_I_PCM3R (7 << 19)
+#define HDMI_SPK2L_SEL_I_PCM0L (0 << 16)
+#define HDMI_SPK2L_SEL_I_PCM0R (1 << 16)
+#define HDMI_SPK2L_SEL_I_PCM1L (2 << 16)
+#define HDMI_SPK2L_SEL_I_PCM1R (3 << 16)
+#define HDMI_SPK2L_SEL_I_PCM2L (4 << 16)
+#define HDMI_SPK2L_SEL_I_PCM2R (5 << 16)
+#define HDMI_SPK2L_SEL_I_PCM3L (6 << 16)
+#define HDMI_SPK2L_SEL_I_PCM3R (7 << 16)
+#define HDMI_SPK1R_SEL_I_PCM0L (0 << 11)
+#define HDMI_SPK1R_SEL_I_PCM0R (1 << 11)
+#define HDMI_SPK1R_SEL_I_PCM1L (2 << 11)
+#define HDMI_SPK1R_SEL_I_PCM1R (3 << 11)
+#define HDMI_SPK1R_SEL_I_PCM2L (4 << 11)
+#define HDMI_SPK1R_SEL_I_PCM2R (5 << 11)
+#define HDMI_SPK1R_SEL_I_PCM3L (6 << 11)
+#define HDMI_SPK1R_SEL_I_PCM3R (7 << 11)
+#define HDMI_SPK1L_SEL_I_PCM0L (0 << 8)
+#define HDMI_SPK1L_SEL_I_PCM0R (1 << 8)
+#define HDMI_SPK1L_SEL_I_PCM1L (2 << 8)
+#define HDMI_SPK1L_SEL_I_PCM1R (3 << 8)
+#define HDMI_SPK1L_SEL_I_PCM2L (4 << 8)
+#define HDMI_SPK1L_SEL_I_PCM2R (5 << 8)
+#define HDMI_SPK1L_SEL_I_PCM3L (6 << 8)
+#define HDMI_SPK1L_SEL_I_PCM3R (7 << 8)
+#define HDMI_SPK0R_SEL_I_PCM0L (0 << 3)
+#define HDMI_SPK0R_SEL_I_PCM0R (1 << 3)
+#define HDMI_SPK0R_SEL_I_PCM1L (2 << 3)
+#define HDMI_SPK0R_SEL_I_PCM1R (3 << 3)
+#define HDMI_SPK0R_SEL_I_PCM2L (4 << 3)
+#define HDMI_SPK0R_SEL_I_PCM2R (5 << 3)
+#define HDMI_SPK0R_SEL_I_PCM3L (6 << 3)
+#define HDMI_SPK0R_SEL_I_PCM3R (7 << 3)
+#define HDMI_SPK0L_SEL_I_PCM0L (0)
+#define HDMI_SPK0L_SEL_I_PCM0R (1)
+#define HDMI_SPK0L_SEL_I_PCM1L (2)
+#define HDMI_SPK0L_SEL_I_PCM1R (3)
+#define HDMI_SPK0L_SEL_I_PCM2L (4)
+#define HDMI_SPK0L_SEL_I_PCM2R (5)
+#define HDMI_SPK0L_SEL_I_PCM3L (6)
+#define HDMI_SPK0L_SEL_I_PCM3R (7)
+
+/* ACR_CON */
+#define HDMI_ACR_CON_TX_MODE_NO_TX (0 << 0)
+#define HDMI_ACR_CON_TX_MODE_MEASURED_CTS (4 << 0)
+
+/* ACR_MCTS0/1/2 */
+
+/* ACR_CTS0/1/2 */
+
+/* ACR_N0/1/2 */
+#define HDMI_ACR_N0_VAL(x) ((x) & 0xff)
+#define HDMI_ACR_N1_VAL(x) (((x) >> 8) & 0xff)
+#define HDMI_ACR_N2_VAL(x) (((x) >> 16) & 0xff)
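+/* e.g. N = 6144, a typical ACR N value for 48 kHz audio: since
+ * 6144 == 0x1800, HDMI_ACR_N0_VAL() == 0x00, HDMI_ACR_N1_VAL() == 0x18
+ * and HDMI_ACR_N2_VAL() == 0x00 */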
+
+/* ACR_LSB2 */
+#define HDMI_ACR_LSB2_MASK (0xFF)
+
+/* ACR_TXCNT */
+#define HDMI_ACR_TXCNT_MASK (0x1F)
+
+/* ACR_TXINTERNAL */
+#define HDMI_ACR_TX_INTERNAL_MASK (0xFF)
+
+/* ACR_CTS_OFFSET */
+#define HDMI_ACR_CTS_OFFSET_MASK (0xFF)
+
+/* GCP_CON */
+#define HDMI_GCP_CON_EN_1ST_VSYNC (1 << 3)
+#define HDMI_GCP_CON_EN_2ST_VSYNC (1 << 2)
+#define HDMI_GCP_CON_NO_TRAN (0)
+#define HDMI_GCP_CON_TRANS_ONCE (1)
+#define HDMI_GCP_CON_TRANS_EVERY_VSYNC (2)
+
+/* GCP_BYTE1 */
+#define HDMI_GCP_BYTE1_MASK (0xFF)
+
+/* GCP_BYTE2 */
+#define HDMI_GCP_BYTE2_PP_MASK (0xF << 4)
+#define HDMI_GCP_24BPP (1 << 2)
+#define HDMI_GCP_30BPP (1 << 0 | 1 << 2)
+#define HDMI_GCP_36BPP (1 << 1 | 1 << 2)
+#define HDMI_GCP_48BPP (1 << 0 | 1 << 1 | 1 << 2)
+
+/* GCP_BYTE3 */
+#define HDMI_GCP_BYTE3_MASK (0xFF)
+
+/* ACP Packet Register */
+
+/* ACP_CON */
+#define HDMI_ACP_FR_RATE_MASK (0x1F << 3)
+#define HDMI_ACP_CON_NO_TRAN (0)
+#define HDMI_ACP_CON_TRANS_ONCE (1)
+#define HDMI_ACP_CON_TRANS_EVERY_VSYNC (2)
+
+/* ACP_TYPE */
+#define HDMI_ACP_TYPE_MASK (0xFF)
+
+/* ACP_DATA00~16 */
+#define HDMI_ACP_DATA_MASK (0xFF)
+
+/* ISRC1/2 Packet Register */
+
+/* ISRC_CON */
+#define HDMI_ISRC_FR_RATE_MASK (0x1F << 3)
+#define HDMI_ISRC_EN (1 << 2)
+#define HDMI_ISRC_DIS (0 << 2)
+
+/* ISRC1_HEADER1 */
+#define HDMI_ISRC1_HEADER_MASK (0xFF)
+
+/* ISRC1_DATA 00~15 */
+#define HDMI_ISRC1_DATA_MASK (0xFF)
+
+/* ISRC2_DATA 00~15 */
+#define HDMI_ISRC2_DATA_MASK (0xFF)
+
+/* AVI InfoFrame Register */
+
+/* AVI_CON */
+#define HDMI_AVI_CON_EVERY_VSYNC (1 << 1)
+
+/* AVI_CHECK_SUM */
+
+/* AVI_DATA01~13 */
+#define HDMI_AVI_PIXEL_REPETITION_DOUBLE (1<<0)
+#define HDMI_AVI_PICTURE_ASPECT_4_3 (1<<4)
+#define HDMI_AVI_PICTURE_ASPECT_16_9 (1<<5)
+
+/* Audio InfoFrame Register */
+
+/* AUI_CON */
+#define HDMI_AUI_CON_NO_TRAN (0 << 0)
+#define HDMI_AUI_CON_TRANS_ONCE (1 << 0)
+#define HDMI_AUI_CON_TRANS_EVERY_VSYNC (2 << 0)
+
+/* AUI_CHECK_SUM */
+
+/* AUI_DATA1~5 */
+
+/* MPEG Source InfoFrame registers */
+
+/* MPG_CON */
+
+/* HDMI_MPG_CHECK_SUM */
+
+/* MPG_DATA1~5 */
+
+/* Source Product Descriptor Infoframe registers */
+
+/* SPD_CON */
+
+/* SPD_HEADER0/1/2 */
+
+/* SPD_DATA0~27 */
+
+/* VSI_CON */
+#define HDMI_VSI_CON_DO_NOT_TRANSMIT (0 << 0)
+#define HDMI_VSI_CON_EVERY_VSYNC (1 << 1)
+
+/* VSI_DATA00 ~ 27 */
+#define HDMI_VSI_DATA04_VIDEO_FORMAT(x) ((x) << 5)
+#define HDMI_VSI_DATA05_3D_STRUCTURE(x) ((x) << 4)
+#define HDMI_VSI_DATA06_3D_EXT_DATA(x) ((x) << 4)
+
+/* HDCP Register */
+
+/* HDCP_SHA1_00~19 */
+
+/* HDCP_KSV_LIST_0~4 */
+
+/* HDCP_KSV_LIST_CON */
+#define HDMI_HDCP_KSV_WRITE_DONE (0x1 << 3)
+#define HDMI_HDCP_KSV_LIST_EMPTY (0x1 << 2)
+#define HDMI_HDCP_KSV_END (0x1 << 1)
+#define HDMI_HDCP_KSV_READ (0x1 << 0)
+
+/* HDCP_CTRL1 */
+#define HDMI_HDCP_EN_PJ_EN (1 << 4)
+#define HDMI_HDCP_EN_PJ_DIS (~(1 << 4))
+#define HDMI_HDCP_SET_REPEATER_TIMEOUT (1 << 2)
+#define HDMI_HDCP_CLEAR_REPEATER_TIMEOUT (~(1 << 2))
+#define HDMI_HDCP_CP_DESIRED_EN (1 << 1)
+#define HDMI_HDCP_CP_DESIRED_DIS (~(1 << 1))
+#define HDMI_HDCP_ENABLE_1_1_FEATURE_EN (1)
+#define HDMI_HDCP_ENABLE_1_1_FEATURE_DIS (~(1))
+
+/* HDCP_CHECK_RESULT */
+#define HDMI_HDCP_PI_MATCH_RESULT_Y ((0x1 << 3) | (0x1 << 2))
+#define HDMI_HDCP_PI_MATCH_RESULT_N ((0x1 << 3) | (0x0 << 2))
+#define HDMI_HDCP_RI_MATCH_RESULT_Y ((0x1 << 1) | (0x1 << 0))
+#define HDMI_HDCP_RI_MATCH_RESULT_N ((0x1 << 1) | (0x0 << 0))
+#define HDMI_HDCP_CLR_ALL_RESULTS (0)
+
+/* HDCP_BKSV0~4 */
+/* HDCP_AKSV0~4 */
+
+/* HDCP_BCAPS */
+#define HDMI_HDCP_BCAPS_REPEATER (1 << 6)
+#define HDMI_HDCP_BCAPS_READY (1 << 5)
+#define HDMI_HDCP_BCAPS_FAST (1 << 4)
+#define HDMI_HDCP_BCAPS_1_1_FEATURES (1 << 1)
+#define HDMI_HDCP_BCAPS_FAST_REAUTH (1)
+
+/* HDCP_BSTATUS_0/1 */
+/* HDCP_Ri_0/1 */
+/* HDCP_I2C_INT */
+/* HDCP_AN_INT */
+/* HDCP_WATCHDOG_INT */
+/* HDCP_RI_INT/1 */
+/* HDCP_Ri_Compare_0 */
+/* HDCP_Ri_Compare_1 */
+/* HDCP_Frame_Count */
+
+/* Gamut Metadata Packet Register */
+
+/* GAMUT_CON */
+/* GAMUT_HEADER0 */
+/* GAMUT_HEADER1 */
+/* GAMUT_HEADER2 */
+/* GAMUT_METADATA0~27 */
+
+/* Video Mode Register */
+
+/* VIDEO_PATTERN_GEN */
+/* HPD_GEN */
+
+/* SPDIF Register */
+
+/* SPDIFIN_CLK_CTRL */
+#define HDMI_SPDIFIN_READY_CLK_DOWN (1 << 1)
+#define HDMI_SPDIFIN_CLK_ON (1 << 0)
+
+/* SPDIFIN_OP_CTRL */
+#define HDMI_SPDIFIN_SW_RESET (0 << 0)
+#define HDMI_SPDIFIN_STATUS_CHECK_MODE (1 << 0)
+#define HDMI_SPDIFIN_STATUS_CHECK_MODE_HDMI (3 << 0)
+
+/* SPDIFIN_IRQ_MASK */
+
+/* SPDIFIN_IRQ_STATUS */
+#define HDMI_SPDIFIN_IRQ_OVERFLOW_EN (1 << 7)
+#define HDMI_SPDIFIN_IRQ_ABNORMAL_PD_EN (1 << 6)
+#define HDMI_SPDIFIN_IRQ_SH_NOT_DETECTED_RIGHTTIME_EN (1 << 5)
+#define HDMI_SPDIFIN_IRQ_SH_DETECTED_EN (1 << 4)
+#define HDMI_SPDIFIN_IRQ_SH_NOT_DETECTED_EN (1 << 3)
+#define HDMI_SPDIFIN_IRQ_WRONG_PREAMBLE_EN (1 << 2)
+#define HDMI_SPDIFIN_IRQ_CH_STATUS_RECOVERED_EN (1 << 1)
+#define HDMI_SPDIFIN_IRQ_WRONG_SIG_EN (1 << 0)
+
+/* SPDIFIN_CONFIG_1 */
+#define HDMI_SPDIFIN_CFG_NOISE_FILTER_2_SAMPLE (1 << 6)
+#define HDMI_SPDIFIN_CFG_PCPD_MANUAL (1 << 4)
+#define HDMI_SPDIFIN_CFG_WORD_LENGTH_MANUAL (1 << 3)
+#define HDMI_SPDIFIN_CFG_UVCP_REPORT (1 << 2)
+#define HDMI_SPDIFIN_CFG_HDMI_2_BURST (1 << 1)
+#define HDMI_SPDIFIN_CFG_DATA_ALIGN_32 (1 << 0)
+
+/* SPDIFIN_CONFIG_2 */
+#define HDMI_SPDIFIN_CFG2_NO_CLK_DIV (0)
+
+/* SPDIFIN_USER_VALUE_1 */
+#define HDMI_SPDIFIN_USER_VAL_REPETITION_TIME_LOW(x) (((x) & 0xf) << 4)
+#define HDMI_SPDIFIN_USER_VAL_WORD_LENGTH_24 (0xb << 0)
+#define HDMI_SPDIFIN_USER_VAL_REPETITION_TIME_HIGH(x) (((x) >> 4) & 0xff)
+/* SPDIFIN_USER_VALUE_2 */
+/* SPDIFIN_USER_VALUE_3 */
+/* SPDIFIN_USER_VALUE_4 */
+/* SPDIFIN_CH_STATUS_0_1 */
+/* SPDIFIN_CH_STATUS_0_2 */
+/* SPDIFIN_CH_STATUS_0_3 */
+/* SPDIFIN_CH_STATUS_0_4 */
+/* SPDIFIN_CH_STATUS_1 */
+/* SPDIFIN_FRAME_PERIOD_1 */
+/* SPDIFIN_FRAME_PERIOD_2 */
+/* SPDIFIN_PC_INFO_1 */
+/* SPDIFIN_PC_INFO_2 */
+/* SPDIFIN_PD_INFO_1 */
+/* SPDIFIN_PD_INFO_2 */
+/* SPDIFIN_DATA_BUF_0_1 */
+/* SPDIFIN_DATA_BUF_0_2 */
+/* SPDIFIN_DATA_BUF_0_3 */
+/* SPDIFIN_USER_BUF_0 */
+/* SPDIFIN_DATA_BUF_1_1 */
+/* SPDIFIN_DATA_BUF_1_2 */
+/* SPDIFIN_DATA_BUF_1_3 */
+/* SPDIFIN_USER_BUF_1 */
+
+/* I2S Register */
+
+/* I2S_CLK_CON */
+#define HDMI_I2S_CLK_DISABLE (0)
+#define HDMI_I2S_CLK_ENABLE (1)
+
+/* I2S_CON_1 */
+#define HDMI_I2S_SCLK_FALLING_EDGE (0 << 1)
+#define HDMI_I2S_SCLK_RISING_EDGE (1 << 1)
+#define HDMI_I2S_L_CH_LOW_POL (0)
+#define HDMI_I2S_L_CH_HIGH_POL (1)
+
+/* I2S_CON_2 */
+#define HDMI_I2S_MSB_FIRST_MODE (0 << 6)
+#define HDMI_I2S_LSB_FIRST_MODE (1 << 6)
+#define HDMI_I2S_BIT_CH_32FS (0 << 4)
+#define HDMI_I2S_BIT_CH_48FS (1 << 4)
+#define HDMI_I2S_BIT_CH_RESERVED (2 << 4)
+#define HDMI_I2S_SDATA_16BIT (1 << 2)
+#define HDMI_I2S_SDATA_20BIT (2 << 2)
+#define HDMI_I2S_SDATA_24BIT (3 << 2)
+#define HDMI_I2S_BASIC_FORMAT (0)
+#define HDMI_I2S_L_JUST_FORMAT (2)
+#define HDMI_I2S_R_JUST_FORMAT (3)
+#define HDMI_I2S_CON_2_CLR (~(0xFF))
+#define HDMI_I2S_SET_BIT_CH(x) (((x) & 0x7) << 4)
+#define HDMI_I2S_SET_SDATA_BIT(x) (((x) & 0x7) << 2)
+
+/* I2S_PIN_SEL_0 */
+#define HDMI_I2S_SEL_SCLK(x) (((x) & 0x7) << 4)
+#define HDMI_I2S_SEL_LRCK(x) ((x) & 0x7)
+
+/* I2S_PIN_SEL_1 */
+#define HDMI_I2S_SEL_SDATA1(x) (((x) & 0x7) << 4)
+#define HDMI_I2S_SEL_SDATA0(x) ((x) & 0x7)
+
+/* I2S_PIN_SEL_2 */
+#define HDMI_I2S_SEL_SDATA3(x) (((x) & 0x7) << 4)
+#define HDMI_I2S_SEL_SDATA2(x) ((x) & 0x7)
+
+/* I2S_PIN_SEL_3 */
+#define HDMI_I2S_SEL_DSD(x) ((x) & 0x7)
+
+/* I2S_DSD_CON */
+#define HDMI_I2S_DSD_CLK_RI_EDGE (1 << 1)
+#define HDMI_I2S_DSD_CLK_FA_EDGE (0 << 1)
+#define HDMI_I2S_DSD_ENABLE (1 << 0)
+#define HDMI_I2S_DSD_DISABLE (0 << 0)
+
+/* I2S_MUX_CON */
+#define HDMI_I2S_NOISE_FILTER_ZERO (0 << 5)
+#define HDMI_I2S_NOISE_FILTER_2_STAGE (1 << 5)
+#define HDMI_I2S_NOISE_FILTER_3_STAGE (2 << 5)
+#define HDMI_I2S_NOISE_FILTER_4_STAGE (3 << 5)
+#define HDMI_I2S_NOISE_FILTER_5_STAGE (4 << 5)
+#define HDMI_I2S_IN_ENABLE (1 << 4)
+#define HDMI_I2S_IN_DISABLE (0 << 4)
+#define HDMI_I2S_AUD_SPDIF (0 << 2)
+#define HDMI_I2S_AUD_I2S (1 << 2)
+#define HDMI_I2S_AUD_DSD (2 << 2)
+#define HDMI_I2S_CUV_SPDIF_ENABLE (0 << 1)
+#define HDMI_I2S_CUV_I2S_ENABLE (1 << 1)
+#define HDMI_I2S_MUX_DISABLE (0 << 0)
+#define HDMI_I2S_MUX_ENABLE (1 << 0)
+
+/* I2S_CH_ST_CON */
+#define HDMI_I2S_CH_STATUS_RELOAD (1 << 0)
+#define HDMI_I2S_CH_ST_CON_CLR (~(1))
+
+/* I2S_CH_ST_0 / I2S_CH_ST_SH_0 */
+#define HDMI_I2S_CH_STATUS_MODE_0 (0 << 6)
+#define HDMI_I2S_2AUD_CH_WITHOUT_PREEMPH (0 << 3)
+#define HDMI_I2S_2AUD_CH_WITH_PREEMPH (1 << 3)
+#define HDMI_I2S_DEFAULT_EMPHASIS (0 << 3)
+#define HDMI_I2S_COPYRIGHT (0 << 2)
+#define HDMI_I2S_NO_COPYRIGHT (1 << 2)
+#define HDMI_I2S_LINEAR_PCM (0 << 1)
+#define HDMI_I2S_NO_LINEAR_PCM (1 << 1)
+#define HDMI_I2S_CONSUMER_FORMAT (0)
+#define HDMI_I2S_PROF_FORMAT (1)
+#define HDMI_I2S_CH_ST_0_CLR (~(0xFF))
+
+/* I2S_CH_ST_1 / I2S_CH_ST_SH_1 */
+#define HDMI_I2S_CD_PLAYER (0x00)
+#define HDMI_I2S_DAT_PLAYER (0x03)
+#define HDMI_I2S_DCC_PLAYER (0x43)
+#define HDMI_I2S_MINI_DISC_PLAYER (0x49)
+
+/* I2S_CH_ST_2 / I2S_CH_ST_SH_2 */
+#define HDMI_I2S_CHANNEL_NUM_MASK (0xF << 4)
+#define HDMI_I2S_SOURCE_NUM_MASK (0xF)
+#define HDMI_I2S_SET_CHANNEL_NUM(x) (((x) & 0xF) << 4)
+#define HDMI_I2S_SET_SOURCE_NUM(x) ((x) & (0xF))
+
+/* I2S_CH_ST_3 / I2S_CH_ST_SH_3 */
+#define HDMI_I2S_CLK_ACCUR_LEVEL_1 (1 << 4)
+#define HDMI_I2S_CLK_ACCUR_LEVEL_2 (0 << 4)
+#define HDMI_I2S_CLK_ACCUR_LEVEL_3 (2 << 4)
+#define HDMI_I2S_SAMPLING_FREQ_44_1 (0x0)
+#define HDMI_I2S_SAMPLING_FREQ_48 (0x2)
+#define HDMI_I2S_SAMPLING_FREQ_32 (0x3)
+#define HDMI_I2S_SAMPLING_FREQ_96 (0xA)
+#define HDMI_I2S_SET_SAMPLING_FREQ(x) ((x) & (0xF))
+
+/* I2S_CH_ST_4 / I2S_CH_ST_SH_4 */
+#define HDMI_I2S_ORG_SAMPLING_FREQ_44_1 (0xF << 4)
+#define HDMI_I2S_ORG_SAMPLING_FREQ_88_2 (0x7 << 4)
+#define HDMI_I2S_ORG_SAMPLING_FREQ_22_05 (0xB << 4)
+#define HDMI_I2S_ORG_SAMPLING_FREQ_176_4 (0x3 << 4)
+#define HDMI_I2S_WORD_LENGTH_NOT_DEFINE (0x0 << 1)
+#define HDMI_I2S_WORD_LENGTH_MAX24_20BITS (0x1 << 1)
+#define HDMI_I2S_WORD_LENGTH_MAX24_22BITS (0x2 << 1)
+#define HDMI_I2S_WORD_LENGTH_MAX24_23BITS (0x4 << 1)
+#define HDMI_I2S_WORD_LENGTH_MAX24_24BITS (0x5 << 1)
+#define HDMI_I2S_WORD_LENGTH_MAX24_21BITS (0x6 << 1)
+#define HDMI_I2S_WORD_LENGTH_MAX20_16BITS (0x1 << 1)
+#define HDMI_I2S_WORD_LENGTH_MAX20_18BITS (0x2 << 1)
+#define HDMI_I2S_WORD_LENGTH_MAX20_19BITS (0x4 << 1)
+#define HDMI_I2S_WORD_LENGTH_MAX20_20BITS (0x5 << 1)
+#define HDMI_I2S_WORD_LENGTH_MAX20_17BITS (0x6 << 1)
+#define HDMI_I2S_WORD_LENGTH_MAX_24BITS (1)
+#define HDMI_I2S_WORD_LENGTH_MAX_20BITS (0)
+
+/* I2S_VD_DATA */
+#define HDMI_I2S_VD_AUD_SAMPLE_RELIABLE (0)
+#define HDMI_I2S_VD_AUD_SAMPLE_UNRELIABLE (1)
+
+/* I2S_MUX_CH */
+#define HDMI_I2S_CH3_R_EN (1 << 7)
+#define HDMI_I2S_CH3_L_EN (1 << 6)
+#define HDMI_I2S_CH2_R_EN (1 << 5)
+#define HDMI_I2S_CH2_L_EN (1 << 4)
+#define HDMI_I2S_CH1_R_EN (1 << 3)
+#define HDMI_I2S_CH1_L_EN (1 << 2)
+#define HDMI_I2S_CH0_R_EN (1 << 1)
+#define HDMI_I2S_CH0_L_EN (1)
+#define HDMI_I2S_CH_ALL_EN (0xFF)
+#define HDMI_I2S_MUX_CH_CLR (~HDMI_I2S_CH_ALL_EN)
+
+/* I2S_MUX_CUV */
+#define HDMI_I2S_CUV_R_EN (1 << 1)
+#define HDMI_I2S_CUV_L_EN (1 << 0)
+#define HDMI_I2S_CUV_RL_EN (0x03)
+
+/* I2S_IRQ_MASK */
+#define HDMI_I2S_INT2_DIS (0 << 1)
+#define HDMI_I2S_INT2_EN (1 << 1)
+
+/* I2S_IRQ_STATUS */
+#define HDMI_I2S_INT2_STATUS (1 << 1)
+
+/* I2S_CH0_L_0 */
+/* I2S_CH0_L_1 */
+/* I2S_CH0_L_2 */
+/* I2S_CH0_L_3 */
+/* I2S_CH0_R_0 */
+/* I2S_CH0_R_1 */
+/* I2S_CH0_R_2 */
+/* I2S_CH0_R_3 */
+/* I2S_CH1_L_0 */
+/* I2S_CH1_L_1 */
+/* I2S_CH1_L_2 */
+/* I2S_CH1_L_3 */
+/* I2S_CH1_R_0 */
+/* I2S_CH1_R_1 */
+/* I2S_CH1_R_2 */
+/* I2S_CH1_R_3 */
+/* I2S_CH2_L_0 */
+/* I2S_CH2_L_1 */
+/* I2S_CH2_L_2 */
+/* I2S_CH2_L_3 */
+/* I2S_CH2_R_0 */
+/* I2S_CH2_R_1 */
+/* I2S_CH2_R_2 */
+/* I2S_CH2_R_3 */
+/* I2S_CH3_L_0 */
+/* I2S_CH3_L_1 */
+/* I2S_CH3_L_2 */
+/* I2S_CH3_R_0 */
+/* I2S_CH3_R_1 */
+/* I2S_CH3_R_2 */
+
+/* I2S_CUV_L_R */
+#define HDMI_I2S_CUV_R_DATA_MASK (0x7 << 4)
+#define HDMI_I2S_CUV_L_DATA_MASK (0x7)
+
+/* Timing Generator Register */
+/* TG_CMD */
+#define HDMI_GETSYNC_TYPE (1 << 4)
+#define HDMI_GETSYNC (1 << 3)
+
+/* HDMI_TG_CMD */
+#define HDMI_FIELD_EN (1 << 1)
+#define HDMI_TG_EN (1 << 0)
+
+/* TG_CFG */
+/* TG_CB_SZ */
+/* TG_INDELAY_L */
+/* TG_INDELAY_H */
+/* TG_POL_CTRL */
+
+/* TG_H_FSZ_L */
+/* TG_H_FSZ_H */
+/* TG_HACT_ST_L */
+/* TG_HACT_ST_H */
+/* TG_HACT_SZ_L */
+/* TG_HACT_SZ_H */
+/* TG_V_FSZ_L */
+/* TG_V_FSZ_H */
+/* TG_VSYNC_L */
+/* TG_VSYNC_H */
+/* TG_VSYNC2_L */
+/* TG_VSYNC2_H */
+/* TG_VACT_ST_L */
+/* TG_VACT_ST_H */
+/* TG_VACT_SZ_L */
+/* TG_VACT_SZ_H */
+/* TG_FIELD_CHG_L */
+/* TG_FIELD_CHG_H */
+/* TG_VACT_ST2_L */
+/* TG_VACT_ST2_H */
+/* TG_VACT_ST3_L */
+/* TG_VACT_ST3_H */
+/* TG_VACT_ST4_L */
+/* TG_VACT_ST4_H */
+
+/* TG_VSYNC_TOP_HDMI_L */
+/* TG_VSYNC_TOP_HDMI_H */
+/* TG_VSYNC_BOT_HDMI_L */
+/* TG_VSYNC_BOT_HDMI_H */
+/* TG_FIELD_TOP_HDMI_L */
+/* TG_FIELD_TOP_HDMI_H */
+/* TG_FIELD_BOT_HDMI_L */
+/* TG_FIELD_BOT_HDMI_H */
+/* TG_HSYNC_HDOUT_ST_L */
+/* TG_HSYNC_HDOUT_ST_H */
+/* TG_HSYNC_HDOUT_END_L */
+/* TG_HSYNC_HDOUT_END_H */
+/* TG_VSYNC_HDOUT_ST_L */
+/* TG_VSYNC_HDOUT_ST_H */
+/* TG_VSYNC_HDOUT_END_L */
+/* TG_VSYNC_HDOUT_END_H */
+/* TG_VSYNC_HDOUT_DLY_L */
+/* TG_VSYNC_HDOUT_DLY_H */
+/* TG_BT_ERR_RANGE */
+/* TG_BT_ERR_RESULT */
+/* TG_COR_THR */
+/* TG_COR_NUM */
+/* TG_BT_CON */
+/* TG_BT_H_FSZ_L */
+/* TG_BT_H_FSZ_H */
+/* TG_BT_HSYNC_ST */
+/* TG_BT_HSYNC_SZ */
+/* TG_BT_FSZ_L */
+/* TG_BT_FSZ_H */
+/* TG_BT_VACT_T_ST_L */
+/* TG_BT_VACT_T_ST_H */
+/* TG_BT_VACT_B_ST_L */
+/* TG_BT_VACT_B_ST_H */
+/* TG_BT_VACT_SZ_L */
+/* TG_BT_VACT_SZ_H */
+/* TG_BT_VSYNC_SZ */
+
+/* HDCP E-FUSE Control Register */
+/* HDCP_E_FUSE_CTRL */
+#define HDMI_EFUSE_CTRL_HDCP_KEY_READ (1 << 0)
+
+/* HDCP_E_FUSE_STATUS */
+#define HDMI_EFUSE_ECC_FAIL (1 << 2)
+#define HDMI_EFUSE_ECC_BUSY (1 << 1)
+#define HDMI_EFUSE_ECC_DONE (1)
+
+/* EFUSE_ADDR_WIDTH */
+/* EFUSE_SIGDEV_ASSERT */
+/* EFUSE_SIGDEV_DE-ASSERT */
+/* EFUSE_PRCHG_ASSERT */
+/* EFUSE_PRCHG_DE-ASSERT */
+/* EFUSE_FSET_ASSERT */
+/* EFUSE_FSET_DE-ASSERT */
+/* EFUSE_SENSING */
+/* EFUSE_SCK_ASSERT */
+/* EFUSE_SCK_DEASSERT */
+/* EFUSE_SDOUT_OFFSET */
+/* EFUSE_READ_OFFSET */
+
+/* HDCP_SHA_RESULT */
+#define HDMI_HDCP_SHA_VALID_NO_RD (0 << 1)
+#define HDMI_HDCP_SHA_VALID_RD (1 << 1)
+#define HDMI_HDCP_SHA_VALID (1)
+#define HDMI_HDCP_SHA_NO_VALID (0)
+
+/* DC_CONTROL */
+#define HDMI_DC_CTL_12 (1 << 1)
+#define HDMI_DC_CTL_8 (0)
+#define HDMI_DC_CTL_10 (1)
+#endif /* __ARCH_ARM_REGS_HDMI_H */
--- /dev/null
+/*
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
+ *
+ * Mixer register header file for Samsung Mixer driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef SAMSUNG_REGS_MIXER_H
+#define SAMSUNG_REGS_MIXER_H
+
+#include <plat/map-base.h>
+
+/* SYSREG for local path between Gscaler and Mixer */
+#define SYSREG_DISP1BLK_CFG (S3C_VA_SYS + 0x0214)
+
+#define DISP1BLK_CFG_FIFORST_DISP1 (1 << 23)
+#define DISP1BLK_CFG_MIXER_MASK (0x3F << 2)
+#define DISP1BLK_CFG_MIXER0_VALID (1 << 7)
+#define DISP1BLK_CFG_MIXER0_SRC_GSC(x) ((x) << 5)
+#define DISP1BLK_CFG_MIXER1_VALID (1 << 4)
+#define DISP1BLK_CFG_MIXER1_SRC_GSC(x) ((x) << 2)
+
+/*
+ * Register part
+ */
+#define MXR_STATUS 0x0000
+#define MXR_CFG 0x0004
+#define MXR_INT_EN 0x0008
+#define MXR_INT_STATUS 0x000C
+#define MXR_LAYER_CFG 0x0010
+#define MXR_VIDEO_CFG 0x0014
+#define MXR_GRAPHIC0_CFG 0x0020
+#define MXR_GRAPHIC0_BASE 0x0024
+#define MXR_GRAPHIC0_SPAN 0x0028
+#define MXR_GRAPHIC0_SXY 0x002C
+#define MXR_GRAPHIC0_WH 0x0030
+#define MXR_GRAPHIC0_DXY 0x0034
+#define MXR_GRAPHIC0_BLANK 0x0038
+#define MXR_GRAPHIC1_CFG 0x0040
+#define MXR_GRAPHIC1_BASE 0x0044
+#define MXR_GRAPHIC1_SPAN 0x0048
+#define MXR_GRAPHIC1_SXY 0x004C
+#define MXR_GRAPHIC1_WH 0x0050
+#define MXR_GRAPHIC1_DXY 0x0054
+#define MXR_GRAPHIC1_BLANK 0x0058
+#define MXR_BG_CFG 0x0060
+#define MXR_BG_COLOR0 0x0064
+#define MXR_BG_COLOR1 0x0068
+#define MXR_BG_COLOR2 0x006C
+#define MXR_CM_COEFF_Y 0x0080
+#define MXR_CM_COEFF_CB 0x0084
+#define MXR_CM_COEFF_CR 0x0088
+/* after EXYNOS5250 for video layer transferred from Gscaler */
+#define MXR_VIDEO_LT 0x0090
+#define MXR_VIDEO_RB 0x0094
+
+/* after EXYNOS4212 for setting 3D */
+#define MXR_TVOUT_CFG 0x0100
+#define MXR_3D_ACTIVE_VIDEO 0x0104
+#define MXR_3D_ACTIVE_SPACE 0x0108
+
+/* after EXYNOS5250, support 2 sub-mixers */
+#define MXR1_LAYER_CFG 0x0110
+#define MXR1_VIDEO_CFG 0x0114
+#define MXR1_GRAPHIC0_CFG 0x0120
+#define MXR1_GRAPHIC0_BASE 0x0124
+#define MXR1_GRAPHIC0_SPAN 0x0128
+#define MXR1_GRAPHIC0_SXY 0x012C
+#define MXR1_GRAPHIC0_WH 0x0130
+#define MXR1_GRAPHIC0_DXY 0x0134
+#define MXR1_GRAPHIC0_BLANK 0x0138
+#define MXR1_GRAPHIC1_CFG 0x0140
+#define MXR1_GRAPHIC1_BASE 0x0144
+#define MXR1_GRAPHIC1_SPAN 0x0148
+#define MXR1_GRAPHIC1_SXY 0x014C
+#define MXR1_GRAPHIC1_WH 0x0150
+#define MXR1_GRAPHIC1_DXY 0x0154
+#define MXR1_GRAPHIC1_BLANK 0x0158
+#define MXR1_BG_CFG 0x0160
+#define MXR1_BG_COLOR0 0x0164
+#define MXR1_BG_COLOR1 0x0168
+#define MXR1_BG_COLOR2 0x016C
+#define MXR1_CM_COEFF_Y 0x0180
+#define MXR1_CM_COEFF_CB 0x0184
+#define MXR1_CM_COEFF_CR 0x0188
+/* after EXYNOS5250 for video layer transferred from Gscaler */
+#define MXR1_VIDEO_LT 0x0190
+#define MXR1_VIDEO_RB 0x0194
+
+/* for parametrized access to layer registers */
+#define MXR_GRAPHIC_CFG(i) (0x0020 + (i) * 0x20)
+#define MXR_GRAPHIC_BASE(i) (0x0024 + (i) * 0x20)
+#define MXR_GRAPHIC_SPAN(i) (0x0028 + (i) * 0x20)
+#define MXR_GRAPHIC_SXY(i) (0x002C + (i) * 0x20)
+#define MXR_GRAPHIC_WH(i) (0x0030 + (i) * 0x20)
+#define MXR_GRAPHIC_DXY(i) (0x0034 + (i) * 0x20)
+#define MXR_GRAPHIC_BLANK(i) (0x0038 + (i) * 0x20)
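+/* e.g. MXR_GRAPHIC_BASE(0) == MXR_GRAPHIC0_BASE (0x0024) and
+ * MXR_GRAPHIC_BASE(1) == MXR_GRAPHIC1_BASE (0x0044); each graphic
+ * layer occupies a 0x20-byte register window */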
+
+/* after EXYNOS5250, support 2 sub-mixers */
+#define MXR1_GRAPHIC_CFG(i) (0x0120 + (i) * 0x20)
+#define MXR1_GRAPHIC_BASE(i) (0x0124 + (i) * 0x20)
+#define MXR1_GRAPHIC_SPAN(i) (0x0128 + (i) * 0x20)
+#define MXR1_GRAPHIC_SXY(i) (0x012C + (i) * 0x20)
+#define MXR1_GRAPHIC_WH(i) (0x0130 + (i) * 0x20)
+#define MXR1_GRAPHIC_DXY(i) (0x0134 + (i) * 0x20)
+#define MXR1_GRAPHIC_BLANK(i) (0x0138 + (i) * 0x20)
+
+/*
+ * Bit definition part
+ */
+
+/* generates mask for range of bits */
+#define MXR_MASK(high_bit, low_bit) \
+ (((2 << ((high_bit) - (low_bit))) - 1) << (low_bit))
+
+#define MXR_MASK_VAL(val, high_bit, low_bit) \
+ (((val) << (low_bit)) & MXR_MASK(high_bit, low_bit))
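+/* e.g. MXR_MASK(11, 8) == 0xF00 and MXR_MASK_VAL(0x5, 11, 8) == 0x500:
+ * the value is shifted to the field's low bit and clipped to the
+ * field's width */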
+
+/* bits for MXR_STATUS */
+#define MXR_STATUS_SOFT_RESET (1 << 8)
+#define MXR_STATUS_16_BURST (1 << 7)
+#define MXR_STATUS_BURST_MASK (1 << 7)
+#define MXR_STATUS_LAYER_SYNC (1 << 6)
+#define MXR_STATUS_SYNC_ENABLE (1 << 2)
+#define MXR_STATUS_REG_RUN (1 << 0)
+
+/* bits for MXR_CFG */
+#define MXR_CFG_LAYER_UPDATE (1 << 31)
+#define MXR_CFG_LAYER_UPDATE_COUNTER (3 << 29)
+#define MXR_CFG_MX1_GRP1_ENABLE (1 << 15)
+#define MXR_CFG_MX1_GRP0_ENABLE (1 << 14)
+#define MXR_CFG_MX1_VIDEO_ENABLE (1 << 13)
+#define MXR_CFG_OUT_YUV444 (0 << 8)
+#define MXR_CFG_OUT_RGB888 (1 << 8)
+#define MXR_CFG_OUT_MASK (1 << 8)
+#define MXR_CFG_DST_SDO (0 << 7)
+#define MXR_CFG_DST_HDMI (1 << 7)
+#define MXR_CFG_DST_MASK (1 << 7)
+#define MXR_CFG_SCAN_HD_720 (0 << 6)
+#define MXR_CFG_SCAN_HD_1080 (1 << 6)
+#define MXR_CFG_GRP1_ENABLE (1 << 5)
+#define MXR_CFG_GRP0_ENABLE (1 << 4)
+#define MXR_CFG_VIDEO_ENABLE (1 << 3)
+#define MXR_CFG_SCAN_INTERLACE (0 << 2)
+#define MXR_CFG_SCAN_PROGRESSIVE (1 << 2)
+#define MXR_CFG_SCAN_NTSC (0 << 1)
+#define MXR_CFG_SCAN_PAL (1 << 1)
+#define MXR_CFG_SCAN_SD (0 << 0)
+#define MXR_CFG_SCAN_HD (1 << 0)
+#define MXR_CFG_SCAN_MASK 0x47
+
+/* bits for MXR_GRAPHICn_CFG */
+#define MXR_GRP_CFG_BLANK_KEY_OFF (1 << 21)
+#define MXR_GRP_CFG_LAYER_BLEND_EN (1 << 17)
+#define MXR_GRP_CFG_PIXEL_BLEND_EN (1 << 16)
+#define MXR_GRP_CFG_FORMAT_VAL(x) MXR_MASK_VAL(x, 11, 8)
+#define MXR_GRP_CFG_FORMAT_MASK MXR_GRP_CFG_FORMAT_VAL(~0)
+#define MXR_GRP_CFG_ALPHA_VAL(x) MXR_MASK_VAL(x, 7, 0)
+
+/* bits for MXR_GRAPHICn_WH */
+#define MXR_GRP_WH_H_SCALE(x) MXR_MASK_VAL(x, 28, 28)
+#define MXR_GRP_WH_V_SCALE(x) MXR_MASK_VAL(x, 12, 12)
+#define MXR_GRP_WH_WIDTH(x) MXR_MASK_VAL(x, 26, 16)
+#define MXR_GRP_WH_HEIGHT(x) MXR_MASK_VAL(x, 10, 0)
+
+/* bits for MXR_GRAPHICn_SXY */
+#define MXR_GRP_SXY_SX(x) MXR_MASK_VAL(x, 26, 16)
+#define MXR_GRP_SXY_SY(x) MXR_MASK_VAL(x, 10, 0)
+
+/* bits for MXR_GRAPHICn_DXY */
+#define MXR_GRP_DXY_DX(x) MXR_MASK_VAL(x, 26, 16)
+#define MXR_GRP_DXY_DY(x) MXR_MASK_VAL(x, 10, 0)
+
+/* bits for MXR_INT_EN */
+#define MXR_INT_EN_VSYNC (1 << 11)
+#define MXR_INT_EN_ALL (0x38b80)
+
+/* bit for MXR_INT_STATUS */
+#define MXR_INT_STATUS_MX1_GRP1 (1 << 17)
+#define MXR_INT_STATUS_MX1_GRP0 (1 << 16)
+#define MXR_INT_STATUS_MX1_VIDEO (1 << 15)
+#define MXR_INT_CLEAR_VSYNC (1 << 11)
+#define MXR_INT_STATUS_MX0_GRP1 (1 << 9)
+#define MXR_INT_STATUS_MX0_GRP0 (1 << 8)
+#define MXR_INT_STATUS_MX0_VIDEO (1 << 7)
+#define MXR_INT_STATUS_VSYNC (1 << 0)
+
+/* bit for MXR_LAYER_CFG */
+#define MXR_LAYER_CFG_GRP1_VAL(x) MXR_MASK_VAL(x, 11, 8)
+#define MXR_LAYER_CFG_GRP0_VAL(x) MXR_MASK_VAL(x, 7, 4)
+#define MXR_LAYER_CFG_VP_VAL(x) MXR_MASK_VAL(x, 3, 0)
+
+/* bit for MXR_VIDEO_CFG */
+#define MXR_VIDEO_CFG_BLEND_EN (1 << 16)
+#define MXR_VIDEO_CFG_ALPHA(x) MXR_MASK_VAL(x, 7, 0)
+
+/* bit for MXR_VIDEO_LT */
+#define MXR_VIDEO_LT_LEFT_VAL(x) MXR_MASK_VAL(x, 31, 16)
+#define MXR_VIDEO_LT_TOP_VAL(x) MXR_MASK_VAL(x, 15, 0)
+
+/* bit for MXR_VIDEO_RB */
+#define MXR_VIDEO_RB_RIGHT_VAL(x) MXR_MASK_VAL(x, 31, 16)
+#define MXR_VIDEO_RB_BOTTOM_VAL(x) MXR_MASK_VAL(x, 15, 0)
+
+/* bit for MXR_TVOUT_CFG */
+#define MXR_TVOUT_CFG_3D_FORMAT_VAL(x) MXR_MASK_VAL(x, 5, 4)
+#define MXR_TVOUT_CFG_PATH_MIXER0 (0 << 3)
+#define MXR_TVOUT_CFG_PATH_MIXER1 (1 << 3)
+#define MXR_TVOUT_CFG_ONE_PATH (1 << 2)
+#define MXR_TVOUT_CFG_TWO_PATH (0 << 2)
+#define MXR_TVOUT_CFG_PATH_MASK (3 << 2)
+#define MXR_TVOUT_CFG_STEREO_SCOPIC (1 << 0)
+
+#endif /* SAMSUNG_REGS_MIXER_H */
--- /dev/null
+/* drivers/media/video/s5p-tv/regs-sdo.h
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
+ *
+ * SDO register description file
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef SAMSUNG_REGS_SDO_H
+#define SAMSUNG_REGS_SDO_H
+
+/*
+ * Register part
+ */
+
+#define SDO_CLKCON 0x0000
+#define SDO_CONFIG 0x0008
+#define SDO_VBI 0x0014
+#define SDO_DAC 0x003C
+#define SDO_CCCON 0x0180
+#define SDO_IRQ 0x0280
+#define SDO_IRQMASK 0x0284
+#define SDO_VERSION 0x03D8
+
+/*
+ * Bit definition part
+ */
+
+/* SDO Clock Control Register (SDO_CLKCON) */
+#define SDO_TVOUT_SW_RESET (1 << 4)
+#define SDO_TVOUT_CLOCK_READY (1 << 1)
+#define SDO_TVOUT_CLOCK_ON (1 << 0)
+
+/* SDO Video Standard Configuration Register (SDO_CONFIG) */
+#define SDO_PROGRESSIVE (1 << 4)
+#define SDO_NTSC_M 0
+#define SDO_PAL_M 1
+#define SDO_PAL_BGHID 2
+#define SDO_PAL_N 3
+#define SDO_PAL_NC 4
+#define SDO_NTSC_443 8
+#define SDO_PAL_60 9
+#define SDO_STANDARD_MASK 0xf
+
+/* SDO VBI Configuration Register (SDO_VBI) */
+#define SDO_CVBS_WSS_INS (1 << 14)
+#define SDO_CVBS_CLOSED_CAPTION_MASK (3 << 12)
+
+/* SDO DAC Configuration Register (SDO_DAC) */
+#define SDO_POWER_ON_DAC (1 << 0)
+
+/* SDO Color Compensation On/Off Control (SDO_CCCON) */
+#define SDO_COMPENSATION_BHS_ADJ_OFF (1 << 4)
+#define SDO_COMPENSATION_CVBS_COMP_OFF (1 << 0)
+
+/* SDO Interrupt Request Register (SDO_IRQ) */
+#define SDO_VSYNC_IRQ_PEND (1 << 0)
+
+#endif /* SAMSUNG_REGS_SDO_H */
--- /dev/null
+/*
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
+ *
+ * Video processor register header file for Samsung Mixer driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef SAMSUNG_REGS_VP_H
+#define SAMSUNG_REGS_VP_H
+
+/*
+ * Register part
+ */
+
+#define VP_ENABLE 0x0000
+#define VP_SRESET 0x0004
+#define VP_SHADOW_UPDATE 0x0008
+#define VP_FIELD_ID 0x000C
+#define VP_MODE 0x0010
+#define VP_IMG_SIZE_Y 0x0014
+#define VP_IMG_SIZE_C 0x0018
+#define VP_PER_RATE_CTRL 0x001C
+#define VP_TOP_Y_PTR 0x0028
+#define VP_BOT_Y_PTR 0x002C
+#define VP_TOP_C_PTR 0x0030
+#define VP_BOT_C_PTR 0x0034
+#define VP_ENDIAN_MODE 0x03CC
+#define VP_SRC_H_POSITION 0x0044
+#define VP_SRC_V_POSITION 0x0048
+#define VP_SRC_WIDTH 0x004C
+#define VP_SRC_HEIGHT 0x0050
+#define VP_DST_H_POSITION 0x0054
+#define VP_DST_V_POSITION 0x0058
+#define VP_DST_WIDTH 0x005C
+#define VP_DST_HEIGHT 0x0060
+#define VP_H_RATIO 0x0064
+#define VP_V_RATIO 0x0068
+#define VP_POLY8_Y0_LL 0x006C
+#define VP_POLY4_Y0_LL 0x00EC
+#define VP_POLY4_C0_LL 0x012C
+
+/*
+ * Bit definition part
+ */
+
+/* generates mask for range of bits */
+
+#define VP_MASK(high_bit, low_bit) \
+ (((2 << ((high_bit) - (low_bit))) - 1) << (low_bit))
+
+#define VP_MASK_VAL(val, high_bit, low_bit) \
+ (((val) << (low_bit)) & VP_MASK(high_bit, low_bit))
+
+/* VP_ENABLE */
+#define VP_ENABLE_ON (1 << 0)
+
+/* VP_SRESET */
+#define VP_SRESET_PROCESSING (1 << 0)
+
+/* VP_SHADOW_UPDATE */
+#define VP_SHADOW_UPDATE_ENABLE (1 << 0)
+
+/* VP_MODE */
+#define VP_MODE_NV12 (0 << 6)
+#define VP_MODE_NV21 (1 << 6)
+#define VP_MODE_LINE_SKIP (1 << 5)
+#define VP_MODE_MEM_LINEAR (0 << 4)
+#define VP_MODE_MEM_TILED (1 << 4)
+#define VP_MODE_FMT_MASK (5 << 4)
+#define VP_MODE_FIELD_ID_AUTO_TOGGLING (1 << 2)
+#define VP_MODE_2D_IPC (1 << 1)
+
+/* VP_IMG_SIZE_Y */
+/* VP_IMG_SIZE_C */
+#define VP_IMG_HSIZE(x) VP_MASK_VAL(x, 29, 16)
+#define VP_IMG_VSIZE(x) VP_MASK_VAL(x, 13, 0)
+
+/* VP_SRC_H_POSITION */
+#define VP_SRC_H_POSITION_VAL(x) VP_MASK_VAL(x, 14, 4)
+
+/* VP_ENDIAN_MODE */
+#define VP_ENDIAN_MODE_LITTLE (1 << 0)
+
+#endif /* SAMSUNG_REGS_VP_H */
--- /dev/null
+/*
+ * Samsung Standard Definition Output (SDO) driver
+ *
+ * Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
+ *
+ * Tomasz Stanislawski, <t.stanislaws@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation, either version 2 of the License,
+ * or (at your option) any later version.
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/irq.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/regulator/consumer.h>
+#include <linux/slab.h>
+
+#include <media/v4l2-subdev.h>
+#include <media/exynos_mc.h>
+
+#include "regs-sdo.h"
+
+MODULE_AUTHOR("Tomasz Stanislawski, <t.stanislaws@samsung.com>");
+MODULE_DESCRIPTION("Samsung Standard Definition Output (SDO)");
+MODULE_LICENSE("GPL");
+
+/* SDO pad definitions */
+#define SDO_PAD_SINK 0
+#define SDO_PADS_NUM 1
+
+#define SDO_DEFAULT_STD V4L2_STD_PAL
+
+struct sdo_format {
+ v4l2_std_id id;
+ /* all modes are 720 pixels wide */
+ unsigned int height;
+ unsigned int cookie;
+};
+
+struct sdo_device {
+ /** pointer to device parent */
+ struct device *dev;
+ /** base address of SDO registers */
+ void __iomem *regs;
+ /** SDO interrupt */
+ unsigned int irq;
+ /** DAC source clock */
+ struct clk *sclk_dac;
+ /** DAC clock */
+ struct clk *dac;
+ /** DAC physical interface */
+ struct clk *dacphy;
+ /** clock for control of VPLL */
+ struct clk *fout_vpll;
+ /** regulator for SDO IP power */
+ struct regulator *vdac;
+ /** regulator for SDO plug detection */
+ struct regulator *vdet;
+ /** subdev used as device interface */
+ struct v4l2_subdev sd;
+ /** sink pad connected to mixer */
+ struct media_pad pad;
+ /** current format */
+ const struct sdo_format *fmt;
+};
+
+static inline struct sdo_device *sd_to_sdev(struct v4l2_subdev *sd)
+{
+ return container_of(sd, struct sdo_device, sd);
+}
+
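+/*
+ * Read-modify-write helper: only the bits set in 'mask' are taken from
+ * 'value'; all other bits of the register are preserved. For example,
+ * sdo_write_mask(sdev, SDO_DAC, ~0, SDO_POWER_ON_DAC) sets just the
+ * DAC power bit.
+ */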
+static inline
+void sdo_write_mask(struct sdo_device *sdev, u32 reg_id, u32 value, u32 mask)
+{
+ u32 old = readl(sdev->regs + reg_id);
+ value = (value & mask) | (old & ~mask);
+ writel(value, sdev->regs + reg_id);
+}
+
+static inline
+void sdo_write(struct sdo_device *sdev, u32 reg_id, u32 value)
+{
+ writel(value, sdev->regs + reg_id);
+}
+
+static inline
+u32 sdo_read(struct sdo_device *sdev, u32 reg_id)
+{
+ return readl(sdev->regs + reg_id);
+}
+
+static irqreturn_t sdo_irq_handler(int irq, void *dev_data)
+{
+ struct sdo_device *sdev = dev_data;
+
+ /* clear interrupt */
+ sdo_write_mask(sdev, SDO_IRQ, ~0, SDO_VSYNC_IRQ_PEND);
+ return IRQ_HANDLED;
+}
+
+static void sdo_reg_debug(struct sdo_device *sdev)
+{
+#define DBGREG(reg_id) \
+ dev_info(sdev->dev, #reg_id " = %08x\n", \
+ sdo_read(sdev, reg_id))
+
+ DBGREG(SDO_CLKCON);
+ DBGREG(SDO_CONFIG);
+ DBGREG(SDO_VBI);
+ DBGREG(SDO_DAC);
+ DBGREG(SDO_IRQ);
+ DBGREG(SDO_IRQMASK);
+ DBGREG(SDO_VERSION);
+}
+
+static const struct sdo_format sdo_format[] = {
+ { V4L2_STD_PAL_N, .height = 576, .cookie = SDO_PAL_N },
+ { V4L2_STD_PAL_Nc, .height = 576, .cookie = SDO_PAL_NC },
+ { V4L2_STD_PAL_M, .height = 480, .cookie = SDO_PAL_M },
+ { V4L2_STD_PAL_60, .height = 480, .cookie = SDO_PAL_60 },
+ { V4L2_STD_NTSC_443, .height = 480, .cookie = SDO_NTSC_443 },
+ { V4L2_STD_PAL, .height = 576, .cookie = SDO_PAL_BGHID },
+ { V4L2_STD_NTSC_M, .height = 480, .cookie = SDO_NTSC_M },
+};
+
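+/*
+ * v4l2_std_id is a bitmask, so a composite standard (e.g.
+ * V4L2_STD_625_50) can match more than one entry; the first hit in
+ * table order wins.
+ */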
+static const struct sdo_format *sdo_find_format(v4l2_std_id id)
+{
+ int i;
+ for (i = 0; i < ARRAY_SIZE(sdo_format); ++i)
+ if (sdo_format[i].id & id)
+ return &sdo_format[i];
+ return NULL;
+}
+
+static int sdo_g_tvnorms_output(struct v4l2_subdev *sd, v4l2_std_id *std)
+{
+ *std = V4L2_STD_NTSC_M | V4L2_STD_PAL_M | V4L2_STD_PAL |
+ V4L2_STD_PAL_N | V4L2_STD_PAL_Nc |
+ V4L2_STD_NTSC_443 | V4L2_STD_PAL_60;
+ return 0;
+}
+
+static int sdo_s_std_output(struct v4l2_subdev *sd, v4l2_std_id std)
+{
+ struct sdo_device *sdev = sd_to_sdev(sd);
+ const struct sdo_format *fmt;
+ fmt = sdo_find_format(std);
+ if (fmt == NULL)
+ return -EINVAL;
+ sdev->fmt = fmt;
+ return 0;
+}
+
+static int sdo_g_std_output(struct v4l2_subdev *sd, v4l2_std_id *std)
+{
+ *std = sd_to_sdev(sd)->fmt->id;
+ return 0;
+}
+
+static int sdo_g_mbus_fmt(struct v4l2_subdev *sd,
+ struct v4l2_mbus_framefmt *fmt)
+{
+ struct sdo_device *sdev = sd_to_sdev(sd);
+
+ if (!sdev->fmt)
+ return -ENXIO;
+ /* all modes are 720 pixels wide */
+ fmt->width = 720;
+ fmt->height = sdev->fmt->height;
+ fmt->code = V4L2_MBUS_FMT_FIXED;
+ fmt->field = V4L2_FIELD_INTERLACED;
+ return 0;
+}
+
+static int sdo_s_power(struct v4l2_subdev *sd, int on)
+{
+ struct sdo_device *sdev = sd_to_sdev(sd);
+ struct device *dev = sdev->dev;
+ int ret;
+
+ dev_info(dev, "sdo_s_power(%d)\n", on);
+
+ if (on)
+ ret = pm_runtime_get_sync(dev);
+ else
+ ret = pm_runtime_put_sync(dev);
+
+ /* only values < 0 indicate errors */
+ return IS_ERR_VALUE(ret) ? ret : 0;
+}
+
+static int sdo_streamon(struct sdo_device *sdev)
+{
+ /* set proper clock for Timing Generator */
+ clk_set_rate(sdev->fout_vpll, 54000000);
+ dev_info(sdev->dev, "fout_vpll.rate = %lu\n",
+ clk_get_rate(sdev->fout_vpll));
+ /* enable clock in SDO */
+ sdo_write_mask(sdev, SDO_CLKCON, ~0, SDO_TVOUT_CLOCK_ON);
+ clk_enable(sdev->dacphy);
+ /* enable DAC */
+ sdo_write_mask(sdev, SDO_DAC, ~0, SDO_POWER_ON_DAC);
+ sdo_reg_debug(sdev);
+ return 0;
+}
+
+static int sdo_streamoff(struct sdo_device *sdev)
+{
+ int tries;
+
+ sdo_write_mask(sdev, SDO_DAC, 0, SDO_POWER_ON_DAC);
+ clk_disable(sdev->dacphy);
+ sdo_write_mask(sdev, SDO_CLKCON, 0, SDO_TVOUT_CLOCK_ON);
+ for (tries = 100; tries; --tries) {
+ if (sdo_read(sdev, SDO_CLKCON) & SDO_TVOUT_CLOCK_READY)
+ break;
+ mdelay(1);
+ }
+ if (tries == 0)
+ dev_err(sdev->dev, "failed to stop streaming\n");
+ return tries ? 0 : -EIO;
+}
+
+static int sdo_s_stream(struct v4l2_subdev *sd, int on)
+{
+ struct sdo_device *sdev = sd_to_sdev(sd);
+ return on ? sdo_streamon(sdev) : sdo_streamoff(sdev);
+}
+
+static const struct v4l2_subdev_core_ops sdo_sd_core_ops = {
+ .s_power = sdo_s_power,
+};
+
+static const struct v4l2_subdev_video_ops sdo_sd_video_ops = {
+ .s_std_output = sdo_s_std_output,
+ .g_std_output = sdo_g_std_output,
+ .g_tvnorms_output = sdo_g_tvnorms_output,
+ .g_mbus_fmt = sdo_g_mbus_fmt,
+ .s_stream = sdo_s_stream,
+};
+
+static const struct v4l2_subdev_ops sdo_sd_ops = {
+ .core = &sdo_sd_core_ops,
+ .video = &sdo_sd_video_ops,
+};
+
+static int sdo_runtime_suspend(struct device *dev)
+{
+ struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ struct sdo_device *sdev = sd_to_sdev(sd);
+
+ dev_info(dev, "suspend\n");
+ regulator_disable(sdev->vdet);
+ regulator_disable(sdev->vdac);
+ clk_disable(sdev->sclk_dac);
+ return 0;
+}
+
+static int sdo_runtime_resume(struct device *dev)
+{
+ struct v4l2_subdev *sd = dev_get_drvdata(dev);
+ struct sdo_device *sdev = sd_to_sdev(sd);
+
+ dev_info(dev, "resume\n");
+ clk_enable(sdev->sclk_dac);
+ regulator_enable(sdev->vdac);
+ regulator_enable(sdev->vdet);
+
+ /* software reset */
+ sdo_write_mask(sdev, SDO_CLKCON, ~0, SDO_TVOUT_SW_RESET);
+ mdelay(10);
+ sdo_write_mask(sdev, SDO_CLKCON, 0, SDO_TVOUT_SW_RESET);
+
+ /* setting TV mode */
+ sdo_write_mask(sdev, SDO_CONFIG, sdev->fmt->cookie, SDO_STANDARD_MASK);
+ /* XXX: forcing interlaced mode using undocumented bit */
+ sdo_write_mask(sdev, SDO_CONFIG, 0, SDO_PROGRESSIVE);
+ /* turn all VBI off */
+ sdo_write_mask(sdev, SDO_VBI, 0, SDO_CVBS_WSS_INS |
+ SDO_CVBS_CLOSED_CAPTION_MASK);
+ /* turn all post processing off */
+ sdo_write_mask(sdev, SDO_CCCON, ~0, SDO_COMPENSATION_BHS_ADJ_OFF |
+ SDO_COMPENSATION_CVBS_COMP_OFF);
+ sdo_reg_debug(sdev);
+ return 0;
+}
+
+static const struct dev_pm_ops sdo_pm_ops = {
+ .runtime_suspend = sdo_runtime_suspend,
+ .runtime_resume = sdo_runtime_resume,
+};
+
+static int sdo_link_setup(struct media_entity *entity,
+ const struct media_pad *local,
+ const struct media_pad *remote, u32 flags)
+{
+ return 0;
+}
+
+/* sdo entity operations */
+static const struct media_entity_operations sdo_entity_ops = {
+ .link_setup = sdo_link_setup,
+};
+
+static int sdo_register_entity(struct sdo_device *sdev)
+{
+ struct v4l2_subdev *sd = &sdev->sd;
+ struct v4l2_device *v4l2_dev;
+ struct media_pad *pads = &sdev->pad;
+ struct media_entity *me = &sd->entity;
+ struct device *dev = sdev->dev;
+ struct exynos_md *md;
+ int ret;
+
+ dev_dbg(dev, "SDO entity init\n");
+
+ /* init sdo subdev */
+ v4l2_subdev_init(sd, &sdo_sd_ops);
+ sd->owner = THIS_MODULE;
+ strlcpy(sd->name, "s5p-sdo", sizeof(sd->name));
+
+ dev_set_drvdata(dev, sd);
+
+ /* init sdo sub-device as entity */
+ pads[SDO_PAD_SINK].flags = MEDIA_PAD_FL_SINK;
+ me->ops = &sdo_entity_ops;
+ ret = media_entity_init(me, SDO_PADS_NUM, pads, 0);
+ if (ret) {
+ dev_err(dev, "failed to initialize media entity\n");
+ return ret;
+ }
+
+ /* get output media ptr for registering sdo's sd */
+ md = (struct exynos_md *)module_name_to_driver_data(MDEV_MODULE_NAME);
+ if (!md) {
+ dev_err(dev, "failed to get output media device\n");
+ return -ENODEV;
+ }
+
+ v4l2_dev = &md->v4l2_dev;
+
+	/* register SDO subdev as entity to v4l2_dev pointer of
+ * output media device
+ */
+ ret = v4l2_device_register_subdev(v4l2_dev, sd);
+ if (ret) {
+ dev_err(dev, "failed to register SDO subdev\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static void sdo_entity_info_print(struct sdo_device *sdev)
+{
+ struct v4l2_subdev *sd = &sdev->sd;
+ struct media_entity *me = &sd->entity;
+
+ dev_dbg(sdev->dev, "\n************** SDO entity info **************\n");
+ dev_dbg(sdev->dev, "[SUB DEVICE INFO]\n");
+ entity_info_print(me, sdev->dev);
+ dev_dbg(sdev->dev, "*********************************************\n\n");
+}
+
+static int __devinit sdo_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct sdo_device *sdev;
+ struct resource *res;
+ int ret = 0;
+ struct clk *sclk_vpll;
+
+ dev_info(dev, "probe start\n");
+ sdev = kzalloc(sizeof *sdev, GFP_KERNEL);
+ if (!sdev) {
+ dev_err(dev, "not enough memory.\n");
+ ret = -ENOMEM;
+ goto fail;
+ }
+ sdev->dev = dev;
+
+ /* mapping registers */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (res == NULL) {
+ dev_err(dev, "get memory resource failed.\n");
+ ret = -ENXIO;
+ goto fail_sdev;
+ }
+
+ sdev->regs = ioremap(res->start, resource_size(res));
+ if (sdev->regs == NULL) {
+ dev_err(dev, "register mapping failed.\n");
+ ret = -ENXIO;
+ goto fail_sdev;
+ }
+
+ /* acquiring interrupt */
+ res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ if (res == NULL) {
+ dev_err(dev, "get interrupt resource failed.\n");
+ ret = -ENXIO;
+ goto fail_regs;
+ }
+ ret = request_irq(res->start, sdo_irq_handler, 0, "s5p-sdo", sdev);
+ if (ret) {
+ dev_err(dev, "request interrupt failed.\n");
+ goto fail_regs;
+ }
+ sdev->irq = res->start;
+
+ /* acquire clocks */
+ sdev->sclk_dac = clk_get(dev, "sclk_dac");
+ if (IS_ERR_OR_NULL(sdev->sclk_dac)) {
+ dev_err(dev, "failed to get clock 'sclk_dac'\n");
+ ret = -ENXIO;
+ goto fail_irq;
+ }
+ sdev->dac = clk_get(dev, "dac");
+ if (IS_ERR_OR_NULL(sdev->dac)) {
+ dev_err(dev, "failed to get clock 'dac'\n");
+ ret = -ENXIO;
+ goto fail_sclk_dac;
+ }
+ sdev->dacphy = clk_get(dev, "dacphy");
+ if (IS_ERR_OR_NULL(sdev->dacphy)) {
+ dev_err(dev, "failed to get clock 'dacphy'\n");
+ ret = -ENXIO;
+ goto fail_dac;
+ }
+ sclk_vpll = clk_get(dev, "sclk_vpll");
+ if (IS_ERR_OR_NULL(sclk_vpll)) {
+ dev_err(dev, "failed to get clock 'sclk_vpll'\n");
+ ret = -ENXIO;
+ goto fail_dacphy;
+ }
+ clk_set_parent(sdev->sclk_dac, sclk_vpll);
+ clk_put(sclk_vpll);
+	sdev->fout_vpll = clk_get(dev, "fout_vpll");
+	if (IS_ERR_OR_NULL(sdev->fout_vpll)) {
+		dev_err(dev, "failed to get clock 'fout_vpll'\n");
+		ret = -ENXIO;
+		goto fail_dacphy;
+	}
+	dev_info(dev, "fout_vpll.rate = %lu\n", clk_get_rate(sdev->fout_vpll));
+
+ /* enable gate for dac clock, because mixer uses it */
+ clk_enable(sdev->dac);
+
+ /* configure power management */
+ pm_runtime_enable(dev);
+
+ /* set default format */
+ sdev->fmt = sdo_find_format(SDO_DEFAULT_STD);
+ BUG_ON(sdev->fmt == NULL);
+
+	ret = sdo_register_entity(sdev);
+	if (ret)
+		goto fail_fout_vpll;
+
+ sdo_entity_info_print(sdev);
+
+ dev_info(dev, "probe succeeded\n");
+ return 0;
+
+fail_fout_vpll:
+ pm_runtime_disable(dev);
+ clk_disable(sdev->dac);
+ clk_put(sdev->fout_vpll);
+fail_dacphy:
+ clk_put(sdev->dacphy);
+fail_dac:
+ clk_put(sdev->dac);
+fail_sclk_dac:
+ clk_put(sdev->sclk_dac);
+fail_irq:
+ free_irq(sdev->irq, sdev);
+fail_regs:
+ iounmap(sdev->regs);
+fail_sdev:
+ kfree(sdev);
+fail:
+ dev_info(dev, "probe failed\n");
+ return ret;
+}
+
+static int __devexit sdo_remove(struct platform_device *pdev)
+{
+ struct v4l2_subdev *sd = dev_get_drvdata(&pdev->dev);
+ struct sdo_device *sdev = sd_to_sdev(sd);
+
+ pm_runtime_disable(&pdev->dev);
+ clk_disable(sdev->dac);
+ regulator_put(sdev->vdet);
+ regulator_put(sdev->vdac);
+ clk_put(sdev->fout_vpll);
+ clk_put(sdev->dacphy);
+ clk_put(sdev->dac);
+ clk_put(sdev->sclk_dac);
+ free_irq(sdev->irq, sdev);
+ iounmap(sdev->regs);
+ kfree(sdev);
+
+ dev_info(&pdev->dev, "remove successful\n");
+ return 0;
+}
+
+static struct platform_driver sdo_driver __refdata = {
+ .probe = sdo_probe,
+ .remove = __devexit_p(sdo_remove),
+ .driver = {
+ .name = "s5p-sdo",
+ .owner = THIS_MODULE,
+ .pm = &sdo_pm_ops,
+ }
+};
+
+static int __init sdo_init(void)
+{
+ int ret;
+ static const char banner[] __initdata = KERN_INFO
+ "Samsung Standard Definition Output (SDO) driver, "
+ "(c) 2010-2011 Samsung Electronics Co., Ltd.\n";
+ printk(banner);
+
+ ret = platform_driver_register(&sdo_driver);
+ if (ret)
+ printk(KERN_ERR "SDO platform driver register failed\n");
+
+ return ret;
+}
+module_init(sdo_init);
+
+static void __exit sdo_exit(void)
+{
+ platform_driver_unregister(&sdo_driver);
+}
+module_exit(sdo_exit);
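
Apart from the banner printk, sdo_init()/sdo_exit() are plain register/unregister
boilerplate; on kernels that provide the helper macro, the pair could be collapsed
to a single line (a sketch, assuming the banner message moves into sdo_probe()):

    module_platform_driver(sdo_driver);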
return vb2_qbuf(&fimc->vid_cap.vbq, buf);
}
+static int fimc_cap_expbuf(struct file *file, void *priv,
+ struct v4l2_exportbuffer *eb)
+{
+ struct fimc_dev *fimc = video_drvdata(file);
+
+ return vb2_expbuf(&fimc->vid_cap.vbq, eb);
+}
+
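fimc_cap_expbuf() simply delegates VIDIOC_EXPBUF to vb2_expbuf(), which handles all
the DMABUF bookkeeping. From userspace the new ioctl is used roughly as follows
(a sketch; the field layout shown matches the mainline struct v4l2_exportbuffer and
may differ slightly in this RFC):

    struct v4l2_exportbuffer expbuf;

    memset(&expbuf, 0, sizeof(expbuf));
    expbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    expbuf.index = 0;	/* buffer to export */
    if (ioctl(fd, VIDIOC_EXPBUF, &expbuf) < 0)
        perror("VIDIOC_EXPBUF");
    /* on success, expbuf.fd is a DMABUF file descriptor */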
static int fimc_cap_dqbuf(struct file *file, void *priv,
struct v4l2_buffer *buf)
{
.vidioc_qbuf = fimc_cap_qbuf,
.vidioc_dqbuf = fimc_cap_dqbuf,
+ .vidioc_expbuf = fimc_cap_expbuf,
.vidioc_prepare_buf = fimc_cap_prepare_buf,
.vidioc_create_bufs = fimc_cap_create_bufs,
.bottom = DEFAULT_HEIGHT,
};
-struct g2d_fmt *find_fmt(struct v4l2_format *f)
+static struct g2d_fmt *find_fmt(struct v4l2_format *f)
{
unsigned int i;
for (i = 0; i < NUM_FORMATS; i++) {
obj-$(CONFIG_VIDEO_SAMSUNG_S5P_MFC) := s5p-mfc.o
-s5p-mfc-y += s5p_mfc.o s5p_mfc_intr.o s5p_mfc_opr.o
+s5p-mfc-y += s5p_mfc.o s5p_mfc_intr.o
s5p-mfc-y += s5p_mfc_dec.o s5p_mfc_enc.o
-s5p-mfc-y += s5p_mfc_ctrl.o s5p_mfc_cmd.o
-s5p-mfc-y += s5p_mfc_pm.o s5p_mfc_shm.o
+s5p-mfc-y += s5p_mfc_ctrl.o s5p_mfc_pm.o
+obj-$(CONFIG_VIDEO_SAMSUNG_S5P_MFC_V5) += s5p_mfc_opr.o s5p_mfc_cmd.o s5p_mfc_shm.o
+obj-$(CONFIG_VIDEO_SAMSUNG_S5P_MFC_V6) += s5p_mfc_opr_v6.o s5p_mfc_cmd_v6.o
--- /dev/null
+/*
+ * Register definition file for Samsung MFC V6.x Interface (FIMV) driver
+ *
+ * Copyright (c) 2012 Samsung Electronics
+ * http://www.samsung.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _REGS_FIMV_V6_H
+#define _REGS_FIMV_V6_H
+
+#define S5P_FIMV_REG_SIZE (S5P_FIMV_END_ADDR - S5P_FIMV_START_ADDR)
+#define S5P_FIMV_REG_COUNT ((S5P_FIMV_END_ADDR - S5P_FIMV_START_ADDR) / 4)
+
+/* Number of bits that the buffer address should be shifted for particular
+ * MFC buffers. */
+#define S5P_FIMV_MEM_OFFSET 0
+
+#define S5P_FIMV_START_ADDR 0x0000
+#define S5P_FIMV_END_ADDR 0xfd80
+
+#define S5P_FIMV_REG_CLEAR_BEGIN 0xf000
+#define S5P_FIMV_REG_CLEAR_COUNT 1024
+
+/* Codec Common Registers */
+#define S5P_FIMV_RISC_ON 0x0000
+#define S5P_FIMV_RISC2HOST_INT 0x003C
+#define S5P_FIMV_HOST2RISC_INT 0x0044
+#define S5P_FIMV_RISC_BASE_ADDRESS 0x0054
+
+#define S5P_FIMV_MFC_RESET 0x1070
+
+#define S5P_FIMV_HOST2RISC_CMD 0x1100
+#define S5P_FIMV_H2R_CMD_EMPTY 0
+#define S5P_FIMV_H2R_CMD_SYS_INIT 1
+#define S5P_FIMV_H2R_CMD_OPEN_INSTANCE 2
+#define S5P_FIMV_CH_SEQ_HEADER 3
+#define S5P_FIMV_CH_INIT_BUFS 4
+#define S5P_FIMV_CH_FRAME_START 5
+#define S5P_FIMV_H2R_CMD_CLOSE_INSTANCE 6
+#define S5P_FIMV_H2R_CMD_SLEEP 7
+#define S5P_FIMV_H2R_CMD_WAKEUP 8
+#define S5P_FIMV_CH_LAST_FRAME 9
+#define S5P_FIMV_H2R_CMD_FLUSH 10
+/* RMVME: REALLOC used? */
+#define S5P_FIMV_CH_FRAME_START_REALLOC 5
+
+#define S5P_FIMV_RISC2HOST_CMD 0x1104
+#define S5P_FIMV_R2H_CMD_EMPTY 0
+#define S5P_FIMV_R2H_CMD_SYS_INIT_RET 1
+#define S5P_FIMV_R2H_CMD_OPEN_INSTANCE_RET 2
+#define S5P_FIMV_R2H_CMD_SEQ_DONE_RET 3
+#define S5P_FIMV_R2H_CMD_INIT_BUFFERS_RET 4
+
+#define S5P_FIMV_R2H_CMD_CLOSE_INSTANCE_RET 6
+#define S5P_FIMV_R2H_CMD_SLEEP_RET 7
+#define S5P_FIMV_R2H_CMD_WAKEUP_RET 8
+#define S5P_FIMV_R2H_CMD_COMPLETE_SEQ_RET 9
+#define S5P_FIMV_R2H_CMD_DPB_FLUSH_RET 10
+#define S5P_FIMV_R2H_CMD_NAL_ABORT_RET 11
+#define S5P_FIMV_R2H_CMD_FW_STATUS_RET 12
+#define S5P_FIMV_R2H_CMD_FRAME_DONE_RET 13
+#define S5P_FIMV_R2H_CMD_FIELD_DONE_RET 14
+#define S5P_FIMV_R2H_CMD_SLICE_DONE_RET 15
+#define S5P_FIMV_R2H_CMD_ENC_BUFFER_FUL_RET 16
+#define S5P_FIMV_R2H_CMD_ERR_RET 32
+
+#define S5P_FIMV_FW_VERSION 0xF000
+
+#define S5P_FIMV_INSTANCE_ID 0xF008
+#define S5P_FIMV_CODEC_TYPE 0xF00C
+#define S5P_FIMV_CONTEXT_MEM_ADDR 0xF014
+#define S5P_FIMV_CONTEXT_MEM_SIZE 0xF018
+#define S5P_FIMV_PIXEL_FORMAT 0xF020
+
+#define S5P_FIMV_METADATA_ENABLE 0xF024
+#define S5P_FIMV_DBG_BUFFER_ADDR 0xF030
+#define S5P_FIMV_DBG_BUFFER_SIZE 0xF034
+#define S5P_FIMV_RET_INSTANCE_ID 0xF070
+
+#define S5P_FIMV_ERROR_CODE 0xF074
+#define S5P_FIMV_ERR_WARNINGS_START 160
+#define S5P_FIMV_ERR_DEC_MASK 0xFFFF
+#define S5P_FIMV_ERR_DEC_SHIFT 0
+#define S5P_FIMV_ERR_DSPL_MASK 0xFFFF0000
+#define S5P_FIMV_ERR_DSPL_SHIFT 16
+
+#define S5P_FIMV_DBG_BUFFER_OUTPUT_SIZE 0xF078
+#define S5P_FIMV_METADATA_STATUS 0xF07C
+#define S5P_FIMV_METADATA_ADDR_MB_INFO 0xF080
+#define S5P_FIMV_METADATA_SIZE_MB_INFO 0xF084
+
+/* Decoder Registers */
+#define S5P_FIMV_D_CRC_CTRL 0xF0B0
+#define S5P_FIMV_D_DEC_OPTIONS 0xF0B4
+#define S5P_FIMV_D_OPT_FMO_ASO_CTRL_MASK 4
+#define S5P_FIMV_D_OPT_DDELAY_EN_SHIFT 3
+#define S5P_FIMV_D_OPT_LF_CTRL_SHIFT 1
+#define S5P_FIMV_D_OPT_LF_CTRL_MASK 0x3
+#define S5P_FIMV_D_OPT_TILE_MODE_SHIFT 0
+
+#define S5P_FIMV_D_DISPLAY_DELAY 0xF0B8
+
+#define S5P_FIMV_D_SET_FRAME_WIDTH 0xF0BC
+#define S5P_FIMV_D_SET_FRAME_HEIGHT 0xF0C0
+
+#define S5P_FIMV_D_SEI_ENABLE 0xF0C4
+
+/* Buffer setting registers */
+#define S5P_FIMV_D_MIN_NUM_DPB 0xF0F0
+#define S5P_FIMV_D_MIN_LUMA_DPB_SIZE 0xF0F4
+#define S5P_FIMV_D_MIN_CHROMA_DPB_SIZE 0xF0F8
+#define S5P_FIMV_D_MVC_NUM_VIEWS 0xF0FC
+#define S5P_FIMV_D_MIN_NUM_MV 0xF100
+#define S5P_FIMV_D_NUM_DPB 0xF130
+#define S5P_FIMV_D_LUMA_DPB_SIZE 0xF134
+#define S5P_FIMV_D_CHROMA_DPB_SIZE 0xF138
+#define S5P_FIMV_D_MV_BUFFER_SIZE 0xF13C
+
+#define S5P_FIMV_D_LUMA_DPB 0xF140
+#define S5P_FIMV_D_CHROMA_DPB 0xF240
+#define S5P_FIMV_D_MV_BUFFER 0xF340
+
+#define S5P_FIMV_D_SCRATCH_BUFFER_ADDR 0xF440
+#define S5P_FIMV_D_SCRATCH_BUFFER_SIZE 0xF444
+#define S5P_FIMV_D_METADATA_BUFFER_ADDR 0xF448
+#define S5P_FIMV_D_METADATA_BUFFER_SIZE 0xF44C
+#define S5P_FIMV_D_NUM_MV 0xF478
+#define S5P_FIMV_D_CPB_BUFFER_ADDR 0xF4B0
+#define S5P_FIMV_D_CPB_BUFFER_SIZE 0xF4B4
+
+#define S5P_FIMV_D_AVAILABLE_DPB_FLAG_UPPER 0xF4B8
+#define S5P_FIMV_D_AVAILABLE_DPB_FLAG_LOWER 0xF4BC
+#define S5P_FIMV_D_CPB_BUFFER_OFFSET 0xF4C0
+#define S5P_FIMV_D_SLICE_IF_ENABLE 0xF4C4
+#define S5P_FIMV_D_PICTURE_TAG 0xF4C8
+#define S5P_FIMV_D_STREAM_DATA_SIZE 0xF4D0
+
+/* Display information register */
+#define S5P_FIMV_D_DISPLAY_FRAME_WIDTH 0xF500
+#define S5P_FIMV_D_DISPLAY_FRAME_HEIGHT 0xF504
+
+/* Display status */
+#define S5P_FIMV_D_DISPLAY_STATUS 0xF508
+#define S5P_FIMV_DEC_STATUS_DECODING_ONLY 0
+#define S5P_FIMV_DEC_STATUS_DECODING_DISPLAY 1
+#define S5P_FIMV_DEC_STATUS_DISPLAY_ONLY 2
+#define S5P_FIMV_DEC_STATUS_DECODING_EMPTY 3
+#define S5P_FIMV_DEC_STATUS_DECODING_STATUS_MASK 7
+#define S5P_FIMV_DEC_STATUS_PROGRESSIVE (0<<3)
+#define S5P_FIMV_DEC_STATUS_INTERLACE (1<<3)
+#define S5P_FIMV_DEC_STATUS_INTERLACE_MASK (1<<3)
+#define S5P_FIMV_DEC_STATUS_RESOLUTION_MASK (3<<4)
+#define S5P_FIMV_DEC_STATUS_RESOLUTION_INC (1<<4)
+#define S5P_FIMV_DEC_STATUS_RESOLUTION_DEC (2<<4)
+#define S5P_FIMV_DEC_STATUS_RESOLUTION_SHIFT 4
+#define S5P_FIMV_DEC_STATUS_CRC_GENERATED (1<<5)
+#define S5P_FIMV_DEC_STATUS_CRC_NOT_GENERATED (0<<5)
+#define S5P_FIMV_DEC_STATUS_CRC_MASK (1<<5)
+
+#define S5P_FIMV_D_DISPLAY_LUMA_ADDR 0xF50C
+#define S5P_FIMV_D_DISPLAY_CHROMA_ADDR 0xF510
+
+#define S5P_FIMV_D_DISPLAY_FRAME_TYPE 0xF514
+#define S5P_FIMV_DECODE_FRAME_SKIPPED 0
+#define S5P_FIMV_DECODE_FRAME_I_FRAME 1
+#define S5P_FIMV_DECODE_FRAME_P_FRAME 2
+#define S5P_FIMV_DECODE_FRAME_B_FRAME 3
+#define S5P_FIMV_DECODE_FRAME_OTHER_FRAME 4
+#define S5P_FIMV_SHARED_CROP_INFO_H 0x0020
+#define S5P_FIMV_SHARED_CROP_LEFT_MASK 0xFFFF
+#define S5P_FIMV_SHARED_CROP_LEFT_SHIFT 0
+#define S5P_FIMV_SHARED_CROP_RIGHT_MASK 0xFFFF0000
+#define S5P_FIMV_SHARED_CROP_RIGHT_SHIFT 16
+#define S5P_FIMV_SHARED_CROP_INFO_V 0x0024
+#define S5P_FIMV_SHARED_CROP_TOP_MASK 0xFFFF
+#define S5P_FIMV_SHARED_CROP_TOP_SHIFT 0
+#define S5P_FIMV_SHARED_CROP_BOTTOM_MASK 0xFFFF0000
+#define S5P_FIMV_SHARED_CROP_BOTTOM_SHIFT 16
+
+#define S5P_FIMV_D_DISPLAY_CROP_INFO1 0xF518
+#define S5P_FIMV_D_DISPLAY_CROP_INFO2 0xF51C
+#define S5P_FIMV_D_DISPLAY_PICTURE_PROFILE 0xF520
+#define S5P_FIMV_D_DISPLAY_LUMA_CRC_TOP 0xF524
+#define S5P_FIMV_D_DISPLAY_CHROMA_CRC_TOP 0xF528
+#define S5P_FIMV_D_DISPLAY_LUMA_CRC_BOT 0xF52C
+#define S5P_FIMV_D_DISPLAY_CHROMA_CRC_BOT 0xF530
+#define S5P_FIMV_D_DISPLAY_ASPECT_RATIO 0xF534
+#define S5P_FIMV_D_DISPLAY_EXTENDED_AR 0xF538
+
+/* Decoded picture information register */
+#define S5P_FIMV_D_DECODED_FRAME_WIDTH 0xF53C
+#define S5P_FIMV_D_DECODED_FRAME_HEIGHT 0xF540
+#define S5P_FIMV_D_DECODED_STATUS 0xF544
+#define S5P_FIMV_DEC_CRC_GEN_MASK 0x1
+#define S5P_FIMV_DEC_CRC_GEN_SHIFT 6
+
+#define S5P_FIMV_D_DECODED_LUMA_ADDR 0xF548
+#define S5P_FIMV_D_DECODED_CHROMA_ADDR 0xF54C
+
+#define S5P_FIMV_D_DECODED_FRAME_TYPE 0xF550
+#define S5P_FIMV_DECODE_FRAME_MASK 7
+
+#define S5P_FIMV_D_DECODED_CROP_INFO1 0xF554
+#define S5P_FIMV_D_DECODED_CROP_INFO2 0xF558
+#define S5P_FIMV_D_DECODED_PICTURE_PROFILE 0xF55C
+#define S5P_FIMV_D_DECODED_NAL_SIZE 0xF560
+#define S5P_FIMV_D_DECODED_LUMA_CRC_TOP 0xF564
+#define S5P_FIMV_D_DECODED_CHROMA_CRC_TOP 0xF568
+#define S5P_FIMV_D_DECODED_LUMA_CRC_BOT 0xF56C
+#define S5P_FIMV_D_DECODED_CHROMA_CRC_BOT 0xF570
+
+/* Returned value register for specific setting */
+#define S5P_FIMV_D_RET_PICTURE_TAG_TOP 0xF574
+#define S5P_FIMV_D_RET_PICTURE_TAG_BOT 0xF578
+#define S5P_FIMV_D_RET_PICTURE_TIME_TOP 0xF57C
+#define S5P_FIMV_D_RET_PICTURE_TIME_BOT 0xF580
+#define S5P_FIMV_D_CHROMA_FORMAT 0xF588
+#define S5P_FIMV_D_MPEG4_INFO 0xF58C
+#define S5P_FIMV_D_H264_INFO 0xF590
+
+#define S5P_FIMV_D_METADATA_ADDR_CONCEALED_MB 0xF594
+#define S5P_FIMV_D_METADATA_SIZE_CONCEALED_MB 0xF598
+#define S5P_FIMV_D_METADATA_ADDR_VC1_PARAM 0xF59C
+#define S5P_FIMV_D_METADATA_SIZE_VC1_PARAM 0xF5A0
+#define S5P_FIMV_D_METADATA_ADDR_SEI_NAL 0xF5A4
+#define S5P_FIMV_D_METADATA_SIZE_SEI_NAL 0xF5A8
+#define S5P_FIMV_D_METADATA_ADDR_VUI 0xF5AC
+#define S5P_FIMV_D_METADATA_SIZE_VUI 0xF5B0
+
+#define S5P_FIMV_D_MVC_VIEW_ID 0xF5B4
+
+/* SEI related information */
+#define S5P_FIMV_D_FRAME_PACK_SEI_AVAIL 0xF5F0
+#define S5P_FIMV_D_FRAME_PACK_ARRGMENT_ID 0xF5F4
+#define S5P_FIMV_D_FRAME_PACK_SEI_INFO 0xF5F8
+#define S5P_FIMV_D_FRAME_PACK_GRID_POS 0xF5FC
+
+/* Encoder Registers */
+#define S5P_FIMV_E_FRAME_WIDTH 0xF770
+#define S5P_FIMV_E_FRAME_HEIGHT 0xF774
+#define S5P_FIMV_E_CROPPED_FRAME_WIDTH 0xF778
+#define S5P_FIMV_E_CROPPED_FRAME_HEIGHT 0xF77C
+#define S5P_FIMV_E_FRAME_CROP_OFFSET 0xF780
+#define S5P_FIMV_E_ENC_OPTIONS 0xF784
+#define S5P_FIMV_E_PICTURE_PROFILE 0xF788
+#define S5P_FIMV_ENC_PROFILE_H264_MAIN 0
+#define S5P_FIMV_ENC_PROFILE_H264_HIGH 1
+#define S5P_FIMV_ENC_PROFILE_H264_BASELINE 2
+#define S5P_FIMV_ENC_PROFILE_H264_CONSTRAINED_BASELINE 3
+#define S5P_FIMV_ENC_PROFILE_MPEG4_SIMPLE 0
+#define S5P_FIMV_ENC_PROFILE_MPEG4_ADVANCED_SIMPLE 1
+#define S5P_FIMV_E_FIXED_PICTURE_QP 0xF790
+
+#define S5P_FIMV_E_RC_CONFIG 0xF794
+#define S5P_FIMV_E_RC_QP_BOUND 0xF798
+#define S5P_FIMV_E_RC_RPARAM 0xF79C
+#define S5P_FIMV_E_MB_RC_CONFIG 0xF7A0
+#define S5P_FIMV_E_PADDING_CTRL 0xF7A4
+#define S5P_FIMV_E_MV_HOR_RANGE 0xF7AC
+#define S5P_FIMV_E_MV_VER_RANGE 0xF7B0
+
+#define S5P_FIMV_E_VBV_BUFFER_SIZE 0xF84C
+#define S5P_FIMV_E_VBV_INIT_DELAY 0xF850
+#define S5P_FIMV_E_NUM_DPB 0xF890
+#define S5P_FIMV_E_LUMA_DPB 0xF8C0
+#define S5P_FIMV_E_CHROMA_DPB 0xF904
+#define S5P_FIMV_E_ME_BUFFER 0xF948
+
+#define S5P_FIMV_E_SCRATCH_BUFFER_ADDR 0xF98C
+#define S5P_FIMV_E_SCRATCH_BUFFER_SIZE 0xF990
+#define S5P_FIMV_E_TMV_BUFFER0 0xF994
+#define S5P_FIMV_E_TMV_BUFFER1 0xF998
+#define S5P_FIMV_E_SOURCE_LUMA_ADDR 0xF9F0
+#define S5P_FIMV_E_SOURCE_CHROMA_ADDR 0xF9F4
+#define S5P_FIMV_E_STREAM_BUFFER_ADDR 0xF9F8
+#define S5P_FIMV_E_STREAM_BUFFER_SIZE 0xF9FC
+#define S5P_FIMV_E_ROI_BUFFER_ADDR 0xFA00
+
+#define S5P_FIMV_E_PARAM_CHANGE 0xFA04
+#define S5P_FIMV_E_IR_SIZE 0xFA08
+#define S5P_FIMV_E_GOP_CONFIG 0xFA0C
+#define S5P_FIMV_E_MSLICE_MODE 0xFA10
+#define S5P_FIMV_E_MSLICE_SIZE_MB 0xFA14
+#define S5P_FIMV_E_MSLICE_SIZE_BITS 0xFA18
+#define S5P_FIMV_E_FRAME_INSERTION 0xFA1C
+
+#define S5P_FIMV_E_RC_FRAME_RATE 0xFA20
+#define S5P_FIMV_E_RC_BIT_RATE 0xFA24
+#define S5P_FIMV_E_RC_QP_OFFSET 0xFA28
+#define S5P_FIMV_E_RC_ROI_CTRL 0xFA2C
+#define S5P_FIMV_E_PICTURE_TAG 0xFA30
+#define S5P_FIMV_E_BIT_COUNT_ENABLE 0xFA34
+#define S5P_FIMV_E_MAX_BIT_COUNT 0xFA38
+#define S5P_FIMV_E_MIN_BIT_COUNT 0xFA3C
+
+#define S5P_FIMV_E_METADATA_BUFFER_ADDR 0xFA40
+#define S5P_FIMV_E_METADATA_BUFFER_SIZE 0xFA44
+#define S5P_FIMV_E_STREAM_SIZE 0xFA80
+#define S5P_FIMV_E_SLICE_TYPE 0xFA84
+#define S5P_FIMV_ENC_SI_SLICE_TYPE_NON_CODED 0
+#define S5P_FIMV_ENC_SI_SLICE_TYPE_I 1
+#define S5P_FIMV_ENC_SI_SLICE_TYPE_P 2
+#define S5P_FIMV_ENC_SI_SLICE_TYPE_B 3
+#define S5P_FIMV_ENC_SI_SLICE_TYPE_SKIPPED 4
+#define S5P_FIMV_ENC_SI_SLICE_TYPE_OTHERS 5
+#define S5P_FIMV_E_PICTURE_COUNT 0xFA88
+#define S5P_FIMV_E_RET_PICTURE_TAG 0xFA8C
+#define S5P_FIMV_E_STREAM_BUFFER_WRITE_POINTER 0xFA90
+
+#define S5P_FIMV_E_ENCODED_SOURCE_LUMA_ADDR 0xFA94
+#define S5P_FIMV_E_ENCODED_SOURCE_CHROMA_ADDR 0xFA98
+#define S5P_FIMV_E_RECON_LUMA_DPB_ADDR 0xFA9C
+#define S5P_FIMV_E_RECON_CHROMA_DPB_ADDR 0xFAA0
+#define S5P_FIMV_E_METADATA_ADDR_ENC_SLICE 0xFAA4
+#define S5P_FIMV_E_METADATA_SIZE_ENC_SLICE 0xFAA8
+
+#define S5P_FIMV_E_MPEG4_OPTIONS 0xFB10
+#define S5P_FIMV_E_MPEG4_HEC_PERIOD 0xFB14
+#define S5P_FIMV_E_ASPECT_RATIO 0xFB50
+#define S5P_FIMV_E_EXTENDED_SAR 0xFB54
+
+#define S5P_FIMV_E_H264_OPTIONS 0xFB58
+#define S5P_FIMV_E_H264_LF_ALPHA_OFFSET 0xFB5C
+#define S5P_FIMV_E_H264_LF_BETA_OFFSET 0xFB60
+#define S5P_FIMV_E_H264_I_PERIOD 0xFB64
+
+#define S5P_FIMV_E_H264_FMO_SLICE_GRP_MAP_TYPE 0xFB68
+#define S5P_FIMV_E_H264_FMO_NUM_SLICE_GRP_MINUS1 0xFB6C
+#define S5P_FIMV_E_H264_FMO_SLICE_GRP_CHANGE_DIR 0xFB70
+#define S5P_FIMV_E_H264_FMO_SLICE_GRP_CHANGE_RATE_MINUS1 0xFB74
+#define S5P_FIMV_E_H264_FMO_RUN_LENGTH_MINUS1_0 0xFB78
+#define S5P_FIMV_E_H264_FMO_RUN_LENGTH_MINUS1_1 0xFB7C
+#define S5P_FIMV_E_H264_FMO_RUN_LENGTH_MINUS1_2 0xFB80
+#define S5P_FIMV_E_H264_FMO_RUN_LENGTH_MINUS1_3 0xFB84
+
+#define S5P_FIMV_E_H264_ASO_SLICE_ORDER_0 0xFB88
+#define S5P_FIMV_E_H264_ASO_SLICE_ORDER_1 0xFB8C
+#define S5P_FIMV_E_H264_ASO_SLICE_ORDER_2 0xFB90
+#define S5P_FIMV_E_H264_ASO_SLICE_ORDER_3 0xFB94
+#define S5P_FIMV_E_H264_ASO_SLICE_ORDER_4 0xFB98
+#define S5P_FIMV_E_H264_ASO_SLICE_ORDER_5 0xFB9C
+#define S5P_FIMV_E_H264_ASO_SLICE_ORDER_6 0xFBA0
+#define S5P_FIMV_E_H264_ASO_SLICE_ORDER_7 0xFBA4
+
+#define S5P_FIMV_E_H264_CHROMA_QP_OFFSET 0xFBA8
+#define S5P_FIMV_E_H264_NUM_T_LAYER 0xFBAC
+
+#define S5P_FIMV_E_H264_HIERARCHICAL_QP_LAYER0 0xFBB0
+#define S5P_FIMV_E_H264_HIERARCHICAL_QP_LAYER1 0xFBB4
+#define S5P_FIMV_E_H264_HIERARCHICAL_QP_LAYER2 0xFBB8
+#define S5P_FIMV_E_H264_HIERARCHICAL_QP_LAYER3 0xFBBC
+#define S5P_FIMV_E_H264_HIERARCHICAL_QP_LAYER4 0xFBC0
+#define S5P_FIMV_E_H264_HIERARCHICAL_QP_LAYER5 0xFBC4
+#define S5P_FIMV_E_H264_HIERARCHICAL_QP_LAYER6 0xFBC8
+
+#define S5P_FIMV_E_H264_FRAME_PACKING_SEI_INFO 0xFC4C
+#define S5P_FIMV_ENC_FP_ARRANGEMENT_TYPE_SIDE_BY_SIDE 0
+#define S5P_FIMV_ENC_FP_ARRANGEMENT_TYPE_TOP_BOTTOM 1
+#define S5P_FIMV_ENC_FP_ARRANGEMENT_TYPE_TEMPORAL 2
+
+#define S5P_FIMV_E_MVC_FRAME_QP_VIEW1 0xFD40
+#define S5P_FIMV_E_MVC_RC_FRAME_RATE_VIEW1 0xFD44
+#define S5P_FIMV_E_MVC_RC_BIT_RATE_VIEW1 0xFD48
+#define S5P_FIMV_E_MVC_RC_QBOUND_VIEW1 0xFD4C
+#define S5P_FIMV_E_MVC_RC_RPARA_VIEW1 0xFD50
+#define S5P_FIMV_E_MVC_INTER_VIEW_PREDICTION_ON 0xFD80
+
+/* Codec numbers */
+#define S5P_FIMV_CODEC_NONE -1
+
+#define S5P_FIMV_CODEC_H264_DEC 0
+#define S5P_FIMV_CODEC_H264_MVC_DEC 1
+
+#define S5P_FIMV_CODEC_MPEG4_DEC 3
+#define S5P_FIMV_CODEC_FIMV1_DEC 4
+#define S5P_FIMV_CODEC_FIMV2_DEC 5
+#define S5P_FIMV_CODEC_FIMV3_DEC 6
+#define S5P_FIMV_CODEC_FIMV4_DEC 7
+#define S5P_FIMV_CODEC_H263_DEC 8
+#define S5P_FIMV_CODEC_VC1RCV_DEC 9
+#define S5P_FIMV_CODEC_VC1_DEC 10
+/* FIXME: Add 11~12 */
+#define S5P_FIMV_CODEC_MPEG2_DEC 13
+#define S5P_FIMV_CODEC_VP8_DEC 14
+/* FIXME: Add 15~16 */
+#define S5P_FIMV_CODEC_H264_ENC 20
+#define S5P_FIMV_CODEC_H264_MVC_ENC 21
+
+#define S5P_FIMV_CODEC_MPEG4_ENC 23
+#define S5P_FIMV_CODEC_H263_ENC 24
+/*** Definitions for MFCv5 compatibility ***/
+#define S5P_FIMV_SI_DISPLAY_Y_ADR S5P_FIMV_D_DISPLAY_LUMA_ADDR
+#define S5P_FIMV_SI_DISPLAY_C_ADR S5P_FIMV_D_DISPLAY_CHROMA_ADDR
+
+#define S5P_FIMV_CRC_LUMA0 S5P_FIMV_D_DECODED_LUMA_CRC_TOP
+#define S5P_FIMV_CRC_CHROMA0 S5P_FIMV_D_DECODED_CHROMA_CRC_TOP
+#define S5P_FIMV_CRC_LUMA1 S5P_FIMV_D_DECODED_LUMA_CRC_BOT
+#define S5P_FIMV_CRC_CHROMA1 S5P_FIMV_D_DECODED_CHROMA_CRC_BOT
+#define S5P_FIMV_CRC_DISP_LUMA0 S5P_FIMV_D_DISPLAY_LUMA_CRC_TOP
+#define S5P_FIMV_CRC_DISP_CHROMA0 S5P_FIMV_D_DISPLAY_CHROMA_CRC_TOP
+
+#define S5P_FIMV_SI_DECODED_STATUS S5P_FIMV_D_DECODED_STATUS
+#define S5P_FIMV_SI_DISPLAY_STATUS S5P_FIMV_D_DISPLAY_STATUS
+#define S5P_FIMV_SHARED_SET_FRAME_TAG S5P_FIMV_D_PICTURE_TAG
+#define S5P_FIMV_SHARED_GET_FRAME_TAG_TOP S5P_FIMV_D_RET_PICTURE_TAG_TOP
+#define S5P_FIMV_CRC_DISP_STATUS S5P_FIMV_D_DISPLAY_STATUS
+
+/* SEI related information */
+#define S5P_FIMV_FRAME_PACK_SEI_AVAIL S5P_FIMV_D_FRAME_PACK_SEI_AVAIL
+#define S5P_FIMV_FRAME_PACK_ARRGMENT_ID S5P_FIMV_D_FRAME_PACK_ARRGMENT_ID
+#define S5P_FIMV_FRAME_PACK_SEI_INFO S5P_FIMV_D_FRAME_PACK_SEI_INFO
+#define S5P_FIMV_FRAME_PACK_GRID_POS S5P_FIMV_D_FRAME_PACK_GRID_POS
+
+#define S5P_FIMV_SHARED_SET_E_FRAME_TAG S5P_FIMV_E_PICTURE_TAG
+#define S5P_FIMV_SHARED_GET_E_FRAME_TAG S5P_FIMV_E_RET_PICTURE_TAG
+#define S5P_FIMV_ENCODED_LUMA_ADDR S5P_FIMV_E_ENCODED_SOURCE_LUMA_ADDR
+#define S5P_FIMV_ENCODED_CHROMA_ADDR S5P_FIMV_E_ENCODED_SOURCE_CHROMA_ADDR
+#define S5P_FIMV_FRAME_INSERTION S5P_FIMV_E_FRAME_INSERTION
+
+#define S5P_FIMV_PARAM_CHANGE_FLAG S5P_FIMV_E_PARAM_CHANGE /* flag */
+#define S5P_FIMV_NEW_I_PERIOD S5P_FIMV_E_GOP_CONFIG
+#define S5P_FIMV_NEW_RC_FRAME_RATE S5P_FIMV_E_RC_FRAME_RATE
+#define S5P_FIMV_NEW_RC_BIT_RATE S5P_FIMV_E_RC_BIT_RATE
+/*** End of MFCv5 compatibility definitions ***/
+
+/*** old definitions ***/
+#if 1
+
+#define S5P_FIMV_SW_RESET 0x0000
+#define S5P_FIMV_RISC_HOST_INT 0x0008
+
+/* Command from HOST to RISC */
+#define S5P_FIMV_HOST2RISC_ARG1 0x0034
+#define S5P_FIMV_HOST2RISC_ARG2 0x0038
+#define S5P_FIMV_HOST2RISC_ARG3 0x003c
+#define S5P_FIMV_HOST2RISC_ARG4 0x0040
+
+/* Command from RISC to HOST */
+#define S5P_FIMV_RISC2HOST_CMD_MASK 0x1FFFF
+#define S5P_FIMV_RISC2HOST_ARG1 0x0048
+#define S5P_FIMV_RISC2HOST_ARG2 0x004c
+#define S5P_FIMV_RISC2HOST_ARG3 0x0050
+#define S5P_FIMV_RISC2HOST_ARG4 0x0054
+
+#define S5P_FIMV_SYS_MEM_SZ 0x005c
+#define S5P_FIMV_FW_STATUS 0x0080
+
+/* Memory controller register */
+#define S5P_FIMV_MC_DRAMBASE_ADR_A 0x0508
+#define S5P_FIMV_MC_DRAMBASE_ADR_B 0x050c
+#define S5P_FIMV_MC_STATUS 0x0510
+
+/* Common register */
+#define S5P_FIMV_COMMON_BASE_A 0x0600
+#define S5P_FIMV_COMMON_BASE_B 0x0700
+
+/* Decoder */
+#define S5P_FIMV_DEC_CHROMA_ADR (S5P_FIMV_COMMON_BASE_A)
+#define S5P_FIMV_DEC_LUMA_ADR (S5P_FIMV_COMMON_BASE_B)
+
+/* H.264 decoding */
+#define S5P_FIMV_H264_VERT_NB_MV_ADR (S5P_FIMV_COMMON_BASE_A + 0x8c) /* vertical neighbor motion vector */
+#define S5P_FIMV_H264_NB_IP_ADR (S5P_FIMV_COMMON_BASE_A + 0x90) /* neighbor pixels for intra pred */
+#define S5P_FIMV_H264_MV_ADR (S5P_FIMV_COMMON_BASE_B + 0x80) /* H264 motion vector */
+
+/* MPEG4 decoding */
+#define S5P_FIMV_MPEG4_NB_DCAC_ADR (S5P_FIMV_COMMON_BASE_A + 0x8c) /* neighbor AC/DC coeff. */
+#define S5P_FIMV_MPEG4_UP_NB_MV_ADR (S5P_FIMV_COMMON_BASE_A + 0x90) /* upper neighbor motion vector */
+#define S5P_FIMV_MPEG4_SA_MV_ADR (S5P_FIMV_COMMON_BASE_A + 0x94) /* subseq. anchor motion vector */
+#define S5P_FIMV_MPEG4_OT_LINE_ADR (S5P_FIMV_COMMON_BASE_A + 0x98) /* overlap transform line */
+#define S5P_FIMV_MPEG4_SP_ADR (S5P_FIMV_COMMON_BASE_A + 0xa8) /* syntax parser */
+
+/* H.263 decoding */
+#define S5P_FIMV_H263_NB_DCAC_ADR (S5P_FIMV_COMMON_BASE_A + 0x8c)
+#define S5P_FIMV_H263_UP_NB_MV_ADR (S5P_FIMV_COMMON_BASE_A + 0x90)
+#define S5P_FIMV_H263_SA_MV_ADR (S5P_FIMV_COMMON_BASE_A + 0x94)
+#define S5P_FIMV_H263_OT_LINE_ADR (S5P_FIMV_COMMON_BASE_A + 0x98)
+
+/* VC-1 decoding */
+#define S5P_FIMV_VC1_NB_DCAC_ADR (S5P_FIMV_COMMON_BASE_A + 0x8c)
+#define S5P_FIMV_VC1_UP_NB_MV_ADR (S5P_FIMV_COMMON_BASE_A + 0x90)
+#define S5P_FIMV_VC1_SA_MV_ADR (S5P_FIMV_COMMON_BASE_A + 0x94)
+#define S5P_FIMV_VC1_OT_LINE_ADR (S5P_FIMV_COMMON_BASE_A + 0x98)
+#define S5P_FIMV_VC1_BITPLANE3_ADR (S5P_FIMV_COMMON_BASE_A + 0x9c) /* bitplane3 */
+#define S5P_FIMV_VC1_BITPLANE2_ADR (S5P_FIMV_COMMON_BASE_A + 0xa0) /* bitplane2 */
+#define S5P_FIMV_VC1_BITPLANE1_ADR (S5P_FIMV_COMMON_BASE_A + 0xa4) /* bitplane1 */
+
+/* Encoder */
+#define S5P_FIMV_ENC_REF0_LUMA_ADR (S5P_FIMV_COMMON_BASE_A + 0x1c) /* reconstructed luma */
+#define S5P_FIMV_ENC_REF1_LUMA_ADR (S5P_FIMV_COMMON_BASE_A + 0x20)
+#define S5P_FIMV_ENC_REF0_CHROMA_ADR (S5P_FIMV_COMMON_BASE_B) /* reconstructed chroma */
+#define S5P_FIMV_ENC_REF1_CHROMA_ADR (S5P_FIMV_COMMON_BASE_B + 0x04)
+#define S5P_FIMV_ENC_REF2_LUMA_ADR (S5P_FIMV_COMMON_BASE_B + 0x10)
+#define S5P_FIMV_ENC_REF2_CHROMA_ADR (S5P_FIMV_COMMON_BASE_B + 0x08)
+#define S5P_FIMV_ENC_REF3_LUMA_ADR (S5P_FIMV_COMMON_BASE_B + 0x14)
+#define S5P_FIMV_ENC_REF3_CHROMA_ADR (S5P_FIMV_COMMON_BASE_B + 0x0c)
+
+/* H.264 encoding */
+#define S5P_FIMV_H264_UP_MV_ADR (S5P_FIMV_COMMON_BASE_A) /* upper motion vector */
+#define S5P_FIMV_H264_NBOR_INFO_ADR (S5P_FIMV_COMMON_BASE_A + 0x04) /* entropy engine's neighbor info. */
+#define S5P_FIMV_H264_UP_INTRA_MD_ADR (S5P_FIMV_COMMON_BASE_A + 0x08) /* upper intra MD */
+#define S5P_FIMV_H264_COZERO_FLAG_ADR (S5P_FIMV_COMMON_BASE_A + 0x10) /* direct cozero flag */
+#define S5P_FIMV_H264_UP_INTRA_PRED_ADR (S5P_FIMV_COMMON_BASE_B + 0x40) /* upper intra PRED */
+
+/* H.263 encoding */
+#define S5P_FIMV_H263_UP_MV_ADR (S5P_FIMV_COMMON_BASE_A) /* upper motion vector */
+#define S5P_FIMV_H263_ACDC_COEF_ADR (S5P_FIMV_COMMON_BASE_A + 0x04) /* upper Q coeff. */
+
+/* MPEG4 encoding */
+#define S5P_FIMV_MPEG4_UP_MV_ADR (S5P_FIMV_COMMON_BASE_A) /* upper motion vector */
+#define S5P_FIMV_MPEG4_ACDC_COEF_ADR (S5P_FIMV_COMMON_BASE_A + 0x04) /* upper Q coeff. */
+#define S5P_FIMV_MPEG4_COZERO_FLAG_ADR (S5P_FIMV_COMMON_BASE_A + 0x10) /* direct cozero flag */
+
+#define S5P_FIMV_ENC_REF_B_LUMA_ADR 0x062c /* ref B Luma addr */
+#define S5P_FIMV_ENC_REF_B_CHROMA_ADR 0x0630 /* ref B Chroma addr */
+
+#define S5P_FIMV_ENC_CUR_LUMA_ADR 0x0718 /* current Luma addr */
+#define S5P_FIMV_ENC_CUR_CHROMA_ADR 0x071C /* current Chroma addr */
+
+/* Codec common register */
+#define S5P_FIMV_ENC_HSIZE_PX 0x0818 /* frame width at encoder */
+#define S5P_FIMV_ENC_VSIZE_PX 0x081c /* frame height at encoder */
+#define S5P_FIMV_ENC_PROFILE 0x0830 /* profile register */
+#define S5P_FIMV_ENC_PIC_STRUCT 0x083c /* picture field/frame flag */
+#define S5P_FIMV_ENC_LF_CTRL 0x0848 /* loop filter control */
+#define S5P_FIMV_ENC_ALPHA_OFF 0x084c /* loop filter alpha offset */
+#define S5P_FIMV_ENC_BETA_OFF 0x0850 /* loop filter beta offset */
+#define S5P_FIMV_MR_BUSIF_CTRL 0x0854 /* hidden, bus interface ctrl */
+#define S5P_FIMV_ENC_PXL_CACHE_CTRL 0x0a00 /* pixel cache control */
+
+/* Channel & stream interface register */
+#define S5P_FIMV_SI_RTN_CHID 0x2000 /* Return CH instance ID register */
+#define S5P_FIMV_SI_CH0_INST_ID 0x2040 /* codec instance ID */
+#define S5P_FIMV_SI_CH1_INST_ID 0x2080 /* codec instance ID */
+/* Decoder */
+#define S5P_FIMV_SI_VRESOL 0x2004 /* vertical resolution of decoder */
+#define S5P_FIMV_SI_HRESOL 0x2008 /* horizontal resolution of decoder */
+#define S5P_FIMV_SI_BUF_NUMBER 0x200c /* number of frames in the decoded pic */
+#define S5P_FIMV_SI_CONSUMED_BYTES 0x2018 /* Consumed number of bytes to decode
+ a frame */
+#define S5P_FIMV_SI_FRAME_TYPE 0x2020 /* frame type such as skip/I/P/B */
+
+#define S5P_FIMV_SI_CH0_SB_ST_ADR 0x2044 /* start addr of stream buf */
+#define S5P_FIMV_SI_CH0_SB_FRM_SIZE 0x2048 /* size of stream buf */
+#define S5P_FIMV_SI_CH0_DESC_ADR 0x204c /* addr of descriptor buf */
+#define S5P_FIMV_SI_CH0_CPB_SIZE 0x2058 /* max size of coded pic. buf */
+#define S5P_FIMV_SI_CH0_DESC_SIZE 0x205c /* max size of descriptor buf */
+
+#define S5P_FIMV_SI_CH1_SB_ST_ADR 0x2084 /* start addr of stream buf */
+#define S5P_FIMV_SI_CH1_SB_FRM_SIZE 0x2088 /* size of stream buf */
+#define S5P_FIMV_SI_CH1_DESC_ADR 0x208c /* addr of descriptor buf */
+#define S5P_FIMV_SI_CH1_CPB_SIZE 0x2098 /* max size of coded pic. buf */
+#define S5P_FIMV_SI_CH1_DESC_SIZE 0x209c /* max size of descriptor buf */
+
+#define S5P_FIMV_SI_FIMV1_HRESOL 0x2054 /* horizontal resolution */
+#define S5P_FIMV_SI_FIMV1_VRESOL 0x2050 /* vertical resolution */
+
+/* Decode frame address */
+#define S5P_FIMV_DECODE_Y_ADR 0x2024
+#define S5P_FIMV_DECODE_C_ADR 0x2028
+
+/* Decoded frame type */
+#define S5P_FIMV_DECODE_FRAME_TYPE 0x2020
+
+/* Sizes of buffers required for decoding */
+#define S5P_FIMV_DEC_NB_IP_SIZE (32 * 1024)
+#define S5P_FIMV_DEC_VERT_NB_MV_SIZE (16 * 1024)
+#define S5P_FIMV_DEC_NB_DCAC_SIZE (16 * 1024)
+#define S5P_FIMV_DEC_UPNB_MV_SIZE (68 * 1024)
+#define S5P_FIMV_DEC_SUB_ANCHOR_MV_SIZE (136 * 1024)
+#define S5P_FIMV_DEC_OVERLAP_TRANSFORM_SIZE (32 * 1024)
+#define S5P_FIMV_DEC_VC1_BITPLANE_SIZE (2 * 1024)
+#define S5P_FIMV_DEC_STX_PARSER_SIZE (68 * 1024)
+
+#define S5P_FIMV_NV12M_HALIGN 16
+#define S5P_FIMV_NV12MT_HALIGN 16
+#define S5P_FIMV_NV12MT_VALIGN 16
+
+/* Sizes of buffers required for encoding */
+#define S5P_FIMV_ENC_UPMV_SIZE (0x10000)
+#define S5P_FIMV_ENC_COLFLG_SIZE (0x10000)
+#define S5P_FIMV_ENC_INTRAMD_SIZE (0x10000)
+#define S5P_FIMV_ENC_INTRAPRED_SIZE (0x4000)
+#define S5P_FIMV_ENC_NBORINFO_SIZE (0x10000)
+#define S5P_FIMV_ENC_ACDCCOEF_SIZE (0x10000)
+
+/* Encoder */
+#define S5P_FIMV_ENC_SI_STRM_SIZE 0x2004 /* stream size */
+#define S5P_FIMV_ENC_SI_PIC_CNT 0x2008 /* picture count */
+#define S5P_FIMV_ENC_SI_WRITE_PTR 0x200c /* write pointer */
+#define S5P_FIMV_ENC_SI_SLICE_TYPE 0x2010 /* slice type(I/P/B/IDR) */
+
+#define S5P_FIMV_ENCODED_Y_ADDR 0x2014 /* the addr of the encoded luma pic */
+#define S5P_FIMV_ENCODED_C_ADDR 0x2018 /* the addr of the encoded chroma pic */
+
+#define S5P_FIMV_ENC_SI_CH0_SB_ADR 0x2044 /* addr of stream buf */
+#define S5P_FIMV_ENC_SI_CH0_SB_SIZE 0x204c /* size of stream buf */
+#define S5P_FIMV_ENC_SI_CH0_CUR_Y_ADR 0x2050 /* current Luma addr */
+#define S5P_FIMV_ENC_SI_CH0_CUR_C_ADR 0x2054 /* current Chroma addr */
+#define S5P_FIMV_ENC_SI_CH0_FRAME_INS 0x2058 /* frame insertion */
+
+#define S5P_FIMV_ENC_SI_CH1_SB_ADR 0x2084 /* addr of stream buf */
+#define S5P_FIMV_ENC_SI_CH1_SB_SIZE 0x208c /* size of stream buf */
+#define S5P_FIMV_ENC_SI_CH1_CUR_Y_ADR 0x2090 /* current Luma addr */
+#define S5P_FIMV_ENC_SI_CH1_CUR_C_ADR 0x2094 /* current Chroma addr */
+#define S5P_FIMV_ENC_SI_CH1_FRAME_INS 0x2098 /* frame insertion */
+
+#define S5P_FIMV_ENC_PIC_TYPE_CTRL 0xc504 /* pic type level control */
+#define S5P_FIMV_ENC_B_RECON_WRITE_ON 0xc508 /* B frame recon write ctrl */
+#define S5P_FIMV_ENC_MSLICE_CTRL 0xc50c /* multi slice control */
+#define S5P_FIMV_ENC_MSLICE_MB 0xc510 /* MB number in the one slice */
+#define S5P_FIMV_ENC_MSLICE_BIT 0xc514 /* bit count for one slice */
+#define S5P_FIMV_ENC_CIR_CTRL 0xc518 /* number of intra refresh MB */
+#define S5P_FIMV_ENC_MAP_FOR_CUR 0xc51c /* linear or 64x32 tiled mode */
+#define S5P_FIMV_ENC_PADDING_CTRL 0xc520 /* padding control */
+
+#define S5P_FIMV_ENC_RC_CONFIG 0xc5a0 /* RC config */
+#define S5P_FIMV_ENC_RC_BIT_RATE 0xc5a8 /* bit rate */
+#define S5P_FIMV_ENC_RC_QBOUND 0xc5ac /* max/min QP */
+#define S5P_FIMV_ENC_RC_RPARA 0xc5b0 /* rate control reaction coeff */
+#define S5P_FIMV_ENC_RC_MB_CTRL 0xc5b4 /* MB adaptive scaling */
+
+/* Encoder for H264 only */
+#define S5P_FIMV_ENC_H264_ENTRP_MODE 0xd004 /* CAVLC or CABAC */
+#define S5P_FIMV_ENC_H264_ALPHA_OFF 0xd008 /* loop filter alpha offset */
+#define S5P_FIMV_ENC_H264_BETA_OFF 0xd00c /* loop filter beta offset */
+#define S5P_FIMV_ENC_H264_NUM_OF_REF 0xd010 /* number of reference for P/B */
+#define S5P_FIMV_ENC_H264_TRANS_FLAG 0xd034 /* 8x8 transform flag in PPS & high profile */
+
+#define S5P_FIMV_ENC_RC_FRAME_RATE 0xd0d0 /* frame rate */
+
+/* Encoder for MPEG4 only */
+#define S5P_FIMV_ENC_MPEG4_QUART_PXL 0xe008 /* qpel interpolation ctrl */
+
+/* Additional */
+#define S5P_FIMV_SI_CH0_DPB_CONF_CTRL 0x2068 /* DPB Config Control Register */
+#define S5P_FIMV_DPB_COUNT_MASK 0xffff
+
+#define S5P_FIMV_SI_CH0_RELEASE_BUF 0x2060 /* DPB release buffer register */
+#define S5P_FIMV_SI_CH0_HOST_WR_ADR 0x2064 /* address of shared memory */
+
+/* Channel Control Register */
+#define S5P_FIMV_CH_FRAME_START_REALLOC 5
+
+#define S5P_FIMV_CH_MASK 7
+#define S5P_FIMV_CH_SHIFT 16
+
+/* Host to RISC command */
+#define S5P_FIMV_R2H_CMD_RSV_RET 3
+#define S5P_FIMV_R2H_CMD_ENC_COMPLETE_RET 7
+#define S5P_FIMV_R2H_CMD_FLUSH_RET 12
+#define S5P_FIMV_R2H_CMD_EDFU_INIT_RET 16
+
+/* Shared memory registers' offsets */
+
+/* An offset of the start position in the stream when
+ * the start position is not aligned */
+#define S5P_FIMV_SHARED_GET_FRAME_TAG_BOT 0x000C
+#define S5P_FIMV_SHARED_START_BYTE_NUM 0x0018
+#define S5P_FIMV_SHARED_RC_VOP_TIMING 0x0030
+#define S5P_FIMV_SHARED_LUMA_DPB_SIZE 0x0064
+#define S5P_FIMV_SHARED_CHROMA_DPB_SIZE 0x0068
+#define S5P_FIMV_SHARED_MV_SIZE 0x006C
+#define S5P_FIMV_SHARED_PIC_TIME_TOP 0x0010
+#define S5P_FIMV_SHARED_PIC_TIME_BOTTOM 0x0014
+#define S5P_FIMV_SHARED_EXT_ENC_CONTROL 0x0028
+#define S5P_FIMV_SHARED_P_B_FRAME_QP 0x0070
+#define S5P_FIMV_SHARED_ASPECT_RATIO_IDC 0x0074
+#define S5P_FIMV_SHARED_EXTENDED_SAR 0x0078
+#define S5P_FIMV_SHARED_H264_I_PERIOD 0x009C
+#define S5P_FIMV_SHARED_RC_CONTROL_CONFIG 0x00A0
+
+#endif /* End of old definitions */
+
+#endif /* _REGS_FIMV_V6_H */
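
The *_MASK/*_SHIFT pairs above are applied to the raw register value in the usual
way; for example, extracting the decoding and resolution-change status from the
display status register (a sketch using the driver's mfc_read() accessor on a
struct s5p_mfc_dev):

    u32 status = mfc_read(dev, S5P_FIMV_D_DISPLAY_STATUS);
    u32 dec_status = status & S5P_FIMV_DEC_STATUS_DECODING_STATUS_MASK;
    u32 res_change = (status & S5P_FIMV_DEC_STATUS_RESOLUTION_MASK)
                >> S5P_FIMV_DEC_STATUS_RESOLUTION_SHIFT;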
#define S5P_FIMV_ENC_PROFILE_H264_MAIN 0
#define S5P_FIMV_ENC_PROFILE_H264_HIGH 1
#define S5P_FIMV_ENC_PROFILE_H264_BASELINE 2
+#define S5P_FIMV_ENC_PROFILE_H264_CONSTRAINED_BASELINE 3
#define S5P_FIMV_ENC_PROFILE_MPEG4_SIMPLE 0
#define S5P_FIMV_ENC_PROFILE_MPEG4_ADVANCED_SIMPLE 1
#define S5P_FIMV_ENC_PIC_STRUCT 0x083c /* picture field/frame flag */
#define S5P_FIMV_DEC_STATUS_RESOLUTION_MASK (3<<4)
#define S5P_FIMV_DEC_STATUS_RESOLUTION_INC (1<<4)
#define S5P_FIMV_DEC_STATUS_RESOLUTION_DEC (2<<4)
+#define S5P_FIMV_DEC_STATUS_RESOLUTION_SHIFT 4
/* Decode frame address */
#define S5P_FIMV_DECODE_Y_ADR 0x2024
#define S5P_FIMV_R2H_CMD_EDFU_INIT_RET 16
#define S5P_FIMV_R2H_CMD_ERR_RET 32
+/* Dummy definitions for MFCv6 compatibility */
+#define S5P_FIMV_CODEC_H264_MVC_DEC -1
+#define S5P_FIMV_R2H_CMD_FIELD_DONE_RET -1
+#define S5P_FIMV_MFC_RESET -1
+#define S5P_FIMV_RISC_ON -1
+#define S5P_FIMV_RISC_BASE_ADDRESS -1
+#define S5P_FIMV_CODEC_VP8_DEC -1
+#define S5P_FIMV_REG_CLEAR_BEGIN 0
+#define S5P_FIMV_REG_CLEAR_COUNT 0
+
/* Error handling defines */
#define S5P_FIMV_ERR_WARNINGS_START 145
#define S5P_FIMV_ERR_DEC_MASK 0xFFFF
#define S5P_FIMV_SHARED_EXTENDED_SAR 0x0078
#define S5P_FIMV_SHARED_H264_I_PERIOD 0x009C
#define S5P_FIMV_SHARED_RC_CONTROL_CONFIG 0x00A0
+#define S5P_FIMV_SHARED_DISP_FRAME_TYPE_SHIFT 2
+
+#define S5P_FIMV_SHARED_FRAME_PACK_SEI_AVAIL 0x16C
+#define S5P_FIMV_SHARED_FRAME_PACK_ARRGMENT_ID 0x170
+#define S5P_FIMV_SHARED_FRAME_PACK_SEI_INFO 0x174
+#define S5P_FIMV_SHARED_FRAME_PACK_GRID_POS 0x178
+
+/* SEI related information */
+#define S5P_FIMV_FRAME_PACK_SEI_AVAIL S5P_FIMV_SHARED_FRAME_PACK_SEI_AVAIL
+#define S5P_FIMV_FRAME_PACK_ARRGMENT_ID S5P_FIMV_SHARED_FRAME_PACK_ARRGMENT_ID
+#define S5P_FIMV_FRAME_PACK_SEI_INFO S5P_FIMV_SHARED_FRAME_PACK_SEI_INFO
+#define S5P_FIMV_FRAME_PACK_GRID_POS S5P_FIMV_SHARED_FRAME_PACK_GRID_POS
+
+#define S5P_FIMV_SHARED_SET_E_FRAME_TAG S5P_FIMV_SHARED_SET_FRAME_TAG
+#define S5P_FIMV_SHARED_GET_E_FRAME_TAG S5P_FIMV_SHARED_GET_FRAME_TAG_TOP
+#define S5P_FIMV_ENCODED_LUMA_ADDR S5P_FIMV_ENCODED_Y_ADDR
+#define S5P_FIMV_ENCODED_CHROMA_ADDR S5P_FIMV_ENCODED_C_ADDR
#endif /* _REGS_FIMV_H */
#include <linux/videodev2.h>
#include <linux/workqueue.h>
#include <media/videobuf2-core.h>
-#include "regs-mfc.h"
+#include "s5p_mfc_common.h"
#include "s5p_mfc_ctrl.h"
#include "s5p_mfc_debug.h"
#include "s5p_mfc_dec.h"
#include "s5p_mfc_enc.h"
#include "s5p_mfc_intr.h"
-#include "s5p_mfc_opr.h"
#include "s5p_mfc_pm.h"
-#include "s5p_mfc_shm.h"
+#ifdef CONFIG_EXYNOS_IOMMU
+#include <mach/sysmmu.h>
+#include <linux/of_platform.h>
+#endif
#define S5P_MFC_NAME "s5p-mfc"
#define S5P_MFC_DEC_NAME "s5p-mfc-dec"
static void s5p_mfc_clear_int_flags(struct s5p_mfc_dev *dev)
{
- mfc_write(dev, 0, S5P_FIMV_RISC_HOST_INT);
- mfc_write(dev, 0, S5P_FIMV_RISC2HOST_CMD);
- mfc_write(dev, 0xffff, S5P_FIMV_SI_RTN_CHID);
+ if (IS_MFCV6(dev)) {
+ mfc_write(dev, 0, S5P_FIMV_RISC2HOST_CMD);
+ mfc_write(dev, 0, S5P_FIMV_RISC2HOST_INT);
+ } else {
+ mfc_write(dev, 0, S5P_FIMV_RISC_HOST_INT);
+ mfc_write(dev, 0, S5P_FIMV_RISC2HOST_CMD);
+ mfc_write(dev, 0xffff, S5P_FIMV_SI_RTN_CHID);
+ }
}
static void s5p_mfc_handle_frame_all_extracted(struct s5p_mfc_ctx *ctx)
ctx->dst_queue_cnt--;
dst_buf->b->v4l2_buf.sequence = (ctx->sequence++);
- if (s5p_mfc_read_shm(ctx, PIC_TIME_TOP) ==
- s5p_mfc_read_shm(ctx, PIC_TIME_BOT))
+ if (s5p_mfc_read_info(ctx, PIC_TIME_TOP) ==
+ s5p_mfc_read_info(ctx, PIC_TIME_BOT))
dst_buf->b->v4l2_buf.field = V4L2_FIELD_NONE;
else
dst_buf->b->v4l2_buf.field = V4L2_FIELD_INTERLACED;
struct s5p_mfc_dev *dev = ctx->dev;
struct s5p_mfc_buf *dst_buf, *src_buf;
size_t dec_y_addr = s5p_mfc_get_dec_y_adr();
- unsigned int frame_type = s5p_mfc_get_frame_type();
+ unsigned int frame_type = s5p_mfc_get_dec_frame_type();
/* Copy timestamp / timecode from decoded src to dst and set
appropriate flags */
struct s5p_mfc_dev *dev = ctx->dev;
struct s5p_mfc_buf *dst_buf;
size_t dspl_y_addr = s5p_mfc_get_dspl_y_adr();
- unsigned int frame_type = s5p_mfc_get_frame_type();
+ unsigned int frame_type = s5p_mfc_get_disp_frame_type();
unsigned int index;
/* If frame is same as previous then skip and do not dequeue */
list_del(&dst_buf->list);
ctx->dst_queue_cnt--;
dst_buf->b->v4l2_buf.sequence = ctx->sequence;
- if (s5p_mfc_read_shm(ctx, PIC_TIME_TOP) ==
- s5p_mfc_read_shm(ctx, PIC_TIME_BOT))
+ if (s5p_mfc_read_info(ctx, PIC_TIME_TOP) ==
+ s5p_mfc_read_info(ctx, PIC_TIME_BOT))
dst_buf->b->v4l2_buf.field = V4L2_FIELD_NONE;
else
dst_buf->b->v4l2_buf.field =
dst_frame_status = s5p_mfc_get_dspl_status()
& S5P_FIMV_DEC_STATUS_DECODING_STATUS_MASK;
- res_change = s5p_mfc_get_dspl_status()
- & S5P_FIMV_DEC_STATUS_RESOLUTION_MASK;
+ res_change = (s5p_mfc_get_dspl_status()
+ & S5P_FIMV_DEC_STATUS_RESOLUTION_MASK)
+ >> S5P_FIMV_DEC_STATUS_RESOLUTION_SHIFT;
mfc_debug(2, "Frame Status: %x\n", dst_frame_status);
if (ctx->state == MFCINST_RES_CHANGE_INIT)
ctx->state = MFCINST_RES_CHANGE_FLUSH;
- if (res_change) {
+ if (res_change && res_change != 3) {
ctx->state = MFCINST_RES_CHANGE_INIT;
s5p_mfc_clear_int_flags(dev);
wake_up_ctx(ctx, reason, err);
s5p_mfc_try_run(dev);
return;
}
- if (ctx->dpb_flush_flag)
- ctx->dpb_flush_flag = 0;
+ if (ctx->dpb_flush)
+ ctx->dpb_flush = 0;
spin_lock_irqsave(&dev->irqlock, flags);
/* All frames remaining in the buffer have been extracted */
src_buf = list_entry(ctx->src_queue.next, struct s5p_mfc_buf,
list);
ctx->consumed_stream += s5p_mfc_get_consumed_stream();
+#if 0 /* packed PB (multiple frames per buffer) handling disabled for now */
if (ctx->codec_mode != S5P_FIMV_CODEC_H264_DEC &&
- s5p_mfc_get_frame_type() == S5P_FIMV_DECODE_FRAME_P_FRAME
+ s5p_mfc_get_dec_frame_type() == S5P_FIMV_DECODE_FRAME_P_FRAME
&& ctx->consumed_stream + STUFF_BYTE <
src_buf->b->v4l2_planes[0].bytesused) {
/* Run MFC again on the same buffer */
mfc_debug(2, "Running again the same buffer\n");
ctx->after_packed_pb = 1;
- } else {
+ } else
+#endif
+ {
index = src_buf->b->v4l2_buf.index;
mfc_debug(2, "MFC needs next buffer\n");
ctx->consumed_stream = 0;
unsigned int reason, unsigned int err)
{
struct s5p_mfc_dev *dev;
- unsigned int guard_width, guard_height;
if (ctx == 0)
return;
ctx->img_width = s5p_mfc_get_img_width();
ctx->img_height = s5p_mfc_get_img_height();
- ctx->buf_width = ALIGN(ctx->img_width,
- S5P_FIMV_NV12MT_HALIGN);
- ctx->buf_height = ALIGN(ctx->img_height,
- S5P_FIMV_NV12MT_VALIGN);
- mfc_debug(2, "SEQ Done: Movie dimensions %dx%d, "
- "buffer dimensions: %dx%d\n", ctx->img_width,
- ctx->img_height, ctx->buf_width,
- ctx->buf_height);
- if (ctx->codec_mode == S5P_FIMV_CODEC_H264_DEC) {
- ctx->luma_size = ALIGN(ctx->buf_width *
- ctx->buf_height, S5P_FIMV_DEC_BUF_ALIGN);
- ctx->chroma_size = ALIGN(ctx->buf_width *
- ALIGN((ctx->img_height >> 1),
- S5P_FIMV_NV12MT_VALIGN),
- S5P_FIMV_DEC_BUF_ALIGN);
- ctx->mv_size = ALIGN(ctx->buf_width *
- ALIGN((ctx->buf_height >> 2),
- S5P_FIMV_NV12MT_VALIGN),
- S5P_FIMV_DEC_BUF_ALIGN);
- } else {
- guard_width = ALIGN(ctx->img_width + 24,
- S5P_FIMV_NV12MT_HALIGN);
- guard_height = ALIGN(ctx->img_height + 16,
- S5P_FIMV_NV12MT_VALIGN);
- ctx->luma_size = ALIGN(guard_width *
- guard_height, S5P_FIMV_DEC_BUF_ALIGN);
- guard_width = ALIGN(ctx->img_width + 16,
- S5P_FIMV_NV12MT_HALIGN);
- guard_height = ALIGN((ctx->img_height >> 1) + 4,
- S5P_FIMV_NV12MT_VALIGN);
- ctx->chroma_size = ALIGN(guard_width *
- guard_height, S5P_FIMV_DEC_BUF_ALIGN);
- ctx->mv_size = 0;
- }
+ s5p_mfc_dec_calc_dpb_size(ctx);
+
ctx->dpb_count = s5p_mfc_get_dpb_count();
+ ctx->mv_count = s5p_mfc_get_mv_count();
if (ctx->img_width == 0 || ctx->img_height == 0)
ctx->state = MFCINST_ERROR;
else
ctx->state = MFCINST_HEAD_PARSED;
+
+ if ((ctx->codec_mode == S5P_FIMV_CODEC_H264_DEC ||
+ ctx->codec_mode == S5P_FIMV_CODEC_H264_MVC_DEC) &&
+ !list_empty(&ctx->src_queue)) {
+ struct s5p_mfc_buf *src_buf;
+ src_buf = list_entry(ctx->src_queue.next,
+ struct s5p_mfc_buf, list);
+ mfc_debug(2, "Check consumed size of header. ");
+ mfc_debug(2, "source : %d, consumed : %d\n",
+ s5p_mfc_get_consumed_stream(),
+ src_buf->b->v4l2_planes[0].bytesused);
+ if (s5p_mfc_get_consumed_stream() <
+ src_buf->b->v4l2_planes[0].bytesused)
+ ctx->remained = 1;
+ }
}
s5p_mfc_clear_int_flags(dev);
clear_work_bit(ctx);
spin_unlock(&dev->condlock);
if (err == 0) {
ctx->state = MFCINST_RUNNING;
- if (!ctx->dpb_flush_flag) {
+ if (!ctx->dpb_flush && !ctx->remained) {
spin_lock_irqsave(&dev->irqlock, flags);
if (!list_empty(&ctx->src_queue)) {
src_buf = list_entry(ctx->src_queue.next,
}
spin_unlock_irqrestore(&dev->irqlock, flags);
} else {
- ctx->dpb_flush_flag = 0;
+ ctx->dpb_flush = 0;
}
if (test_and_clear_bit(0, &dev->hw_lock) == 0)
BUG();
switch (reason) {
case S5P_FIMV_R2H_CMD_ERR_RET:
/* An error has occured */
- if (ctx->state == MFCINST_RUNNING &&
- s5p_mfc_err_dec(err) >= S5P_FIMV_ERR_WARNINGS_START)
- s5p_mfc_handle_frame(ctx, reason, err);
- else
+ if (ctx->state == MFCINST_RUNNING) {
+ if (s5p_mfc_err_dec(err) >= S5P_FIMV_ERR_WARNINGS_START)
+ s5p_mfc_handle_frame(ctx, reason, err);
+ else
+ s5p_mfc_handle_error(ctx, reason, err);
+ } else {
s5p_mfc_handle_error(ctx, reason, err);
+ }
clear_bit(0, &dev->enter_suspend);
break;
case S5P_FIMV_R2H_CMD_SLICE_DONE_RET:
+ case S5P_FIMV_R2H_CMD_FIELD_DONE_RET:
case S5P_FIMV_R2H_CMD_FRAME_DONE_RET:
if (ctx->c_ops->post_frame_start) {
if (ctx->c_ops->post_frame_start(ctx))
if (s5p_mfc_get_node_type(file) == MFCNODE_DECODER) {
ctx->type = MFCINST_DECODER;
ctx->c_ops = get_dec_codec_ops();
+ s5p_mfc_dec_init(ctx);
/* Setup ctrl handler */
ret = s5p_mfc_dec_ctrls_setup(ctx);
if (ret) {
mfc_err("Failed to setup mfc controls\n");
- goto err_ctrls_setup;
+ goto err_dec_ctrls_setup;
}
} else if (s5p_mfc_get_node_type(file) == MFCNODE_ENCODER) {
ctx->type = MFCINST_ENCODER;
/* only for encoder */
INIT_LIST_HEAD(&ctx->ref_queue);
ctx->ref_queue_cnt = 0;
+ s5p_mfc_enc_init(ctx);
+
+ /* TODO
+ * MFC encoder control setup currently fails, so the encoder
+ * is not supported by this driver yet.
+ */
+#if 0
/* Setup ctrl handler */
ret = s5p_mfc_enc_ctrls_setup(ctx);
if (ret) {
mfc_err("Failed to setup mfc controls\n");
- goto err_ctrls_setup;
+ goto err_enc_ctrls_setup;
}
+#endif
} else {
ret = -ENOENT;
goto err_bad_node;
mfc_err("power off failed\n");
s5p_mfc_release_firmware(dev);
}
-err_ctrls_setup:
+err_dec_ctrls_setup:
s5p_mfc_dec_ctrls_delete(ctx);
+#if 0
+err_enc_ctrls_setup:
+#endif
+ s5p_mfc_enc_ctrls_delete(ctx);
err_bad_node:
err_no_ctx:
v4l2_fh_del(&ctx->fh);
s5p_mfc_clock_off();
dev->ctx[ctx->num] = 0;
s5p_mfc_dec_ctrls_delete(ctx);
+ s5p_mfc_enc_ctrls_delete(ctx);
v4l2_fh_del(&ctx->fh);
v4l2_fh_exit(&ctx->fh);
kfree(ctx);
.mmap = s5p_mfc_mmap,
};
-static int match_child(struct device *dev, void *data)
+#ifdef CONFIG_EXYNOS_IOMMU
+static int iommu_init(struct platform_device *pdev,
+ struct device *mfc_l,
+ struct device *mfc_r)
{
- if (!dev_name(dev))
- return 0;
- return !strcmp(dev_name(dev), (char *)data);
+ struct platform_device *pds;
+ struct dma_iommu_mapping *mapping;
+
+ pds = find_sysmmu_dt(pdev, "sysmmu_l");
+ if (pds == NULL) {
+ printk(KERN_ERR "no sysmmu_l found\n");
+ return -ENODEV;
+ }
+ platform_set_sysmmu(&pds->dev, mfc_l);
+ mapping = s5p_create_iommu_mapping(mfc_l, 0x20000000,
+ SZ_128M, 4, NULL);
+ if (mapping == NULL) {
+ printk(KERN_ERR "IOMMU mapping failed\n");
+ return -ENOMEM;
+ }
+
+ pds = find_sysmmu_dt(pdev, "sysmmu_r");
+ if (pds == NULL) {
+ printk(KERN_ERR "no sysmmu_r found\n");
+ return -ENODEV;
+ }
+ platform_set_sysmmu(&pds->dev, mfc_r);
+ if (!s5p_create_iommu_mapping(mfc_r, 0x20000000,
+ SZ_128M, 4, mapping)) {
+ printk(KERN_ERR "IOMMU mapping failed\n");
+ return -ENOMEM;
+ }
+
+ return 0;
}
+#endif
/* MFC probe function */
static int s5p_mfc_probe(struct platform_device *pdev)
goto err_req_irq;
}
- dev->mem_dev_l = device_find_child(&dev->plat_dev->dev, "s5p-mfc-l",
- match_child);
- if (!dev->mem_dev_l) {
- mfc_err("Mem child (L) device get failed\n");
- ret = -ENODEV;
- goto err_find_child;
+ dev->mem_dev_l = kzalloc(sizeof(*dev->mem_dev_l), GFP_KERNEL);
+ dev->mem_dev_r = kzalloc(sizeof(*dev->mem_dev_r), GFP_KERNEL);
+#ifdef CONFIG_EXYNOS_IOMMU
+ dev->mem_dev_l->init_name = "mfc_l";
+ dev->mem_dev_r->init_name = "mfc_r";
+ if (iommu_init(pdev, dev->mem_dev_r, dev->mem_dev_l)) {
+ v4l2_err(&dev->v4l2_dev, "failed to initialize IOMMU\n");
+ goto err_iommu_init;
}
- dev->mem_dev_r = device_find_child(&dev->plat_dev->dev, "s5p-mfc-r",
- match_child);
- if (!dev->mem_dev_r) {
- mfc_err("Mem child (R) device get failed\n");
- ret = -ENODEV;
- goto err_find_child;
- }
-
+#endif
dev->alloc_ctx[0] = vb2_dma_contig_init_ctx(dev->mem_dev_l);
if (IS_ERR_OR_NULL(dev->alloc_ctx[0])) {
ret = PTR_ERR(dev->alloc_ctx[0]);
dev->watchdog_timer.data = (unsigned long)dev;
dev->watchdog_timer.function = s5p_mfc_watchdog;
+ dev->variant = (struct s5p_mfc_variant *)
+ platform_get_device_id(pdev)->driver_data;
+
pr_debug("%s--\n", __func__);
return 0;
err_mem_init_ctx_1:
vb2_dma_contig_cleanup_ctx(dev->alloc_ctx[0]);
err_mem_init_ctx_0:
-err_find_child:
+err_iommu_init:
free_irq(dev->irq, dev);
err_req_irq:
err_get_res:
NULL)
};
+struct s5p_mfc_buf_size_v5 mfc_buf_size_v5 = {
+ .h264_ctx = 0x96000,
+ .non_h264_ctx = 0x2800,
+ .dsc = 0x20000,
+ .shm = 0x1000,
+};
+
+struct s5p_mfc_buf_size_v6 mfc_buf_size_v6 = {
+ .dev_ctx = 0x7000, /* 28KB */
+ .h264_dec_ctx = 0x200000, /* 2MB */
+ .other_dec_ctx = 0x5000, /* 20KB */
+ .h264_enc_ctx = 0x19000, /* 100KB */
+ .other_enc_ctx = 0x3000, /* 12KB */
+};
+
+struct s5p_mfc_buf_size buf_size_v5 = {
+ .fw = 0x60000,
+ .cpb = 0x400000, /* 4MB */
+ .priv = &mfc_buf_size_v5,
+};
+
+struct s5p_mfc_buf_size buf_size_v6 = {
+ .fw = 0x100000, /* 1MB */
+ .cpb = 0x300000, /* 3MB */
+ .priv = &mfc_buf_size_v6,
+};
+
+struct s5p_mfc_buf_align mfc_buf_align_v5 = {
+ .base = 17,
+};
+
+struct s5p_mfc_buf_align mfc_buf_align_v6 = {
+ .base = 0,
+};
+
+static struct s5p_mfc_variant mfc_drvdata_v5 = {
+ .version = 0x51,
+ .port_num = 2,
+ .buf_size = &buf_size_v5,
+ .buf_align = &mfc_buf_align_v5,
+};
+
+static struct s5p_mfc_variant mfc_drvdata_v6 = {
+ .version = 0x61,
+ .port_num = 1,
+ .buf_size = &buf_size_v6,
+ .buf_align = &mfc_buf_align_v6,
+};
+
+static struct platform_device_id mfc_driver_ids[] = {
+ {
+ .name = "s5p-mfc",
+ .driver_data = (unsigned long)&mfc_drvdata_v6,
+ }, {
+ .name = "s5p-mfc-v5",
+ .driver_data = (unsigned long)&mfc_drvdata_v5,
+ }, {
+ .name = "s5p-mfc-v6",
+ .driver_data = (unsigned long)&mfc_drvdata_v6,
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(platform, mfc_driver_ids);
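
With the id table in place, the MFC variant is selected purely by platform device
name, and probe() picks up the matching driver_data via platform_get_device_id().
A board file would instantiate the v6 variant along these lines (illustrative only;
mfc_res stands for the board's memory/IRQ resources):

    platform_device_register_simple("s5p-mfc-v6", -1, mfc_res,
                    ARRAY_SIZE(mfc_res));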
static struct platform_driver s5p_mfc_driver = {
- .probe = s5p_mfc_probe,
- .remove = __devexit_p(s5p_mfc_remove),
+ .probe = s5p_mfc_probe,
+ .remove = __devexit_p(s5p_mfc_remove),
+ .id_table = mfc_driver_ids,
.driver = {
.name = S5P_MFC_NAME,
.owner = THIS_MODULE,
#include "s5p_mfc_debug.h"
/* This function is used to send a command to the MFC */
-static int s5p_mfc_cmd_host2risc(struct s5p_mfc_dev *dev, int cmd,
+int s5p_mfc_cmd_host2risc(struct s5p_mfc_dev *dev, int cmd,
struct s5p_mfc_cmd_args *args)
{
int cur_cmd;
memset(&h2r_args, 0, sizeof(struct s5p_mfc_cmd_args));
h2r_args.arg[0] = ctx->codec_mode;
h2r_args.arg[1] = 0; /* no crc & no pixelcache */
- h2r_args.arg[2] = ctx->ctx_ofs;
+ h2r_args.arg[2] = ctx->ctx.ofs;
h2r_args.arg[3] = ctx->ctx_size;
ret = s5p_mfc_cmd_host2risc(dev, S5P_FIMV_H2R_CMD_OPEN_INSTANCE,
&h2r_args);
unsigned int arg[MAX_H2R_ARG];
};
+int s5p_mfc_cmd_host2risc(struct s5p_mfc_dev *dev, int cmd,
+ struct s5p_mfc_cmd_args *args);
+
int s5p_mfc_sys_init_cmd(struct s5p_mfc_dev *dev);
int s5p_mfc_sleep_cmd(struct s5p_mfc_dev *dev);
int s5p_mfc_wakeup_cmd(struct s5p_mfc_dev *dev);
--- /dev/null
+/*
+ * linux/drivers/media/video/s5p-mfc/s5p_mfc_cmd_v6.c
+ *
+ * Copyright (c) 2012 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include "s5p_mfc_common.h"
+
+#include "s5p_mfc_debug.h"
+#include "s5p_mfc_cmd.h"
+
+int s5p_mfc_cmd_host2risc(struct s5p_mfc_dev *dev, int cmd,
+ struct s5p_mfc_cmd_args *args)
+{
+ mfc_debug(2, "Issue the command: %d\n", cmd);
+
+ /* Reset RISC2HOST command */
+ mfc_write(dev, 0x0, S5P_FIMV_RISC2HOST_CMD);
+
+ /* Issue the command */
+ mfc_write(dev, cmd, S5P_FIMV_HOST2RISC_CMD);
+ mfc_write(dev, 0x1, S5P_FIMV_HOST2RISC_INT);
+
+ return 0;
+}
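
On MFCv6 a command is posted by writing the opcode to HOST2RISC_CMD and then
kicking HOST2RISC_INT; the firmware replies through RISC2HOST_CMD, which the
interrupt handler picks up. For illustration only, a polling equivalent would
look like this (the driver itself is interrupt driven):

    /* illustrative busy-wait; the real driver sleeps on an IRQ */
    while (mfc_read(dev, S5P_FIMV_RISC2HOST_CMD) ==
           S5P_FIMV_R2H_CMD_EMPTY)
        cpu_relax();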
+
+int s5p_mfc_sys_init_cmd(struct s5p_mfc_dev *dev)
+{
+ struct s5p_mfc_cmd_args h2r_args;
+ struct s5p_mfc_buf_size_v6 *buf_size = dev->variant->buf_size->priv;
+ int ret;
+
+ mfc_debug_enter();
+
+ s5p_mfc_alloc_dev_context_buffer(dev);
+
+ mfc_write(dev, dev->ctx_buf.dma, S5P_FIMV_CONTEXT_MEM_ADDR);
+ mfc_write(dev, buf_size->dev_ctx, S5P_FIMV_CONTEXT_MEM_SIZE);
+
+ ret = s5p_mfc_cmd_host2risc(dev, S5P_FIMV_H2R_CMD_SYS_INIT, &h2r_args);
+
+ mfc_debug_leave();
+
+ return ret;
+}
+
+int s5p_mfc_sleep_cmd(struct s5p_mfc_dev *dev)
+{
+ struct s5p_mfc_cmd_args h2r_args;
+ int ret;
+
+ mfc_debug_enter();
+
+ memset(&h2r_args, 0, sizeof(struct s5p_mfc_cmd_args));
+
+ ret = s5p_mfc_cmd_host2risc(dev, S5P_FIMV_H2R_CMD_SLEEP, &h2r_args);
+
+ mfc_debug_leave();
+
+ return ret;
+}
+
+int s5p_mfc_wakeup_cmd(struct s5p_mfc_dev *dev)
+{
+ struct s5p_mfc_cmd_args h2r_args;
+ int ret;
+
+ mfc_debug_enter();
+
+ memset(&h2r_args, 0, sizeof(struct s5p_mfc_cmd_args));
+
+ ret = s5p_mfc_cmd_host2risc(dev, S5P_FIMV_H2R_CMD_WAKEUP, &h2r_args);
+
+ mfc_debug_leave();
+
+ return ret;
+}
+
+/* Open a new instance and get its number */
+int s5p_mfc_open_inst_cmd(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_cmd_args h2r_args;
+ int ret;
+
+ mfc_debug_enter();
+
+ mfc_debug(2, "Requested codec mode: %d\n", ctx->codec_mode);
+
+ mfc_write(dev, ctx->codec_mode, S5P_FIMV_CODEC_TYPE);
+ mfc_write(dev, ctx->ctx.dma, S5P_FIMV_CONTEXT_MEM_ADDR);
+ mfc_write(dev, ctx->ctx_size, S5P_FIMV_CONTEXT_MEM_SIZE);
+ mfc_write(dev, 0, S5P_FIMV_D_CRC_CTRL); /* no crc */
+
+ ret = s5p_mfc_cmd_host2risc(dev, S5P_FIMV_H2R_CMD_OPEN_INSTANCE, &h2r_args);
+
+ mfc_debug_leave();
+
+ return ret;
+}
+
+/* Close instance */
+int s5p_mfc_close_inst_cmd(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_cmd_args h2r_args;
+ int ret = 0;
+
+ mfc_debug_enter();
+
+ if (ctx->state != MFCINST_FREE) {
+ mfc_write(dev, ctx->inst_no, S5P_FIMV_INSTANCE_ID);
+
+ ret = s5p_mfc_cmd_host2risc(dev, S5P_FIMV_H2R_CMD_CLOSE_INSTANCE,
+ &h2r_args);
+ } else {
+ ret = -EINVAL;
+ }
+
+ mfc_debug_leave();
+
+ return ret;
+}
#ifndef S5P_MFC_COMMON_H_
#define S5P_MFC_COMMON_H_
-#include "regs-mfc.h"
#include <linux/platform_device.h>
#include <linux/videodev2.h>
#include <media/v4l2-ctrls.h>
#define MFC_OFFSET_SHIFT 11
#define FIRMWARE_ALIGN 0x20000 /* 128KB */
-#define MFC_H264_CTX_BUF_SIZE 0x96000 /* 600KB per H264 instance */
-#define MFC_CTX_BUF_SIZE 0x2800 /* 10KB per instance */
-#define DESC_BUF_SIZE 0x20000 /* 128KB for DESC buffer */
-#define SHARED_BUF_SIZE 0x2000 /* 8KB for shared buffer */
#define DEF_CPB_SIZE 0x40000 /* 512KB */
struct device *device;
};
+struct s5p_mfc_buf_size_v5 {
+ unsigned int h264_ctx;
+ unsigned int non_h264_ctx;
+ unsigned int dsc;
+ unsigned int shm;
+};
+
+struct s5p_mfc_buf_size_v6 {
+ unsigned int dev_ctx;
+ unsigned int h264_dec_ctx;
+ unsigned int other_dec_ctx;
+ unsigned int h264_enc_ctx;
+ unsigned int other_enc_ctx;
+};
+
+struct s5p_mfc_buf_size {
+ unsigned int fw;
+ unsigned int cpb;
+ void *priv;
+};
+
+struct s5p_mfc_buf_align {
+ unsigned int base;
+};
+
+struct s5p_mfc_variant {
+ unsigned int version;
+ unsigned int port_num;
+ struct s5p_mfc_buf_size *buf_size;
+ struct s5p_mfc_buf_align *buf_align;
+};
+
+/**
+ * struct s5p_mfc_priv_buf - represents an internally used buffer
+ * @alloc: allocation-specific context for each buffer
+ * (videobuf2 allocator)
+ * @ofs: offset of each buffer, to be used by the MFC
+ * @virt: kernel virtual address, only valid when the
+ * buffer is accessed by the driver
+ * @dma: DMA address, only valid when the kernel DMA API is used
+ */
+struct s5p_mfc_priv_buf {
+ void *alloc;
+ unsigned long ofs;
+ void *virt;
+ dma_addr_t dma;
+};
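
A buffer of this kind is typically backed by the vb2 DMA-contig allocator; the
helper below is a hypothetical sketch (the name and error handling are assumptions,
not part of this patch) showing how the @alloc cookie and the @virt/@dma fields
relate:

    /* hypothetical helper, not part of this patch */
    static int s5p_mfc_priv_buf_alloc(void *alloc_ctx, size_t size,
                      struct s5p_mfc_priv_buf *b)
    {
        b->alloc = vb2_dma_contig_memops.alloc(alloc_ctx, size);
        if (IS_ERR(b->alloc))
            return PTR_ERR(b->alloc);
        b->dma = s5p_mfc_mem_cookie(alloc_ctx, b->alloc);
        b->virt = vb2_dma_contig_memops.vaddr(b->alloc);
        return 0;
    }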
+
/**
* struct s5p_mfc_dev - The struct containing driver internal parameters.
*
* @watchdog_work: worker for the watchdog
* @alloc_ctx: videobuf2 allocator contexts for two memory banks
* @enter_suspend: flag set when entering suspend
+ * @ctx_buf: common context memory (MFCv6)
*
*/
struct s5p_mfc_dev {
struct v4l2_ctrl_handler dec_ctrl_handler;
struct v4l2_ctrl_handler enc_ctrl_handler;
struct s5p_mfc_pm pm;
+ struct s5p_mfc_variant *variant;
int num_inst;
spinlock_t irqlock; /* lock when operating on videobuf2 queues */
spinlock_t condlock; /* lock when changing/checking if a context is
struct work_struct watchdog_work;
void *alloc_ctx[2];
unsigned long enter_suspend;
+
+ struct s5p_mfc_priv_buf ctx_buf;
};
/**
enum v4l2_mpeg_video_h264_level level_v4l2;
int level;
u16 cpb_size;
+ int interlace;
+ u8 hier_qp;
+ enum v4l2_mpeg_video_h264_hierarchical_coding_type hier_qp_type;
+ u8 hier_qp_layer;
+ u8 hier_qp_layer_qp[7];
+ u8 sei_frame_packing;
+ u8 sei_fp_curr_frame_0;
+ enum v4l2_mpeg_video_h264_sei_fp_arrangement_type sei_fp_arrangement_type;
+
+ u8 fmo;
+ enum v4l2_mpeg_video_h264_fmo_map_type fmo_map_type;
+ u8 fmo_slice_grp;
+ enum v4l2_mpeg_video_h264_fmo_change_dir fmo_chg_dir;
+ u32 fmo_chg_rate;
+ u32 fmo_run_len[4];
+ u8 aso;
+ u32 aso_slice_order[8];
};
/**
enum v4l2_mpeg_video_mpeg4_profile profile;
int quarter_pixel;
/* Common for MPEG4, H263 */
- u16 vop_time_res;
- u16 vop_frm_delta;
u8 rc_frame_qp;
u8 rc_min_qp;
u8 rc_max_qp;
u8 pad_cb;
u8 pad_cr;
int rc_frame;
+ int rc_mb;
u32 rc_bitrate;
u16 rc_reaction_coeff;
u16 vbv_size;
+ u32 vbv_delay;
enum v4l2_mpeg_video_header_mode seq_hdr_mode;
enum v4l2_mpeg_mfc51_video_frame_skip_mode frame_skip_mode;
u8 num_b_frame;
u32 rc_framerate_num;
u32 rc_framerate_denom;
- int interlace;
union {
struct s5p_mfc_h264_enc_params h264;
unsigned long consumed_stream;
- unsigned int dpb_flush_flag;
+ unsigned int dpb_flush;
+ unsigned int remained;
/* Buffers */
void *bank1_buf;
int display_delay;
int display_delay_enable;
int after_packed_pb;
+ int sei_fp_parse;
int dpb_count;
int total_dpb_count;
-
+ int mv_count;
/* Buffers */
- void *ctx_buf;
- size_t ctx_phys;
- size_t ctx_ofs;
- size_t ctx_size;
-
- void *desc_buf;
- size_t desc_phys;
-
-
- void *shm_alloc;
- void *shm;
- size_t shm_ofs;
+ unsigned int ctx_size;
+ struct s5p_mfc_priv_buf ctx;
+ struct s5p_mfc_priv_buf dsc;
+ struct s5p_mfc_priv_buf shm;
struct s5p_mfc_enc_params enc_params;
size_t enc_dst_buf_size;
+ size_t luma_dpb_size;
+ size_t chroma_dpb_size;
+ size_t me_buffer_size;
+ size_t tmv_buffer_size;
enum v4l2_mpeg_mfc51_video_force_frame_type force_frame_type;
struct list_head ref_queue;
unsigned int ref_queue_cnt;
+ enum v4l2_mpeg_video_multi_slice_mode slice_mode;
+ union {
+ unsigned int mb;
+ unsigned int bits;
+ } slice_size;
+
struct s5p_mfc_codec_ops *c_ops;
struct v4l2_ctrl *ctrls[MFC_MAX_CTRLS];
struct v4l2_ctrl_handler ctrl_handler;
+
+ size_t scratch_buf_size;
};
/*
#define ctrl_to_ctx(__ctrl) \
container_of((__ctrl)->handler, struct s5p_mfc_ctx, ctrl_handler)
+#define HAS_PORTNUM(dev) (dev ? (dev->variant ? \
+ (dev->variant->port_num ? 1 : 0) : 0) : 0)
+#define IS_TWOPORT(dev) (dev->variant->port_num == 2 ? 1 : 0)
+#define IS_MFCV6(dev) (dev->variant->version >= 0x60 ? 1 : 0)
+
+#if defined(CONFIG_VIDEO_SAMSUNG_S5P_MFC_V5)
+#include "regs-mfc.h"
+#include "s5p_mfc_opr.h"
+#include "s5p_mfc_shm.h"
+#elif defined(CONFIG_VIDEO_SAMSUNG_S5P_MFC_V6)
+#include "regs-mfc-v6.h"
+#include "s5p_mfc_opr_v6.h"
+#endif
+
#endif /* S5P_MFC_COMMON_H_ */
#include <linux/firmware.h>
#include <linux/jiffies.h>
#include <linux/sched.h>
-#include "regs-mfc.h"
#include "s5p_mfc_cmd.h"
#include "s5p_mfc_common.h"
#include "s5p_mfc_debug.h"
* into kernel. */
mfc_debug_enter();
err = request_firmware((const struct firmware **)&fw_blob,
- "s5p-mfc.fw", dev->v4l2_dev.dev);
+ "mfc_fw.bin", dev->v4l2_dev.dev);
if (err != 0) {
mfc_err("Firmware is not present in the /lib/firmware directory nor compiled in kernel\n");
return -EINVAL;
}
- dev->fw_size = ALIGN(fw_blob->size, FIRMWARE_ALIGN);
+ dev->fw_size = dev->variant->buf_size->fw;
if (s5p_mfc_bitproc_buf) {
mfc_err("Attempting to allocate firmware when it seems that it is already loaded\n");
release_firmware(fw_blob);
return -EIO;
}
dev->bank1 = s5p_mfc_bitproc_phys;
- b_base = vb2_dma_contig_memops.alloc(
- dev->alloc_ctx[MFC_BANK2_ALLOC_CTX], 1 << MFC_BANK2_ALIGN_ORDER);
- if (IS_ERR(b_base)) {
- vb2_dma_contig_memops.put(s5p_mfc_bitproc_buf);
- s5p_mfc_bitproc_phys = 0;
- s5p_mfc_bitproc_buf = 0;
- mfc_err("Allocating bank2 base failed\n");
- release_firmware(fw_blob);
- return -ENOMEM;
- }
- bank2_base_phys = s5p_mfc_mem_cookie(
- dev->alloc_ctx[MFC_BANK2_ALLOC_CTX], b_base);
- vb2_dma_contig_memops.put(b_base);
- if (bank2_base_phys & ((1 << MFC_BASE_ALIGN_ORDER) - 1)) {
- mfc_err("The base memory for bank 2 is not aligned to 128KB\n");
- vb2_dma_contig_memops.put(s5p_mfc_bitproc_buf);
- s5p_mfc_bitproc_phys = 0;
- s5p_mfc_bitproc_buf = 0;
- release_firmware(fw_blob);
- return -EIO;
- }
- dev->bank2 = bank2_base_phys;
+ if (HAS_PORTNUM(dev) && IS_TWOPORT(dev)) {
+ b_base = vb2_dma_contig_memops.alloc(
+ dev->alloc_ctx[MFC_BANK2_ALLOC_CTX], 1 << MFC_BANK2_ALIGN_ORDER);
+ if (IS_ERR(b_base)) {
+ vb2_dma_contig_memops.put(s5p_mfc_bitproc_buf);
+ s5p_mfc_bitproc_phys = 0;
+ s5p_mfc_bitproc_buf = 0;
+ mfc_err("Allocating bank2 base failed\n");
+ release_firmware(fw_blob);
+ return -ENOMEM;
+ }
+ bank2_base_phys = s5p_mfc_mem_cookie(
+ dev->alloc_ctx[MFC_BANK2_ALLOC_CTX], b_base);
+ vb2_dma_contig_memops.put(b_base);
+ if (bank2_base_phys & ((1 << MFC_BASE_ALIGN_ORDER) - 1)) {
+ mfc_err("The base memory for bank 2 is not aligned to 128KB\n");
+ vb2_dma_contig_memops.put(s5p_mfc_bitproc_buf);
+ s5p_mfc_bitproc_phys = 0;
+ s5p_mfc_bitproc_buf = 0;
+ release_firmware(fw_blob);
+ return -EIO;
+ }
+ dev->bank2 = bank2_base_phys;
+ } else {
+ dev->bank2 = dev->bank1;
+ }
memcpy(s5p_mfc_bitproc_virt, fw_blob->data, fw_blob->size);
wmb();
release_firmware(fw_blob);
* into kernel. */
mfc_debug_enter();
err = request_firmware((const struct firmware **)&fw_blob,
- "s5p-mfc.fw", dev->v4l2_dev.dev);
+ "mfc_fw.bin", dev->v4l2_dev.dev);
if (err != 0) {
mfc_err("Firmware is not present in the /lib/firmware directory nor compiled in kernel\n");
return -EINVAL;
{
unsigned int mc_status;
unsigned long timeout;
+ int i;
mfc_debug_enter();
- /* Stop procedure */
- /* reset RISC */
- mfc_write(dev, 0x3f6, S5P_FIMV_SW_RESET);
- /* All reset except for MC */
- mfc_write(dev, 0x3e2, S5P_FIMV_SW_RESET);
- mdelay(10);
-
- timeout = jiffies + msecs_to_jiffies(MFC_BW_TIMEOUT);
- /* Check MC status */
- do {
- if (time_after(jiffies, timeout)) {
- mfc_err("Timeout while resetting MFC\n");
- return -EIO;
- }
-		mc_status = mfc_read(dev, S5P_FIMV_MC_STATUS);
-	} while (mc_status & 0x3);
+	if (IS_MFCV6(dev)) {
+		/* Reset IP */
+		/* except RISC, reset */
+		mfc_write(dev, 0xFEE, S5P_FIMV_MFC_RESET);
+		/* reset release */
+		mfc_write(dev, 0x0, S5P_FIMV_MFC_RESET);
+ /* Zero Initialization of MFC registers */
+ mfc_write(dev, 0, S5P_FIMV_RISC2HOST_CMD);
+ mfc_write(dev, 0, S5P_FIMV_HOST2RISC_CMD);
+ mfc_write(dev, 0, S5P_FIMV_FW_VERSION);
+
+ for (i = 0; i < S5P_FIMV_REG_CLEAR_COUNT; i++)
+ mfc_write(dev, 0, S5P_FIMV_REG_CLEAR_BEGIN + (i*4));
+
+ /* Reset */
+ mfc_write(dev, 0, S5P_FIMV_RISC_ON);
+ mfc_write(dev, 0x1FFF, S5P_FIMV_MFC_RESET);
+ mfc_write(dev, 0, S5P_FIMV_MFC_RESET);
+ } else {
+ /* Stop procedure */
+ /* reset RISC */
+ mfc_write(dev, 0x3f6, S5P_FIMV_SW_RESET);
+ /* All reset except for MC */
+ mfc_write(dev, 0x3e2, S5P_FIMV_SW_RESET);
+ mdelay(10);
+
+ timeout = jiffies + msecs_to_jiffies(MFC_BW_TIMEOUT);
+ /* Check MC status */
+ do {
+ if (time_after(jiffies, timeout)) {
+ mfc_err("Timeout while resetting MFC\n");
+ return -EIO;
+ }
+
+ mc_status = mfc_read(dev, S5P_FIMV_MC_STATUS);
+
+ } while (mc_status & 0x3);
+
+ mfc_write(dev, 0x0, S5P_FIMV_SW_RESET);
+ mfc_write(dev, 0x3fe, S5P_FIMV_SW_RESET);
+ }
- mfc_write(dev, 0x0, S5P_FIMV_SW_RESET);
- mfc_write(dev, 0x3fe, S5P_FIMV_SW_RESET);
mfc_debug_leave();
return 0;
}
static inline void s5p_mfc_init_memctrl(struct s5p_mfc_dev *dev)
{
- mfc_write(dev, dev->bank1, S5P_FIMV_MC_DRAMBASE_ADR_A);
- mfc_write(dev, dev->bank2, S5P_FIMV_MC_DRAMBASE_ADR_B);
- mfc_debug(2, "Bank1: %08x, Bank2: %08x\n", dev->bank1, dev->bank2);
+ if (IS_MFCV6(dev)) {
+ mfc_write(dev, dev->bank1, S5P_FIMV_RISC_BASE_ADDRESS);
+ mfc_debug(2, "Base Address : %08x\n", dev->bank1);
+ } else {
+ mfc_write(dev, dev->bank1, S5P_FIMV_MC_DRAMBASE_ADR_A);
+ mfc_write(dev, dev->bank2, S5P_FIMV_MC_DRAMBASE_ADR_B);
+ mfc_debug(2, "Bank1: %08x, Bank2: %08x\n", dev->bank1, dev->bank2);
+ }
}
static inline void s5p_mfc_clear_cmds(struct s5p_mfc_dev *dev)
{
- mfc_write(dev, 0xffffffff, S5P_FIMV_SI_CH0_INST_ID);
- mfc_write(dev, 0xffffffff, S5P_FIMV_SI_CH1_INST_ID);
- mfc_write(dev, 0, S5P_FIMV_RISC2HOST_CMD);
- mfc_write(dev, 0, S5P_FIMV_HOST2RISC_CMD);
+ if (IS_MFCV6(dev)) {
+ /* Zero initialization should be done before RESET.
+ * Nothing to do here. */
+ } else {
+ mfc_write(dev, 0xffffffff, S5P_FIMV_SI_CH0_INST_ID);
+ mfc_write(dev, 0xffffffff, S5P_FIMV_SI_CH1_INST_ID);
+ mfc_write(dev, 0, S5P_FIMV_RISC2HOST_CMD);
+ mfc_write(dev, 0, S5P_FIMV_HOST2RISC_CMD);
+ }
}
/* Initialize hardware */
s5p_mfc_clear_cmds(dev);
/* 3. Release reset signal to the RISC */
s5p_mfc_clean_dev_int_flags(dev);
- mfc_write(dev, 0x3ff, S5P_FIMV_SW_RESET);
+ if (IS_MFCV6(dev))
+ mfc_write(dev, 0x1, S5P_FIMV_RISC_ON);
+ else
+ mfc_write(dev, 0x3ff, S5P_FIMV_SW_RESET);
mfc_debug(2, "Will now wait for completion of firmware transfer\n");
if (s5p_mfc_wait_for_done_dev(dev, S5P_FIMV_R2H_CMD_FW_STATUS_RET)) {
mfc_err("Failed to load firmware\n");
}
+/* Deinitialize hardware */
+void s5p_mfc_deinit_hw(struct s5p_mfc_dev *dev)
+{
+ s5p_mfc_clock_on();
+
+ s5p_mfc_reset(dev);
+ if (IS_MFCV6(dev))
+ s5p_mfc_release_dev_context_buffer(dev);
+
+ s5p_mfc_clock_off();
+}
+
int s5p_mfc_sleep(struct s5p_mfc_dev *dev)
{
int ret;
return ret;
}
/* 4. Release reset signal to the RISC */
- mfc_write(dev, 0x3ff, S5P_FIMV_SW_RESET);
+ if (IS_MFCV6(dev))
+ mfc_write(dev, 0x1, S5P_FIMV_RISC_ON);
+ else
+ mfc_write(dev, 0x3ff, S5P_FIMV_SW_RESET);
mfc_debug(2, "Ok, now will write a command to wakeup the system\n");
if (s5p_mfc_wait_for_done_dev(dev, S5P_FIMV_R2H_CMD_WAKEUP_RET)) {
mfc_err("Failed to load firmware\n");
#include <linux/workqueue.h>
#include <media/v4l2-ctrls.h>
#include <media/videobuf2-core.h>
-#include "regs-mfc.h"
#include "s5p_mfc_common.h"
#include "s5p_mfc_debug.h"
#include "s5p_mfc_dec.h"
#include "s5p_mfc_intr.h"
-#include "s5p_mfc_opr.h"
#include "s5p_mfc_pm.h"
-#include "s5p_mfc_shm.h"
+
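+/* Default formats: indices into the formats[] table below
+ * (H264 bitstream as source, 16x16-tiled NV12 as destination). */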
+#define DEF_SRC_FMT 4
+#define DEF_DST_FMT 0
static struct s5p_mfc_fmt formats[] = {
+ {
+ .name = "4:2:0 2 Planes 16x16 Tiles",
+ .fourcc = V4L2_PIX_FMT_NV12MT_16X16,
+ .codec_mode = S5P_FIMV_CODEC_NONE,
+ .type = MFC_FMT_RAW,
+ .num_planes = 2,
+ },
{
.name = "4:2:0 2 Planes 64x32 Tiles",
.fourcc = V4L2_PIX_FMT_NV12MT,
.codec_mode = S5P_FIMV_CODEC_NONE,
.type = MFC_FMT_RAW,
.num_planes = 2,
- },
+ },
+ {
+ .name = "4:2:0 2 Planes Y/CbCr",
+ .fourcc = V4L2_PIX_FMT_NV12M,
+ .codec_mode = S5P_FIMV_CODEC_NONE,
+ .type = MFC_FMT_RAW,
+ .num_planes = 2,
+ },
+ {
+ .name = "4:2:0 2 Planes Y/CrCb",
+ .fourcc = V4L2_PIX_FMT_NV21M,
+ .codec_mode = S5P_FIMV_CODEC_NONE,
+ .type = MFC_FMT_RAW,
+ .num_planes = 2,
+ },
+ {
+ .name = "H264 Encoded Stream",
+ .fourcc = V4L2_PIX_FMT_H264,
+ .codec_mode = S5P_FIMV_CODEC_H264_DEC,
+ .type = MFC_FMT_DEC,
+ .num_planes = 1,
+ },
{
- .name = "4:2:0 2 Planes",
- .fourcc = V4L2_PIX_FMT_NV12M,
- .codec_mode = S5P_FIMV_CODEC_NONE,
- .type = MFC_FMT_RAW,
- .num_planes = 2,
+ .name = "H264/MVC Encoded Stream",
+ .fourcc = V4L2_PIX_FMT_H264_MVC,
+ .codec_mode = S5P_FIMV_CODEC_H264_MVC_DEC,
+ .type = MFC_FMT_DEC,
+ .num_planes = 1,
},
{
- .name = "H264 Encoded Stream",
- .fourcc = V4L2_PIX_FMT_H264,
- .codec_mode = S5P_FIMV_CODEC_H264_DEC,
- .type = MFC_FMT_DEC,
- .num_planes = 1,
+ .name = "H263 Encoded Stream",
+ .fourcc = V4L2_PIX_FMT_H263,
+ .codec_mode = S5P_FIMV_CODEC_H263_DEC,
+ .type = MFC_FMT_DEC,
+ .num_planes = 1,
},
{
- .name = "H263 Encoded Stream",
- .fourcc = V4L2_PIX_FMT_H263,
- .codec_mode = S5P_FIMV_CODEC_H263_DEC,
- .type = MFC_FMT_DEC,
- .num_planes = 1,
+ .name = "MPEG1 Encoded Stream",
+ .fourcc = V4L2_PIX_FMT_MPEG1,
+ .codec_mode = S5P_FIMV_CODEC_MPEG2_DEC,
+ .type = MFC_FMT_DEC,
+ .num_planes = 1,
},
{
- .name = "MPEG1 Encoded Stream",
- .fourcc = V4L2_PIX_FMT_MPEG1,
- .codec_mode = S5P_FIMV_CODEC_MPEG2_DEC,
- .type = MFC_FMT_DEC,
- .num_planes = 1,
+ .name = "MPEG2 Encoded Stream",
+ .fourcc = V4L2_PIX_FMT_MPEG2,
+ .codec_mode = S5P_FIMV_CODEC_MPEG2_DEC,
+ .type = MFC_FMT_DEC,
+ .num_planes = 1,
},
{
- .name = "MPEG2 Encoded Stream",
- .fourcc = V4L2_PIX_FMT_MPEG2,
- .codec_mode = S5P_FIMV_CODEC_MPEG2_DEC,
- .type = MFC_FMT_DEC,
- .num_planes = 1,
+ .name = "MPEG4 Encoded Stream",
+ .fourcc = V4L2_PIX_FMT_MPEG4,
+ .codec_mode = S5P_FIMV_CODEC_MPEG4_DEC,
+ .type = MFC_FMT_DEC,
+ .num_planes = 1,
},
{
- .name = "MPEG4 Encoded Stream",
- .fourcc = V4L2_PIX_FMT_MPEG4,
- .codec_mode = S5P_FIMV_CODEC_MPEG4_DEC,
- .type = MFC_FMT_DEC,
- .num_planes = 1,
+ .name = "XviD Encoded Stream",
+ .fourcc = V4L2_PIX_FMT_XVID,
+ .codec_mode = S5P_FIMV_CODEC_MPEG4_DEC,
+ .type = MFC_FMT_DEC,
+ .num_planes = 1,
},
{
- .name = "XviD Encoded Stream",
- .fourcc = V4L2_PIX_FMT_XVID,
- .codec_mode = S5P_FIMV_CODEC_MPEG4_DEC,
- .type = MFC_FMT_DEC,
- .num_planes = 1,
+ .name = "VC1 Encoded Stream",
+ .fourcc = V4L2_PIX_FMT_VC1_ANNEX_G,
+ .codec_mode = S5P_FIMV_CODEC_VC1_DEC,
+ .type = MFC_FMT_DEC,
+ .num_planes = 1,
},
{
- .name = "VC1 Encoded Stream",
- .fourcc = V4L2_PIX_FMT_VC1_ANNEX_G,
- .codec_mode = S5P_FIMV_CODEC_VC1_DEC,
- .type = MFC_FMT_DEC,
- .num_planes = 1,
+ .name = "VC1 RCV Encoded Stream",
+ .fourcc = V4L2_PIX_FMT_VC1_ANNEX_L,
+ .codec_mode = S5P_FIMV_CODEC_VC1RCV_DEC,
+ .type = MFC_FMT_DEC,
+ .num_planes = 1,
},
{
- .name = "VC1 RCV Encoded Stream",
- .fourcc = V4L2_PIX_FMT_VC1_ANNEX_L,
- .codec_mode = S5P_FIMV_CODEC_VC1RCV_DEC,
- .type = MFC_FMT_DEC,
- .num_planes = 1,
+ .name = "VC8 Encoded Stream",
+ .fourcc = V4L2_PIX_FMT_VP8,
+ .codec_mode = S5P_FIMV_CODEC_VP8_DEC,
+ .type = MFC_FMT_DEC,
+ .num_planes = 1,
},
};
.default_value = 1,
.is_volatile = 1,
},
+ {
+ .id = V4L2_CID_CODEC_DISPLAY_STATUS,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "Display Status",
+ .minimum = 0,
+ .maximum = 3,
+ .step = 1,
+ .default_value = 0,
+ .is_volatile = 1,
+ },
};
#define NUM_CTRLS ARRAY_SIZE(controls)
pix_mp->num_planes = 2;
/* Set pixelformat to the format in which MFC
outputs the decoded frame */
- pix_mp->pixelformat = V4L2_PIX_FMT_NV12MT;
+ pix_mp->pixelformat = V4L2_PIX_FMT_NV12MT_16X16;
pix_mp->plane_fmt[0].bytesperline = ctx->buf_width;
pix_mp->plane_fmt[0].sizeimage = ctx->luma_size;
pix_mp->plane_fmt[1].bytesperline = ctx->buf_width;
/* Try format */
static int vidioc_try_fmt(struct file *file, void *priv, struct v4l2_format *f)
{
+ struct s5p_mfc_dev *dev = video_drvdata(file);
struct s5p_mfc_fmt *fmt;
- if (f->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) {
- mfc_err("This node supports decoding only\n");
- return -EINVAL;
- }
- fmt = find_format(f, MFC_FMT_DEC);
- if (!fmt) {
- mfc_err("Unsupported format\n");
- return -EINVAL;
- }
- if (fmt->type != MFC_FMT_DEC) {
- mfc_err("\n");
- return -EINVAL;
+ mfc_debug(2, "Type is %d\n", f->type);
+ if (f->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) {
+ fmt = find_format(f, MFC_FMT_DEC);
+ if (!fmt) {
+ mfc_err("Unsupported format for source.\n");
+ return -EINVAL;
+ }
+		if (!IS_MFCV6(dev) && fmt->fourcc == V4L2_PIX_FMT_VP8) {
+			mfc_err("VP8 decoding is not supported on MFC v5\n");
+			return -EINVAL;
+		}
+ } else if (f->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) {
+ fmt = find_format(f, MFC_FMT_RAW);
+ if (!fmt) {
+ mfc_err("Unsupported format for destination.\n");
+ return -EINVAL;
+ }
+		if (IS_MFCV6(dev)) {
+			/* v6 cannot write the 64x32 tiled format */
+			if (fmt->fourcc == V4L2_PIX_FMT_NV12MT) {
+				mfc_err("Unsupported format\n");
+				return -EINVAL;
+			}
+		} else {
+			/* v5 can only write the 64x32 tiled format */
+			if (fmt->fourcc != V4L2_PIX_FMT_NV12MT) {
+				mfc_err("Unsupported format\n");
+				return -EINVAL;
+			}
+		}
}
+
return 0;
}
ret = -EBUSY;
goto out;
}
+ if (f->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) {
+ fmt = find_format(f, MFC_FMT_RAW);
+ if (!fmt) {
+ mfc_err("Unsupported format for source.\n");
+ return -EINVAL;
+ }
+		if (!IS_MFCV6(dev)) {
+			if (fmt->fourcc != V4L2_PIX_FMT_NV12MT) {
+				mfc_err("Unsupported format\n");
+				return -EINVAL;
+			}
+		} else {
+			if (fmt->fourcc == V4L2_PIX_FMT_NV12MT) {
+				mfc_err("Unsupported format\n");
+				return -EINVAL;
+			}
+		}
+ ctx->dst_fmt = fmt;
+ mfc_debug_leave();
+ return ret;
+ } else if (f->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) {
+ mfc_err("Wrong type error for S_FMT : %d", f->type);
+ return -EINVAL;
+ }
fmt = find_format(f, MFC_FMT_DEC);
if (!fmt || fmt->codec_mode == S5P_FIMV_CODEC_NONE) {
mfc_err("Unknown codec\n");
ret = -EINVAL;
goto out;
}
+	if (!IS_MFCV6(dev) && fmt->fourcc == V4L2_PIX_FMT_VP8) {
+		mfc_err("VP8 decoding is not supported on MFC v5\n");
+		return -EINVAL;
+	}
ctx->src_fmt = fmt;
ctx->codec_mode = fmt->codec_mode;
mfc_debug(2, "The codec number is: %d\n", ctx->codec_mode);
}
s5p_mfc_try_run(dev);
s5p_mfc_wait_for_done_ctx(ctx,
- S5P_FIMV_R2H_CMD_INIT_BUFFERS_RET, 0);
+ S5P_FIMV_R2H_CMD_INIT_BUFFERS_RET, 0);
}
return ret;
}
return -EINVAL;
}
+/* Export DMA buffer */
+static int vidioc_expbuf(struct file *file, void *priv,
+ struct v4l2_exportbuffer *eb)
+{
+ struct s5p_mfc_ctx *ctx = fh_to_ctx(priv);
+ int ret;
+
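+	/* Buffers on the capture queue are reported to userspace with
+	 * DST_QUEUE_OFF_BASE added to their offset, so offsets below the
+	 * base belong to the source queue; strip the base before calling
+	 * vb2 and restore it afterwards. */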
+ if (eb->mem_offset < DST_QUEUE_OFF_BASE)
+ return vb2_expbuf(&ctx->vq_src, eb);
+
+ eb->mem_offset -= DST_QUEUE_OFF_BASE;
+ ret = vb2_expbuf(&ctx->vq_dst, eb);
+ eb->mem_offset += DST_QUEUE_OFF_BASE;
+
+ return ret;
+}
+
/* Stream on */
static int vidioc_streamon(struct file *file, void *priv,
enum v4l2_buf_type type)
return -EINVAL;
}
break;
+ case V4L2_CID_CODEC_DISPLAY_STATUS:
+ ctrl->val = s5p_mfc_get_dspl_status()
+ & S5P_FIMV_DEC_STATUS_DECODING_STATUS_MASK;
+ break;
}
return 0;
}
return -EINVAL;
}
if (ctx->src_fmt->fourcc == V4L2_PIX_FMT_H264) {
- left = s5p_mfc_read_shm(ctx, CROP_INFO_H);
+ left = s5p_mfc_read_info(ctx, CROP_INFO_H);
right = left >> S5P_FIMV_SHARED_CROP_RIGHT_SHIFT;
left = left & S5P_FIMV_SHARED_CROP_LEFT_MASK;
- top = s5p_mfc_read_shm(ctx, CROP_INFO_V);
+ top = s5p_mfc_read_info(ctx, CROP_INFO_V);
bottom = top >> S5P_FIMV_SHARED_CROP_BOTTOM_SHIFT;
top = top & S5P_FIMV_SHARED_CROP_TOP_MASK;
cr->c.left = left;
.vidioc_querybuf = vidioc_querybuf,
.vidioc_qbuf = vidioc_qbuf,
.vidioc_dqbuf = vidioc_dqbuf,
+ .vidioc_expbuf = vidioc_expbuf,
.vidioc_streamon = vidioc_streamon,
.vidioc_streamoff = vidioc_streamoff,
.vidioc_g_crop = vidioc_g_crop,
void *allocators[])
{
struct s5p_mfc_ctx *ctx = fh_to_ctx(vq->drv_priv);
+ struct s5p_mfc_dev *dev = ctx->dev;
/* Video output for decoding (source)
* this can be set after getting an instance */
vq->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) {
psize[0] = ctx->luma_size;
psize[1] = ctx->chroma_size;
- allocators[0] = ctx->dev->alloc_ctx[MFC_BANK2_ALLOC_CTX];
+
+ if (IS_MFCV6(dev))
+ allocators[0] = ctx->dev->alloc_ctx[MFC_BANK1_ALLOC_CTX];
+ else
+ allocators[0] = ctx->dev->alloc_ctx[MFC_BANK2_ALLOC_CTX];
allocators[1] = ctx->dev->alloc_ctx[MFC_BANK1_ALLOC_CTX];
} else if (vq->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE &&
ctx->state == MFCINST_INIT) {
if (vq->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) {
if (ctx->capture_state == QUEUE_BUFS_MMAPED)
return 0;
- for (i = 0; i <= ctx->src_fmt->num_planes ; i++) {
+ for (i = 0; i <= ctx->src_fmt->num_planes; i++) {
if (IS_ERR_OR_NULL(ERR_PTR(
vb2_dma_contig_plane_dma_addr(vb, i)))) {
mfc_err("Plane mem not allocated\n");
s5p_mfc_cleanup_queue(&ctx->dst_queue, &ctx->vq_dst);
INIT_LIST_HEAD(&ctx->dst_queue);
ctx->dst_queue_cnt = 0;
- ctx->dpb_flush_flag = 1;
+ ctx->dpb_flush = 1;
ctx->dec_dst_flag = 0;
}
if (q->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) {
ctx->ctrls[i] = NULL;
}
+void s5p_mfc_dec_init(struct s5p_mfc_ctx *ctx)
+{
+ ctx->src_fmt = &formats[DEF_SRC_FMT];
+ ctx->dst_fmt = &formats[DEF_DST_FMT];
+}
struct s5p_mfc_fmt *get_dec_def_fmt(bool src);
int s5p_mfc_dec_ctrls_setup(struct s5p_mfc_ctx *ctx);
void s5p_mfc_dec_ctrls_delete(struct s5p_mfc_ctx *ctx);
+void s5p_mfc_dec_init(struct s5p_mfc_ctx *ctx);
#endif /* S5P_MFC_DEC_H_ */
#include <linux/workqueue.h>
#include <media/v4l2-ctrls.h>
#include <media/videobuf2-core.h>
-#include "regs-mfc.h"
#include "s5p_mfc_common.h"
#include "s5p_mfc_debug.h"
#include "s5p_mfc_enc.h"
#include "s5p_mfc_intr.h"
-#include "s5p_mfc_opr.h"
+
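+/* Default formats: indices into the formats[] table below
+ * (linear NV12 as source, H264 stream as destination). */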
+#define DEF_SRC_FMT 2
+#define DEF_DST_FMT 4
static struct s5p_mfc_fmt formats[] = {
{
- .name = "4:2:0 2 Planes 64x32 Tiles",
- .fourcc = V4L2_PIX_FMT_NV12MT,
- .codec_mode = S5P_FIMV_CODEC_NONE,
- .type = MFC_FMT_RAW,
- .num_planes = 2,
+ .name = "4:2:0 2 Planes 16x16 Tiles",
+ .fourcc = V4L2_PIX_FMT_NV12MT_16X16,
+ .codec_mode = S5P_FIMV_CODEC_NONE,
+ .type = MFC_FMT_RAW,
+ .num_planes = 2,
+ },
+ {
+ .name = "4:2:0 2 Planes 64x32 Tiles",
+ .fourcc = V4L2_PIX_FMT_NV12MT,
+ .codec_mode = S5P_FIMV_CODEC_NONE,
+ .type = MFC_FMT_RAW,
+ .num_planes = 2,
+ },
+ {
+ .name = "4:2:0 2 Planes Y/CbCr",
+ .fourcc = V4L2_PIX_FMT_NV12M,
+ .codec_mode = S5P_FIMV_CODEC_NONE,
+ .type = MFC_FMT_RAW,
+ .num_planes = 2,
},
{
- .name = "4:2:0 2 Planes",
- .fourcc = V4L2_PIX_FMT_NV12M,
- .codec_mode = S5P_FIMV_CODEC_NONE,
- .type = MFC_FMT_RAW,
- .num_planes = 2,
+ .name = "4:2:0 2 Planes Y/CrCb",
+ .fourcc = V4L2_PIX_FMT_NV21M,
+ .codec_mode = S5P_FIMV_CODEC_NONE,
+ .type = MFC_FMT_RAW,
+ .num_planes = 2,
},
{
- .name = "H264 Encoded Stream",
- .fourcc = V4L2_PIX_FMT_H264,
- .codec_mode = S5P_FIMV_CODEC_H264_ENC,
- .type = MFC_FMT_ENC,
- .num_planes = 1,
+ .name = "H264 Encoded Stream",
+ .fourcc = V4L2_PIX_FMT_H264,
+ .codec_mode = S5P_FIMV_CODEC_H264_ENC,
+ .type = MFC_FMT_ENC,
+ .num_planes = 1,
},
{
- .name = "MPEG4 Encoded Stream",
- .fourcc = V4L2_PIX_FMT_MPEG4,
- .codec_mode = S5P_FIMV_CODEC_MPEG4_ENC,
- .type = MFC_FMT_ENC,
- .num_planes = 1,
+ .name = "MPEG4 Encoded Stream",
+ .fourcc = V4L2_PIX_FMT_MPEG4,
+ .codec_mode = S5P_FIMV_CODEC_MPEG4_ENC,
+ .type = MFC_FMT_ENC,
+ .num_planes = 1,
},
{
- .name = "H263 Encoded Stream",
- .fourcc = V4L2_PIX_FMT_H263,
- .codec_mode = S5P_FIMV_CODEC_H263_ENC,
- .type = MFC_FMT_ENC,
- .num_planes = 1,
+ .name = "H263 Encoded Stream",
+ .fourcc = V4L2_PIX_FMT_H263,
+ .codec_mode = S5P_FIMV_CODEC_H263_ENC,
+ .type = MFC_FMT_ENC,
+ .num_planes = 1,
},
};
.id = V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MODE,
.type = V4L2_CTRL_TYPE_MENU,
.minimum = V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_SINGLE,
- .maximum = V4L2_MPEG_VIDEO_MULTI_SICE_MODE_MAX_BYTES,
+ .maximum = V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_BITS,
.default_value = V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_SINGLE,
.menu_skip_mask = 0,
},
.default_value = 1,
},
{
- .id = V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MAX_BYTES,
+ .id = V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MAX_BITS,
.type = V4L2_CTRL_TYPE_INTEGER,
.minimum = 1900,
.maximum = (1 << 30) - 1,
.step = 1,
.default_value = 0,
},
+ {
+ .id = V4L2_CID_MPEG_VIDEO_VBV_DELAY,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .minimum = INT_MIN,
+ .maximum = INT_MAX,
+ .step = 1,
+ .default_value = 0,
+ },
{
.id = V4L2_CID_MPEG_VIDEO_H264_CPB_SIZE,
.type = V4L2_CTRL_TYPE_INTEGER,
.step = 1,
.default_value = 0,
},
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_SEI_FRAME_PACKING,
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .minimum = 0,
+ .maximum = 1,
+ .step = 1,
+ .default_value = 0,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_SEI_FP_CURRENT_FRAME_0,
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .minimum = 0,
+ .maximum = 1,
+ .step = 1,
+ .default_value = 0,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_SEI_FP_ARRANGEMENT_TYPE,
+ .type = V4L2_CTRL_TYPE_MENU,
+ .minimum = V4L2_MPEG_VIDEO_H264_SEI_FP_ARRANGEMENT_TYPE_SIDE_BY_SIDE,
+ .maximum = V4L2_MPEG_VIDEO_H264_SEI_FP_ARRANGEMENT_TYPE_TEMPORAL,
+ .default_value = V4L2_MPEG_VIDEO_H264_SEI_FP_ARRANGEMENT_TYPE_SIDE_BY_SIDE,
+ .menu_skip_mask = 0,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_FMO,
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .minimum = 0,
+ .maximum = 1,
+ .step = 1,
+ .default_value = 0,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_FMO_MAP_TYPE,
+ .type = V4L2_CTRL_TYPE_MENU,
+ .minimum = V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_INTERLEAVED_SLICES,
+ .maximum = V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_WIPE_SCAN,
+ .default_value = V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_INTERLEAVED_SLICES,
+ .menu_skip_mask = (
+ (1 << V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_FOREGROUND_WITH_LEFT_OVER) |
+ (1 << V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_BOX_OUT)
+ ),
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_FMO_SLICE_GROUP,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .minimum = 1,
+ .maximum = 4,
+ .step = 1,
+ .default_value = 1,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_FMO_CHANGE_DIRECTION,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .minimum = V4L2_MPEG_VIDEO_H264_FMO_CHANGE_DIR_RIGHT,
+ .maximum = V4L2_MPEG_VIDEO_H264_FMO_CHANGE_DIR_LEFT,
+ .step = 1,
+ .default_value = V4L2_MPEG_VIDEO_H264_FMO_CHANGE_DIR_RIGHT,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_FMO_CHANGE_RATE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .minimum = INT_MIN,
+ .maximum = INT_MAX,
+ .step = 1,
+ .default_value = 0,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_FMO_RUN_LENGTH,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .minimum = INT_MIN,
+ .maximum = INT_MAX,
+ .step = 1,
+ .default_value = 0,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_ASO,
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .minimum = 0,
+ .maximum = 1,
+ .step = 1,
+ .default_value = 0,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_ASO_SLICE_ORDER,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .minimum = INT_MIN,
+ .maximum = INT_MAX,
+ .step = 1,
+ .default_value = 0,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING,
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .minimum = 0,
+ .maximum = 1,
+ .step = 1,
+ .default_value = 0,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_TYPE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .minimum = V4L2_MPEG_VIDEO_H264_HIERARCHICAL_CODING_B,
+ .maximum = V4L2_MPEG_VIDEO_H264_HIERARCHICAL_CODING_P,
+ .step = 1,
+ .default_value = V4L2_MPEG_VIDEO_H264_HIERARCHICAL_CODING_B,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .minimum = 0,
+ .maximum = 7,
+ .step = 1,
+ .default_value = 0,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER_QP,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .minimum = INT_MIN,
+ .maximum = INT_MAX,
+ .step = 1,
+ .default_value = 0,
+ },
};
#define NUM_CTRLS ARRAY_SIZE(controls)
vb2_buffer_done(dst_mb->b, VB2_BUF_STATE_DONE);
spin_unlock_irqrestore(&dev->irqlock, flags);
}
- ctx->state = MFCINST_RUNNING;
- if (s5p_mfc_ctx_ready(ctx)) {
- spin_lock_irqsave(&dev->condlock, flags);
- set_bit(ctx->num, &dev->ctx_work_bits);
- spin_unlock_irqrestore(&dev->condlock, flags);
+
+ if (IS_MFCV6(dev)) {
+ ctx->state = MFCINST_HEAD_PARSED; /* for INIT_BUFFER cmd */
+ } else {
+ ctx->state = MFCINST_RUNNING;
+ if (s5p_mfc_ctx_ready(ctx)) {
+ spin_lock_irqsave(&dev->condlock, flags);
+ set_bit(ctx->num, &dev->ctx_work_bits);
+ spin_unlock_irqrestore(&dev->condlock, flags);
+ }
+ s5p_mfc_try_run(dev);
}
- s5p_mfc_try_run(dev);
+
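+	/* v6 firmware reports the number of DPBs needed for encoding;
+	 * on v5 there is no such query (its stub returns -1). */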
+ if (IS_MFCV6(dev))
+ ctx->dpb_count = s5p_mfc_get_enc_dpb_count();
+
return 0;
}
mfc_err("failed to set output format\n");
return -EINVAL;
}
+
+	if (!IS_MFCV6(dev)) {
+		if (fmt->fourcc == V4L2_PIX_FMT_NV12MT_16X16) {
+			mfc_err("Unsupported format\n");
+			return -EINVAL;
+		}
+	} else {
+		if (fmt->fourcc == V4L2_PIX_FMT_NV12MT) {
+			mfc_err("Unsupported format\n");
+			return -EINVAL;
+		}
+	}
+
if (fmt->num_planes != pix_fmt_mp->num_planes) {
mfc_err("failed to set output format\n");
ret = -EINVAL;
mfc_debug(2, "fmt - w: %d, h: %d, ctx - w: %d, h: %d\n",
pix_fmt_mp->width, pix_fmt_mp->height,
ctx->img_width, ctx->img_height);
- if (ctx->src_fmt->fourcc == V4L2_PIX_FMT_NV12M) {
- ctx->buf_width = ALIGN(ctx->img_width,
- S5P_FIMV_NV12M_HALIGN);
- ctx->luma_size = ALIGN(ctx->img_width,
- S5P_FIMV_NV12M_HALIGN) * ALIGN(ctx->img_height,
- S5P_FIMV_NV12M_LVALIGN);
- ctx->chroma_size = ALIGN(ctx->img_width,
- S5P_FIMV_NV12M_HALIGN) * ALIGN((ctx->img_height
- >> 1), S5P_FIMV_NV12M_CVALIGN);
-
- ctx->luma_size = ALIGN(ctx->luma_size,
- S5P_FIMV_NV12M_SALIGN);
- ctx->chroma_size = ALIGN(ctx->chroma_size,
- S5P_FIMV_NV12M_SALIGN);
-
- pix_fmt_mp->plane_fmt[0].sizeimage = ctx->luma_size;
- pix_fmt_mp->plane_fmt[0].bytesperline = ctx->buf_width;
- pix_fmt_mp->plane_fmt[1].sizeimage = ctx->chroma_size;
- pix_fmt_mp->plane_fmt[1].bytesperline = ctx->buf_width;
-
- } else if (ctx->src_fmt->fourcc == V4L2_PIX_FMT_NV12MT) {
- ctx->buf_width = ALIGN(ctx->img_width,
- S5P_FIMV_NV12MT_HALIGN);
- ctx->luma_size = ALIGN(ctx->img_width,
- S5P_FIMV_NV12MT_HALIGN) * ALIGN(ctx->img_height,
- S5P_FIMV_NV12MT_VALIGN);
- ctx->chroma_size = ALIGN(ctx->img_width,
- S5P_FIMV_NV12MT_HALIGN) * ALIGN((ctx->img_height
- >> 1), S5P_FIMV_NV12MT_VALIGN);
- ctx->luma_size = ALIGN(ctx->luma_size,
- S5P_FIMV_NV12MT_SALIGN);
- ctx->chroma_size = ALIGN(ctx->chroma_size,
- S5P_FIMV_NV12MT_SALIGN);
-
- pix_fmt_mp->plane_fmt[0].sizeimage = ctx->luma_size;
- pix_fmt_mp->plane_fmt[0].bytesperline = ctx->buf_width;
- pix_fmt_mp->plane_fmt[1].sizeimage = ctx->chroma_size;
- pix_fmt_mp->plane_fmt[1].bytesperline = ctx->buf_width;
- }
+
+ s5p_mfc_enc_calc_src_size(ctx);
+ pix_fmt_mp->plane_fmt[0].sizeimage = ctx->luma_size;
+ pix_fmt_mp->plane_fmt[0].bytesperline = ctx->buf_width;
+ pix_fmt_mp->plane_fmt[1].sizeimage = ctx->chroma_size;
+ pix_fmt_mp->plane_fmt[1].bytesperline = ctx->buf_width;
+
ctx->src_bufs_cnt = 0;
ctx->output_state = QUEUE_FREE;
} else {
static int vidioc_reqbufs(struct file *file, void *priv,
struct v4l2_requestbuffers *reqbufs)
{
+ struct s5p_mfc_dev *dev = video_drvdata(file);
struct s5p_mfc_ctx *ctx = fh_to_ctx(priv);
int ret = 0;
return ret;
}
ctx->capture_state = QUEUE_BUFS_REQUESTED;
- ret = s5p_mfc_alloc_codec_buffers(ctx);
- if (ret) {
- mfc_err("Failed to allocate encoding buffers\n");
- reqbufs->count = 0;
- ret = vb2_reqbufs(&ctx->vq_dst, reqbufs);
- return -ENOMEM;
+
+ if (!IS_MFCV6(dev)) {
+ ret = s5p_mfc_alloc_codec_buffers(ctx);
+ if (ret) {
+ mfc_err("Failed to allocate encoding buffers\n");
+ reqbufs->count = 0;
+ ret = vb2_reqbufs(&ctx->vq_dst, reqbufs);
+ return -ENOMEM;
+ }
}
} else if (reqbufs->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) {
if (ctx->output_state != QUEUE_FREE) {
return -EINVAL;
}
+/* Export DMA buffer */
+static int vidioc_expbuf(struct file *file, void *priv,
+ struct v4l2_exportbuffer *eb)
+{
+ struct s5p_mfc_ctx *ctx = fh_to_ctx(priv);
+ int ret;
+
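+	/* Same offset convention as on the decoder node: capture buffers
+	 * carry DST_QUEUE_OFF_BASE in their offset. */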
+ if (eb->mem_offset < DST_QUEUE_OFF_BASE)
+ return vb2_expbuf(&ctx->vq_src, eb);
+
+ eb->mem_offset -= DST_QUEUE_OFF_BASE;
+ ret = vb2_expbuf(&ctx->vq_dst, eb);
+ eb->mem_offset += DST_QUEUE_OFF_BASE;
+
+ return ret;
+}
+
/* Stream on */
static int vidioc_streamon(struct file *file, void *priv,
enum v4l2_buf_type type)
case V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MAX_MB:
p->slice_mb = ctrl->val;
break;
- case V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MAX_BYTES:
+ case V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MAX_BITS:
p->slice_bit = ctrl->val * 8;
break;
case V4L2_CID_MPEG_VIDEO_CYCLIC_INTRA_REFRESH_MB:
p->codec.h264.profile =
S5P_FIMV_ENC_PROFILE_H264_BASELINE;
break;
+ case V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE:
+ if (IS_MFCV6(dev))
+ p->codec.h264.profile =
+ S5P_FIMV_ENC_PROFILE_H264_CONSTRAINED_BASELINE;
+ else
+ ret = -EINVAL;
+ break;
default:
ret = -EINVAL;
}
case V4L2_CID_MPEG_VIDEO_MPEG4_QPEL:
p->codec.mpeg4.quarter_pixel = ctrl->val;
break;
+ case V4L2_CID_MPEG_VIDEO_H264_SEI_FRAME_PACKING:
+ p->codec.h264.sei_frame_packing = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_SEI_FP_CURRENT_FRAME_0:
+ p->codec.h264.sei_fp_curr_frame_0 = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_SEI_FP_ARRANGEMENT_TYPE:
+ p->codec.h264.sei_fp_arrangement_type = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_FMO:
+ p->codec.h264.fmo = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_FMO_MAP_TYPE:
+ p->codec.h264.fmo_map_type = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_FMO_SLICE_GROUP:
+ p->codec.h264.fmo_slice_grp = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_FMO_CHANGE_DIRECTION:
+ p->codec.h264.fmo_chg_dir = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_FMO_CHANGE_RATE:
+ p->codec.h264.fmo_chg_rate = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_FMO_RUN_LENGTH:
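+		/* Packed value: bits 30..31 select the slice group,
+		 * bits 0..29 carry its run length. */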
+ p->codec.h264.fmo_run_len[ctrl->val >> 30]
+ = ctrl->val & 0x3FFFFFFF;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_ASO:
+ p->codec.h264.aso = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_ASO_SLICE_ORDER:
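+		/* Packed value: bits 18..20 select the slice order entry,
+		 * bits 16..17 the byte lane within it, bits 0..7 the value. */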
+ p->codec.h264.aso_slice_order[(ctrl->val >> 18) & 0x7]
+ |= (ctrl->val & 0xFF) << (((ctrl->val >> 16) & 0x3) << 3);
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING:
+ p->codec.h264.hier_qp = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_TYPE:
+ p->codec.h264.hier_qp_type = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER:
+ p->codec.h264.hier_qp_layer = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER_QP:
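+		/* Packed value: bits 16..18 select the layer,
+		 * bits 0..7 carry its QP. */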
+ p->codec.h264.hier_qp_layer_qp[(ctrl->val >> 16) & 0x7]
+ = ctrl->val & 0xFF;
+ break;
default:
v4l2_err(&dev->v4l2_dev, "Invalid control, id=%d, val=%d\n",
ctrl->id, ctrl->val);
.vidioc_querybuf = vidioc_querybuf,
.vidioc_qbuf = vidioc_qbuf,
.vidioc_dqbuf = vidioc_dqbuf,
+ .vidioc_expbuf = vidioc_expbuf,
.vidioc_streamon = vidioc_streamon,
.vidioc_streamoff = vidioc_streamoff,
.vidioc_s_parm = vidioc_s_parm,
unsigned int psize[], void *allocators[])
{
struct s5p_mfc_ctx *ctx = fh_to_ctx(vq->drv_priv);
+ struct s5p_mfc_dev *dev = ctx->dev;
if (ctx->state != MFCINST_GOT_INST) {
mfc_err("inavlid state: %d\n", ctx->state);
*buf_count = MFC_MAX_BUFFERS;
psize[0] = ctx->luma_size;
psize[1] = ctx->chroma_size;
- allocators[0] = ctx->dev->alloc_ctx[MFC_BANK2_ALLOC_CTX];
- allocators[1] = ctx->dev->alloc_ctx[MFC_BANK2_ALLOC_CTX];
+ if (IS_MFCV6(dev)) {
+ allocators[0] = ctx->dev->alloc_ctx[MFC_BANK1_ALLOC_CTX];
+ allocators[1] = ctx->dev->alloc_ctx[MFC_BANK1_ALLOC_CTX];
+ } else {
+ allocators[0] = ctx->dev->alloc_ctx[MFC_BANK2_ALLOC_CTX];
+ allocators[1] = ctx->dev->alloc_ctx[MFC_BANK2_ALLOC_CTX];
+ }
} else {
mfc_err("inavlid queue type: %d\n", vq->type);
return -EINVAL;
for (i = 0; i < NUM_CTRLS; i++)
ctx->ctrls[i] = NULL;
}
+
+void s5p_mfc_enc_init(struct s5p_mfc_ctx *ctx)
+{
+ ctx->src_fmt = &formats[DEF_SRC_FMT];
+ ctx->dst_fmt = &formats[DEF_DST_FMT];
+}
+
struct s5p_mfc_fmt *get_enc_def_fmt(bool src);
int s5p_mfc_enc_ctrls_setup(struct s5p_mfc_ctx *ctx);
void s5p_mfc_enc_ctrls_delete(struct s5p_mfc_ctx *ctx);
+void s5p_mfc_enc_init(struct s5p_mfc_ctx *ctx);
#endif /* S5P_MFC_ENC_H_ */
#include <linux/io.h>
#include <linux/sched.h>
#include <linux/wait.h>
-#include "regs-mfc.h"
#include "s5p_mfc_common.h"
#include "s5p_mfc_debug.h"
#include "s5p_mfc_intr.h"
* published by the Free Software Foundation.
*/
-#include "regs-mfc.h"
-#include "s5p_mfc_cmd.h"
#include "s5p_mfc_common.h"
+#include "s5p_mfc_cmd.h"
#include "s5p_mfc_ctrl.h"
#include "s5p_mfc_debug.h"
#include "s5p_mfc_intr.h"
-#include "s5p_mfc_opr.h"
#include "s5p_mfc_pm.h"
-#include "s5p_mfc_shm.h"
#include <asm/cacheflush.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
/* Allocate temporary buffers for decoding */
int s5p_mfc_alloc_dec_temp_buffers(struct s5p_mfc_ctx *ctx)
{
- void *desc_virt;
struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_buf_size_v5 *buf_size = dev->variant->buf_size->priv;
- ctx->desc_buf = vb2_dma_contig_memops.alloc(
- dev->alloc_ctx[MFC_BANK1_ALLOC_CTX], DESC_BUF_SIZE);
- if (IS_ERR_VALUE((int)ctx->desc_buf)) {
- ctx->desc_buf = 0;
+ ctx->dsc.alloc = vb2_dma_contig_memops.alloc(
+ dev->alloc_ctx[MFC_BANK1_ALLOC_CTX],
+ buf_size->dsc);
+ if (IS_ERR_VALUE((int)ctx->dsc.alloc)) {
+ ctx->dsc.alloc = NULL;
mfc_err("Allocating DESC buffer failed\n");
return -ENOMEM;
}
- ctx->desc_phys = s5p_mfc_mem_cookie(
- dev->alloc_ctx[MFC_BANK1_ALLOC_CTX], ctx->desc_buf);
- BUG_ON(ctx->desc_phys & ((1 << MFC_BANK1_ALIGN_ORDER) - 1));
- desc_virt = vb2_dma_contig_memops.vaddr(ctx->desc_buf);
- if (desc_virt == NULL) {
- vb2_dma_contig_memops.put(ctx->desc_buf);
- ctx->desc_phys = 0;
- ctx->desc_buf = 0;
- mfc_err("Remapping DESC buffer failed\n");
- return -ENOMEM;
- }
- memset(desc_virt, 0, DESC_BUF_SIZE);
- wmb();
+ ctx->dsc.dma = s5p_mfc_mem_cookie(
+ dev->alloc_ctx[MFC_BANK1_ALLOC_CTX], ctx->dsc.alloc);
+ BUG_ON(ctx->dsc.dma & ((1 << MFC_BANK1_ALIGN_ORDER) - 1));
+
return 0;
}
/* Release temporary buffers for decoding */
void s5p_mfc_release_dec_desc_buffer(struct s5p_mfc_ctx *ctx)
{
- if (ctx->desc_phys) {
- vb2_dma_contig_memops.put(ctx->desc_buf);
- ctx->desc_phys = 0;
- ctx->desc_buf = 0;
+ if (ctx->dsc.dma) {
+ vb2_dma_contig_memops.put(ctx->dsc.alloc);
+ ctx->dsc.alloc = NULL;
+ ctx->dsc.dma = 0;
}
}
/* Allocate memory for instance data buffer */
int s5p_mfc_alloc_instance_buffer(struct s5p_mfc_ctx *ctx)
{
- void *context_virt;
struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_buf_size_v5 *buf_size = dev->variant->buf_size->priv;
if (ctx->codec_mode == S5P_FIMV_CODEC_H264_DEC ||
ctx->codec_mode == S5P_FIMV_CODEC_H264_ENC)
- ctx->ctx_size = MFC_H264_CTX_BUF_SIZE;
+ ctx->ctx_size = buf_size->h264_ctx;
else
- ctx->ctx_size = MFC_CTX_BUF_SIZE;
- ctx->ctx_buf = vb2_dma_contig_memops.alloc(
+ ctx->ctx_size = buf_size->non_h264_ctx;
+ ctx->ctx.alloc = vb2_dma_contig_memops.alloc(
dev->alloc_ctx[MFC_BANK1_ALLOC_CTX], ctx->ctx_size);
- if (IS_ERR(ctx->ctx_buf)) {
+ if (IS_ERR(ctx->ctx.alloc)) {
mfc_err("Allocating context buffer failed\n");
- ctx->ctx_phys = 0;
- ctx->ctx_buf = 0;
+ ctx->ctx.alloc = NULL;
return -ENOMEM;
}
- ctx->ctx_phys = s5p_mfc_mem_cookie(
- dev->alloc_ctx[MFC_BANK1_ALLOC_CTX], ctx->ctx_buf);
- BUG_ON(ctx->ctx_phys & ((1 << MFC_BANK1_ALIGN_ORDER) - 1));
- ctx->ctx_ofs = OFFSETA(ctx->ctx_phys);
- context_virt = vb2_dma_contig_memops.vaddr(ctx->ctx_buf);
- if (context_virt == NULL) {
+ ctx->ctx.dma = s5p_mfc_mem_cookie(
+ dev->alloc_ctx[MFC_BANK1_ALLOC_CTX], ctx->ctx.alloc);
+ BUG_ON(ctx->ctx.dma & ((1 << MFC_BANK1_ALIGN_ORDER) - 1));
+ ctx->ctx.ofs = OFFSETA(ctx->ctx.dma);
+ ctx->ctx.virt = vb2_dma_contig_memops.vaddr(ctx->ctx.alloc);
+ if (!ctx->ctx.virt) {
mfc_err("Remapping instance buffer failed\n");
- vb2_dma_contig_memops.put(ctx->ctx_buf);
- ctx->ctx_phys = 0;
- ctx->ctx_buf = 0;
+ vb2_dma_contig_memops.put(ctx->ctx.alloc);
+ ctx->ctx.alloc = NULL;
+ ctx->ctx.ofs = 0;
+ ctx->ctx.dma = 0;
return -ENOMEM;
}
/* Zero content of the allocated memory */
- memset(context_virt, 0, ctx->ctx_size);
+ memset(ctx->ctx.virt, 0, ctx->ctx_size);
wmb();
if (s5p_mfc_init_shm(ctx) < 0) {
- vb2_dma_contig_memops.put(ctx->ctx_buf);
- ctx->ctx_phys = 0;
- ctx->ctx_buf = 0;
+ vb2_dma_contig_memops.put(ctx->ctx.alloc);
+ ctx->ctx.alloc = NULL;
+ ctx->ctx.ofs = 0;
+ ctx->ctx.virt = NULL;
+ ctx->ctx.dma = 0;
return -ENOMEM;
}
return 0;
/* Release instance buffer */
void s5p_mfc_release_instance_buffer(struct s5p_mfc_ctx *ctx)
{
- if (ctx->ctx_buf) {
- vb2_dma_contig_memops.put(ctx->ctx_buf);
- ctx->ctx_phys = 0;
- ctx->ctx_buf = 0;
+ if (ctx->ctx.alloc) {
+ vb2_dma_contig_memops.put(ctx->ctx.alloc);
+ ctx->ctx.alloc = NULL;
+ ctx->ctx.ofs = 0;
+ ctx->ctx.virt = NULL;
+ ctx->ctx.dma = 0;
+ }
+ if (ctx->shm.alloc) {
+ vb2_dma_contig_memops.put(ctx->shm.alloc);
+ ctx->shm.alloc = NULL;
+ ctx->shm.ofs = 0;
+ ctx->shm.virt = NULL;
+ }
+}
+
+int s5p_mfc_alloc_dev_context_buffer(struct s5p_mfc_dev *dev)
+{
+ /* NOP */
+
+ return 0;
+}
+
+void s5p_mfc_release_dev_context_buffer(struct s5p_mfc_dev *dev)
+{
+ /* NOP */
+}
+
+void s5p_mfc_dec_calc_dpb_size(struct s5p_mfc_ctx *ctx)
+{
+ unsigned int guard_width, guard_height;
+
+ ctx->buf_width = ALIGN(ctx->img_width, S5P_FIMV_NV12MT_HALIGN);
+ ctx->buf_height = ALIGN(ctx->img_height, S5P_FIMV_NV12MT_VALIGN);
+ mfc_debug(2, "SEQ Done: Movie dimensions %dx%d, "
+ "buffer dimensions: %dx%d\n", ctx->img_width,
+ ctx->img_height, ctx->buf_width, ctx->buf_height);
+
+ if (ctx->codec_mode == S5P_FIMV_CODEC_H264_DEC) {
+ ctx->luma_size = ALIGN(ctx->buf_width * ctx->buf_height,
+ S5P_FIMV_DEC_BUF_ALIGN);
+ ctx->chroma_size = ALIGN(ctx->buf_width *
+ ALIGN((ctx->img_height >> 1),
+ S5P_FIMV_NV12MT_VALIGN),
+ S5P_FIMV_DEC_BUF_ALIGN);
+ ctx->mv_size = ALIGN(ctx->buf_width *
+ ALIGN((ctx->buf_height >> 2),
+ S5P_FIMV_NV12MT_VALIGN),
+ S5P_FIMV_DEC_BUF_ALIGN);
+ } else {
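+		/* Non-H.264 codecs need guard areas around the frame:
+		 * 24x16 extra luma pixels and 16x4 extra chroma pixels
+		 * (width x height) before alignment. */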
+ guard_width = ALIGN(ctx->img_width + 24, S5P_FIMV_NV12MT_HALIGN);
+ guard_height = ALIGN(ctx->img_height + 16, S5P_FIMV_NV12MT_VALIGN);
+ ctx->luma_size = ALIGN(guard_width * guard_height,
+ S5P_FIMV_DEC_BUF_ALIGN);
+
+ guard_width = ALIGN(ctx->img_width + 16, S5P_FIMV_NV12MT_HALIGN);
+ guard_height = ALIGN((ctx->img_height >> 1) + 4, S5P_FIMV_NV12MT_VALIGN);
+ ctx->chroma_size = ALIGN(guard_width * guard_height,
+ S5P_FIMV_DEC_BUF_ALIGN);
+
+ ctx->mv_size = 0;
}
- if (ctx->shm_alloc) {
- vb2_dma_contig_memops.put(ctx->shm_alloc);
- ctx->shm_alloc = 0;
- ctx->shm = 0;
+}
+
+void s5p_mfc_enc_calc_src_size(struct s5p_mfc_ctx *ctx)
+{
+ if (ctx->src_fmt->fourcc == V4L2_PIX_FMT_NV12M) {
+ ctx->buf_width = ALIGN(ctx->img_width, S5P_FIMV_NV12M_HALIGN);
+
+ ctx->luma_size = ALIGN(ctx->img_width, S5P_FIMV_NV12M_HALIGN)
+ * ALIGN(ctx->img_height, S5P_FIMV_NV12M_LVALIGN);
+ ctx->chroma_size = ALIGN(ctx->img_width, S5P_FIMV_NV12M_HALIGN)
+ * ALIGN((ctx->img_height >> 1), S5P_FIMV_NV12M_CVALIGN);
+
+ ctx->luma_size = ALIGN(ctx->luma_size, S5P_FIMV_NV12M_SALIGN);
+ ctx->chroma_size = ALIGN(ctx->chroma_size, S5P_FIMV_NV12M_SALIGN);
+ } else if (ctx->src_fmt->fourcc == V4L2_PIX_FMT_NV12MT) {
+ ctx->buf_width = ALIGN(ctx->img_width, S5P_FIMV_NV12MT_HALIGN);
+
+ ctx->luma_size = ALIGN(ctx->img_width, S5P_FIMV_NV12MT_HALIGN)
+ * ALIGN(ctx->img_height, S5P_FIMV_NV12MT_VALIGN);
+ ctx->chroma_size = ALIGN(ctx->img_width, S5P_FIMV_NV12MT_HALIGN)
+ * ALIGN((ctx->img_height >> 1), S5P_FIMV_NV12MT_VALIGN);
+
+ ctx->luma_size = ALIGN(ctx->luma_size, S5P_FIMV_NV12MT_SALIGN);
+ ctx->chroma_size = ALIGN(ctx->chroma_size, S5P_FIMV_NV12MT_SALIGN);
}
}
void s5p_mfc_set_dec_desc_buffer(struct s5p_mfc_ctx *ctx)
{
struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_buf_size_v5 *buf_size = dev->variant->buf_size->priv;
- mfc_write(dev, OFFSETA(ctx->desc_phys), S5P_FIMV_SI_CH0_DESC_ADR);
- mfc_write(dev, DESC_BUF_SIZE, S5P_FIMV_SI_CH0_DESC_SIZE);
+ mfc_write(dev, OFFSETA(ctx->dsc.dma), S5P_FIMV_SI_CH0_DESC_ADR);
+ mfc_write(dev, buf_size->dsc, S5P_FIMV_SI_CH0_DESC_SIZE);
}
/* Set registers for shared buffer */
void s5p_mfc_set_shared_buffer(struct s5p_mfc_ctx *ctx)
{
struct s5p_mfc_dev *dev = ctx->dev;
- mfc_write(dev, ctx->shm_ofs, S5P_FIMV_SI_CH0_HOST_WR_ADR);
+ mfc_write(dev, ctx->shm.ofs, S5P_FIMV_SI_CH0_HOST_WR_ADR);
}
/* Set registers for decoding stream buffer */
mfc_write(dev, OFFSETA(buf_addr), S5P_FIMV_SI_CH0_SB_ST_ADR);
mfc_write(dev, ctx->dec_src_buf_size, S5P_FIMV_SI_CH0_CPB_SIZE);
mfc_write(dev, buf_size, S5P_FIMV_SI_CH0_SB_FRM_SIZE);
- s5p_mfc_write_shm(ctx, start_num_byte, START_BYTE_NUM);
+ s5p_mfc_write_info(ctx, start_num_byte, START_BYTE_NUM);
return 0;
}
mfc_debug(2, "Not enough memory has been allocated\n");
return -ENOMEM;
}
- s5p_mfc_write_shm(ctx, frame_size, ALLOC_LUMA_DPB_SIZE);
- s5p_mfc_write_shm(ctx, frame_size_ch, ALLOC_CHROMA_DPB_SIZE);
+ s5p_mfc_write_info(ctx, frame_size, ALLOC_LUMA_DPB_SIZE);
+ s5p_mfc_write_info(ctx, frame_size_ch, ALLOC_CHROMA_DPB_SIZE);
if (ctx->codec_mode == S5P_FIMV_CODEC_H264_DEC)
- s5p_mfc_write_shm(ctx, frame_size_mv, ALLOC_MV_SIZE);
+ s5p_mfc_write_info(ctx, frame_size_mv, ALLOC_MV_SIZE);
mfc_write(dev, ((S5P_FIMV_CH_INIT_BUFS & S5P_FIMV_CH_MASK)
<< S5P_FIMV_CH_SHIFT) | (ctx->inst_no),
S5P_FIMV_SI_CH0_INST_ID);
/* multi-slice control */
/* multi-slice MB number or bit size */
mfc_write(dev, p->slice_mode, S5P_FIMV_ENC_MSLICE_CTRL);
- if (p->slice_mode == V4L2_MPEG_VIDEO_MULTI_SICE_MODE_MAX_MB) {
+ if (p->slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_MB) {
mfc_write(dev, p->slice_mb, S5P_FIMV_ENC_MSLICE_MB);
- } else if (p->slice_mode == V4L2_MPEG_VIDEO_MULTI_SICE_MODE_MAX_BYTES) {
+ } else if (p->slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_BITS) {
mfc_write(dev, p->slice_bit, S5P_FIMV_ENC_MSLICE_BIT);
} else {
mfc_write(dev, 0, S5P_FIMV_ENC_MSLICE_MB);
/* reaction coefficient */
if (p->rc_frame)
mfc_write(dev, p->rc_reaction_coeff, S5P_FIMV_ENC_RC_RPARA);
- shm = s5p_mfc_read_shm(ctx, EXT_ENC_CONTROL);
+ shm = s5p_mfc_read_info(ctx, EXT_ENC_CONTROL);
/* seq header ctrl */
shm &= ~(0x1 << 3);
shm |= (p->seq_hdr_mode << 3);
/* frame skip mode */
shm &= ~(0x3 << 1);
shm |= (p->frame_skip_mode << 1);
- s5p_mfc_write_shm(ctx, shm, EXT_ENC_CONTROL);
+ s5p_mfc_write_info(ctx, shm, EXT_ENC_CONTROL);
/* fixed target bit */
- s5p_mfc_write_shm(ctx, p->fixed_target_bit, RC_CONTROL_CONFIG);
+ s5p_mfc_write_info(ctx, p->fixed_target_bit, RC_CONTROL_CONFIG);
return 0;
}
reg |= p_264->profile;
mfc_write(dev, reg, S5P_FIMV_ENC_PROFILE);
/* interlace */
- mfc_write(dev, p->interlace, S5P_FIMV_ENC_PIC_STRUCT);
+ mfc_write(dev, p_264->interlace, S5P_FIMV_ENC_PIC_STRUCT);
/* height */
- if (p->interlace)
+ if (p_264->interlace)
mfc_write(dev, ctx->img_height >> 1, S5P_FIMV_ENC_VSIZE_PX);
/* loopfilter ctrl */
mfc_write(dev, p_264->loop_filter_mode, S5P_FIMV_ENC_LF_CTRL);
reg = mfc_read(dev, S5P_FIMV_ENC_RC_CONFIG);
/* macroblock level rate control */
reg &= ~(0x1 << 8);
- reg |= (p_264->rc_mb << 8);
+ reg |= (p->rc_mb << 8);
/* frame QP */
reg &= ~(0x3F);
reg |= p_264->rc_frame_qp;
reg |= p_264->rc_min_qp;
mfc_write(dev, reg, S5P_FIMV_ENC_RC_QBOUND);
/* macroblock adaptive scaling features */
- if (p_264->rc_mb) {
+ if (p->rc_mb) {
reg = mfc_read(dev, S5P_FIMV_ENC_RC_MB_CTRL);
/* dark region */
reg &= ~(0x1 << 3);
reg |= p_264->rc_mb_activity;
mfc_write(dev, reg, S5P_FIMV_ENC_RC_MB_CTRL);
}
- if (!p->rc_frame &&
- !p_264->rc_mb) {
- shm = s5p_mfc_read_shm(ctx, P_B_FRAME_QP);
+ if (!p->rc_frame && !p->rc_mb) {
+ shm = s5p_mfc_read_info(ctx, P_B_FRAME_QP);
shm &= ~(0xFFF);
shm |= ((p_264->rc_b_frame_qp & 0x3F) << 6);
shm |= (p_264->rc_p_frame_qp & 0x3F);
- s5p_mfc_write_shm(ctx, shm, P_B_FRAME_QP);
+ s5p_mfc_write_info(ctx, shm, P_B_FRAME_QP);
}
/* extended encoder ctrl */
- shm = s5p_mfc_read_shm(ctx, EXT_ENC_CONTROL);
+ shm = s5p_mfc_read_info(ctx, EXT_ENC_CONTROL);
/* AR VUI control */
shm &= ~(0x1 << 15);
shm |= (p_264->vui_sar << 1);
- s5p_mfc_write_shm(ctx, shm, EXT_ENC_CONTROL);
+ s5p_mfc_write_info(ctx, shm, EXT_ENC_CONTROL);
if (p_264->vui_sar) {
 /* aspect ratio IDC */
- shm = s5p_mfc_read_shm(ctx, SAMPLE_ASPECT_RATIO_IDC);
+ shm = s5p_mfc_read_info(ctx, SAMPLE_ASPECT_RATIO_IDC);
shm &= ~(0xFF);
shm |= p_264->vui_sar_idc;
- s5p_mfc_write_shm(ctx, shm, SAMPLE_ASPECT_RATIO_IDC);
+ s5p_mfc_write_info(ctx, shm, SAMPLE_ASPECT_RATIO_IDC);
if (p_264->vui_sar_idc == 0xFF) {
/* sample AR info */
- shm = s5p_mfc_read_shm(ctx, EXTENDED_SAR);
+ shm = s5p_mfc_read_info(ctx, EXTENDED_SAR);
shm &= ~(0xFFFFFFFF);
shm |= p_264->vui_ext_sar_width << 16;
shm |= p_264->vui_ext_sar_height;
- s5p_mfc_write_shm(ctx, shm, EXTENDED_SAR);
+ s5p_mfc_write_info(ctx, shm, EXTENDED_SAR);
}
}
/* intra picture period for H.264 */
- shm = s5p_mfc_read_shm(ctx, H264_I_PERIOD);
+ shm = s5p_mfc_read_info(ctx, H264_I_PERIOD);
/* control */
shm &= ~(0x1 << 16);
shm |= (p_264->open_gop << 16);
shm &= ~(0xFFFF);
shm |= p_264->open_gop_size;
}
- s5p_mfc_write_shm(ctx, shm, H264_I_PERIOD);
+ s5p_mfc_write_info(ctx, shm, H264_I_PERIOD);
/* extended encoder ctrl */
- shm = s5p_mfc_read_shm(ctx, EXT_ENC_CONTROL);
+ shm = s5p_mfc_read_info(ctx, EXT_ENC_CONTROL);
/* vbv buffer size */
if (p->frame_skip_mode ==
V4L2_MPEG_MFC51_VIDEO_FRAME_SKIP_MODE_BUF_LIMIT) {
shm &= ~(0xFFFF << 16);
shm |= (p_264->cpb_size << 16);
}
- s5p_mfc_write_shm(ctx, shm, EXT_ENC_CONTROL);
+ s5p_mfc_write_info(ctx, shm, EXT_ENC_CONTROL);
return 0;
}
mfc_write(dev, p_mpeg4->quarter_pixel, S5P_FIMV_ENC_MPEG4_QUART_PXL);
/* qp */
if (!p->rc_frame) {
- shm = s5p_mfc_read_shm(ctx, P_B_FRAME_QP);
+ shm = s5p_mfc_read_info(ctx, P_B_FRAME_QP);
shm &= ~(0xFFF);
shm |= ((p_mpeg4->rc_b_frame_qp & 0x3F) << 6);
shm |= (p_mpeg4->rc_p_frame_qp & 0x3F);
- s5p_mfc_write_shm(ctx, shm, P_B_FRAME_QP);
+ s5p_mfc_write_info(ctx, shm, P_B_FRAME_QP);
}
/* frame rate */
if (p->rc_frame) {
p->rc_framerate_denom;
mfc_write(dev, framerate,
S5P_FIMV_ENC_RC_FRAME_RATE);
- shm = s5p_mfc_read_shm(ctx, RC_VOP_TIMING);
+ shm = s5p_mfc_read_info(ctx, RC_VOP_TIMING);
shm &= ~(0xFFFFFFFF);
shm |= (1 << 31);
shm |= ((p->rc_framerate_num & 0x7FFF) << 16);
shm |= (p->rc_framerate_denom & 0xFFFF);
- s5p_mfc_write_shm(ctx, shm, RC_VOP_TIMING);
+ s5p_mfc_write_info(ctx, shm, RC_VOP_TIMING);
}
} else {
mfc_write(dev, 0, S5P_FIMV_ENC_RC_FRAME_RATE);
reg |= p_mpeg4->rc_min_qp;
mfc_write(dev, reg, S5P_FIMV_ENC_RC_QBOUND);
/* extended encoder ctrl */
- shm = s5p_mfc_read_shm(ctx, EXT_ENC_CONTROL);
+ shm = s5p_mfc_read_info(ctx, EXT_ENC_CONTROL);
/* vbv buffer size */
if (p->frame_skip_mode ==
V4L2_MPEG_MFC51_VIDEO_FRAME_SKIP_MODE_BUF_LIMIT) {
shm &= ~(0xFFFF << 16);
shm |= (p->vbv_size << 16);
}
- s5p_mfc_write_shm(ctx, shm, EXT_ENC_CONTROL);
+ s5p_mfc_write_info(ctx, shm, EXT_ENC_CONTROL);
return 0;
}
s5p_mfc_set_enc_params(ctx);
/* qp */
if (!p->rc_frame) {
- shm = s5p_mfc_read_shm(ctx, P_B_FRAME_QP);
+ shm = s5p_mfc_read_info(ctx, P_B_FRAME_QP);
shm &= ~(0xFFF);
shm |= (p_h263->rc_p_frame_qp & 0x3F);
- s5p_mfc_write_shm(ctx, shm, P_B_FRAME_QP);
+ s5p_mfc_write_info(ctx, shm, P_B_FRAME_QP);
}
/* frame rate */
if (p->rc_frame && p->rc_framerate_denom)
reg |= p_h263->rc_min_qp;
mfc_write(dev, reg, S5P_FIMV_ENC_RC_QBOUND);
/* extended encoder ctrl */
- shm = s5p_mfc_read_shm(ctx, EXT_ENC_CONTROL);
+ shm = s5p_mfc_read_info(ctx, EXT_ENC_CONTROL);
/* vbv buffer size */
if (p->frame_skip_mode ==
V4L2_MPEG_MFC51_VIDEO_FRAME_SKIP_MODE_BUF_LIMIT) {
shm &= ~(0xFFFF << 16);
shm |= (p->vbv_size << 16);
}
- s5p_mfc_write_shm(ctx, shm, EXT_ENC_CONTROL);
+ s5p_mfc_write_info(ctx, shm, EXT_ENC_CONTROL);
return 0;
}
mfc_write(dev, ctx->dec_dst_flag, S5P_FIMV_SI_CH0_RELEASE_BUF);
s5p_mfc_set_shared_buffer(ctx);
- s5p_mfc_set_flush(ctx, ctx->dpb_flush_flag);
+ s5p_mfc_set_flush(ctx, ctx->dpb_flush);
 /* Issue different commands to instance depending on whether it
* is the last frame or not. */
switch (last_frame) {
}
}
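+/* Version-neutral accessors for the shared information area: on v5 they
+ * map directly onto the SHM helpers, so that common code does not need
+ * to know where the data lives on newer hardware. */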
+void s5p_mfc_write_info(struct s5p_mfc_ctx *ctx, unsigned int data, unsigned int ofs)
+{
+ s5p_mfc_write_shm(ctx, data, ofs);
+}
+
+unsigned int s5p_mfc_read_info(struct s5p_mfc_ctx *ctx, unsigned int ofs)
+{
+ return s5p_mfc_read_shm(ctx, ofs);
+}
int s5p_mfc_alloc_instance_buffer(struct s5p_mfc_ctx *ctx);
void s5p_mfc_release_instance_buffer(struct s5p_mfc_ctx *ctx);
+int s5p_mfc_alloc_dev_context_buffer(struct s5p_mfc_dev *dev);
+void s5p_mfc_release_dev_context_buffer(struct s5p_mfc_dev *dev);
+
+void s5p_mfc_dec_calc_dpb_size(struct s5p_mfc_ctx *ctx);
+void s5p_mfc_enc_calc_src_size(struct s5p_mfc_ctx *ctx);
+
void s5p_mfc_try_run(struct s5p_mfc_dev *dev);
void s5p_mfc_cleanup_queue(struct list_head *lh, struct vb2_queue *vq);
+void s5p_mfc_write_info(struct s5p_mfc_ctx *ctx, unsigned int data, unsigned int ofs);
+unsigned int s5p_mfc_read_info(struct s5p_mfc_ctx *ctx, unsigned int ofs);
+
#define s5p_mfc_get_dspl_y_adr() (readl(dev->regs_base + \
S5P_FIMV_SI_DISPLAY_Y_ADR) << \
MFC_OFFSET_SHIFT)
MFC_OFFSET_SHIFT)
#define s5p_mfc_get_dspl_status() readl(dev->regs_base + \
S5P_FIMV_SI_DISPLAY_STATUS)
-#define s5p_mfc_get_frame_type() (readl(dev->regs_base + \
+#define s5p_mfc_get_dec_frame_type() (readl(dev->regs_base + \
S5P_FIMV_DECODE_FRAME_TYPE) \
& S5P_FIMV_DECODE_FRAME_MASK)
+#define s5p_mfc_get_disp_frame_type() ((s5p_mfc_read_shm(ctx, DISP_PIC_FRAME_TYPE) \
+ >> S5P_FIMV_SHARED_DISP_FRAME_TYPE_SHIFT) \
+ & S5P_FIMV_DECODE_FRAME_MASK)
#define s5p_mfc_get_consumed_stream() readl(dev->regs_base + \
S5P_FIMV_SI_CONSUMED_BYTES)
#define s5p_mfc_get_int_reason() (readl(dev->regs_base + \
S5P_FIMV_ENC_SI_STRM_SIZE)
#define s5p_mfc_get_enc_slice_type() readl(dev->regs_base + \
S5P_FIMV_ENC_SI_SLICE_TYPE)
+#define s5p_mfc_get_enc_dpb_count() -1
+#define s5p_mfc_get_enc_pic_count() readl(dev->regs_base + \
+ S5P_FIMV_ENC_SI_PIC_CNT)
+#define s5p_mfc_get_sei_avail_status() s5p_mfc_read_shm(ctx, FRAME_PACK_SEI_AVAIL)
+#define s5p_mfc_get_mvc_num_views() -1
+#define s5p_mfc_get_mvc_view_id() -1
#endif /* S5P_MFC_OPR_H_ */
--- /dev/null
+++ b/drivers/media/video/s5p-mfc/s5p_mfc_opr_v6.c
+/*
+ * drivers/media/video/s5p-mfc/s5p_mfc_opr_v6.c
+ *
+ * Samsung MFC (Multi Function Codec - FIMV) driver
+ * This file contains hw related functions.
+ *
+ * Copyright (c) 2012 Samsung Electronics
+ * http://www.samsung.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#undef DEBUG
+
+#include <linux/delay.h>
+#include <linux/mm.h>
+#include <linux/io.h>
+#include <linux/jiffies.h>
+#include <linux/firmware.h>
+#include <linux/err.h>
+#include <linux/sched.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/cacheflush.h>
+
+#include "s5p_mfc_common.h"
+#include "s5p_mfc_cmd.h"
+#include "s5p_mfc_intr.h"
+#include "s5p_mfc_pm.h"
+#include "s5p_mfc_debug.h"
+
+/* #define S5P_MFC_DEBUG_REGWRITE */
+#ifdef S5P_MFC_DEBUG_REGWRITE
+#undef writel
+#define writel(v, r) \
+ do { \
+ printk(KERN_ERR "MFCWRITE(%p): %08x\n", r, (unsigned int)v); \
+ __raw_writel(v, r); \
+ } while (0)
+#endif /* S5P_MFC_DEBUG_REGWRITE */
+
+#define READL(offset) readl(dev->regs_base + (offset))
+#define WRITEL(data, offset) writel((data), dev->regs_base + (offset))
+#define OFFSETA(x) (((x) - dev->port_a) >> S5P_FIMV_MEM_OFFSET)
+#define OFFSETB(x) (((x) - dev->port_b) >> S5P_FIMV_MEM_OFFSET)
+
+/* Allocate temporary buffers for decoding */
+int s5p_mfc_alloc_dec_temp_buffers(struct s5p_mfc_ctx *ctx)
+{
+ /* NOP */
+
+ return 0;
+}
+
+/* Release temporary buffers for decoding */
+void s5p_mfc_release_dec_desc_buffer(struct s5p_mfc_ctx *ctx)
+{
+ /* NOP */
+}
+
+/* Allocate codec buffers */
+int s5p_mfc_alloc_codec_buffers(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ unsigned int mb_width, mb_height;
+
+ mb_width = mb_width(ctx->img_width);
+ mb_height = mb_height(ctx->img_height);
+
+ if (ctx->type == MFCINST_DECODER) {
+ mfc_debug(2, "Luma size:%d Chroma size:%d MV size:%d\n",
+ ctx->luma_size, ctx->chroma_size, ctx->mv_size);
+ mfc_debug(2, "Totals bufs: %d\n", ctx->total_dpb_count);
+ } else if (ctx->type == MFCINST_ENCODER) {
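+		/* Reconstruction DPBs take 256 bytes of luma and 128 bytes
+		 * of chroma per macroblock; the TMV and ME buffer sizes
+		 * follow the MFC v6 reference sizing formulas. */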
+ ctx->tmv_buffer_size = 2 * ALIGN((mb_width + 1) * (mb_height + 1) * 8, 16);
+ ctx->luma_dpb_size = ALIGN((mb_width * mb_height) * 256, 256);
+ ctx->chroma_dpb_size = ALIGN((mb_width * mb_height) * 128, 256);
+ ctx->me_buffer_size = ALIGN(((((ctx->img_width+63)/64) * 16) *
+ (((ctx->img_height+63)/64) * 16)) +
+ ((((mb_width*mb_height)+31)/32) * 16), 256);
+
+ mfc_debug(2, "recon luma size: %d chroma size: %d\n",
+ ctx->luma_dpb_size, ctx->chroma_dpb_size);
+ } else {
+ return -EINVAL;
+ }
+
+ /* Codecs have different memory requirements */
+ switch (ctx->codec_mode) {
+ case S5P_FIMV_CODEC_H264_DEC:
+ case S5P_FIMV_CODEC_H264_MVC_DEC:
+ ctx->scratch_buf_size = (mb_width * 192) + 64;
+ ctx->scratch_buf_size = ALIGN(ctx->scratch_buf_size, 256);
+ ctx->bank1_size =
+ ctx->scratch_buf_size +
+ (ctx->mv_count * ctx->mv_size);
+ break;
+ case S5P_FIMV_CODEC_MPEG4_DEC:
+ /* mb_width * (mb_height * 64 + 144) + 8192 * mb_height + 41088 */
+ ctx->scratch_buf_size = mb_width * (mb_height * 64 + 144) +
+ ((2048 + 15)/16 * mb_height * 64) +
+ ((2048 + 15)/16 * 256 + 8320);
+ ctx->scratch_buf_size = ALIGN(ctx->scratch_buf_size, 256);
+ ctx->bank1_size = ctx->scratch_buf_size;
+ break;
+ case S5P_FIMV_CODEC_VC1RCV_DEC:
+ case S5P_FIMV_CODEC_VC1_DEC:
+ ctx->scratch_buf_size = 2096 * (mb_width + mb_height + 1);
+ ctx->scratch_buf_size = ALIGN(ctx->scratch_buf_size, 256);
+ ctx->bank1_size = ctx->scratch_buf_size;
+ break;
+ case S5P_FIMV_CODEC_MPEG2_DEC:
+ ctx->bank1_size = 0;
+ ctx->bank2_size = 0;
+ break;
+ case S5P_FIMV_CODEC_H263_DEC:
+ ctx->scratch_buf_size = mb_width * 400;
+ ctx->scratch_buf_size = ALIGN(ctx->scratch_buf_size, 256);
+ ctx->bank1_size = ctx->scratch_buf_size;
+ break;
+ case S5P_FIMV_CODEC_VP8_DEC:
+ ctx->scratch_buf_size = mb_width * 32 + mb_height * 128 + 34816;
+ ctx->scratch_buf_size = ALIGN(ctx->scratch_buf_size, 256);
+ ctx->bank1_size = ctx->scratch_buf_size;
+ break;
+ case S5P_FIMV_CODEC_H264_ENC:
+ ctx->scratch_buf_size = (mb_width * 64) +
+ ((mb_width + 1) * 16) + (4096 * 16);
+ ctx->scratch_buf_size = ALIGN(ctx->scratch_buf_size, 256);
+ ctx->bank1_size =
+ ctx->scratch_buf_size + ctx->tmv_buffer_size +
+ (ctx->dpb_count * (ctx->luma_dpb_size +
+ ctx->chroma_dpb_size + ctx->me_buffer_size));
+ ctx->bank2_size = 0;
+ break;
+ case S5P_FIMV_CODEC_MPEG4_ENC:
+ case S5P_FIMV_CODEC_H263_ENC:
+ ctx->scratch_buf_size = (mb_width * 16) + ((mb_width + 1) * 16);
+ ctx->scratch_buf_size = ALIGN(ctx->scratch_buf_size, 256);
+ ctx->bank1_size =
+ ctx->scratch_buf_size + ctx->tmv_buffer_size +
+ (ctx->dpb_count * (ctx->luma_dpb_size +
+ ctx->chroma_dpb_size + ctx->me_buffer_size));
+ ctx->bank2_size = 0;
+ break;
+ default:
+ break;
+ }
+
+ /* Allocate only if memory from bank 1 is necessary */
+ if (ctx->bank1_size > 0) {
+ ctx->bank1_buf = vb2_dma_contig_memops.alloc(
+ dev->alloc_ctx[MFC_BANK1_ALLOC_CTX], ctx->bank1_size);
+ if (IS_ERR(ctx->bank1_buf)) {
+			ctx->bank1_buf = NULL;
+			mfc_err("Buf alloc for decoding failed (port A)\n");
+ return -ENOMEM;
+ }
+ ctx->bank1_phys = s5p_mfc_mem_cookie(
+ dev->alloc_ctx[MFC_BANK1_ALLOC_CTX], ctx->bank1_buf);
+ BUG_ON(ctx->bank1_phys & ((1 << MFC_BANK1_ALIGN_ORDER) - 1));
+ }
+
+ return 0;
+}
+
+/* Release buffers allocated for codec */
+void s5p_mfc_release_codec_buffers(struct s5p_mfc_ctx *ctx)
+{
+ if (ctx->bank1_buf) {
+ vb2_dma_contig_memops.put(ctx->bank1_buf);
+ ctx->bank1_buf = 0;
+ ctx->bank1_phys = 0;
+ ctx->bank1_size = 0;
+ }
+}
+
+/* Allocate memory for instance data buffer */
+int s5p_mfc_alloc_instance_buffer(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_buf_size_v6 *buf_size = dev->variant->buf_size->priv;
+
+ mfc_debug_enter();
+
+ switch (ctx->codec_mode) {
+ case S5P_FIMV_CODEC_H264_DEC:
+ case S5P_FIMV_CODEC_H264_MVC_DEC:
+ ctx->ctx_size = buf_size->h264_dec_ctx;
+ break;
+ case S5P_FIMV_CODEC_MPEG4_DEC:
+ case S5P_FIMV_CODEC_H263_DEC:
+ case S5P_FIMV_CODEC_VC1RCV_DEC:
+ case S5P_FIMV_CODEC_VC1_DEC:
+ case S5P_FIMV_CODEC_MPEG2_DEC:
+ case S5P_FIMV_CODEC_VP8_DEC:
+ ctx->ctx_size = buf_size->other_dec_ctx;
+ break;
+ case S5P_FIMV_CODEC_H264_ENC:
+ ctx->ctx_size = buf_size->h264_enc_ctx;
+ break;
+ case S5P_FIMV_CODEC_MPEG4_ENC:
+ case S5P_FIMV_CODEC_H263_ENC:
+ ctx->ctx_size = buf_size->other_enc_ctx;
+ break;
+ default:
+ ctx->ctx_size = 0;
+ mfc_err("Codec type(%d) should be checked!\n", ctx->codec_mode);
+ break;
+ }
+
+ ctx->ctx.alloc = vb2_dma_contig_memops.alloc(
+ dev->alloc_ctx[MFC_BANK1_ALLOC_CTX], ctx->ctx_size);
+ if (IS_ERR(ctx->ctx.alloc)) {
+ mfc_err("Allocating context buffer failed.\n");
+ return PTR_ERR(ctx->ctx.alloc);
+ }
+
+ ctx->ctx.dma = s5p_mfc_mem_cookie(
+ dev->alloc_ctx[MFC_BANK1_ALLOC_CTX], ctx->ctx.alloc);
+
+ ctx->ctx.virt = vb2_dma_contig_memops.vaddr(ctx->ctx.alloc);
+ if (!ctx->ctx.virt) {
+ vb2_dma_contig_memops.put(ctx->ctx.alloc);
+ ctx->ctx.alloc = NULL;
+ ctx->ctx.dma = 0;
+ ctx->ctx.virt = NULL;
+
+ mfc_err("Remapping context buffer failed.\n");
+ return -ENOMEM;
+ }
+
+ memset(ctx->ctx.virt, 0, ctx->ctx_size);
+ wmb();
+
+ mfc_debug_leave();
+
+ return 0;
+}
+
+/* Release instance buffer */
+void s5p_mfc_release_instance_buffer(struct s5p_mfc_ctx *ctx)
+{
+ mfc_debug_enter();
+
+ if (ctx->ctx.alloc) {
+ vb2_dma_contig_memops.put(ctx->ctx.alloc);
+ ctx->ctx.alloc = NULL;
+ ctx->ctx.dma = 0;
+ ctx->ctx.virt = NULL;
+ }
+
+ mfc_debug_leave();
+}
+
+/* Allocate context buffers for SYS_INIT */
+int s5p_mfc_alloc_dev_context_buffer(struct s5p_mfc_dev *dev)
+{
+ struct s5p_mfc_buf_size_v6 *buf_size = dev->variant->buf_size->priv;
+
+ mfc_debug_enter();
+
+ dev->ctx_buf.alloc = vb2_dma_contig_memops.alloc(
+ dev->alloc_ctx[MFC_BANK1_ALLOC_CTX], buf_size->dev_ctx);
+ if (IS_ERR(dev->ctx_buf.alloc)) {
+ mfc_err("Allocating DESC buffer failed.\n");
+ return PTR_ERR(dev->ctx_buf.alloc);
+ }
+
+ dev->ctx_buf.dma = s5p_mfc_mem_cookie(
+ dev->alloc_ctx[MFC_BANK1_ALLOC_CTX], dev->ctx_buf.alloc);
+
+ dev->ctx_buf.virt = vb2_dma_contig_memops.vaddr(dev->ctx_buf.alloc);
+ if (!dev->ctx_buf.virt) {
+ vb2_dma_contig_memops.put(dev->ctx_buf.alloc);
+ dev->ctx_buf.alloc = NULL;
+ dev->ctx_buf.dma = 0;
+
+ mfc_err("Remapping DESC buffer failed.\n");
+ return -ENOMEM;
+ }
+
+ memset(dev->ctx_buf.virt, 0, buf_size->dev_ctx);
+ wmb();
+
+ mfc_debug_leave();
+
+ return 0;
+}
+
+/* Release context buffers for SYS_INIT */
+void s5p_mfc_release_dev_context_buffer(struct s5p_mfc_dev *dev)
+{
+ if (dev->ctx_buf.alloc) {
+ vb2_dma_contig_memops.put(dev->ctx_buf.alloc);
+ dev->ctx_buf.alloc = NULL;
+ dev->ctx_buf.dma = 0;
+ dev->ctx_buf.virt = NULL;
+ }
+}
+
+static int calc_plane(int width, int height)
+{
+ int mbX, mbY;
+
+ mbX = (width + 15)/16;
+ mbY = (height + 15)/16;
+
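+	/* For frames below 2 MPix the firmware apparently expects an even
+	 * number of macroblock rows, so round mbY up to a multiple of 2. */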
+ if (width * height < 2048 * 1024)
+ mbY = (mbY + 1) / 2 * 2;
+
+ return (mbX * 16) * (mbY * 16);
+}
+
+void s5p_mfc_dec_calc_dpb_size(struct s5p_mfc_ctx *ctx)
+{
+ ctx->buf_width = ALIGN(ctx->img_width, S5P_FIMV_NV12MT_HALIGN);
+ ctx->buf_height = ALIGN(ctx->img_height, S5P_FIMV_NV12MT_VALIGN);
+ mfc_debug(2, "SEQ Done: Movie dimensions %dx%d, "
+ "buffer dimensions: %dx%d\n", ctx->img_width,
+ ctx->img_height, ctx->buf_width, ctx->buf_height);
+
+ ctx->luma_size = calc_plane(ctx->img_width, ctx->img_height);
+ ctx->chroma_size = calc_plane(ctx->img_width, (ctx->img_height >> 1));
+ if (ctx->codec_mode == S5P_FIMV_CODEC_H264_DEC ||
+ ctx->codec_mode == S5P_FIMV_CODEC_H264_MVC_DEC) {
+ ctx->mv_size = s5p_mfc_dec_mv_size(ctx->img_width,
+ ctx->img_height);
+ ctx->mv_size = ALIGN(ctx->mv_size, 16);
+ } else {
+ ctx->mv_size = 0;
+ }
+}
+
+void s5p_mfc_enc_calc_src_size(struct s5p_mfc_ctx *ctx)
+{
+ unsigned int mb_width, mb_height;
+
+ mb_width = mb_width(ctx->img_width);
+ mb_height = mb_height(ctx->img_height);
+
+ ctx->buf_width = ALIGN(ctx->img_width, S5P_FIMV_NV12M_HALIGN);
+ ctx->luma_size = ALIGN((mb_width * mb_height) * 256, 256);
+ ctx->chroma_size = ALIGN((mb_width * mb_height) * 128, 256);
+}
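
A worked example with illustrative numbers: for a 1280x720 NV12M source, mb_width = 80 and mb_height = 45, so the luma plane is 80 * 45 * 256 = 921600 bytes (exactly 1280 * 720, one byte per pixel) and the interleaved CbCr plane is 80 * 45 * 128 = 460800 bytes (half the luma size). Both values are already multiples of 256, so the ALIGN() calls leave them unchanged.
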
+
+/* Set registers for decoding stream buffer */
+int s5p_mfc_set_dec_stream_buffer(struct s5p_mfc_ctx *ctx, int buf_addr,
+ unsigned int start_num_byte, unsigned int strm_size)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_buf_size *buf_size = dev->variant->buf_size;
+
+ mfc_debug_enter();
+ mfc_debug(2, "inst_no: %d, buf_addr: 0x%08x, buf_size: 0x"
+ "%08x (%d)\n", ctx->inst_no, buf_addr, strm_size, strm_size);
+ WRITEL(strm_size, S5P_FIMV_D_STREAM_DATA_SIZE);
+ WRITEL(buf_addr, S5P_FIMV_D_CPB_BUFFER_ADDR);
+ WRITEL(buf_size->cpb, S5P_FIMV_D_CPB_BUFFER_SIZE);
+ WRITEL(start_num_byte, S5P_FIMV_D_CPB_BUFFER_OFFSET);
+
+ mfc_debug_leave();
+ return 0;
+}
+
+/* Set decoding frame buffer */
+int s5p_mfc_set_dec_frame_buffer(struct s5p_mfc_ctx *ctx)
+{
+ unsigned int frame_size, i;
+ unsigned int frame_size_ch, frame_size_mv;
+ struct s5p_mfc_dev *dev = ctx->dev;
+ size_t buf_addr1;
+ int buf_size1;
+ int align_gap;
+
+ buf_addr1 = ctx->bank1_phys;
+ buf_size1 = ctx->bank1_size;
+
+ mfc_debug(2, "Buf1: %p (%d)\n", (void *)buf_addr1, buf_size1);
+ mfc_debug(2, "Total DPB COUNT: %d\n", ctx->total_dpb_count);
+ mfc_debug(2, "Setting display delay to %d\n", ctx->display_delay);
+
+ WRITEL(ctx->total_dpb_count, S5P_FIMV_D_NUM_DPB);
+ WRITEL(ctx->luma_size, S5P_FIMV_D_LUMA_DPB_SIZE);
+ WRITEL(ctx->chroma_size, S5P_FIMV_D_CHROMA_DPB_SIZE);
+
+ WRITEL(buf_addr1, S5P_FIMV_D_SCRATCH_BUFFER_ADDR);
+ WRITEL(ctx->scratch_buf_size, S5P_FIMV_D_SCRATCH_BUFFER_SIZE);
+ buf_addr1 += ctx->scratch_buf_size;
+ buf_size1 -= ctx->scratch_buf_size;
+
+ if (ctx->codec_mode == S5P_FIMV_CODEC_H264_DEC ||
+		ctx->codec_mode == S5P_FIMV_CODEC_H264_MVC_DEC) {
+ WRITEL(ctx->mv_size, S5P_FIMV_D_MV_BUFFER_SIZE);
+ WRITEL(ctx->mv_count, S5P_FIMV_D_NUM_MV);
+ }
+
+ frame_size = ctx->luma_size;
+ frame_size_ch = ctx->chroma_size;
+ frame_size_mv = ctx->mv_size;
+ mfc_debug(2, "Frame size: %d ch: %d mv: %d\n", frame_size, frame_size_ch,
+ frame_size_mv);
+ for (i = 0; i < ctx->total_dpb_count; i++) {
+ /* Bank2 */
+ mfc_debug(2, "Luma %d: %x\n", i,
+ ctx->dst_bufs[i].cookie.raw.luma);
+ WRITEL(ctx->dst_bufs[i].cookie.raw.luma,
+ S5P_FIMV_D_LUMA_DPB + i * 4);
+ mfc_debug(2, "\tChroma %d: %x\n", i,
+ ctx->dst_bufs[i].cookie.raw.chroma);
+ WRITEL(ctx->dst_bufs[i].cookie.raw.chroma,
+ S5P_FIMV_D_CHROMA_DPB + i * 4);
+ }
+ if (ctx->codec_mode == S5P_FIMV_CODEC_H264_DEC ||
+ ctx->codec_mode == S5P_FIMV_CODEC_H264_MVC_DEC) {
+ for (i = 0; i < ctx->mv_count; i++) {
+ /* To test alignment */
+ align_gap = buf_addr1;
+ buf_addr1 = ALIGN(buf_addr1, 16);
+ align_gap = buf_addr1 - align_gap;
+ buf_size1 -= align_gap;
+
+ mfc_debug(2, "\tBuf1: %x, size: %d\n", buf_addr1, buf_size1);
+ WRITEL(buf_addr1, S5P_FIMV_D_MV_BUFFER + i * 4);
+ buf_addr1 += frame_size_mv;
+ buf_size1 -= frame_size_mv;
+ }
+ }
+
+ mfc_debug(2, "Buf1: %u, buf_size1: %d (frames %d)\n",
+ buf_addr1, buf_size1, ctx->total_dpb_count);
+ if (buf_size1 < 0) {
+ mfc_debug(2, "Not enough memory has been allocated.\n");
+ return -ENOMEM;
+ }
+
+ WRITEL(ctx->inst_no, S5P_FIMV_INSTANCE_ID);
+ s5p_mfc_cmd_host2risc(dev, S5P_FIMV_CH_INIT_BUFS, NULL);
+
+ mfc_debug(2, "After setting buffers.\n");
+ return 0;
+}
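
To make the alignment bookkeeping in the MV loop above concrete (hypothetical address, for illustration only): if buf_addr1 were 0x10000009, ALIGN(buf_addr1, 16) would advance it to 0x10000010 and align_gap would be 7; those 7 unusable bytes are subtracted from buf_size1 before the frame_size_mv bytes of the next MV buffer are carved out.
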
+
+/* Set registers for encoding stream buffer */
+int s5p_mfc_set_enc_stream_buffer(struct s5p_mfc_ctx *ctx,
+ unsigned long addr, unsigned int size)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+
+ WRITEL(addr, S5P_FIMV_E_STREAM_BUFFER_ADDR); /* 16B align */
+ WRITEL(size, S5P_FIMV_E_STREAM_BUFFER_SIZE);
+
+	mfc_debug(2, "stream buf addr: 0x%08lx, size: %u",
+			addr, size);
+
+ return 0;
+}
+
+void s5p_mfc_set_enc_frame_buffer(struct s5p_mfc_ctx *ctx,
+ unsigned long y_addr, unsigned long c_addr)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+
+ WRITEL(y_addr, S5P_FIMV_E_SOURCE_LUMA_ADDR); /* 256B align */
+ WRITEL(c_addr, S5P_FIMV_E_SOURCE_CHROMA_ADDR);
+
+ mfc_debug(2, "enc src y buf addr: 0x%08lx", y_addr);
+ mfc_debug(2, "enc src c buf addr: 0x%08lx", c_addr);
+}
+
+void s5p_mfc_get_enc_frame_buffer(struct s5p_mfc_ctx *ctx,
+ unsigned long *y_addr, unsigned long *c_addr)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ unsigned long enc_recon_y_addr, enc_recon_c_addr;
+
+ *y_addr = READL(S5P_FIMV_E_ENCODED_SOURCE_LUMA_ADDR);
+ *c_addr = READL(S5P_FIMV_E_ENCODED_SOURCE_CHROMA_ADDR);
+
+ enc_recon_y_addr = READL(S5P_FIMV_E_RECON_LUMA_DPB_ADDR);
+ enc_recon_c_addr = READL(S5P_FIMV_E_RECON_CHROMA_DPB_ADDR);
+
+ mfc_debug(2, "recon y addr: 0x%08lx", enc_recon_y_addr);
+ mfc_debug(2, "recon c addr: 0x%08lx", enc_recon_c_addr);
+}
+
+/* Set encoding ref & codec buffer */
+int s5p_mfc_set_enc_ref_buffer(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+	size_t buf_addr1;
+	int buf_size1;
+ int i;
+
+ mfc_debug_enter();
+
+ buf_addr1 = ctx->bank1_phys;
+ buf_size1 = ctx->bank1_size;
+
+ mfc_debug(2, "Buf1: %p (%d)\n", (void *)buf_addr1, buf_size1);
+
+ for (i = 0; i < ctx->dpb_count; i++) {
+ WRITEL(buf_addr1, S5P_FIMV_E_LUMA_DPB + (4 * i));
+ buf_addr1 += ctx->luma_dpb_size;
+ WRITEL(buf_addr1, S5P_FIMV_E_CHROMA_DPB + (4 * i));
+ buf_addr1 += ctx->chroma_dpb_size;
+ WRITEL(buf_addr1, S5P_FIMV_E_ME_BUFFER + (4 * i));
+ buf_addr1 += ctx->me_buffer_size;
+ buf_size1 -= (ctx->luma_dpb_size + ctx->chroma_dpb_size +
+ ctx->me_buffer_size);
+ }
+
+ WRITEL(buf_addr1, S5P_FIMV_E_SCRATCH_BUFFER_ADDR);
+ WRITEL(ctx->scratch_buf_size, S5P_FIMV_E_SCRATCH_BUFFER_SIZE);
+ buf_addr1 += ctx->scratch_buf_size;
+ buf_size1 -= ctx->scratch_buf_size;
+
+ WRITEL(buf_addr1, S5P_FIMV_E_TMV_BUFFER0);
+ buf_addr1 += ctx->tmv_buffer_size >> 1;
+ WRITEL(buf_addr1, S5P_FIMV_E_TMV_BUFFER1);
+ buf_addr1 += ctx->tmv_buffer_size >> 1;
+ buf_size1 -= ctx->tmv_buffer_size;
+
+ mfc_debug(2, "Buf1: %u, buf_size1: %d (ref frames %d)\n",
+ buf_addr1, buf_size1, ctx->dpb_count);
+ if (buf_size1 < 0) {
+ mfc_debug(2, "Not enough memory has been allocated.\n");
+ return -ENOMEM;
+ }
+
+ WRITEL(ctx->inst_no, S5P_FIMV_INSTANCE_ID);
+ s5p_mfc_cmd_host2risc(dev, S5P_FIMV_CH_INIT_BUFS, NULL);
+
+ mfc_debug_leave();
+
+ return 0;
+}
+
+static int s5p_mfc_set_slice_mode(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+
+ /* multi-slice control */
+ /* multi-slice MB number or bit size */
+ WRITEL(ctx->slice_mode, S5P_FIMV_E_MSLICE_MODE);
+ if (ctx->slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_MB) {
+ WRITEL(ctx->slice_size.mb, S5P_FIMV_E_MSLICE_SIZE_MB);
+ } else if (ctx->slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_BITS) {
+ WRITEL(ctx->slice_size.bits, S5P_FIMV_E_MSLICE_SIZE_BITS);
+ } else {
+ WRITEL(0x0, S5P_FIMV_E_MSLICE_SIZE_MB);
+ WRITEL(0x0, S5P_FIMV_E_MSLICE_SIZE_BITS);
+ }
+
+ return 0;
+}
+
+static int s5p_mfc_set_enc_params(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_enc_params *p = &ctx->enc_params;
+ unsigned int reg = 0;
+
+ mfc_debug_enter();
+
+ /* width */
+ WRITEL(ctx->img_width, S5P_FIMV_E_FRAME_WIDTH); /* 16 align */
+ /* height */
+ WRITEL(ctx->img_height, S5P_FIMV_E_FRAME_HEIGHT); /* 16 align */
+
+ /* cropped width */
+ WRITEL(ctx->img_width, S5P_FIMV_E_CROPPED_FRAME_WIDTH);
+ /* cropped height */
+ WRITEL(ctx->img_height, S5P_FIMV_E_CROPPED_FRAME_HEIGHT);
+ /* cropped offset */
+ WRITEL(0x0, S5P_FIMV_E_FRAME_CROP_OFFSET);
+
+ /* pictype : IDR period */
+ reg = 0;
+ reg |= p->gop_size & 0xFFFF;
+ WRITEL(reg, S5P_FIMV_E_GOP_CONFIG);
+
+ /* multi-slice control */
+ /* multi-slice MB number or bit size */
+ ctx->slice_mode = p->slice_mode;
+ reg = 0;
+ if (p->slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_MB) {
+ reg |= (0x1 << 3);
+ WRITEL(reg, S5P_FIMV_E_ENC_OPTIONS);
+ ctx->slice_size.mb = p->slice_mb;
+ } else if (p->slice_mode == V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_BITS) {
+ reg |= (0x1 << 3);
+ WRITEL(reg, S5P_FIMV_E_ENC_OPTIONS);
+ ctx->slice_size.bits = p->slice_bit;
+ } else {
+ reg &= ~(0x1 << 3);
+ WRITEL(reg, S5P_FIMV_E_ENC_OPTIONS);
+ }
+
+ s5p_mfc_set_slice_mode(ctx);
+
+ /* cyclic intra refresh */
+ WRITEL(p->intra_refresh_mb, S5P_FIMV_E_IR_SIZE);
+ reg = READL(S5P_FIMV_E_ENC_OPTIONS);
+ if (p->intra_refresh_mb == 0)
+ reg &= ~(0x1 << 4);
+ else
+ reg |= (0x1 << 4);
+ WRITEL(reg, S5P_FIMV_E_ENC_OPTIONS);
+
+ /* 'NON_REFERENCE_STORE_ENABLE' for debugging */
+ reg = READL(S5P_FIMV_E_ENC_OPTIONS);
+ reg &= ~(0x1 << 9);
+ WRITEL(reg, S5P_FIMV_E_ENC_OPTIONS);
+
+ /* memory structure cur. frame */
+ if (ctx->src_fmt->fourcc == V4L2_PIX_FMT_NV12M) {
+ /* 0: Linear, 1: 2D tiled*/
+ reg = READL(S5P_FIMV_E_ENC_OPTIONS);
+ reg &= ~(0x1 << 7);
+ WRITEL(reg, S5P_FIMV_E_ENC_OPTIONS);
+ /* 0: NV12(CbCr), 1: NV21(CrCb) */
+ WRITEL(0x0, S5P_FIMV_PIXEL_FORMAT);
+ } else if (ctx->src_fmt->fourcc == V4L2_PIX_FMT_NV21M) {
+ /* 0: Linear, 1: 2D tiled*/
+ reg = READL(S5P_FIMV_E_ENC_OPTIONS);
+ reg &= ~(0x1 << 7);
+ WRITEL(reg, S5P_FIMV_E_ENC_OPTIONS);
+ /* 0: NV12(CbCr), 1: NV21(CrCb) */
+ WRITEL(0x1, S5P_FIMV_PIXEL_FORMAT);
+ } else if (ctx->src_fmt->fourcc == V4L2_PIX_FMT_NV12MT_16X16) {
+ /* 0: Linear, 1: 2D tiled*/
+ reg = READL(S5P_FIMV_E_ENC_OPTIONS);
+ reg |= (0x1 << 7);
+ WRITEL(reg, S5P_FIMV_E_ENC_OPTIONS);
+ /* 0: NV12(CbCr), 1: NV21(CrCb) */
+ WRITEL(0x0, S5P_FIMV_PIXEL_FORMAT);
+ }
+
+ /* memory structure recon. frame */
+ /* 0: Linear, 1: 2D tiled */
+ reg = READL(S5P_FIMV_E_ENC_OPTIONS);
+ reg |= (0x1 << 8);
+ WRITEL(reg, S5P_FIMV_E_ENC_OPTIONS);
+
+ /* padding control & value */
+ WRITEL(0x0, S5P_FIMV_E_PADDING_CTRL);
+ if (p->pad) {
+ reg = 0;
+ /** enable */
+ reg |= (1 << 31);
+ /** cr value */
+ reg |= ((p->pad_cr & 0xFF) << 16);
+ /** cb value */
+ reg |= ((p->pad_cb & 0xFF) << 8);
+ /** y value */
+ reg |= p->pad_luma & 0xFF;
+ WRITEL(reg, S5P_FIMV_E_PADDING_CTRL);
+ }
+
+ /* rate control config. */
+ reg = 0;
+ /* frame-level rate control */
+ reg |= ((p->rc_frame & 0x1) << 9);
+ WRITEL(reg, S5P_FIMV_E_RC_CONFIG);
+
+ /* bit rate */
+ if (p->rc_frame)
+ WRITEL(p->rc_bitrate,
+ S5P_FIMV_E_RC_BIT_RATE);
+ else
+ WRITEL(1, S5P_FIMV_E_RC_BIT_RATE);
+
+ /* reaction coefficient */
+ if (p->rc_frame) {
+ if (p->rc_reaction_coeff < TIGHT_CBR_MAX) /* tight CBR */
+ WRITEL(1, S5P_FIMV_E_RC_RPARAM);
+ else /* loose CBR */
+ WRITEL(2, S5P_FIMV_E_RC_RPARAM);
+ }
+
+ /* seq header ctrl */
+ reg = READL(S5P_FIMV_E_ENC_OPTIONS);
+ reg &= ~(0x1 << 2);
+ reg |= ((p->seq_hdr_mode & 0x1) << 2);
+
+ /* frame skip mode */
+ reg &= ~(0x3);
+ reg |= (p->frame_skip_mode & 0x3);
+ WRITEL(reg, S5P_FIMV_E_ENC_OPTIONS);
+
+ /* 'DROP_CONTROL_ENABLE', disable */
+ reg = READL(S5P_FIMV_E_RC_CONFIG);
+ reg &= ~(0x1 << 10);
+ WRITEL(reg, S5P_FIMV_E_RC_CONFIG);
+
+	/* setting for MV range [16, 256] */
+	reg = 256;
+	WRITEL(reg, S5P_FIMV_E_MV_HOR_RANGE);
+	WRITEL(reg, S5P_FIMV_E_MV_VER_RANGE);
+
+ WRITEL(0x0, S5P_FIMV_E_FRAME_INSERTION);
+ WRITEL(0x0, S5P_FIMV_E_ROI_BUFFER_ADDR);
+ WRITEL(0x0, S5P_FIMV_E_PARAM_CHANGE);
+ WRITEL(0x0, S5P_FIMV_E_RC_ROI_CTRL);
+ WRITEL(0x0, S5P_FIMV_E_PICTURE_TAG);
+
+ WRITEL(0x0, S5P_FIMV_E_BIT_COUNT_ENABLE);
+ WRITEL(0x0, S5P_FIMV_E_MAX_BIT_COUNT);
+ WRITEL(0x0, S5P_FIMV_E_MIN_BIT_COUNT);
+
+ WRITEL(0x0, S5P_FIMV_E_METADATA_BUFFER_ADDR);
+ WRITEL(0x0, S5P_FIMV_E_METADATA_BUFFER_SIZE);
+
+ mfc_debug_leave();
+
+ return 0;
+}
+
+static int s5p_mfc_set_enc_params_h264(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_enc_params *p = &ctx->enc_params;
+ struct s5p_mfc_h264_enc_params *p_h264 = &p->codec.h264;
+ unsigned int reg = 0;
+ int i;
+
+ mfc_debug_enter();
+
+ s5p_mfc_set_enc_params(ctx);
+
+ /* pictype : number of B */
+ reg = READL(S5P_FIMV_E_GOP_CONFIG);
+ reg &= ~(0x3 << 16);
+ reg |= ((p->num_b_frame & 0x3) << 16);
+ WRITEL(reg, S5P_FIMV_E_GOP_CONFIG);
+
+ /* profile & level */
+ reg = 0;
+ /** level */
+ reg |= ((p_h264->level & 0xFF) << 8);
+ /** profile - 0 ~ 3 */
+ reg |= p_h264->profile & 0x3F;
+ WRITEL(reg, S5P_FIMV_E_PICTURE_PROFILE);
+
+ /* rate control config. */
+ reg = READL(S5P_FIMV_E_RC_CONFIG);
+ /** macroblock level rate control */
+ reg &= ~(0x1 << 8);
+ reg |= ((p->rc_mb & 0x1) << 8);
+ WRITEL(reg, S5P_FIMV_E_RC_CONFIG);
+ /** frame QP */
+ reg &= ~(0x3F);
+ reg |= p_h264->rc_frame_qp & 0x3F;
+ WRITEL(reg, S5P_FIMV_E_RC_CONFIG);
+
+ /* max & min value of QP */
+ reg = 0;
+ /** max QP */
+ reg |= ((p_h264->rc_max_qp & 0x3F) << 8);
+ /** min QP */
+ reg |= p_h264->rc_min_qp & 0x3F;
+ WRITEL(reg, S5P_FIMV_E_RC_QP_BOUND);
+
+ /* other QPs */
+ WRITEL(0x0, S5P_FIMV_E_FIXED_PICTURE_QP);
+ if (!p->rc_frame && !p->rc_mb) {
+ reg = 0;
+ reg |= ((p_h264->rc_b_frame_qp & 0x3F) << 16);
+ reg |= ((p_h264->rc_p_frame_qp & 0x3F) << 8);
+ reg |= p_h264->rc_frame_qp & 0x3F;
+ WRITEL(reg, S5P_FIMV_E_FIXED_PICTURE_QP);
+ }
+
+ /* frame rate */
+ if (p->rc_frame && p->rc_framerate_num && p->rc_framerate_denom) {
+ reg = 0;
+ reg |= ((p->rc_framerate_num & 0xFFFF) << 16);
+ reg |= p->rc_framerate_denom & 0xFFFF;
+ WRITEL(reg, S5P_FIMV_E_RC_FRAME_RATE);
+ }
+
+ /* vbv buffer size */
+ if (p->frame_skip_mode ==
+ V4L2_MPEG_MFC51_VIDEO_FRAME_SKIP_MODE_BUF_LIMIT) {
+ WRITEL(p_h264->cpb_size & 0xFFFF, S5P_FIMV_E_VBV_BUFFER_SIZE);
+
+ if (p->rc_frame)
+ WRITEL(p->vbv_delay, S5P_FIMV_E_VBV_INIT_DELAY);
+ }
+
+ /* interlace */
+ reg = 0;
+ reg |= ((p_h264->interlace & 0x1) << 3);
+ WRITEL(reg, S5P_FIMV_E_H264_OPTIONS);
+
+ /* height */
+ if (p_h264->interlace) {
+ WRITEL(ctx->img_height >> 1, S5P_FIMV_E_FRAME_HEIGHT); /* 32 align */
+ /* cropped height */
+ WRITEL(ctx->img_height >> 1, S5P_FIMV_E_CROPPED_FRAME_HEIGHT);
+ }
+
+ /* loop filter ctrl */
+ reg = READL(S5P_FIMV_E_H264_OPTIONS);
+ reg &= ~(0x3 << 1);
+ reg |= ((p_h264->loop_filter_mode & 0x3) << 1);
+ WRITEL(reg, S5P_FIMV_E_H264_OPTIONS);
+
+ /* loopfilter alpha offset */
+ if (p_h264->loop_filter_alpha < 0) {
+ reg = 0x10;
+ reg |= (0xFF - p_h264->loop_filter_alpha) + 1;
+ } else {
+ reg = 0x00;
+ reg |= (p_h264->loop_filter_alpha & 0xF);
+ }
+ WRITEL(reg, S5P_FIMV_E_H264_LF_ALPHA_OFFSET);
+
+ /* loopfilter beta offset */
+ if (p_h264->loop_filter_beta < 0) {
+ reg = 0x10;
+ reg |= (0xFF - p_h264->loop_filter_beta) + 1;
+ } else {
+ reg = 0x00;
+ reg |= (p_h264->loop_filter_beta & 0xF);
+ }
+ WRITEL(reg, S5P_FIMV_E_H264_LF_BETA_OFFSET);
+
+ /* entropy coding mode */
+ reg = READL(S5P_FIMV_E_H264_OPTIONS);
+ reg &= ~(0x1);
+ reg |= p_h264->entropy_mode & 0x1;
+ WRITEL(reg, S5P_FIMV_E_H264_OPTIONS);
+
+ /* number of ref. picture */
+ reg = READL(S5P_FIMV_E_H264_OPTIONS);
+ reg &= ~(0x1 << 7);
+ reg |= (((p_h264->num_ref_pic_4p - 1) & 0x1) << 7);
+ WRITEL(reg, S5P_FIMV_E_H264_OPTIONS);
+
+ /* 8x8 transform enable */
+ reg = READL(S5P_FIMV_E_H264_OPTIONS);
+ reg &= ~(0x3 << 12);
+ reg |= ((p_h264->_8x8_transform & 0x3) << 12);
+ WRITEL(reg, S5P_FIMV_E_H264_OPTIONS);
+
+ /* macroblock adaptive scaling features */
+ WRITEL(0x0, S5P_FIMV_E_MB_RC_CONFIG);
+ if (p->rc_mb) {
+ reg = 0;
+ /** dark region */
+ reg |= ((p_h264->rc_mb_dark & 0x1) << 3);
+ /** smooth region */
+ reg |= ((p_h264->rc_mb_smooth & 0x1) << 2);
+ /** static region */
+ reg |= ((p_h264->rc_mb_static & 0x1) << 1);
+ /** high activity region */
+ reg |= p_h264->rc_mb_activity & 0x1;
+ WRITEL(reg, S5P_FIMV_E_MB_RC_CONFIG);
+ }
+
+ /* aspect ratio VUI */
+ reg = READL(S5P_FIMV_E_H264_OPTIONS);
+ reg &= ~(0x1 << 5);
+ reg |= ((p_h264->vui_sar & 0x1) << 5);
+ WRITEL(reg, S5P_FIMV_E_H264_OPTIONS);
+
+ WRITEL(0x0, S5P_FIMV_E_ASPECT_RATIO);
+ WRITEL(0x0, S5P_FIMV_E_EXTENDED_SAR);
+ if (p_h264->vui_sar) {
+		/* aspect ratio IDC */
+ reg = 0;
+ reg |= p_h264->vui_sar_idc & 0xFF;
+ WRITEL(reg, S5P_FIMV_E_ASPECT_RATIO);
+ if (p_h264->vui_sar_idc == 0xFF) {
+ /* extended SAR */
+ reg = 0;
+ reg |= (p_h264->vui_ext_sar_width & 0xFFFF) << 16;
+ reg |= p_h264->vui_ext_sar_height & 0xFFFF;
+ WRITEL(reg, S5P_FIMV_E_EXTENDED_SAR);
+ }
+ }
+
+ /* intra picture period for H.264 open GOP */
+ /* control */
+ reg = READL(S5P_FIMV_E_H264_OPTIONS);
+ reg &= ~(0x1 << 4);
+ reg |= ((p_h264->open_gop & 0x1) << 4);
+ WRITEL(reg, S5P_FIMV_E_H264_OPTIONS);
+ /* value */
+ WRITEL(0x0, S5P_FIMV_E_H264_I_PERIOD);
+ if (p_h264->open_gop) {
+ reg = 0;
+ reg |= p_h264->open_gop_size & 0xFFFF;
+ WRITEL(reg, S5P_FIMV_E_H264_I_PERIOD);
+ }
+
+	/* 'WEIGHTED_BI_PREDICTION' for B frames is disabled */
+ reg = READL(S5P_FIMV_E_H264_OPTIONS);
+ reg &= ~(0x3 << 9);
+ WRITEL(reg, S5P_FIMV_E_H264_OPTIONS);
+
+	/* 'CONSTRAINED_INTRA_PRED_ENABLE' is disabled */
+ reg = READL(S5P_FIMV_E_H264_OPTIONS);
+ reg &= ~(0x1 << 14);
+ WRITEL(reg, S5P_FIMV_E_H264_OPTIONS);
+
+ /* ASO */
+ reg = READL(S5P_FIMV_E_H264_OPTIONS);
+ reg &= ~(0x1 << 6);
+ reg |= ((p_h264->aso & 0x1) << 6);
+ WRITEL(reg, S5P_FIMV_E_H264_OPTIONS);
+
+ /* hier qp enable */
+ reg = READL(S5P_FIMV_E_H264_OPTIONS);
+ reg &= ~(0x1 << 8);
+	reg |= ((p_h264->hier_qp & 0x1) << 8);
+ WRITEL(reg, S5P_FIMV_E_H264_OPTIONS);
+ reg = 0;
+ if (p_h264->hier_qp && p_h264->hier_qp_layer) {
+ reg |= (p_h264->hier_qp_type & 0x1) << 0x3;
+ reg |= p_h264->hier_qp_layer & 0x7;
+ WRITEL(reg, S5P_FIMV_E_H264_NUM_T_LAYER);
+ /* QP value for each layer */
+ for (i = 0; i < (p_h264->hier_qp_layer & 0x7); i++)
+ WRITEL(p_h264->hier_qp_layer_qp[i],
+ S5P_FIMV_E_H264_HIERARCHICAL_QP_LAYER0 + i * 4);
+ }
+	/* the number of coding layers should be zero when hierarchical coding is disabled */
+ WRITEL(reg, S5P_FIMV_E_H264_NUM_T_LAYER);
+
+ /* frame packing SEI generation */
+ reg = READL(S5P_FIMV_E_H264_OPTIONS);
+ reg &= ~(0x1 << 25);
+ reg |= ((p_h264->sei_frame_packing & 0x1) << 25);
+ WRITEL(reg, S5P_FIMV_E_H264_OPTIONS);
+ if (p_h264->sei_frame_packing) {
+ reg = 0;
+ /** current frame0 flag */
+ reg |= ((p_h264->sei_fp_curr_frame_0 & 0x1) << 2);
+ /** arrangement type */
+ reg |= p_h264->sei_fp_arrangement_type & 0x3;
+ WRITEL(reg, S5P_FIMV_E_H264_FRAME_PACKING_SEI_INFO);
+ }
+
+ if (p_h264->fmo) {
+ switch (p_h264->fmo_map_type) {
+ case V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_INTERLEAVED_SLICES:
+ if (p_h264->fmo_slice_grp > 4)
+ p_h264->fmo_slice_grp = 4;
+ for (i = 0; i < (p_h264->fmo_slice_grp & 0xF); i++)
+ WRITEL(p_h264->fmo_run_len[i] - 1,
+ S5P_FIMV_E_H264_FMO_RUN_LENGTH_MINUS1_0 + i * 4);
+ break;
+ case V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_SCATTERED_SLICES:
+ if (p_h264->fmo_slice_grp > 4)
+ p_h264->fmo_slice_grp = 4;
+ break;
+ case V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_RASTER_SCAN:
+ case V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_WIPE_SCAN:
+ if (p_h264->fmo_slice_grp > 2)
+ p_h264->fmo_slice_grp = 2;
+ WRITEL(p_h264->fmo_chg_dir & 0x1, S5P_FIMV_E_H264_FMO_SLICE_GRP_CHANGE_DIR);
+			/* the valid range is 0 to (number of macroblocks - 1) */
+ WRITEL(p_h264->fmo_chg_rate, S5P_FIMV_E_H264_FMO_SLICE_GRP_CHANGE_RATE_MINUS1);
+ break;
+ default:
+ mfc_err("Unsupported map type for FMO: %d\n", p_h264->fmo_map_type);
+ p_h264->fmo_map_type = 0;
+ p_h264->fmo_slice_grp = 1;
+ break;
+ }
+
+ WRITEL(p_h264->fmo_map_type, S5P_FIMV_E_H264_FMO_SLICE_GRP_MAP_TYPE);
+ WRITEL(p_h264->fmo_slice_grp - 1, S5P_FIMV_E_H264_FMO_NUM_SLICE_GRP_MINUS1);
+ } else {
+ WRITEL(0, S5P_FIMV_E_H264_FMO_NUM_SLICE_GRP_MINUS1);
+ }
+
+ mfc_debug_leave();
+
+ return 0;
+}
+
+static int s5p_mfc_set_enc_params_mpeg4(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_enc_params *p = &ctx->enc_params;
+ struct s5p_mfc_mpeg4_enc_params *p_mpeg4 = &p->codec.mpeg4;
+ unsigned int reg = 0;
+
+ mfc_debug_enter();
+
+ s5p_mfc_set_enc_params(ctx);
+
+ /* pictype : number of B */
+ reg = READL(S5P_FIMV_E_GOP_CONFIG);
+ reg &= ~(0x3 << 16);
+ reg |= ((p->num_b_frame & 0x3) << 16);
+ WRITEL(reg, S5P_FIMV_E_GOP_CONFIG);
+
+ /* profile & level */
+ reg = 0;
+ /** level */
+ reg |= ((p_mpeg4->level & 0xFF) << 8);
+ /** profile - 0 ~ 1 */
+ reg |= p_mpeg4->profile & 0x3F;
+ WRITEL(reg, S5P_FIMV_E_PICTURE_PROFILE);
+
+ /* rate control config. */
+ reg = READL(S5P_FIMV_E_RC_CONFIG);
+ /** macroblock level rate control */
+ reg &= ~(0x1 << 8);
+ reg |= ((p->rc_mb & 0x1) << 8);
+ WRITEL(reg, S5P_FIMV_E_RC_CONFIG);
+ /** frame QP */
+ reg &= ~(0x3F);
+ reg |= p_mpeg4->rc_frame_qp & 0x3F;
+ WRITEL(reg, S5P_FIMV_E_RC_CONFIG);
+
+ /* max & min value of QP */
+ reg = 0;
+ /** max QP */
+ reg |= ((p_mpeg4->rc_max_qp & 0x3F) << 8);
+ /** min QP */
+ reg |= p_mpeg4->rc_min_qp & 0x3F;
+ WRITEL(reg, S5P_FIMV_E_RC_QP_BOUND);
+
+ /* other QPs */
+ WRITEL(0x0, S5P_FIMV_E_FIXED_PICTURE_QP);
+ if (!p->rc_frame && !p->rc_mb) {
+ reg = 0;
+ reg |= ((p_mpeg4->rc_b_frame_qp & 0x3F) << 16);
+ reg |= ((p_mpeg4->rc_p_frame_qp & 0x3F) << 8);
+ reg |= p_mpeg4->rc_frame_qp & 0x3F;
+ WRITEL(reg, S5P_FIMV_E_FIXED_PICTURE_QP);
+ }
+
+ /* frame rate */
+ if (p->rc_frame && p->rc_framerate_num && p->rc_framerate_denom) {
+ reg = 0;
+ reg |= ((p->rc_framerate_num & 0xFFFF) << 16);
+ reg |= p->rc_framerate_denom & 0xFFFF;
+ WRITEL(reg, S5P_FIMV_E_RC_FRAME_RATE);
+ }
+
+ /* vbv buffer size */
+ if (p->frame_skip_mode ==
+ V4L2_MPEG_MFC51_VIDEO_FRAME_SKIP_MODE_BUF_LIMIT) {
+ WRITEL(p->vbv_size & 0xFFFF, S5P_FIMV_E_VBV_BUFFER_SIZE);
+
+ if (p->rc_frame)
+ WRITEL(p->vbv_delay, S5P_FIMV_E_VBV_INIT_DELAY);
+ }
+
+ /* Disable HEC */
+ WRITEL(0x0, S5P_FIMV_E_MPEG4_OPTIONS);
+ WRITEL(0x0, S5P_FIMV_E_MPEG4_HEC_PERIOD);
+
+ mfc_debug_leave();
+
+ return 0;
+}
+
+static int s5p_mfc_set_enc_params_h263(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_enc_params *p = &ctx->enc_params;
+ struct s5p_mfc_mpeg4_enc_params *p_h263 = &p->codec.mpeg4;
+ unsigned int reg = 0;
+
+ mfc_debug_enter();
+
+ s5p_mfc_set_enc_params(ctx);
+
+ /* profile & level */
+ reg = 0;
+ /** profile */
+ reg |= (0x1 << 4);
+ WRITEL(reg, S5P_FIMV_E_PICTURE_PROFILE);
+
+ /* rate control config. */
+ reg = READL(S5P_FIMV_E_RC_CONFIG);
+ /** macroblock level rate control */
+ reg &= ~(0x1 << 8);
+ reg |= ((p->rc_mb & 0x1) << 8);
+ WRITEL(reg, S5P_FIMV_E_RC_CONFIG);
+ /** frame QP */
+ reg &= ~(0x3F);
+ reg |= p_h263->rc_frame_qp & 0x3F;
+ WRITEL(reg, S5P_FIMV_E_RC_CONFIG);
+
+ /* max & min value of QP */
+ reg = 0;
+ /** max QP */
+ reg |= ((p_h263->rc_max_qp & 0x3F) << 8);
+ /** min QP */
+ reg |= p_h263->rc_min_qp & 0x3F;
+ WRITEL(reg, S5P_FIMV_E_RC_QP_BOUND);
+
+ /* other QPs */
+ WRITEL(0x0, S5P_FIMV_E_FIXED_PICTURE_QP);
+ if (!p->rc_frame && !p->rc_mb) {
+ reg = 0;
+ reg |= ((p_h263->rc_b_frame_qp & 0x3F) << 16);
+ reg |= ((p_h263->rc_p_frame_qp & 0x3F) << 8);
+ reg |= p_h263->rc_frame_qp & 0x3F;
+ WRITEL(reg, S5P_FIMV_E_FIXED_PICTURE_QP);
+ }
+
+ /* frame rate */
+ if (p->rc_frame && p->rc_framerate_num && p->rc_framerate_denom) {
+ reg = 0;
+ reg |= ((p->rc_framerate_num & 0xFFFF) << 16);
+ reg |= p->rc_framerate_denom & 0xFFFF;
+ WRITEL(reg, S5P_FIMV_E_RC_FRAME_RATE);
+ }
+
+ /* vbv buffer size */
+ if (p->frame_skip_mode ==
+ V4L2_MPEG_MFC51_VIDEO_FRAME_SKIP_MODE_BUF_LIMIT) {
+ WRITEL(p->vbv_size & 0xFFFF, S5P_FIMV_E_VBV_BUFFER_SIZE);
+
+ if (p->rc_frame)
+ WRITEL(p->vbv_delay, S5P_FIMV_E_VBV_INIT_DELAY);
+ }
+
+ mfc_debug_leave();
+
+ return 0;
+}
+
+/* Initialize decoding */
+int s5p_mfc_init_decode(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ unsigned int reg = 0;
+ int fmo_aso_ctrl = 0;
+
+ mfc_debug_enter();
+ mfc_debug(2, "InstNo: %d/%d\n", ctx->inst_no, S5P_FIMV_CH_SEQ_HEADER);
+	mfc_debug(2, "BUFs: %08x %08x %08x\n",
+		READL(S5P_FIMV_D_CPB_BUFFER_ADDR),
+		READL(S5P_FIMV_D_CPB_BUFFER_SIZE),
+		READL(S5P_FIMV_D_CPB_BUFFER_OFFSET));
+
+ /* FMO_ASO_CTRL - 0: Enable, 1: Disable */
+ reg |= (fmo_aso_ctrl << S5P_FIMV_D_OPT_FMO_ASO_CTRL_MASK);
+
+	/* When the user sets display_delay to 0, it works as
+	 * "display_delay enabled" with the delay set to 0. If the user
+	 * wants display_delay disabled, it should be set to a negative
+	 * value. */
+ if (ctx->display_delay >= 0) {
+ reg |= (0x1 << S5P_FIMV_D_OPT_DDELAY_EN_SHIFT);
+ WRITEL((ctx->display_delay | 0x8), S5P_FIMV_D_DISPLAY_DELAY);
+ }
+	/* Set up the loop filter; for decoding this is only valid for MPEG4 */
+ if (ctx->codec_mode == S5P_FIMV_CODEC_MPEG4_DEC) {
+ mfc_debug(2, "Set loop filter to: %d\n", ctx->loop_filter_mpeg4);
+ reg |= (ctx->loop_filter_mpeg4 << S5P_FIMV_D_OPT_LF_CTRL_SHIFT);
+ }
+ if (ctx->dst_fmt->fourcc == V4L2_PIX_FMT_NV12MT_16X16)
+ reg |= (0x1 << S5P_FIMV_D_OPT_TILE_MODE_SHIFT);
+
+ WRITEL(reg, S5P_FIMV_D_DEC_OPTIONS);
+
+ /* 0: NV12(CbCr), 1: NV21(CrCb) */
+ if (ctx->dst_fmt->fourcc == V4L2_PIX_FMT_NV21M)
+ WRITEL(0x1, S5P_FIMV_PIXEL_FORMAT);
+ else
+ WRITEL(0x0, S5P_FIMV_PIXEL_FORMAT);
+
+ /* sei parse */
+ WRITEL(ctx->sei_fp_parse & 0x1, S5P_FIMV_D_SEI_ENABLE);
+
+ WRITEL(ctx->inst_no, S5P_FIMV_INSTANCE_ID);
+ s5p_mfc_cmd_host2risc(dev, S5P_FIMV_CH_SEQ_HEADER, NULL);
+
+ mfc_debug_leave();
+ return 0;
+}
+
+static inline void s5p_mfc_set_flush(struct s5p_mfc_ctx *ctx, int flush)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ unsigned int dpb;
+ if (flush)
+ dpb = READL(S5P_FIMV_SI_CH0_DPB_CONF_CTRL) | (1 << 14);
+ else
+ dpb = READL(S5P_FIMV_SI_CH0_DPB_CONF_CTRL) & ~(1 << 14);
+ WRITEL(dpb, S5P_FIMV_SI_CH0_DPB_CONF_CTRL);
+}
+
+/* Decode a single frame */
+int s5p_mfc_decode_one_frame(struct s5p_mfc_ctx *ctx, int last_frame)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+
+ WRITEL(0xffffffff, S5P_FIMV_D_AVAILABLE_DPB_FLAG_LOWER);
+ WRITEL(0xffffffff, S5P_FIMV_D_AVAILABLE_DPB_FLAG_UPPER);
+ WRITEL(ctx->slice_interface & 0x1, S5P_FIMV_D_SLICE_IF_ENABLE);
+
+ WRITEL(ctx->inst_no, S5P_FIMV_INSTANCE_ID);
+	/* Issue different commands to the instance based on whether it
+	 * is the last frame or not. */
+ switch (last_frame) {
+ case 0:
+ s5p_mfc_cmd_host2risc(dev, S5P_FIMV_CH_FRAME_START, NULL);
+ break;
+ case 1:
+ s5p_mfc_cmd_host2risc(dev, S5P_FIMV_CH_LAST_FRAME, NULL);
+ break;
+ }
+
+ mfc_debug(2, "Decoding a usual frame.\n");
+ return 0;
+}
+
+int s5p_mfc_init_encode(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+
+ mfc_debug(2, "++\n");
+
+ if (ctx->codec_mode == S5P_FIMV_CODEC_H264_ENC)
+ s5p_mfc_set_enc_params_h264(ctx);
+ else if (ctx->codec_mode == S5P_FIMV_CODEC_MPEG4_ENC)
+ s5p_mfc_set_enc_params_mpeg4(ctx);
+ else if (ctx->codec_mode == S5P_FIMV_CODEC_H263_ENC)
+ s5p_mfc_set_enc_params_h263(ctx);
+ else {
+ mfc_err("Unknown codec for encoding (%x).\n",
+ ctx->codec_mode);
+ return -EINVAL;
+ }
+
+ WRITEL(ctx->inst_no, S5P_FIMV_INSTANCE_ID);
+ s5p_mfc_cmd_host2risc(dev, S5P_FIMV_CH_SEQ_HEADER, NULL);
+
+ mfc_debug(2, "--\n");
+
+ return 0;
+}
+
+int s5p_mfc_h264_set_aso_slice_order(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_enc_params *p = &ctx->enc_params;
+ struct s5p_mfc_h264_enc_params *p_h264 = &p->codec.h264;
+ int i;
+
+ if (p_h264->aso) {
+ for (i = 0; i < 8; i++)
+ WRITEL(p_h264->aso_slice_order[i],
+ S5P_FIMV_E_H264_ASO_SLICE_ORDER_0 + i * 4);
+ }
+ return 0;
+}
+
+/* Encode a single frame */
+int s5p_mfc_encode_one_frame(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+
+ mfc_debug(2, "++\n");
+
+ /* memory structure cur. frame */
+
+ if (ctx->codec_mode == S5P_FIMV_CODEC_H264_ENC)
+ s5p_mfc_h264_set_aso_slice_order(ctx);
+
+ s5p_mfc_set_slice_mode(ctx);
+
+ WRITEL(ctx->inst_no, S5P_FIMV_INSTANCE_ID);
+ s5p_mfc_cmd_host2risc(dev, S5P_FIMV_CH_FRAME_START, NULL);
+
+ mfc_debug(2, "--\n");
+
+ return 0;
+}
+
+static inline int s5p_mfc_get_new_ctx(struct s5p_mfc_dev *dev)
+{
+ unsigned long flags;
+ int new_ctx;
+ int cnt;
+
+ spin_lock_irqsave(&dev->condlock, flags);
+	mfc_debug(2, "Previous context: %d (bits %08lx)\n", dev->curr_ctx,
+ dev->ctx_work_bits);
+ new_ctx = (dev->curr_ctx + 1) % MFC_NUM_CONTEXTS;
+ cnt = 0;
+ while (!test_bit(new_ctx, &dev->ctx_work_bits)) {
+ new_ctx = (new_ctx + 1) % MFC_NUM_CONTEXTS;
+ cnt++;
+ if (cnt > MFC_NUM_CONTEXTS) {
+ /* No contexts to run */
+ spin_unlock_irqrestore(&dev->condlock, flags);
+ return -EAGAIN;
+ }
+ }
+ spin_unlock_irqrestore(&dev->condlock, flags);
+ return new_ctx;
+}
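
The loop above is a round-robin pick over dev->ctx_work_bits, starting just after the context that ran last. A standalone model of the same logic (editor's sketch; pick_next_ctx and MFC_NUM_CTX are stand-ins, a plain bitmask replaces the kernel bitops, and the spinlock is omitted):

#include <stdio.h>

#define MFC_NUM_CTX 16	/* stand-in for MFC_NUM_CONTEXTS */

static int pick_next_ctx(unsigned long work_bits, int curr)
{
	int next = (curr + 1) % MFC_NUM_CTX;
	int cnt = 0;

	/* walk the mask circularly until a runnable context is found */
	while (!(work_bits & (1UL << next))) {
		next = (next + 1) % MFC_NUM_CTX;
		if (++cnt > MFC_NUM_CTX)
			return -1;	/* nothing scheduled, cf. -EAGAIN */
	}
	return next;
}

int main(void)
{
	/* contexts 3 and 9 want to run; the hardware last ran context 7 */
	printf("next: %d\n", pick_next_ctx((1UL << 3) | (1UL << 9), 7)); /* 9 */
	printf("next: %d\n", pick_next_ctx(0UL, 7)); /* -1 */
	return 0;
}
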
+
+static inline void s5p_mfc_run_dec_last_frames(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_buf *temp_vb;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dev->irqlock, flags);
+
+ /* Frames are being decoded */
+ if (list_empty(&ctx->src_queue)) {
+ mfc_debug(2, "No src buffers.\n");
+ spin_unlock_irqrestore(&dev->irqlock, flags);
+ return;
+ }
+ /* Get the next source buffer */
+ temp_vb = list_entry(ctx->src_queue.next, struct s5p_mfc_buf, list);
+ temp_vb->used = 1;
+ s5p_mfc_set_dec_stream_buffer(ctx,
+ vb2_dma_contig_plane_dma_addr(temp_vb->b, 0), 0, 0);
+ spin_unlock_irqrestore(&dev->irqlock, flags);
+
+ dev->curr_ctx = ctx->num;
+ s5p_mfc_clean_ctx_int_flags(ctx);
+ s5p_mfc_decode_one_frame(ctx, 1);
+}
+
+static inline int s5p_mfc_run_dec_frame(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ struct s5p_mfc_buf *temp_vb;
+ unsigned long flags;
+ int last_frame = 0;
+ unsigned int index;
+
+ spin_lock_irqsave(&dev->irqlock, flags);
+
+ /* Frames are being decoded */
+ if (list_empty(&ctx->src_queue)) {
+ mfc_debug(2, "No src buffers.\n");
+ spin_unlock_irqrestore(&dev->irqlock, flags);
+ return -EAGAIN;
+ }
+ /* Get the next source buffer */
+ temp_vb = list_entry(ctx->src_queue.next, struct s5p_mfc_buf, list);
+ temp_vb->used = 1;
+ s5p_mfc_set_dec_stream_buffer(ctx,
+ vb2_dma_contig_plane_dma_addr(temp_vb->b, 0), 0,
+ temp_vb->b->v4l2_planes[0].bytesused);
+ spin_unlock_irqrestore(&dev->irqlock, flags);
+
+ index = temp_vb->b->v4l2_buf.index;
+
+ dev->curr_ctx = ctx->num;
+ s5p_mfc_clean_ctx_int_flags(ctx);
+ if (temp_vb->b->v4l2_planes[0].bytesused == 0) {
+ last_frame = 1;
+ mfc_debug(2, "Setting ctx->state to FINISHING\n");
+ ctx->state = MFCINST_FINISHING;
+ }
+ s5p_mfc_decode_one_frame(ctx, last_frame);
+
+ return 0;
+}
+
+static inline int s5p_mfc_run_enc_frame(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ unsigned long flags;
+ struct s5p_mfc_buf *dst_mb;
+ struct s5p_mfc_buf *src_mb;
+ unsigned long src_y_addr, src_c_addr, dst_addr;
+ unsigned int dst_size;
+ unsigned int index;
+
+ spin_lock_irqsave(&dev->irqlock, flags);
+
+ if (list_empty(&ctx->src_queue)) {
+ mfc_debug(2, "no src buffers.\n");
+ spin_unlock_irqrestore(&dev->irqlock, flags);
+ return -EAGAIN;
+ }
+
+ if (list_empty(&ctx->dst_queue)) {
+ mfc_debug(2, "no dst buffers.\n");
+ spin_unlock_irqrestore(&dev->irqlock, flags);
+ return -EAGAIN;
+ }
+
+ src_mb = list_entry(ctx->src_queue.next, struct s5p_mfc_buf, list);
+ src_mb->used = 1;
+ src_y_addr = vb2_dma_contig_plane_dma_addr(src_mb->b, 0);
+ src_c_addr = vb2_dma_contig_plane_dma_addr(src_mb->b, 1);
+
+ mfc_debug(2, "enc src y addr: 0x%08lx", src_y_addr);
+ mfc_debug(2, "enc src c addr: 0x%08lx", src_c_addr);
+
+ s5p_mfc_set_enc_frame_buffer(ctx, src_y_addr, src_c_addr);
+
+ dst_mb = list_entry(ctx->dst_queue.next, struct s5p_mfc_buf, list);
+ dst_mb->used = 1;
+ dst_addr = vb2_dma_contig_plane_dma_addr(dst_mb->b, 0);
+ dst_size = vb2_plane_size(dst_mb->b, 0);
+
+ s5p_mfc_set_enc_stream_buffer(ctx, dst_addr, dst_size);
+
+ spin_unlock_irqrestore(&dev->irqlock, flags);
+
+ index = src_mb->b->v4l2_buf.index;
+
+ dev->curr_ctx = ctx->num;
+ s5p_mfc_clean_ctx_int_flags(ctx);
+ s5p_mfc_encode_one_frame(ctx);
+
+ return 0;
+}
+
+static inline void s5p_mfc_run_init_dec(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ unsigned long flags;
+ struct s5p_mfc_buf *temp_vb;
+
+ /* Initializing decoding - parsing header */
+ spin_lock_irqsave(&dev->irqlock, flags);
+ mfc_debug(2, "Preparing to init decoding.\n");
+ temp_vb = list_entry(ctx->src_queue.next, struct s5p_mfc_buf, list);
+ mfc_debug(2, "Header size: %d\n", temp_vb->b->v4l2_planes[0].bytesused);
+ s5p_mfc_set_dec_stream_buffer(ctx,
+ vb2_dma_contig_plane_dma_addr(temp_vb->b, 0), 0,
+ temp_vb->b->v4l2_planes[0].bytesused);
+ spin_unlock_irqrestore(&dev->irqlock, flags);
+ dev->curr_ctx = ctx->num;
+ s5p_mfc_clean_ctx_int_flags(ctx);
+ s5p_mfc_init_decode(ctx);
+}
+
+static inline void s5p_mfc_run_init_enc(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ unsigned long flags;
+ struct s5p_mfc_buf *dst_mb;
+ unsigned long dst_addr;
+ unsigned int dst_size;
+
+ spin_lock_irqsave(&dev->irqlock, flags);
+
+ dst_mb = list_entry(ctx->dst_queue.next, struct s5p_mfc_buf, list);
+ dst_addr = vb2_dma_contig_plane_dma_addr(dst_mb->b, 0);
+ dst_size = vb2_plane_size(dst_mb->b, 0);
+ s5p_mfc_set_enc_stream_buffer(ctx, dst_addr, dst_size);
+ spin_unlock_irqrestore(&dev->irqlock, flags);
+ dev->curr_ctx = ctx->num;
+ s5p_mfc_clean_ctx_int_flags(ctx);
+ s5p_mfc_init_encode(ctx);
+}
+
+static inline int s5p_mfc_run_init_dec_buffers(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ int ret;
+	/* The header was parsed, now start processing.
+	 * First, set the output (destination) frame buffers. */
+
+ if (ctx->capture_state != QUEUE_BUFS_MMAPED) {
+		mfc_err("It seems that not all destination buffers were "
+			"mmapped.\nMFC requires that all destination buffers be "
+			"mmapped before starting processing.\n");
+ return -EAGAIN;
+ }
+
+ dev->curr_ctx = ctx->num;
+ s5p_mfc_clean_ctx_int_flags(ctx);
+ ret = s5p_mfc_set_dec_frame_buffer(ctx);
+ if (ret) {
+ mfc_err("Failed to alloc frame mem.\n");
+ ctx->state = MFCINST_ERROR;
+ }
+ return ret;
+}
+
+static inline int s5p_mfc_run_init_enc_buffers(struct s5p_mfc_ctx *ctx)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ int ret;
+
+ ret = s5p_mfc_alloc_codec_buffers(ctx);
+ if (ret) {
+ mfc_err("Failed to allocate encoding buffers.\n");
+ return -ENOMEM;
+ }
+
+	/* The header was generated, now start processing.
+	 * First, set the reference frame buffers.
+	 */
+ if (ctx->capture_state != QUEUE_BUFS_REQUESTED) {
+		mfc_err("It seems that destination buffers were not "
+			"requested.\nMFC requires that the header be generated "
+			"before allocating codec buffers.\n");
+ return -EAGAIN;
+ }
+
+ dev->curr_ctx = ctx->num;
+ s5p_mfc_clean_ctx_int_flags(ctx);
+ ret = s5p_mfc_set_enc_ref_buffer(ctx);
+ if (ret) {
+ mfc_err("Failed to alloc frame mem.\n");
+ ctx->state = MFCINST_ERROR;
+ }
+ return ret;
+}
+
+/* Try running an operation on hardware */
+void s5p_mfc_try_run(struct s5p_mfc_dev *dev)
+{
+ struct s5p_mfc_ctx *ctx;
+ int new_ctx;
+ unsigned int ret = 0;
+
+ mfc_debug(1, "Try run dev: %p\n", dev);
+
+	/* Check whether the hardware is idle and lock it */
+ if (test_and_set_bit(0, &dev->hw_lock) != 0) {
+ /* This is perfectly ok, the scheduled ctx should wait */
+ mfc_debug(1, "Couldn't lock HW.\n");
+ return;
+ }
+
+ /* Choose the context to run */
+ new_ctx = s5p_mfc_get_new_ctx(dev);
+ if (new_ctx < 0) {
+ /* No contexts to run */
+ if (test_and_clear_bit(0, &dev->hw_lock) == 0) {
+ mfc_err("Failed to unlock hardware.\n");
+ return;
+ }
+
+ mfc_debug(1, "No ctx is scheduled to be run.\n");
+ return;
+ }
+
+ mfc_debug(1, "New context: %d\n", new_ctx);
+ ctx = dev->ctx[new_ctx];
+	mfc_debug(1, "Setting new context to %p\n", ctx);
+ /* Got context to run in ctx */
+ mfc_debug(1, "ctx->dst_queue_cnt=%d ctx->dpb_count=%d ctx->src_queue_cnt=%d\n",
+ ctx->dst_queue_cnt, ctx->dpb_count, ctx->src_queue_cnt);
+ mfc_debug(1, "ctx->state=%d\n", ctx->state);
+ /* Last frame has already been sent to MFC
+ * Now obtaining frames from MFC buffer */
+
+ s5p_mfc_clock_on();
+ if (ctx->type == MFCINST_DECODER) {
+ switch (ctx->state) {
+ case MFCINST_FINISHING:
+ s5p_mfc_run_dec_last_frames(ctx);
+ break;
+ case MFCINST_RUNNING:
+ ret = s5p_mfc_run_dec_frame(ctx);
+ break;
+ case MFCINST_INIT:
+ ret = s5p_mfc_open_inst_cmd(ctx);
+ break;
+ case MFCINST_RETURN_INST:
+ ret = s5p_mfc_close_inst_cmd(ctx);
+ break;
+ case MFCINST_GOT_INST:
+ s5p_mfc_run_init_dec(ctx);
+ break;
+ case MFCINST_HEAD_PARSED:
+ ret = s5p_mfc_run_init_dec_buffers(ctx);
+ break;
+ case MFCINST_RES_CHANGE_INIT:
+ s5p_mfc_run_dec_last_frames(ctx);
+ break;
+ case MFCINST_RES_CHANGE_FLUSH:
+ s5p_mfc_run_dec_last_frames(ctx);
+ break;
+ case MFCINST_RES_CHANGE_END:
+ mfc_debug(2, "Finished remaining frames after resolution change.\n");
+ ctx->capture_state = QUEUE_FREE;
+			mfc_debug(2, "Will re-init the codec.\n");
+ s5p_mfc_run_init_dec(ctx);
+ break;
+ default:
+ ret = -EAGAIN;
+ }
+ } else if (ctx->type == MFCINST_ENCODER) {
+ switch (ctx->state) {
+ case MFCINST_FINISHING:
+ case MFCINST_RUNNING:
+ ret = s5p_mfc_run_enc_frame(ctx);
+ break;
+ case MFCINST_INIT:
+ ret = s5p_mfc_open_inst_cmd(ctx);
+ break;
+ case MFCINST_RETURN_INST:
+ ret = s5p_mfc_close_inst_cmd(ctx);
+ break;
+ case MFCINST_GOT_INST:
+ s5p_mfc_run_init_enc(ctx);
+ break;
+ case MFCINST_HEAD_PARSED: /* Only for MFC6.x */
+ ret = s5p_mfc_run_init_enc_buffers(ctx);
+ break;
+ default:
+ ret = -EAGAIN;
+ }
+ } else {
+ mfc_err("invalid context type: %d\n", ctx->type);
+ ret = -EAGAIN;
+ }
+
+ if (ret) {
+ /* Free hardware lock */
+ if (test_and_clear_bit(0, &dev->hw_lock) == 0)
+ mfc_err("Failed to unlock hardware.\n");
+
+		/* This is indeed important: as no operation has been
+		 * scheduled, reduce the clock count here; no one else will
+		 * ever do it, because no interrupt related to this try_run
+		 * will ever come from the hardware. */
+ s5p_mfc_clock_off();
+ }
+}
+
+
+void s5p_mfc_cleanup_queue(struct list_head *lh, struct vb2_queue *vq)
+{
+ struct s5p_mfc_buf *b;
+ int i;
+
+ while (!list_empty(lh)) {
+ b = list_entry(lh->next, struct s5p_mfc_buf, list);
+ for (i = 0; i < b->b->num_planes; i++)
+ vb2_set_plane_payload(b->b, i, 0);
+ vb2_buffer_done(b->b, VB2_BUF_STATE_ERROR);
+ list_del(&b->list);
+ }
+}
+
+void s5p_mfc_write_info(struct s5p_mfc_ctx *ctx, unsigned int data, unsigned int ofs)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+
+ s5p_mfc_clock_on();
+ WRITEL(data, ofs);
+ s5p_mfc_clock_off();
+}
+
+unsigned int s5p_mfc_read_info(struct s5p_mfc_ctx *ctx, unsigned int ofs)
+{
+ struct s5p_mfc_dev *dev = ctx->dev;
+ int ret;
+
+ s5p_mfc_clock_on();
+ ret = READL(ofs);
+ s5p_mfc_clock_off();
+
+ return ret;
+}
--- /dev/null
+/*
+ * drivers/media/video/s5p-mfc/s5p_mfc_opr_v6.h
+ *
+ * Header file for Samsung MFC (Multi Function Codec - FIMV) driver
+ * Contains declarations of hw related functions.
+ *
+ * Copyright (c) 2012 Samsung Electronics
+ * http://www.samsung.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef S5P_MFC_OPR_V6_H_
+#define S5P_MFC_OPR_V6_H_
+
+#include "s5p_mfc_common.h"
+
+#define MFC_CTRL_MODE_CUSTOM MFC_CTRL_MODE_SFR
+
+int s5p_mfc_init_decode(struct s5p_mfc_ctx *ctx);
+int s5p_mfc_init_encode(struct s5p_mfc_ctx *mfc_ctx);
+
+int s5p_mfc_set_dec_frame_buffer(struct s5p_mfc_ctx *ctx);
+int s5p_mfc_set_dec_stream_buffer(struct s5p_mfc_ctx *ctx, int buf_addr,
+ unsigned int start_num_byte,
+ unsigned int buf_size);
+
+void s5p_mfc_set_enc_frame_buffer(struct s5p_mfc_ctx *ctx,
+ unsigned long y_addr, unsigned long c_addr);
+int s5p_mfc_set_enc_stream_buffer(struct s5p_mfc_ctx *ctx,
+ unsigned long addr, unsigned int size);
+void s5p_mfc_get_enc_frame_buffer(struct s5p_mfc_ctx *ctx,
+ unsigned long *y_addr, unsigned long *c_addr);
+int s5p_mfc_set_enc_ref_buffer(struct s5p_mfc_ctx *mfc_ctx);
+
+int s5p_mfc_decode_one_frame(struct s5p_mfc_ctx *ctx, int last_frame);
+int s5p_mfc_encode_one_frame(struct s5p_mfc_ctx *mfc_ctx);
+
+/* Memory allocation */
+int s5p_mfc_alloc_dec_temp_buffers(struct s5p_mfc_ctx *ctx);
+void s5p_mfc_set_dec_desc_buffer(struct s5p_mfc_ctx *ctx);
+void s5p_mfc_release_dec_desc_buffer(struct s5p_mfc_ctx *ctx);
+
+int s5p_mfc_alloc_codec_buffers(struct s5p_mfc_ctx *ctx);
+void s5p_mfc_release_codec_buffers(struct s5p_mfc_ctx *ctx);
+
+int s5p_mfc_alloc_instance_buffer(struct s5p_mfc_ctx *ctx);
+void s5p_mfc_release_instance_buffer(struct s5p_mfc_ctx *ctx);
+int s5p_mfc_alloc_dev_context_buffer(struct s5p_mfc_dev *dev);
+void s5p_mfc_release_dev_context_buffer(struct s5p_mfc_dev *dev);
+
+void s5p_mfc_dec_calc_dpb_size(struct s5p_mfc_ctx *ctx);
+void s5p_mfc_enc_calc_src_size(struct s5p_mfc_ctx *ctx);
+
+#define s5p_mfc_get_dspl_y_adr() (readl(dev->regs_base + \
+ S5P_FIMV_SI_DISPLAY_Y_ADR) )
+#define s5p_mfc_get_dec_y_adr() (readl(dev->regs_base + \
+ S5P_FIMV_D_DISPLAY_LUMA_ADDR) )
+#define s5p_mfc_get_dspl_status() readl(dev->regs_base + \
+ S5P_FIMV_D_DISPLAY_STATUS)
+#define s5p_mfc_get_decoded_status() readl(dev->regs_base + \
+ S5P_FIMV_D_DECODED_STATUS)
+#define s5p_mfc_get_dec_frame_type() (readl(dev->regs_base + \
+ S5P_FIMV_D_DECODED_FRAME_TYPE) \
+ & S5P_FIMV_DECODE_FRAME_MASK)
+#define s5p_mfc_get_disp_frame_type() (readl(ctx->dev->regs_base + \
+ S5P_FIMV_D_DISPLAY_FRAME_TYPE) \
+ & S5P_FIMV_DECODE_FRAME_MASK)
+#define s5p_mfc_get_consumed_stream() readl(dev->regs_base + \
+ S5P_FIMV_D_DECODED_NAL_SIZE)
+#define s5p_mfc_get_int_reason() (readl(dev->regs_base + \
+ S5P_FIMV_RISC2HOST_CMD) & \
+ S5P_FIMV_RISC2HOST_CMD_MASK)
+#define s5p_mfc_get_int_err() readl(dev->regs_base + \
+ S5P_FIMV_ERROR_CODE)
+#define s5p_mfc_err_dec(x) (((x) & S5P_FIMV_ERR_DEC_MASK) >> \
+ S5P_FIMV_ERR_DEC_SHIFT)
+#define s5p_mfc_err_dspl(x) (((x) & S5P_FIMV_ERR_DSPL_MASK) >> \
+ S5P_FIMV_ERR_DSPL_SHIFT)
+#define s5p_mfc_get_img_width() readl(dev->regs_base + \
+ S5P_FIMV_D_DISPLAY_FRAME_WIDTH)
+#define s5p_mfc_get_img_height() readl(dev->regs_base + \
+ S5P_FIMV_D_DISPLAY_FRAME_HEIGHT)
+#define s5p_mfc_get_dpb_count() readl(dev->regs_base + \
+ S5P_FIMV_D_MIN_NUM_DPB)
+#define s5p_mfc_get_mv_count() readl(dev->regs_base + \
+ S5P_FIMV_D_MIN_NUM_MV)
+#define s5p_mfc_get_inst_no() readl(dev->regs_base + \
+ S5P_FIMV_RET_INSTANCE_ID)
+#define s5p_mfc_get_enc_dpb_count() readl(dev->regs_base + \
+ S5P_FIMV_E_NUM_DPB)
+#define s5p_mfc_get_enc_strm_size() readl(dev->regs_base + \
+ S5P_FIMV_E_STREAM_SIZE)
+#define s5p_mfc_get_enc_slice_type() readl(dev->regs_base + \
+ S5P_FIMV_E_SLICE_TYPE)
+#define s5p_mfc_get_enc_pic_count() readl(dev->regs_base + \
+ S5P_FIMV_E_PICTURE_COUNT)
+#define s5p_mfc_get_sei_avail_status() readl(dev->regs_base + \
+ S5P_FIMV_D_FRAME_PACK_SEI_AVAIL)
+#define s5p_mfc_get_mvc_num_views() readl(dev->regs_base + \
+ S5P_FIMV_D_MVC_NUM_VIEWS)
+#define s5p_mfc_get_mvc_view_id() readl(dev->regs_base + \
+ S5P_FIMV_D_MVC_VIEW_ID)
+
+#define mb_width(x_size)	(((x_size) + 15) / 16)
+#define mb_height(y_size)	(((y_size) + 15) / 16)
+#define s5p_mfc_dec_mv_size(x, y) (mb_width(x) * (((mb_height(y)+1)/2)*2) * 64 + 128)
+
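
Worked through for a 1920x1088 stream (illustrative only): mb_width(1920) = 120 and mb_height(1088) = 68; 68 is already even, so the inner rounding leaves it unchanged, and s5p_mfc_dec_mv_size(1920, 1088) = 120 * 68 * 64 + 128 = 522368 bytes per motion-vector buffer. Presumably this is 64 bytes of MV data per macroblock plus a small fixed overhead, though the patch does not document the split.
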
+/* Definition */
+#define ENC_MULTI_SLICE_MB_MAX ((1 << 30) - 1)
+#define ENC_MULTI_SLICE_BIT_MIN 2800
+#define ENC_INTRA_REFRESH_MB_MAX ((1 << 18) - 1)
+#define ENC_VBV_BUF_SIZE_MAX ((1 << 30) - 1)
+#define ENC_H264_LOOP_FILTER_AB_MIN	(-12)
+#define ENC_H264_LOOP_FILTER_AB_MAX 12
+#define ENC_H264_RC_FRAME_RATE_MAX ((1 << 16) - 1)
+#define ENC_H263_RC_FRAME_RATE_MAX ((1 << 16) - 1)
+#define ENC_H264_PROFILE_MAX 3
+#define ENC_H264_LEVEL_MAX 42
+#define ENC_MPEG4_VOP_TIME_RES_MAX ((1 << 16) - 1)
+#define FRAME_DELTA_H264_H263 1
+#define TIGHT_CBR_MAX 10
+
+/* Definitions for shared memory compatibility */
+#define PIC_TIME_TOP S5P_FIMV_D_RET_PICTURE_TAG_TOP
+#define PIC_TIME_BOT S5P_FIMV_D_RET_PICTURE_TAG_BOT
+#define CROP_INFO_H S5P_FIMV_D_DISPLAY_CROP_INFO1
+#define CROP_INFO_V S5P_FIMV_D_DISPLAY_CROP_INFO2
+
+void s5p_mfc_try_run(struct s5p_mfc_dev *dev);
+
+void s5p_mfc_cleanup_queue(struct list_head *lh, struct vb2_queue *vq);
+
+void s5p_mfc_write_info(struct s5p_mfc_ctx *ctx, unsigned int data, unsigned int ofs);
+unsigned int s5p_mfc_read_info(struct s5p_mfc_ctx *ctx, unsigned int ofs);
+
+#endif /* S5P_MFC_OPR_V6_H_ */
#include "s5p_mfc_debug.h"
#include "s5p_mfc_pm.h"
-#define MFC_CLKNAME "sclk_mfc"
+#ifdef CONFIG_VIDEO_SAMSUNG_S5P_MFC_V5
+#define MFC_CLKNAME "sclk_mfc"
+#elif defined(CONFIG_VIDEO_SAMSUNG_S5P_MFC_V6)
+#define MFC_CLKNAME "aclk_333"
+#endif
#define MFC_GATE_CLK_NAME "mfc"
#define CLK_DEBUG
{
struct s5p_mfc_dev *dev = ctx->dev;
void *shm_alloc_ctx = dev->alloc_ctx[MFC_BANK1_ALLOC_CTX];
+ struct s5p_mfc_buf_size_v5 *buf_size = dev->variant->buf_size->priv;
- ctx->shm_alloc = vb2_dma_contig_memops.alloc(shm_alloc_ctx,
- SHARED_BUF_SIZE);
- if (IS_ERR(ctx->shm_alloc)) {
+ ctx->shm.alloc = vb2_dma_contig_memops.alloc(shm_alloc_ctx,
+ buf_size->shm);
+ if (IS_ERR(ctx->shm.alloc)) {
mfc_err("failed to allocate shared memory\n");
- return PTR_ERR(ctx->shm_alloc);
+ return PTR_ERR(ctx->shm.alloc);
}
- /* shm_ofs only keeps the offset from base (port a) */
- ctx->shm_ofs = s5p_mfc_mem_cookie(shm_alloc_ctx, ctx->shm_alloc)
+ /* shared memory offset only keeps the offset from base (port a) */
+ ctx->shm.ofs = s5p_mfc_mem_cookie(shm_alloc_ctx, ctx->shm.alloc)
- dev->bank1;
- BUG_ON(ctx->shm_ofs & ((1 << MFC_BANK1_ALIGN_ORDER) - 1));
- ctx->shm = vb2_dma_contig_memops.vaddr(ctx->shm_alloc);
- if (!ctx->shm) {
- vb2_dma_contig_memops.put(ctx->shm_alloc);
- ctx->shm_ofs = 0;
- ctx->shm_alloc = NULL;
+ BUG_ON(ctx->shm.ofs & ((1 << MFC_BANK1_ALIGN_ORDER) - 1));
+ ctx->shm.virt = vb2_dma_contig_memops.vaddr(ctx->shm.alloc);
+ if (!ctx->shm.virt) {
+ vb2_dma_contig_memops.put(ctx->shm.alloc);
+ ctx->shm.alloc = NULL;
+ ctx->shm.ofs = 0;
mfc_err("failed to virt addr of shared memory\n");
return -ENOMEM;
}
- memset((void *)ctx->shm, 0, SHARED_BUF_SIZE);
+ memset((void *)ctx->shm.virt, 0, buf_size->shm);
wmb();
return 0;
}
DBG_HISTORY_INPUT1 = 0xD4, /* C */
DBG_HISTORY_OUTPUT = 0xD8, /* C */
HIERARCHICAL_P_QP = 0xE0, /* E, H.264 */
+ FRAME_PACK_SEI_ENABLE = 0x168, /* C */
+ FRAME_PACK_SEI_AVAIL = 0x16c, /* D */
+ FRAME_PACK_SEI_INFO = 0x17c, /* E */
};
int s5p_mfc_init_shm(struct s5p_mfc_ctx *ctx);
-#define s5p_mfc_write_shm(ctx, x, ofs) \
- do { \
- writel(x, (ctx->shm + ofs)); \
- wmb(); \
+#define s5p_mfc_write_shm(ctx, x, ofs) \
+ do { \
+ writel(x, (ctx->shm.virt + ofs)); \
+ wmb(); \
} while (0)
static inline u32 s5p_mfc_read_shm(struct s5p_mfc_ctx *ctx, unsigned int ofs)
{
rmb();
- return readl(ctx->shm + ofs);
+ return readl(ctx->shm.virt + ofs);
}
#endif /* S5P_MFC_SHM_H_ */
bool "Samsung TV driver for S5P platform (experimental)"
depends on PLAT_S5P && PM_RUNTIME
depends on EXPERIMENTAL
+ select DMA_SHARED_BUFFER
default n
---help---
Say Y here to enable selecting the TV output devices for
return vb2_dqbuf(&layer->vb_queue, p, file->f_flags & O_NONBLOCK);
}
+static int mxr_expbuf(struct file *file, void *priv,
+ struct v4l2_exportbuffer *eb)
+{
+ struct mxr_layer *layer = video_drvdata(file);
+
+ mxr_dbg(layer->mdev, "%s:%d\n", __func__, __LINE__);
+ return vb2_expbuf(&layer->vb_queue, eb);
+}
+
static int mxr_streamon(struct file *file, void *priv, enum v4l2_buf_type i)
{
struct mxr_layer *layer = video_drvdata(file);
.vidioc_querybuf = mxr_querybuf,
.vidioc_qbuf = mxr_qbuf,
.vidioc_dqbuf = mxr_dqbuf,
+ .vidioc_expbuf = mxr_expbuf,
/* Streaming control */
.vidioc_streamon = mxr_streamon,
.vidioc_streamoff = mxr_streamoff,
layer->vb_queue = (struct vb2_queue) {
.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
- .io_modes = VB2_MMAP | VB2_USERPTR,
+ .io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF,
.drv_priv = layer,
.buf_struct_size = sizeof(struct mxr_buffer),
.ops = &mxr_video_qops,
{ 1920, 1080, "1080p@30" }, /* V4L2_DV_1080P30 */
{ 1920, 1080, "1080p@50" }, /* V4L2_DV_1080P50 */
{ 1920, 1080, "1080p@60" }, /* V4L2_DV_1080P60 */
+ { 720, 480, "480p@60" }, /* V4L2_DV_480P60 */
+ { 1920, 1080, "1080i@59.94" }, /* V4L2_DV_1080I59_94 */
+ { 1920, 1080, "1080p@59.94" }, /* V4L2_DV_1080P59_94 */
+ { 1280, 720, "720p@60_fp" }, /* V4L2_DV_720P60_FP */
+ { 1280, 720, "720p@60_sb_half" },/* V4L2_DV_720P60_SB_HALF */
+ { 1280, 720, "720p@60_tb" }, /* V4L2_DV_720P60_TB */
+ { 1280, 720, "720p@59_94_fp" }, /* V4L2_DV_720P59_94_FP */
+ { 1280, 720, "720p@59_94_sb_half" },
+ /* V4L2_DV_720P59_94_SB_HALF */
+ { 1280, 720, "720p@59_94_tb" }, /* V4L2_DV_720P59_94_TB */
+ { 1280, 720, "720p@50_fp" }, /* V4L2_DV_720P50_FP */
+ { 1280, 720, "720p@50_sb_half" }, /* V4L2_DV_720P50_SB_HALF */
+ { 1280, 720, "720p@50_tb" }, /* V4L2_DV_720P50_TB */
+ { 1920, 1080, "1080p@24_fp" }, /* V4L2_DV_1080P24_FP */
+ { 1920, 1080, "1080p@24_sb_half" },/* V4L2_DV_1080P24_SB_HALF */
+ { 1920, 1080, "1080p@24_tb" }, /* V4L2_DV_1080P24_TB */
+ { 1920, 1080, "1080p@23_98_fp" },
+ /* V4L2_DV_1080P23_98_FP */
+ { 1920, 1080, "1080p@23_98_sb_half" },
+ /* V4L2_DV_1080P23_98_SB_HALF */
+ { 1920, 1080, "1080p@23_98_tb" },/* V4L2_DV_1080P23_98_TB */
+ { 1920, 1080, "1080i@60_sb_half" },/* V4L2_DV_1080I60_SB_HALF */
+ { 1920, 1080, "1080i@59_94_sb_half" },
+ /* V4L2_DV_1080I59_94_SB_HALF */
+ { 1920, 1080, "1080i@50_sb_half" },/* V4L2_DV_1080I50_SB_HALF */
+ { 1920, 1080, "1080p@60_sb_half" },
+ /* V4L2_DV_1080P60_SB_HALF */
+ { 1920, 1080, "1080p@60_tb" }, /* V4L2_DV_1080P60_TB */
+ { 1920, 1080, "1080p@30_fp" }, /* V4L2_DV_1080P30_FP */
+ { 1920, 1080, "1080p@30_sb_half" },/* V4L2_DV_1080P30_SB_HALF */
+ { 1920, 1080, "1080p@30_tb" }, /* V4L2_DV_1080P30_TB */
};
if (info == NULL || preset >= ARRAY_SIZE(dv_presets))
up_pln = compat_ptr(p);
if (put_user((unsigned long)up_pln, &up->m.userptr))
return -EFAULT;
+ } else if (memory == V4L2_MEMORY_DMABUF) {
+ if (copy_in_user(&up->m.fd, &up32->m.fd, sizeof(int)))
+ return -EFAULT;
} else {
if (copy_in_user(&up->m.mem_offset, &up32->m.mem_offset,
sizeof(__u32)))
if (copy_in_user(&up32->m.mem_offset, &up->m.mem_offset,
sizeof(__u32)))
return -EFAULT;
+ /* For DMABUF, driver might've set up the fd, so copy it back. */
+ if (memory == V4L2_MEMORY_DMABUF)
+ if (copy_in_user(&up32->m.fd, &up->m.fd,
+ sizeof(int)))
+ return -EFAULT;
return 0;
}
if (get_user(kp->m.offset, &up->m.offset))
return -EFAULT;
break;
+ case V4L2_MEMORY_DMABUF:
+ if (get_user(kp->m.fd, &up->m.fd))
+ return -EFAULT;
+ break;
}
}
if (put_user(kp->m.offset, &up->m.offset))
return -EFAULT;
break;
+ case V4L2_MEMORY_DMABUF:
+ if (put_user(kp->m.fd, &up->m.fd))
+ return -EFAULT;
+ break;
}
}
case VIDIOC_S_FBUF32:
case VIDIOC_OVERLAY32:
case VIDIOC_QBUF32:
+ case VIDIOC_EXPBUF:
case VIDIOC_DQBUF32:
case VIDIOC_STREAMON32:
case VIDIOC_STREAMOFF32:
case V4L2_CID_MIN_BUFFERS_FOR_CAPTURE: return "Min Number of Capture Buffers";
case V4L2_CID_MIN_BUFFERS_FOR_OUTPUT: return "Min Number of Output Buffers";
case V4L2_CID_ALPHA_COMPONENT: return "Alpha Component";
-
+ case V4L2_CID_CODEC_DISPLAY_STATUS: return "Display Status";
/* MPEG controls */
/* Keep the order of the 'case's the same as in videodev2.h! */
case V4L2_CID_MPEG_CLASS: return "MPEG Encoder Controls";
break;
case V4L2_CID_MIN_BUFFERS_FOR_CAPTURE:
case V4L2_CID_MIN_BUFFERS_FOR_OUTPUT:
+ case V4L2_CID_CODEC_DISPLAY_STATUS:
*type = V4L2_CTRL_TYPE_INTEGER;
*flags |= V4L2_CTRL_FLAG_READ_ONLY;
break;
int ret = -ENODEV;
if (vdev->fops->unlocked_ioctl) {
- if (vdev->lock && mutex_lock_interruptible(vdev->lock))
- return -ERESTARTSYS;
+ bool locked = false;
+
+ if (vdev->lock) {
+ /* always lock unless the cmd is marked as "don't use lock" */
+ locked = !v4l2_is_known_ioctl(cmd) ||
+ !test_bit(_IOC_NR(cmd), vdev->dont_use_lock);
+
+ if (locked && mutex_lock_interruptible(vdev->lock))
+ return -ERESTARTSYS;
+ }
if (video_is_registered(vdev))
ret = vdev->fops->unlocked_ioctl(filp, cmd, arg);
- if (vdev->lock)
+ if (locked)
mutex_unlock(vdev->lock);
} else if (vdev->fops->ioctl) {
/* This code path is a replacement for the BKL. It is a major
return find_first_zero_bit(used, VIDEO_NUM_DEVICES);
}
+#define SET_VALID_IOCTL(ops, cmd, op) \
+ if (ops->op) \
+ set_bit(_IOC_NR(cmd), valid_ioctls)
+
+/* This determines which ioctls are actually implemented in the driver.
+ It's a one-time thing which simplifies video_ioctl2 as it can just do
+ a bit test.
+
+ Note that drivers can override this by setting bits to 1 in
+ vdev->valid_ioctls. If an ioctl is marked as 1 when this function is
+ called, then that ioctl will actually be marked as unimplemented.
+
+   It does that by first setting up the local valid_ioctls bitmap, and
+   at the end doing:
+
+ vdev->valid_ioctls = valid_ioctls & ~(vdev->valid_ioctls)
+ */
+static void determine_valid_ioctls(struct video_device *vdev)
+{
+ DECLARE_BITMAP(valid_ioctls, BASE_VIDIOC_PRIVATE);
+ const struct v4l2_ioctl_ops *ops = vdev->ioctl_ops;
+
+ bitmap_zero(valid_ioctls, BASE_VIDIOC_PRIVATE);
+
+ SET_VALID_IOCTL(ops, VIDIOC_QUERYCAP, vidioc_querycap);
+ if (ops->vidioc_g_priority ||
+ test_bit(V4L2_FL_USE_FH_PRIO, &vdev->flags))
+ set_bit(_IOC_NR(VIDIOC_G_PRIORITY), valid_ioctls);
+ if (ops->vidioc_s_priority ||
+ test_bit(V4L2_FL_USE_FH_PRIO, &vdev->flags))
+ set_bit(_IOC_NR(VIDIOC_S_PRIORITY), valid_ioctls);
+ if (ops->vidioc_enum_fmt_vid_cap ||
+ ops->vidioc_enum_fmt_vid_out ||
+ ops->vidioc_enum_fmt_vid_cap_mplane ||
+ ops->vidioc_enum_fmt_vid_out_mplane ||
+ ops->vidioc_enum_fmt_vid_overlay ||
+ ops->vidioc_enum_fmt_type_private)
+ set_bit(_IOC_NR(VIDIOC_ENUM_FMT), valid_ioctls);
+ if (ops->vidioc_g_fmt_vid_cap ||
+ ops->vidioc_g_fmt_vid_out ||
+ ops->vidioc_g_fmt_vid_cap_mplane ||
+ ops->vidioc_g_fmt_vid_out_mplane ||
+ ops->vidioc_g_fmt_vid_overlay ||
+ ops->vidioc_g_fmt_vbi_cap ||
+ ops->vidioc_g_fmt_vid_out_overlay ||
+ ops->vidioc_g_fmt_vbi_out ||
+ ops->vidioc_g_fmt_sliced_vbi_cap ||
+ ops->vidioc_g_fmt_sliced_vbi_out ||
+ ops->vidioc_g_fmt_type_private)
+ set_bit(_IOC_NR(VIDIOC_G_FMT), valid_ioctls);
+ if (ops->vidioc_s_fmt_vid_cap ||
+ ops->vidioc_s_fmt_vid_out ||
+ ops->vidioc_s_fmt_vid_cap_mplane ||
+ ops->vidioc_s_fmt_vid_out_mplane ||
+ ops->vidioc_s_fmt_vid_overlay ||
+ ops->vidioc_s_fmt_vbi_cap ||
+ ops->vidioc_s_fmt_vid_out_overlay ||
+ ops->vidioc_s_fmt_vbi_out ||
+ ops->vidioc_s_fmt_sliced_vbi_cap ||
+ ops->vidioc_s_fmt_sliced_vbi_out ||
+ ops->vidioc_s_fmt_type_private)
+ set_bit(_IOC_NR(VIDIOC_S_FMT), valid_ioctls);
+ if (ops->vidioc_try_fmt_vid_cap ||
+ ops->vidioc_try_fmt_vid_out ||
+ ops->vidioc_try_fmt_vid_cap_mplane ||
+ ops->vidioc_try_fmt_vid_out_mplane ||
+ ops->vidioc_try_fmt_vid_overlay ||
+ ops->vidioc_try_fmt_vbi_cap ||
+ ops->vidioc_try_fmt_vid_out_overlay ||
+ ops->vidioc_try_fmt_vbi_out ||
+ ops->vidioc_try_fmt_sliced_vbi_cap ||
+ ops->vidioc_try_fmt_sliced_vbi_out ||
+ ops->vidioc_try_fmt_type_private)
+ set_bit(_IOC_NR(VIDIOC_TRY_FMT), valid_ioctls);
+ SET_VALID_IOCTL(ops, VIDIOC_REQBUFS, vidioc_reqbufs);
+ SET_VALID_IOCTL(ops, VIDIOC_QUERYBUF, vidioc_querybuf);
+ SET_VALID_IOCTL(ops, VIDIOC_QBUF, vidioc_qbuf);
+ SET_VALID_IOCTL(ops, VIDIOC_EXPBUF, vidioc_expbuf);
+ SET_VALID_IOCTL(ops, VIDIOC_DQBUF, vidioc_dqbuf);
+ SET_VALID_IOCTL(ops, VIDIOC_OVERLAY, vidioc_overlay);
+ SET_VALID_IOCTL(ops, VIDIOC_G_FBUF, vidioc_g_fbuf);
+ SET_VALID_IOCTL(ops, VIDIOC_S_FBUF, vidioc_s_fbuf);
+ SET_VALID_IOCTL(ops, VIDIOC_STREAMON, vidioc_streamon);
+ SET_VALID_IOCTL(ops, VIDIOC_STREAMOFF, vidioc_streamoff);
+ if (vdev->tvnorms)
+ set_bit(_IOC_NR(VIDIOC_ENUMSTD), valid_ioctls);
+ if (ops->vidioc_g_std || vdev->current_norm)
+ set_bit(_IOC_NR(VIDIOC_G_STD), valid_ioctls);
+ SET_VALID_IOCTL(ops, VIDIOC_S_STD, vidioc_s_std);
+ SET_VALID_IOCTL(ops, VIDIOC_QUERYSTD, vidioc_querystd);
+ SET_VALID_IOCTL(ops, VIDIOC_ENUMINPUT, vidioc_enum_input);
+ SET_VALID_IOCTL(ops, VIDIOC_G_INPUT, vidioc_g_input);
+ SET_VALID_IOCTL(ops, VIDIOC_S_INPUT, vidioc_s_input);
+ SET_VALID_IOCTL(ops, VIDIOC_ENUMOUTPUT, vidioc_enum_output);
+ SET_VALID_IOCTL(ops, VIDIOC_G_OUTPUT, vidioc_g_output);
+ SET_VALID_IOCTL(ops, VIDIOC_S_OUTPUT, vidioc_s_output);
+ /* Note: the control handler can also be passed through the filehandle,
+ and that can't be tested here. If the bit for these control ioctls
+ is set, then the ioctl is valid. But if it is 0, then it can still
+ be valid if the filehandle passed the control handler. */
+ if (vdev->ctrl_handler || ops->vidioc_queryctrl)
+ set_bit(_IOC_NR(VIDIOC_QUERYCTRL), valid_ioctls);
+ if (vdev->ctrl_handler || ops->vidioc_g_ctrl || ops->vidioc_g_ext_ctrls)
+ set_bit(_IOC_NR(VIDIOC_G_CTRL), valid_ioctls);
+ if (vdev->ctrl_handler || ops->vidioc_s_ctrl || ops->vidioc_s_ext_ctrls)
+ set_bit(_IOC_NR(VIDIOC_S_CTRL), valid_ioctls);
+ if (vdev->ctrl_handler || ops->vidioc_g_ext_ctrls)
+ set_bit(_IOC_NR(VIDIOC_G_EXT_CTRLS), valid_ioctls);
+ if (vdev->ctrl_handler || ops->vidioc_s_ext_ctrls)
+ set_bit(_IOC_NR(VIDIOC_S_EXT_CTRLS), valid_ioctls);
+ if (vdev->ctrl_handler || ops->vidioc_try_ext_ctrls)
+ set_bit(_IOC_NR(VIDIOC_TRY_EXT_CTRLS), valid_ioctls);
+ if (vdev->ctrl_handler || ops->vidioc_querymenu)
+ set_bit(_IOC_NR(VIDIOC_QUERYMENU), valid_ioctls);
+ SET_VALID_IOCTL(ops, VIDIOC_ENUMAUDIO, vidioc_enumaudio);
+ SET_VALID_IOCTL(ops, VIDIOC_G_AUDIO, vidioc_g_audio);
+ SET_VALID_IOCTL(ops, VIDIOC_S_AUDIO, vidioc_s_audio);
+ SET_VALID_IOCTL(ops, VIDIOC_ENUMAUDOUT, vidioc_enumaudout);
+ SET_VALID_IOCTL(ops, VIDIOC_G_AUDOUT, vidioc_g_audout);
+ SET_VALID_IOCTL(ops, VIDIOC_S_AUDOUT, vidioc_s_audout);
+ SET_VALID_IOCTL(ops, VIDIOC_G_MODULATOR, vidioc_g_modulator);
+ SET_VALID_IOCTL(ops, VIDIOC_S_MODULATOR, vidioc_s_modulator);
+ if (ops->vidioc_g_crop || ops->vidioc_g_selection)
+ set_bit(_IOC_NR(VIDIOC_G_CROP), valid_ioctls);
+ if (ops->vidioc_s_crop || ops->vidioc_s_selection)
+ set_bit(_IOC_NR(VIDIOC_S_CROP), valid_ioctls);
+ SET_VALID_IOCTL(ops, VIDIOC_G_SELECTION, vidioc_g_selection);
+ SET_VALID_IOCTL(ops, VIDIOC_S_SELECTION, vidioc_s_selection);
+ if (ops->vidioc_cropcap || ops->vidioc_g_selection)
+ set_bit(_IOC_NR(VIDIOC_CROPCAP), valid_ioctls);
+ SET_VALID_IOCTL(ops, VIDIOC_G_JPEGCOMP, vidioc_g_jpegcomp);
+ SET_VALID_IOCTL(ops, VIDIOC_S_JPEGCOMP, vidioc_s_jpegcomp);
+ SET_VALID_IOCTL(ops, VIDIOC_G_ENC_INDEX, vidioc_g_enc_index);
+ SET_VALID_IOCTL(ops, VIDIOC_ENCODER_CMD, vidioc_encoder_cmd);
+ SET_VALID_IOCTL(ops, VIDIOC_TRY_ENCODER_CMD, vidioc_try_encoder_cmd);
+ SET_VALID_IOCTL(ops, VIDIOC_DECODER_CMD, vidioc_decoder_cmd);
+ SET_VALID_IOCTL(ops, VIDIOC_TRY_DECODER_CMD, vidioc_try_decoder_cmd);
+ if (ops->vidioc_g_parm || vdev->current_norm)
+ set_bit(_IOC_NR(VIDIOC_G_PARM), valid_ioctls);
+ SET_VALID_IOCTL(ops, VIDIOC_S_PARM, vidioc_s_parm);
+ SET_VALID_IOCTL(ops, VIDIOC_G_TUNER, vidioc_g_tuner);
+ SET_VALID_IOCTL(ops, VIDIOC_S_TUNER, vidioc_s_tuner);
+ SET_VALID_IOCTL(ops, VIDIOC_G_FREQUENCY, vidioc_g_frequency);
+ SET_VALID_IOCTL(ops, VIDIOC_S_FREQUENCY, vidioc_s_frequency);
+ SET_VALID_IOCTL(ops, VIDIOC_G_SLICED_VBI_CAP, vidioc_g_sliced_vbi_cap);
+ SET_VALID_IOCTL(ops, VIDIOC_LOG_STATUS, vidioc_log_status);
+#ifdef CONFIG_VIDEO_ADV_DEBUG
+ SET_VALID_IOCTL(ops, VIDIOC_DBG_G_REGISTER, vidioc_g_register);
+ SET_VALID_IOCTL(ops, VIDIOC_DBG_S_REGISTER, vidioc_s_register);
+#endif
+ SET_VALID_IOCTL(ops, VIDIOC_DBG_G_CHIP_IDENT, vidioc_g_chip_ident);
+ SET_VALID_IOCTL(ops, VIDIOC_S_HW_FREQ_SEEK, vidioc_s_hw_freq_seek);
+ SET_VALID_IOCTL(ops, VIDIOC_ENUM_FRAMESIZES, vidioc_enum_framesizes);
+ SET_VALID_IOCTL(ops, VIDIOC_ENUM_FRAMEINTERVALS, vidioc_enum_frameintervals);
+ SET_VALID_IOCTL(ops, VIDIOC_ENUM_DV_PRESETS, vidioc_enum_dv_presets);
+ SET_VALID_IOCTL(ops, VIDIOC_S_DV_PRESET, vidioc_s_dv_preset);
+ SET_VALID_IOCTL(ops, VIDIOC_G_DV_PRESET, vidioc_g_dv_preset);
+ SET_VALID_IOCTL(ops, VIDIOC_QUERY_DV_PRESET, vidioc_query_dv_preset);
+ SET_VALID_IOCTL(ops, VIDIOC_S_DV_TIMINGS, vidioc_s_dv_timings);
+ SET_VALID_IOCTL(ops, VIDIOC_G_DV_TIMINGS, vidioc_g_dv_timings);
+	/* yes, really vidioc_subscribe_event: without the ability to
+	   subscribe to events, DQEVENT can never return anything */
+ SET_VALID_IOCTL(ops, VIDIOC_DQEVENT, vidioc_subscribe_event);
+ SET_VALID_IOCTL(ops, VIDIOC_SUBSCRIBE_EVENT, vidioc_subscribe_event);
+ SET_VALID_IOCTL(ops, VIDIOC_UNSUBSCRIBE_EVENT, vidioc_unsubscribe_event);
+ SET_VALID_IOCTL(ops, VIDIOC_CREATE_BUFS, vidioc_create_bufs);
+ SET_VALID_IOCTL(ops, VIDIOC_PREPARE_BUF, vidioc_prepare_buf);
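+	/* Bits already set in vdev->valid_ioctls are treated as ioctls the
+	 * driver wants blocked: bitmap_andnot() clears them from the
+	 * computed set before storing the result. */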
+ bitmap_andnot(vdev->valid_ioctls, valid_ioctls, vdev->valid_ioctls,
+ BASE_VIDIOC_PRIVATE);
+}
+
/**
* __video_register_device - register video4linux devices
* @vdev: video device structure we want to register
vdev->index = get_index(vdev);
mutex_unlock(&videodev_lock);
+ if (vdev->ioctl_ops)
+ determine_valid_ioctls(vdev);
+
/* Part 3: Initialize the character device */
vdev->cdev = cdev_alloc();
if (vdev->cdev == NULL) {
memset((u8 *)(p) + offsetof(typeof(*(p)), field) + sizeof((p)->field), \
0, sizeof(*(p)) - offsetof(typeof(*(p)), field) - sizeof((p)->field))
-#define have_fmt_ops(foo) ( \
- ops->vidioc_##foo##_fmt_vid_cap || \
- ops->vidioc_##foo##_fmt_vid_out || \
- ops->vidioc_##foo##_fmt_vid_cap_mplane || \
- ops->vidioc_##foo##_fmt_vid_out_mplane || \
- ops->vidioc_##foo##_fmt_vid_overlay || \
- ops->vidioc_##foo##_fmt_vbi_cap || \
- ops->vidioc_##foo##_fmt_vid_out_overlay || \
- ops->vidioc_##foo##_fmt_vbi_out || \
- ops->vidioc_##foo##_fmt_sliced_vbi_cap || \
- ops->vidioc_##foo##_fmt_sliced_vbi_out || \
- ops->vidioc_##foo##_fmt_type_private)
-
struct std_descr {
v4l2_std_id std;
const char *descr;
[V4L2_MEMORY_MMAP] = "mmap",
[V4L2_MEMORY_USERPTR] = "userptr",
[V4L2_MEMORY_OVERLAY] = "overlay",
+ [V4L2_MEMORY_DMABUF] = "dmabuf",
};
#define prt_names(a, arr) ((((a) >= 0) && ((a) < ARRAY_SIZE(arr))) ? \
/* ------------------------------------------------------------------ */
/* debug help functions */
-static const char *v4l2_ioctls[] = {
- [_IOC_NR(VIDIOC_QUERYCAP)] = "VIDIOC_QUERYCAP",
- [_IOC_NR(VIDIOC_RESERVED)] = "VIDIOC_RESERVED",
- [_IOC_NR(VIDIOC_ENUM_FMT)] = "VIDIOC_ENUM_FMT",
- [_IOC_NR(VIDIOC_G_FMT)] = "VIDIOC_G_FMT",
- [_IOC_NR(VIDIOC_S_FMT)] = "VIDIOC_S_FMT",
- [_IOC_NR(VIDIOC_REQBUFS)] = "VIDIOC_REQBUFS",
- [_IOC_NR(VIDIOC_QUERYBUF)] = "VIDIOC_QUERYBUF",
- [_IOC_NR(VIDIOC_G_FBUF)] = "VIDIOC_G_FBUF",
- [_IOC_NR(VIDIOC_S_FBUF)] = "VIDIOC_S_FBUF",
- [_IOC_NR(VIDIOC_OVERLAY)] = "VIDIOC_OVERLAY",
- [_IOC_NR(VIDIOC_QBUF)] = "VIDIOC_QBUF",
- [_IOC_NR(VIDIOC_DQBUF)] = "VIDIOC_DQBUF",
- [_IOC_NR(VIDIOC_STREAMON)] = "VIDIOC_STREAMON",
- [_IOC_NR(VIDIOC_STREAMOFF)] = "VIDIOC_STREAMOFF",
- [_IOC_NR(VIDIOC_G_PARM)] = "VIDIOC_G_PARM",
- [_IOC_NR(VIDIOC_S_PARM)] = "VIDIOC_S_PARM",
- [_IOC_NR(VIDIOC_G_STD)] = "VIDIOC_G_STD",
- [_IOC_NR(VIDIOC_S_STD)] = "VIDIOC_S_STD",
- [_IOC_NR(VIDIOC_ENUMSTD)] = "VIDIOC_ENUMSTD",
- [_IOC_NR(VIDIOC_ENUMINPUT)] = "VIDIOC_ENUMINPUT",
- [_IOC_NR(VIDIOC_G_CTRL)] = "VIDIOC_G_CTRL",
- [_IOC_NR(VIDIOC_S_CTRL)] = "VIDIOC_S_CTRL",
- [_IOC_NR(VIDIOC_G_TUNER)] = "VIDIOC_G_TUNER",
- [_IOC_NR(VIDIOC_S_TUNER)] = "VIDIOC_S_TUNER",
- [_IOC_NR(VIDIOC_G_AUDIO)] = "VIDIOC_G_AUDIO",
- [_IOC_NR(VIDIOC_S_AUDIO)] = "VIDIOC_S_AUDIO",
- [_IOC_NR(VIDIOC_QUERYCTRL)] = "VIDIOC_QUERYCTRL",
- [_IOC_NR(VIDIOC_QUERYMENU)] = "VIDIOC_QUERYMENU",
- [_IOC_NR(VIDIOC_G_INPUT)] = "VIDIOC_G_INPUT",
- [_IOC_NR(VIDIOC_S_INPUT)] = "VIDIOC_S_INPUT",
- [_IOC_NR(VIDIOC_G_OUTPUT)] = "VIDIOC_G_OUTPUT",
- [_IOC_NR(VIDIOC_S_OUTPUT)] = "VIDIOC_S_OUTPUT",
- [_IOC_NR(VIDIOC_ENUMOUTPUT)] = "VIDIOC_ENUMOUTPUT",
- [_IOC_NR(VIDIOC_G_AUDOUT)] = "VIDIOC_G_AUDOUT",
- [_IOC_NR(VIDIOC_S_AUDOUT)] = "VIDIOC_S_AUDOUT",
- [_IOC_NR(VIDIOC_G_MODULATOR)] = "VIDIOC_G_MODULATOR",
- [_IOC_NR(VIDIOC_S_MODULATOR)] = "VIDIOC_S_MODULATOR",
- [_IOC_NR(VIDIOC_G_FREQUENCY)] = "VIDIOC_G_FREQUENCY",
- [_IOC_NR(VIDIOC_S_FREQUENCY)] = "VIDIOC_S_FREQUENCY",
- [_IOC_NR(VIDIOC_CROPCAP)] = "VIDIOC_CROPCAP",
- [_IOC_NR(VIDIOC_G_CROP)] = "VIDIOC_G_CROP",
- [_IOC_NR(VIDIOC_S_CROP)] = "VIDIOC_S_CROP",
- [_IOC_NR(VIDIOC_G_SELECTION)] = "VIDIOC_G_SELECTION",
- [_IOC_NR(VIDIOC_S_SELECTION)] = "VIDIOC_S_SELECTION",
- [_IOC_NR(VIDIOC_G_JPEGCOMP)] = "VIDIOC_G_JPEGCOMP",
- [_IOC_NR(VIDIOC_S_JPEGCOMP)] = "VIDIOC_S_JPEGCOMP",
- [_IOC_NR(VIDIOC_QUERYSTD)] = "VIDIOC_QUERYSTD",
- [_IOC_NR(VIDIOC_TRY_FMT)] = "VIDIOC_TRY_FMT",
- [_IOC_NR(VIDIOC_ENUMAUDIO)] = "VIDIOC_ENUMAUDIO",
- [_IOC_NR(VIDIOC_ENUMAUDOUT)] = "VIDIOC_ENUMAUDOUT",
- [_IOC_NR(VIDIOC_G_PRIORITY)] = "VIDIOC_G_PRIORITY",
- [_IOC_NR(VIDIOC_S_PRIORITY)] = "VIDIOC_S_PRIORITY",
- [_IOC_NR(VIDIOC_G_SLICED_VBI_CAP)] = "VIDIOC_G_SLICED_VBI_CAP",
- [_IOC_NR(VIDIOC_LOG_STATUS)] = "VIDIOC_LOG_STATUS",
- [_IOC_NR(VIDIOC_G_EXT_CTRLS)] = "VIDIOC_G_EXT_CTRLS",
- [_IOC_NR(VIDIOC_S_EXT_CTRLS)] = "VIDIOC_S_EXT_CTRLS",
- [_IOC_NR(VIDIOC_TRY_EXT_CTRLS)] = "VIDIOC_TRY_EXT_CTRLS",
-#if 1
- [_IOC_NR(VIDIOC_ENUM_FRAMESIZES)] = "VIDIOC_ENUM_FRAMESIZES",
- [_IOC_NR(VIDIOC_ENUM_FRAMEINTERVALS)] = "VIDIOC_ENUM_FRAMEINTERVALS",
- [_IOC_NR(VIDIOC_G_ENC_INDEX)] = "VIDIOC_G_ENC_INDEX",
- [_IOC_NR(VIDIOC_ENCODER_CMD)] = "VIDIOC_ENCODER_CMD",
- [_IOC_NR(VIDIOC_TRY_ENCODER_CMD)] = "VIDIOC_TRY_ENCODER_CMD",
-
- [_IOC_NR(VIDIOC_DECODER_CMD)] = "VIDIOC_DECODER_CMD",
- [_IOC_NR(VIDIOC_TRY_DECODER_CMD)] = "VIDIOC_TRY_DECODER_CMD",
- [_IOC_NR(VIDIOC_DBG_S_REGISTER)] = "VIDIOC_DBG_S_REGISTER",
- [_IOC_NR(VIDIOC_DBG_G_REGISTER)] = "VIDIOC_DBG_G_REGISTER",
-
- [_IOC_NR(VIDIOC_DBG_G_CHIP_IDENT)] = "VIDIOC_DBG_G_CHIP_IDENT",
- [_IOC_NR(VIDIOC_S_HW_FREQ_SEEK)] = "VIDIOC_S_HW_FREQ_SEEK",
+
+struct v4l2_ioctl_info {
+ unsigned int ioctl;
+ u16 flags;
+ const char * const name;
+};
+
+/* This ioctl needs a priority check */
+#define INFO_FL_PRIO (1 << 0)
+/* This ioctl can still be valid if the filehandle provides a control handler. */
+#define INFO_FL_CTRL (1 << 1)
+
+#define IOCTL_INFO(_ioctl, _flags) [_IOC_NR(_ioctl)] = { \
+ .ioctl = _ioctl, \
+ .flags = _flags, \
+ .name = #_ioctl, \
+}
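+
+/*
+ * For example, IOCTL_INFO(VIDIOC_S_FMT, INFO_FL_PRIO) expands to the
+ * designated initializer:
+ *
+ *	[_IOC_NR(VIDIOC_S_FMT)] = {
+ *		.ioctl = VIDIOC_S_FMT,
+ *		.flags = INFO_FL_PRIO,
+ *		.name = "VIDIOC_S_FMT",
+ *	},
+ */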
+
+static struct v4l2_ioctl_info v4l2_ioctls[] = {
+ IOCTL_INFO(VIDIOC_QUERYCAP, 0),
+ IOCTL_INFO(VIDIOC_ENUM_FMT, 0),
+ IOCTL_INFO(VIDIOC_G_FMT, 0),
+ IOCTL_INFO(VIDIOC_S_FMT, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_REQBUFS, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_QUERYBUF, 0),
+ IOCTL_INFO(VIDIOC_G_FBUF, 0),
+ IOCTL_INFO(VIDIOC_S_FBUF, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_OVERLAY, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_QBUF, 0),
+ IOCTL_INFO(VIDIOC_EXPBUF, 0),
+ IOCTL_INFO(VIDIOC_DQBUF, 0),
+ IOCTL_INFO(VIDIOC_STREAMON, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_STREAMOFF, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_G_PARM, 0),
+ IOCTL_INFO(VIDIOC_S_PARM, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_G_STD, 0),
+ IOCTL_INFO(VIDIOC_S_STD, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_ENUMSTD, 0),
+ IOCTL_INFO(VIDIOC_ENUMINPUT, 0),
+ IOCTL_INFO(VIDIOC_G_CTRL, INFO_FL_CTRL),
+ IOCTL_INFO(VIDIOC_S_CTRL, INFO_FL_PRIO | INFO_FL_CTRL),
+ IOCTL_INFO(VIDIOC_G_TUNER, 0),
+ IOCTL_INFO(VIDIOC_S_TUNER, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_G_AUDIO, 0),
+ IOCTL_INFO(VIDIOC_S_AUDIO, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_QUERYCTRL, INFO_FL_CTRL),
+ IOCTL_INFO(VIDIOC_QUERYMENU, INFO_FL_CTRL),
+ IOCTL_INFO(VIDIOC_G_INPUT, 0),
+ IOCTL_INFO(VIDIOC_S_INPUT, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_G_OUTPUT, 0),
+ IOCTL_INFO(VIDIOC_S_OUTPUT, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_ENUMOUTPUT, 0),
+ IOCTL_INFO(VIDIOC_G_AUDOUT, 0),
+ IOCTL_INFO(VIDIOC_S_AUDOUT, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_G_MODULATOR, 0),
+ IOCTL_INFO(VIDIOC_S_MODULATOR, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_G_FREQUENCY, 0),
+ IOCTL_INFO(VIDIOC_S_FREQUENCY, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_CROPCAP, 0),
+ IOCTL_INFO(VIDIOC_G_CROP, 0),
+ IOCTL_INFO(VIDIOC_S_CROP, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_G_SELECTION, 0),
+ IOCTL_INFO(VIDIOC_S_SELECTION, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_G_JPEGCOMP, 0),
+ IOCTL_INFO(VIDIOC_S_JPEGCOMP, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_QUERYSTD, 0),
+ IOCTL_INFO(VIDIOC_TRY_FMT, 0),
+ IOCTL_INFO(VIDIOC_ENUMAUDIO, 0),
+ IOCTL_INFO(VIDIOC_ENUMAUDOUT, 0),
+ IOCTL_INFO(VIDIOC_G_PRIORITY, 0),
+ IOCTL_INFO(VIDIOC_S_PRIORITY, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_G_SLICED_VBI_CAP, 0),
+ IOCTL_INFO(VIDIOC_LOG_STATUS, 0),
+ IOCTL_INFO(VIDIOC_G_EXT_CTRLS, INFO_FL_CTRL),
+ IOCTL_INFO(VIDIOC_S_EXT_CTRLS, INFO_FL_PRIO | INFO_FL_CTRL),
+ IOCTL_INFO(VIDIOC_TRY_EXT_CTRLS, 0),
+ IOCTL_INFO(VIDIOC_ENUM_FRAMESIZES, 0),
+ IOCTL_INFO(VIDIOC_ENUM_FRAMEINTERVALS, 0),
+ IOCTL_INFO(VIDIOC_G_ENC_INDEX, 0),
+ IOCTL_INFO(VIDIOC_ENCODER_CMD, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_TRY_ENCODER_CMD, 0),
+ IOCTL_INFO(VIDIOC_DECODER_CMD, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_TRY_DECODER_CMD, 0),
+#ifdef CONFIG_VIDEO_ADV_DEBUG
+ IOCTL_INFO(VIDIOC_DBG_S_REGISTER, 0),
+ IOCTL_INFO(VIDIOC_DBG_G_REGISTER, 0),
#endif
- [_IOC_NR(VIDIOC_ENUM_DV_PRESETS)] = "VIDIOC_ENUM_DV_PRESETS",
- [_IOC_NR(VIDIOC_S_DV_PRESET)] = "VIDIOC_S_DV_PRESET",
- [_IOC_NR(VIDIOC_G_DV_PRESET)] = "VIDIOC_G_DV_PRESET",
- [_IOC_NR(VIDIOC_QUERY_DV_PRESET)] = "VIDIOC_QUERY_DV_PRESET",
- [_IOC_NR(VIDIOC_S_DV_TIMINGS)] = "VIDIOC_S_DV_TIMINGS",
- [_IOC_NR(VIDIOC_G_DV_TIMINGS)] = "VIDIOC_G_DV_TIMINGS",
- [_IOC_NR(VIDIOC_DQEVENT)] = "VIDIOC_DQEVENT",
- [_IOC_NR(VIDIOC_SUBSCRIBE_EVENT)] = "VIDIOC_SUBSCRIBE_EVENT",
- [_IOC_NR(VIDIOC_UNSUBSCRIBE_EVENT)] = "VIDIOC_UNSUBSCRIBE_EVENT",
- [_IOC_NR(VIDIOC_CREATE_BUFS)] = "VIDIOC_CREATE_BUFS",
- [_IOC_NR(VIDIOC_PREPARE_BUF)] = "VIDIOC_PREPARE_BUF",
+ IOCTL_INFO(VIDIOC_DBG_G_CHIP_IDENT, 0),
+ IOCTL_INFO(VIDIOC_S_HW_FREQ_SEEK, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_ENUM_DV_PRESETS, 0),
+ IOCTL_INFO(VIDIOC_S_DV_PRESET, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_G_DV_PRESET, 0),
+ IOCTL_INFO(VIDIOC_QUERY_DV_PRESET, 0),
+ IOCTL_INFO(VIDIOC_S_DV_TIMINGS, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_G_DV_TIMINGS, 0),
+ IOCTL_INFO(VIDIOC_DQEVENT, 0),
+ IOCTL_INFO(VIDIOC_SUBSCRIBE_EVENT, 0),
+ IOCTL_INFO(VIDIOC_UNSUBSCRIBE_EVENT, 0),
+ IOCTL_INFO(VIDIOC_CREATE_BUFS, INFO_FL_PRIO),
+ IOCTL_INFO(VIDIOC_PREPARE_BUF, 0),
};
#define V4L2_IOCTLS ARRAY_SIZE(v4l2_ioctls)
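+
+/* v4l2_ioctls[] is indexed by _IOC_NR(), which is not unique across ioctl
+ * classes, so the full cmd value stored in each entry is compared as well,
+ * to reject lookalike commands with a matching number. */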
+bool v4l2_is_known_ioctl(unsigned int cmd)
+{
+ if (_IOC_NR(cmd) >= V4L2_IOCTLS)
+ return false;
+ return v4l2_ioctls[_IOC_NR(cmd)].ioctl == cmd;
+}
+
/* Common ioctl debug function. This function can be used by
external ioctl messages as well as internal V4L ioctl */
void v4l_printk_ioctl(unsigned int cmd)
type = "v4l2";
break;
}
- printk("%s", v4l2_ioctls[_IOC_NR(cmd)]);
+ printk("%s", v4l2_ioctls[_IOC_NR(cmd)].name);
return;
default:
type = "unknown";
void *fh = file->private_data;
struct v4l2_fh *vfh = NULL;
int use_fh_prio = 0;
- long ret_prio = 0;
long ret = -ENOTTY;
if (ops == NULL) {
return ret;
}
- if ((vfd->debug & V4L2_DEBUG_IOCTL) &&
- !(vfd->debug & V4L2_DEBUG_IOCTL_ARG)) {
- v4l_print_ioctl(vfd->name, cmd);
- printk(KERN_CONT "\n");
- }
-
if (test_bit(V4L2_FL_USES_V4L2_FH, &vfd->flags)) {
vfh = file->private_data;
use_fh_prio = test_bit(V4L2_FL_USE_FH_PRIO, &vfd->flags);
}
- if (use_fh_prio)
- ret_prio = v4l2_prio_check(vfd->prio, vfh->prio);
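+	/* Centralized validity and priority handling: unknown or
+	 * unimplemented ioctls fail with -ENOTTY here, and the priority
+	 * check is driven by INFO_FL_PRIO, replacing the per-case ret_prio
+	 * checks removed below. */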
+ if (v4l2_is_known_ioctl(cmd)) {
+ struct v4l2_ioctl_info *info = &v4l2_ioctls[_IOC_NR(cmd)];
+
+ if (!test_bit(_IOC_NR(cmd), vfd->valid_ioctls) &&
+ !((info->flags & INFO_FL_CTRL) && vfh && vfh->ctrl_handler))
+ return -ENOTTY;
+
+ if (use_fh_prio && (info->flags & INFO_FL_PRIO)) {
+ ret = v4l2_prio_check(vfd->prio, vfh->prio);
+ if (ret)
+ return ret;
+ }
+ }
+
+ if ((vfd->debug & V4L2_DEBUG_IOCTL) &&
+ !(vfd->debug & V4L2_DEBUG_IOCTL_ARG)) {
+ v4l_print_ioctl(vfd->name, cmd);
+ printk(KERN_CONT "\n");
+ }
switch (cmd) {
{
struct v4l2_capability *cap = (struct v4l2_capability *)arg;
- if (!ops->vidioc_querycap)
- break;
-
cap->version = LINUX_VERSION_CODE;
ret = ops->vidioc_querycap(file, fh, cap);
if (!ret)
{
enum v4l2_priority *p = arg;
- if (!ops->vidioc_s_priority && !use_fh_prio)
- break;
dbgarg(cmd, "setting priority to %d\n", *p);
if (ops->vidioc_s_priority)
ret = ops->vidioc_s_priority(file, fh, *p);
else
- ret = ret_prio ? ret_prio :
- v4l2_prio_change(&vfd->v4l2_dev->prio,
+ ret = v4l2_prio_change(&vfd->v4l2_dev->prio,
&vfh->prio, *p);
break;
}
{
struct v4l2_fmtdesc *f = arg;
+ ret = -EINVAL;
switch (f->type) {
case V4L2_BUF_TYPE_VIDEO_CAPTURE:
if (likely(ops->vidioc_enum_fmt_vid_cap))
default:
break;
}
- if (likely (!ret))
+ if (likely(!ret))
dbgarg(cmd, "index=%d, type=%d, flags=%d, "
"pixelformat=%c%c%c%c, description='%s'\n",
f->index, f->type, f->flags,
(f->pixelformat >> 16) & 0xff,
(f->pixelformat >> 24) & 0xff,
f->description);
- else if (ret == -ENOTTY &&
- (ops->vidioc_enum_fmt_vid_cap ||
- ops->vidioc_enum_fmt_vid_out ||
- ops->vidioc_enum_fmt_vid_cap_mplane ||
- ops->vidioc_enum_fmt_vid_out_mplane ||
- ops->vidioc_enum_fmt_vid_overlay ||
- ops->vidioc_enum_fmt_type_private))
- ret = -EINVAL;
break;
}
case VIDIOC_G_FMT:
/* FIXME: Should be one dump per type */
dbgarg(cmd, "type=%s\n", prt_names(f->type, v4l2_type_names));
+ ret = -EINVAL;
switch (f->type) {
case V4L2_BUF_TYPE_VIDEO_CAPTURE:
if (ops->vidioc_g_fmt_vid_cap)
fh, f);
break;
}
- if (unlikely(ret == -ENOTTY && have_fmt_ops(g)))
- ret = -EINVAL;
-
break;
}
case VIDIOC_S_FMT:
{
struct v4l2_format *f = (struct v4l2_format *)arg;
- if (!have_fmt_ops(s))
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
ret = -EINVAL;
/* FIXME: Should be one dump per type */
/* FIXME: Should be one dump per type */
dbgarg(cmd, "type=%s\n", prt_names(f->type,
v4l2_type_names));
+ ret = -EINVAL;
switch (f->type) {
case V4L2_BUF_TYPE_VIDEO_CAPTURE:
CLEAR_AFTER_FIELD(f, fmt.pix);
fh, f);
break;
}
- if (unlikely(ret == -ENOTTY && have_fmt_ops(try)))
- ret = -EINVAL;
break;
}
/* FIXME: Those buf reqs could be handled here,
{
struct v4l2_requestbuffers *p = arg;
- if (!ops->vidioc_reqbufs)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
ret = check_fmt(ops, p->type);
if (ret)
break;
{
struct v4l2_buffer *p = arg;
- if (!ops->vidioc_querybuf)
- break;
ret = check_fmt(ops, p->type);
if (ret)
break;
{
struct v4l2_buffer *p = arg;
- if (!ops->vidioc_qbuf)
- break;
ret = check_fmt(ops, p->type);
if (ret)
break;
dbgbuf(cmd, vfd, p);
break;
}
+ case VIDIOC_EXPBUF:
+ {
+ ret = ops->vidioc_expbuf(file, fh, arg);
+ break;
+ }
case VIDIOC_DQBUF:
{
struct v4l2_buffer *p = arg;
- if (!ops->vidioc_dqbuf)
- break;
ret = check_fmt(ops, p->type);
if (ret)
break;
{
int *i = arg;
- if (!ops->vidioc_overlay)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
dbgarg(cmd, "value=%d\n", *i);
ret = ops->vidioc_overlay(file, fh, *i);
break;
{
struct v4l2_framebuffer *p = arg;
- if (!ops->vidioc_g_fbuf)
- break;
ret = ops->vidioc_g_fbuf(file, fh, arg);
if (!ret) {
dbgarg(cmd, "capability=0x%x, flags=%d, base=0x%08lx\n",
{
struct v4l2_framebuffer *p = arg;
- if (!ops->vidioc_s_fbuf)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
dbgarg(cmd, "capability=0x%x, flags=%d, base=0x%08lx\n",
p->capability, p->flags, (unsigned long)p->base);
v4l_print_pix_fmt(vfd, &p->fmt);
{
enum v4l2_buf_type i = *(int *)arg;
- if (!ops->vidioc_streamon)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
dbgarg(cmd, "type=%s\n", prt_names(i, v4l2_type_names));
ret = ops->vidioc_streamon(file, fh, i);
break;
{
enum v4l2_buf_type i = *(int *)arg;
- if (!ops->vidioc_streamoff)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
dbgarg(cmd, "type=%s\n", prt_names(i, v4l2_type_names));
ret = ops->vidioc_streamoff(file, fh, i);
break;
dbgarg(cmd, "std=%08Lx\n", (long long unsigned)*id);
- if (!ops->vidioc_s_std)
- break;
-
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
ret = -EINVAL;
norm = (*id) & vfd->tvnorms;
if (vfd->tvnorms && !norm) /* Check if std is supported */
{
v4l2_std_id *p = arg;
- if (!ops->vidioc_querystd)
- break;
/*
* If nothing detected, it should return all supported
* Drivers just need to mask the std argument, in order
if (ops->vidioc_s_dv_timings)
p->capabilities |= V4L2_IN_CAP_CUSTOM_TIMINGS;
- if (!ops->vidioc_enum_input)
- break;
-
ret = ops->vidioc_enum_input(file, fh, p);
if (!ret)
dbgarg(cmd, "index=%d, name=%s, type=%d, "
{
unsigned int *i = arg;
- if (!ops->vidioc_g_input)
- break;
ret = ops->vidioc_g_input(file, fh, i);
if (!ret)
dbgarg(cmd, "value=%d\n", *i);
{
unsigned int *i = arg;
- if (!ops->vidioc_s_input)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
dbgarg(cmd, "value=%d\n", *i);
ret = ops->vidioc_s_input(file, fh, *i);
break;
{
struct v4l2_output *p = arg;
- if (!ops->vidioc_enum_output)
- break;
-
/*
* We set the flags for CAP_PRESETS, CAP_CUSTOM_TIMINGS &
* CAP_STD here based on ioctl handler provided by the
{
unsigned int *i = arg;
- if (!ops->vidioc_g_output)
- break;
ret = ops->vidioc_g_output(file, fh, i);
if (!ret)
dbgarg(cmd, "value=%d\n", *i);
{
unsigned int *i = arg;
- if (!ops->vidioc_s_output)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
dbgarg(cmd, "value=%d\n", *i);
ret = ops->vidioc_s_output(file, fh, *i);
break;
if (!(vfh && vfh->ctrl_handler) && !vfd->ctrl_handler &&
!ops->vidioc_s_ctrl && !ops->vidioc_s_ext_ctrls)
break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
dbgarg(cmd, "id=0x%x, value=%d\n", p->id, p->value);
if (!(vfh && vfh->ctrl_handler) && !vfd->ctrl_handler &&
!ops->vidioc_s_ext_ctrls)
break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
v4l_print_ext_ctrls(cmd, vfd, p, 1);
if (vfh && vfh->ctrl_handler)
ret = v4l2_s_ext_ctrls(vfh, vfh->ctrl_handler, p);
{
struct v4l2_audio *p = arg;
- if (!ops->vidioc_enumaudio)
- break;
ret = ops->vidioc_enumaudio(file, fh, p);
if (!ret)
dbgarg(cmd, "index=%d, name=%s, capability=0x%x, "
{
struct v4l2_audio *p = arg;
- if (!ops->vidioc_g_audio)
- break;
-
ret = ops->vidioc_g_audio(file, fh, p);
if (!ret)
dbgarg(cmd, "index=%d, name=%s, capability=0x%x, "
{
struct v4l2_audio *p = arg;
- if (!ops->vidioc_s_audio)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
dbgarg(cmd, "index=%d, name=%s, capability=0x%x, "
"mode=0x%x\n", p->index, p->name,
p->capability, p->mode);
{
struct v4l2_audioout *p = arg;
- if (!ops->vidioc_enumaudout)
- break;
dbgarg(cmd, "Enum for index=%d\n", p->index);
ret = ops->vidioc_enumaudout(file, fh, p);
if (!ret)
{
struct v4l2_audioout *p = arg;
- if (!ops->vidioc_g_audout)
- break;
-
ret = ops->vidioc_g_audout(file, fh, p);
if (!ret)
dbgarg2("index=%d, name=%s, capability=%d, "
{
struct v4l2_audioout *p = arg;
- if (!ops->vidioc_s_audout)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
dbgarg(cmd, "index=%d, name=%s, capability=%d, "
"mode=%d\n", p->index, p->name,
p->capability, p->mode);
{
struct v4l2_modulator *p = arg;
- if (!ops->vidioc_g_modulator)
- break;
ret = ops->vidioc_g_modulator(file, fh, p);
if (!ret)
dbgarg(cmd, "index=%d, name=%s, "
{
struct v4l2_modulator *p = arg;
- if (!ops->vidioc_s_modulator)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
dbgarg(cmd, "index=%d, name=%s, capability=%d, "
"rangelow=%d, rangehigh=%d, txsubchans=%d\n",
p->index, p->name, p->capability, p->rangelow,
{
struct v4l2_crop *p = arg;
- if (!ops->vidioc_g_crop && !ops->vidioc_g_selection)
- break;
-
dbgarg(cmd, "type=%s\n", prt_names(p->type, v4l2_type_names));
if (ops->vidioc_g_crop) {
{
struct v4l2_crop *p = arg;
- if (!ops->vidioc_s_crop && !ops->vidioc_s_selection)
- break;
-
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
dbgarg(cmd, "type=%s\n", prt_names(p->type, v4l2_type_names));
dbgrect(vfd, "", &p->c);
{
struct v4l2_selection *p = arg;
- if (!ops->vidioc_g_selection)
- break;
-
dbgarg(cmd, "type=%s\n", prt_names(p->type, v4l2_type_names));
ret = ops->vidioc_g_selection(file, fh, p);
{
struct v4l2_selection *p = arg;
- if (!ops->vidioc_s_selection)
- break;
-
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
dbgarg(cmd, "type=%s\n", prt_names(p->type, v4l2_type_names));
dbgrect(vfd, "", &p->r);
struct v4l2_cropcap *p = arg;
/*FIXME: Should also show v4l2_fract pixelaspect */
- if (!ops->vidioc_cropcap && !ops->vidioc_g_selection)
- break;
-
dbgarg(cmd, "type=%s\n", prt_names(p->type, v4l2_type_names));
if (ops->vidioc_cropcap) {
ret = ops->vidioc_cropcap(file, fh, p);
{
struct v4l2_jpegcompression *p = arg;
- if (!ops->vidioc_g_jpegcomp)
- break;
-
ret = ops->vidioc_g_jpegcomp(file, fh, p);
if (!ret)
dbgarg(cmd, "quality=%d, APPn=%d, "
{
struct v4l2_jpegcompression *p = arg;
- if (!ops->vidioc_g_jpegcomp)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
dbgarg(cmd, "quality=%d, APPn=%d, APP_len=%d, "
"COM_len=%d, jpeg_markers=%d\n",
p->quality, p->APPn, p->APP_len,
{
struct v4l2_enc_idx *p = arg;
- if (!ops->vidioc_g_enc_index)
- break;
ret = ops->vidioc_g_enc_index(file, fh, p);
if (!ret)
dbgarg(cmd, "entries=%d, entries_cap=%d\n",
{
struct v4l2_encoder_cmd *p = arg;
- if (!ops->vidioc_encoder_cmd)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
ret = ops->vidioc_encoder_cmd(file, fh, p);
if (!ret)
dbgarg(cmd, "cmd=%d, flags=%x\n", p->cmd, p->flags);
{
struct v4l2_encoder_cmd *p = arg;
- if (!ops->vidioc_try_encoder_cmd)
- break;
ret = ops->vidioc_try_encoder_cmd(file, fh, p);
if (!ret)
dbgarg(cmd, "cmd=%d, flags=%x\n", p->cmd, p->flags);
{
struct v4l2_decoder_cmd *p = arg;
- if (!ops->vidioc_decoder_cmd)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
ret = ops->vidioc_decoder_cmd(file, fh, p);
if (!ret)
dbgarg(cmd, "cmd=%d, flags=%x\n", p->cmd, p->flags);
{
struct v4l2_decoder_cmd *p = arg;
- if (!ops->vidioc_try_decoder_cmd)
- break;
ret = ops->vidioc_try_decoder_cmd(file, fh, p);
if (!ret)
dbgarg(cmd, "cmd=%d, flags=%x\n", p->cmd, p->flags);
{
struct v4l2_streamparm *p = arg;
- if (!ops->vidioc_g_parm && !vfd->current_norm)
- break;
if (ops->vidioc_g_parm) {
ret = check_fmt(ops, p->type);
if (ret)
{
struct v4l2_streamparm *p = arg;
- if (!ops->vidioc_s_parm)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
ret = check_fmt(ops, p->type);
if (ret)
break;
{
struct v4l2_tuner *p = arg;
- if (!ops->vidioc_g_tuner)
- break;
-
p->type = (vfd->vfl_type == VFL_TYPE_RADIO) ?
V4L2_TUNER_RADIO : V4L2_TUNER_ANALOG_TV;
ret = ops->vidioc_g_tuner(file, fh, p);
{
struct v4l2_tuner *p = arg;
- if (!ops->vidioc_s_tuner)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
p->type = (vfd->vfl_type == VFL_TYPE_RADIO) ?
V4L2_TUNER_RADIO : V4L2_TUNER_ANALOG_TV;
dbgarg(cmd, "index=%d, name=%s, type=%d, "
{
struct v4l2_frequency *p = arg;
- if (!ops->vidioc_g_frequency)
- break;
-
p->type = (vfd->vfl_type == VFL_TYPE_RADIO) ?
V4L2_TUNER_RADIO : V4L2_TUNER_ANALOG_TV;
ret = ops->vidioc_g_frequency(file, fh, p);
struct v4l2_frequency *p = arg;
enum v4l2_tuner_type type;
- if (!ops->vidioc_s_frequency)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
type = (vfd->vfl_type == VFL_TYPE_RADIO) ?
V4L2_TUNER_RADIO : V4L2_TUNER_ANALOG_TV;
dbgarg(cmd, "tuner=%d, type=%d, frequency=%d\n",
{
struct v4l2_sliced_vbi_cap *p = arg;
- if (!ops->vidioc_g_sliced_vbi_cap)
- break;
-
		/* Clear up to type, everything after type is zeroed already */
memset(p, 0, offsetof(struct v4l2_sliced_vbi_cap, type));
}
case VIDIOC_LOG_STATUS:
{
- if (!ops->vidioc_log_status)
- break;
if (vfd->v4l2_dev)
pr_info("%s: ================= START STATUS =================\n",
vfd->v4l2_dev->name);
vfd->v4l2_dev->name);
break;
}
-#ifdef CONFIG_VIDEO_ADV_DEBUG
case VIDIOC_DBG_G_REGISTER:
{
+#ifdef CONFIG_VIDEO_ADV_DEBUG
struct v4l2_dbg_register *p = arg;
- if (ops->vidioc_g_register) {
- if (!capable(CAP_SYS_ADMIN))
- ret = -EPERM;
- else
- ret = ops->vidioc_g_register(file, fh, p);
- }
+ if (!capable(CAP_SYS_ADMIN))
+ ret = -EPERM;
+ else
+ ret = ops->vidioc_g_register(file, fh, p);
+#endif
break;
}
case VIDIOC_DBG_S_REGISTER:
{
+#ifdef CONFIG_VIDEO_ADV_DEBUG
struct v4l2_dbg_register *p = arg;
- if (ops->vidioc_s_register) {
- if (!capable(CAP_SYS_ADMIN))
- ret = -EPERM;
- else
- ret = ops->vidioc_s_register(file, fh, p);
- }
+ if (!capable(CAP_SYS_ADMIN))
+ ret = -EPERM;
+ else
+ ret = ops->vidioc_s_register(file, fh, p);
+#endif
break;
}
-#endif
case VIDIOC_DBG_G_CHIP_IDENT:
{
struct v4l2_dbg_chip_ident *p = arg;
- if (!ops->vidioc_g_chip_ident)
- break;
p->ident = V4L2_IDENT_NONE;
p->revision = 0;
ret = ops->vidioc_g_chip_ident(file, fh, p);
struct v4l2_hw_freq_seek *p = arg;
enum v4l2_tuner_type type;
- if (!ops->vidioc_s_hw_freq_seek)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
type = (vfd->vfl_type == VFL_TYPE_RADIO) ?
V4L2_TUNER_RADIO : V4L2_TUNER_ANALOG_TV;
dbgarg(cmd,
{
struct v4l2_frmsizeenum *p = arg;
- if (!ops->vidioc_enum_framesizes)
- break;
-
ret = ops->vidioc_enum_framesizes(file, fh, p);
dbgarg(cmd,
"index=%d, pixelformat=%c%c%c%c, type=%d ",
{
struct v4l2_frmivalenum *p = arg;
- if (!ops->vidioc_enum_frameintervals)
- break;
-
ret = ops->vidioc_enum_frameintervals(file, fh, p);
dbgarg(cmd,
"index=%d, pixelformat=%d, width=%d, height=%d, type=%d ",
{
struct v4l2_dv_enum_preset *p = arg;
- if (!ops->vidioc_enum_dv_presets)
- break;
-
ret = ops->vidioc_enum_dv_presets(file, fh, p);
if (!ret)
dbgarg(cmd,
{
struct v4l2_dv_preset *p = arg;
- if (!ops->vidioc_s_dv_preset)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
-
dbgarg(cmd, "preset=%d\n", p->preset);
ret = ops->vidioc_s_dv_preset(file, fh, p);
break;
{
struct v4l2_dv_preset *p = arg;
- if (!ops->vidioc_g_dv_preset)
- break;
-
ret = ops->vidioc_g_dv_preset(file, fh, p);
if (!ret)
dbgarg(cmd, "preset=%d\n", p->preset);
{
struct v4l2_dv_preset *p = arg;
- if (!ops->vidioc_query_dv_preset)
- break;
-
ret = ops->vidioc_query_dv_preset(file, fh, p);
if (!ret)
dbgarg(cmd, "preset=%d\n", p->preset);
{
struct v4l2_dv_timings *p = arg;
- if (!ops->vidioc_s_dv_timings)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
-
switch (p->type) {
case V4L2_DV_BT_656_1120:
dbgarg2("bt-656/1120:interlaced=%d, pixelclock=%lld,"
{
struct v4l2_dv_timings *p = arg;
- if (!ops->vidioc_g_dv_timings)
- break;
-
ret = ops->vidioc_g_dv_timings(file, fh, p);
if (!ret) {
switch (p->type) {
{
struct v4l2_event *ev = arg;
- if (!ops->vidioc_subscribe_event)
- break;
-
ret = v4l2_event_dequeue(fh, ev, file->f_flags & O_NONBLOCK);
if (ret < 0) {
dbgarg(cmd, "no pending events?");
{
struct v4l2_event_subscription *sub = arg;
- if (!ops->vidioc_subscribe_event)
- break;
-
ret = ops->vidioc_subscribe_event(fh, sub);
if (ret < 0) {
dbgarg(cmd, "failed, ret=%ld", ret);
{
struct v4l2_event_subscription *sub = arg;
- if (!ops->vidioc_unsubscribe_event)
- break;
-
ret = ops->vidioc_unsubscribe_event(fh, sub);
if (ret < 0) {
dbgarg(cmd, "failed, ret=%ld", ret);
{
struct v4l2_create_buffers *create = arg;
- if (!ops->vidioc_create_bufs)
- break;
- if (ret_prio) {
- ret = ret_prio;
- break;
- }
ret = check_fmt(ops, create->format.type);
if (ret)
break;
{
struct v4l2_buffer *b = arg;
- if (!ops->vidioc_prepare_buf)
- break;
ret = check_fmt(ops, b->type);
if (ret)
break;
default:
if (!ops->vidioc_default)
break;
- ret = ops->vidioc_default(file, fh, ret_prio >= 0, cmd, arg);
+ ret = ops->vidioc_default(file, fh, use_fh_prio ?
+ v4l2_prio_check(vfd->prio, vfh->prio) >= 0 : 0,
+ cmd, arg);
break;
} /* switch */
return ret;
}
EXPORT_SYMBOL_GPL(v4l2_m2m_querybuf);
+/*
+ * v4l2_m2m_expbuf() - export a CAPTURE queue buffer as a DMABUF descriptor
+ */
+int v4l2_m2m_expbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct v4l2_exportbuffer *eb)
+{
+ struct vb2_queue *vq;
+
+	/* For now this handles only the CAPTURE queue (written with FIMC in mind) */
+ vq = v4l2_m2m_get_vq(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
+ return vb2_expbuf(vq, eb);
+}
+EXPORT_SYMBOL_GPL(v4l2_m2m_expbuf);
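+
+/*
+ * A minimal sketch of how a driver would wire this into its ioctl ops
+ * (struct my_ctx and fh_to_ctx() are hypothetical, not part of this
+ * patch):
+ *
+ *	static int vidioc_expbuf(struct file *file, void *fh,
+ *				 struct v4l2_exportbuffer *eb)
+ *	{
+ *		struct my_ctx *ctx = fh_to_ctx(fh);
+ *
+ *		return v4l2_m2m_expbuf(file, ctx->m2m_ctx, eb);
+ *	}
+ */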
/**
* v4l2_m2m_qbuf() - enqueue a source or destination buffer, depending on
case V4L2_MEMORY_OVERLAY:
b->m.offset = vb->boff;
break;
+ case V4L2_MEMORY_DMABUF:
+		/* DMABUF is not handled in the videobuf framework */
+ break;
}
b->flags = 0;
break;
case V4L2_MEMORY_USERPTR:
case V4L2_MEMORY_OVERLAY:
+ case V4L2_MEMORY_DMABUF:
/* nothing */
break;
}
}
}
+/**
+ * __vb2_plane_dmabuf_put() - release memory associated with
+ * a DMABUF shared plane
+ */
+static void __vb2_plane_dmabuf_put(struct vb2_queue *q, struct vb2_plane *p)
+{
+ if (!p->mem_priv)
+ return;
+
+ if (p->dbuf_mapped)
+ call_memop(q, unmap_dmabuf, p->mem_priv);
+
+ call_memop(q, detach_dmabuf, p->mem_priv);
+ dma_buf_put(p->dbuf);
+ memset(p, 0, sizeof *p);
+}
+
+/**
+ * __vb2_buf_dmabuf_put() - release memory associated with
+ * a DMABUF shared buffer
+ */
+static void __vb2_buf_dmabuf_put(struct vb2_buffer *vb)
+{
+ struct vb2_queue *q = vb->vb2_queue;
+ unsigned int plane;
+
+ for (plane = 0; plane < vb->num_planes; ++plane)
+ __vb2_plane_dmabuf_put(q, &vb->planes[plane]);
+}
+
/**
* __setup_offsets() - setup unique offsets ("cookies") for every plane in
* every buffer on the queue
/* Free MMAP buffers or release USERPTR buffers */
if (q->memory == V4L2_MEMORY_MMAP)
__vb2_buf_mem_free(vb);
+ else if (q->memory == V4L2_MEMORY_DMABUF)
+ __vb2_buf_dmabuf_put(vb);
else
__vb2_buf_userptr_put(vb);
}
*/
memcpy(b->m.planes, vb->v4l2_planes,
b->length * sizeof(struct v4l2_plane));
+
+ if (q->memory == V4L2_MEMORY_DMABUF) {
+ unsigned int plane;
+ for (plane = 0; plane < vb->num_planes; ++plane)
+ b->m.planes[plane].m.fd = 0;
+ }
} else {
/*
* We use length and offset in v4l2_planes array even for
b->m.offset = vb->v4l2_planes[0].m.mem_offset;
else if (q->memory == V4L2_MEMORY_USERPTR)
b->m.userptr = vb->v4l2_planes[0].m.userptr;
+ else if (q->memory == V4L2_MEMORY_DMABUF)
+ b->m.fd = 0;
}
/*
return 0;
}
+/**
+ * __verify_dmabuf_ops() - verify that all memory operations required for
+ * DMABUF queue type have been provided
+ */
+static int __verify_dmabuf_ops(struct vb2_queue *q)
+{
+ if (!(q->io_modes & VB2_DMABUF) || !q->mem_ops->attach_dmabuf ||
+ !q->mem_ops->detach_dmabuf || !q->mem_ops->map_dmabuf ||
+ !q->mem_ops->unmap_dmabuf)
+ return -EINVAL;
+
+ return 0;
+}
+
/**
* vb2_reqbufs() - Initiate streaming
* @q: videobuf2 queue
return -EBUSY;
}
- if (req->memory != V4L2_MEMORY_MMAP
- && req->memory != V4L2_MEMORY_USERPTR) {
+ if (req->memory != V4L2_MEMORY_MMAP &&
+ req->memory != V4L2_MEMORY_DMABUF &&
+ req->memory != V4L2_MEMORY_USERPTR) {
dprintk(1, "reqbufs: unsupported memory type\n");
return -EINVAL;
}
return -EINVAL;
}
+ if (req->memory == V4L2_MEMORY_DMABUF && __verify_dmabuf_ops(q)) {
+ dprintk(1, "reqbufs: DMABUF for current setup unsupported\n");
+ return -EINVAL;
+ }
+
if (req->count == 0 || q->num_buffers != 0 || q->memory != req->memory) {
/*
* We already have buffers allocated, so first check if they
return -EBUSY;
}
- if (create->memory != V4L2_MEMORY_MMAP
- && create->memory != V4L2_MEMORY_USERPTR) {
+ if (create->memory != V4L2_MEMORY_MMAP &&
+ create->memory != V4L2_MEMORY_USERPTR &&
+ create->memory != V4L2_MEMORY_DMABUF) {
dprintk(1, "%s(): unsupported memory type\n", __func__);
return -EINVAL;
}
return -EINVAL;
}
+ if (create->memory == V4L2_MEMORY_DMABUF && __verify_dmabuf_ops(q)) {
+ dprintk(1, "%s(): DMABUF for current setup unsupported\n", __func__);
+ return -EINVAL;
+ }
+
if (q->num_buffers == VIDEO_MAX_FRAME) {
dprintk(1, "%s(): maximum number of buffers already allocated\n",
__func__);
{
struct vb2_queue *q = vb->vb2_queue;
unsigned long flags;
+ unsigned int plane;
if (vb->state != VB2_BUF_STATE_ACTIVE)
return;
dprintk(4, "Done processing on buffer %d, state: %d\n",
vb->v4l2_buf.index, vb->state);
+ /* sync buffers */
+ for (plane = 0; plane < vb->num_planes; ++plane)
+ call_memop(q, finish, vb->planes[plane].mem_priv);
+
/* Add the buffer to the done buffers list */
spin_lock_irqsave(&q->done_lock, flags);
vb->state = state;
b->m.planes[plane].length;
}
}
+ if (b->memory == V4L2_MEMORY_DMABUF) {
+ for (plane = 0; plane < vb->num_planes; ++plane) {
+ v4l2_planes[plane].bytesused =
+ b->m.planes[plane].bytesused;
+ v4l2_planes[plane].m.fd =
+ b->m.planes[plane].m.fd;
+ }
+ }
} else {
/*
* Single-planar buffers do not use planes array,
v4l2_planes[0].m.userptr = b->m.userptr;
v4l2_planes[0].length = b->length;
}
+
+ if (b->memory == V4L2_MEMORY_DMABUF)
+ v4l2_planes[0].m.fd = b->m.fd;
+
}
vb->v4l2_buf.field = b->field;
return __fill_vb2_buffer(vb, b, vb->v4l2_planes);
}
+/**
+ * __qbuf_dmabuf() - handle qbuf of a DMABUF buffer
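+ *
+ * For each plane the fd is resolved with dma_buf_get() and, unless the
+ * same dma_buf is already attached from a previous qbuf, attached to the
+ * device via the attach_dmabuf memop; every plane is then pinned with
+ * map_dmabuf. On any failure the planes acquired so far are released.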
+ */
+static int __qbuf_dmabuf(struct vb2_buffer *vb, const struct v4l2_buffer *b)
+{
+ struct v4l2_plane planes[VIDEO_MAX_PLANES];
+ struct vb2_queue *q = vb->vb2_queue;
+ void *mem_priv;
+ unsigned int plane;
+ int ret;
+ int write = !V4L2_TYPE_IS_OUTPUT(q->type);
+
+ /* Verify and copy relevant information provided by the userspace */
+ ret = __fill_vb2_buffer(vb, b, planes);
+ if (ret)
+ return ret;
+
+ for (plane = 0; plane < vb->num_planes; ++plane) {
+ struct dma_buf *dbuf = dma_buf_get(planes[plane].m.fd);
+
+ if (IS_ERR_OR_NULL(dbuf)) {
+ dprintk(1, "qbuf: invalid dmabuf fd for "
+ "plane %d\n", plane);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ /* Skip the plane if already verified */
+ if (dbuf == vb->planes[plane].dbuf) {
+ planes[plane].length = dbuf->size;
+ dma_buf_put(dbuf);
+ continue;
+ }
+
+ dprintk(3, "qbuf: buffer description for plane %d changed, "
+ "reattaching dma buf\n", plane);
+
+ /* Release previously acquired memory if present */
+ __vb2_plane_dmabuf_put(q, &vb->planes[plane]);
+
+ /* Acquire each plane's memory */
+ mem_priv = call_memop(q, attach_dmabuf, q->alloc_ctx[plane],
+ dbuf, q->plane_sizes[plane], write);
+ if (IS_ERR(mem_priv)) {
+ dprintk(1, "qbuf: failed acquiring dmabuf "
+ "memory for plane %d\n", plane);
+ ret = PTR_ERR(mem_priv);
+ goto err;
+ }
+
+ planes[plane].length = dbuf->size;
+ vb->planes[plane].dbuf = dbuf;
+ vb->planes[plane].mem_priv = mem_priv;
+ }
+
+	/* TODO: This pins the buffer(s) with dma_buf_map_attachment(), but
+ * really we want to do this just before the DMA, not while queueing
+ * the buffer(s)..
+ */
+ for (plane = 0; plane < vb->num_planes; ++plane) {
+ ret = call_memop(q, map_dmabuf, vb->planes[plane].mem_priv);
+ if (ret) {
+ dprintk(1, "qbuf: failed mapping dmabuf "
+ "memory for plane %d\n", plane);
+ goto err;
+ }
+ vb->planes[plane].dbuf_mapped = 1;
+ }
+
+ /*
+ * Call driver-specific initialization on the newly acquired buffer,
+ * if provided.
+ */
+ ret = call_qop(q, buf_init, vb);
+ if (ret) {
+ dprintk(1, "qbuf: buffer initialization failed\n");
+ goto err;
+ }
+
+ /*
+ * Now that everything is in order, copy relevant information
+ * provided by userspace.
+ */
+ for (plane = 0; plane < vb->num_planes; ++plane)
+ vb->v4l2_planes[plane] = planes[plane];
+
+ return 0;
+err:
+ /* In case of errors, release planes that were already acquired */
+ __vb2_buf_dmabuf_put(vb);
+
+ return ret;
+}
+
/**
* __enqueue_in_driver() - enqueue a vb2_buffer in driver for processing
*/
static void __enqueue_in_driver(struct vb2_buffer *vb)
{
struct vb2_queue *q = vb->vb2_queue;
+ unsigned int plane;
vb->state = VB2_BUF_STATE_ACTIVE;
atomic_inc(&q->queued_count);
+
+ /* sync buffers */
+ for (plane = 0; plane < vb->num_planes; ++plane)
+ call_memop(q, prepare, vb->planes[plane].mem_priv);
+
q->ops->buf_queue(vb);
}
case V4L2_MEMORY_USERPTR:
ret = __qbuf_userptr(vb, b);
break;
+ case V4L2_MEMORY_DMABUF:
+ ret = __qbuf_dmabuf(vb, b);
+ break;
default:
WARN(1, "Invalid queue type\n");
ret = -EINVAL;
return ret;
}
+	/* TODO: this unpins the buffer (via dma_buf_unmap_attachment()), but
+ * really we want to do this just after DMA, not when the
+ * buffer is dequeued..
+ */
+ if (q->memory == V4L2_MEMORY_DMABUF) {
+ unsigned int i;
+
+ for (i = 0; i < vb->num_planes; ++i) {
+ call_memop(q, unmap_dmabuf, vb->planes[i].mem_priv);
+ vb->planes[i].dbuf_mapped = 0;
+ }
+ }
+
switch (vb->state) {
case VB2_BUF_STATE_DONE:
dprintk(3, "dqbuf: Returning done buffer\n");
return -EINVAL;
}
+/**
+ * vb2_expbuf() - Export a buffer as a file descriptor
+ * @q: videobuf2 queue
+ * @eb: export buffer structure passed from userspace to vidioc_expbuf
+ * handler in driver
+ *
+ * The return values from this function are intended to be directly returned
+ * from vidioc_expbuf handler in driver.
+ */
+int vb2_expbuf(struct vb2_queue *q, struct v4l2_exportbuffer *eb)
+{
+ struct vb2_buffer *vb = NULL;
+ struct vb2_plane *vb_plane;
+ unsigned int buffer, plane;
+ int ret;
+ struct dma_buf *dbuf;
+
+ if (q->memory != V4L2_MEMORY_MMAP) {
+ dprintk(1, "Queue is not currently set up for mmap\n");
+ return -EINVAL;
+ }
+
+ if (!q->mem_ops->get_dmabuf) {
+ dprintk(1, "Queue does not support DMA buffer exporting\n");
+ return -EINVAL;
+ }
+
+ if (eb->flags & ~O_CLOEXEC) {
+ dprintk(1, "Queue does support only O_CLOEXEC flag\n");
+ return -EINVAL;
+ }
+
+ /*
+ * Find the plane corresponding to the offset passed by userspace.
+ */
+ ret = __find_plane_by_offset(q, eb->mem_offset, &buffer, &plane);
+ if (ret) {
+ dprintk(1, "invalid offset %u\n", eb->mem_offset);
+ return ret;
+ }
+
+ vb = q->bufs[buffer];
+ vb_plane = &vb->planes[plane];
+
+ dbuf = call_memop(q, get_dmabuf, vb_plane->mem_priv);
+ if (IS_ERR_OR_NULL(dbuf)) {
+ dprintk(1, "Failed to export buffer %d, plane %d\n",
+ buffer, plane);
+ return -EINVAL;
+ }
+
+ ret = dma_buf_fd(dbuf, eb->flags);
+ if (ret < 0) {
+ dprintk(3, "buffer %d, plane %d failed to export (%d)\n",
+ buffer, plane, ret);
+ dma_buf_put(dbuf);
+ return ret;
+ }
+
+ dprintk(3, "buffer %d, plane %d exported as %d descriptor\n",
+ buffer, plane, ret);
+ eb->fd = ret;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(vb2_expbuf);
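+
+/*
+ * Userspace usage sketch (assumes an MMAP-mode vb2 queue and a "buf"
+ * filled in by VIDIOC_QUERYBUF; error handling omitted):
+ *
+ *	struct v4l2_exportbuffer eb = {
+ *		.mem_offset = buf.m.offset,
+ *		.flags = O_CLOEXEC,
+ *	};
+ *
+ *	ioctl(fd, VIDIOC_EXPBUF, &eb);
+ *
+ * On success eb.fd holds a DMABUF file descriptor for the plane that
+ * starts at the given mmap offset.
+ */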
+
/**
* vb2_mmap() - map video buffers into application address space
* @q: videobuf2 queue
* the Free Software Foundation.
*/
+#include <linux/dma-buf.h>
#include <linux/module.h>
+#include <linux/scatterlist.h>
+#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/dma-mapping.h>
};
struct vb2_dc_buf {
- struct vb2_dc_conf *conf;
+ struct device *dev;
void *vaddr;
- dma_addr_t dma_addr;
unsigned long size;
- struct vm_area_struct *vma;
- atomic_t refcount;
+ dma_addr_t dma_addr;
+ enum dma_data_direction dma_dir;
+ struct sg_table *dma_sgt;
+
+ /* MMAP related */
struct vb2_vmarea_handler handler;
+ atomic_t refcount;
+ struct sg_table *sgt_base;
+
+ /* USERPTR related */
+ struct vm_area_struct *vma;
+
+ /* DMABUF related */
+ struct dma_buf_attachment *db_attach;
};
-static void vb2_dma_contig_put(void *buf_priv);
+/*********************************************/
+/* scatterlist table functions */
+/*********************************************/
+
+
+static void vb2_dc_sgt_foreach_page(struct sg_table *sgt,
+ void (*cb)(struct page *pg))
+{
+ struct scatterlist *s;
+ unsigned int i;
+
+ for_each_sg(sgt->sgl, s, sgt->orig_nents, i) {
+ struct page *page = sg_page(s);
+ unsigned int n_pages = PAGE_ALIGN(s->offset + s->length)
+ >> PAGE_SHIFT;
+ unsigned int j;
+
+ for (j = 0; j < n_pages; ++j, ++page)
+ cb(page);
+ }
+}
+
+static unsigned long vb2_dc_get_contiguous_size(struct sg_table *sgt)
+{
+ struct scatterlist *s;
+ dma_addr_t expected = sg_dma_address(sgt->sgl);
+ unsigned int i;
+ unsigned long size = 0;
+
+ for_each_sg(sgt->sgl, s, sgt->nents, i) {
+ if (sg_dma_address(s) != expected)
+ break;
+ expected = sg_dma_address(s) + sg_dma_len(s);
+ size += sg_dma_len(s);
+ }
+ return size;
+}
+
+/*********************************************/
+/* callbacks for all buffers */
+/*********************************************/
+
+static void *vb2_dc_cookie(void *buf_priv)
+{
+ struct vb2_dc_buf *buf = buf_priv;
+
+ return &buf->dma_addr;
+}
+
+static void *vb2_dc_vaddr(void *buf_priv)
+{
+ struct vb2_dc_buf *buf = buf_priv;
+
+ return buf->vaddr;
+}
+
+static unsigned int vb2_dc_num_users(void *buf_priv)
+{
+ struct vb2_dc_buf *buf = buf_priv;
+
+ return atomic_read(&buf->refcount);
+}
+
+static void vb2_dc_prepare(void *buf_priv)
+{
+ struct vb2_dc_buf *buf = buf_priv;
+ struct sg_table *sgt = buf->dma_sgt;
+
+ /* DMABUF exporter will flush the cache for us */
+ if (!sgt || buf->db_attach)
+ return;
+
+ dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
+}
+
+static void vb2_dc_finish(void *buf_priv)
+{
+ struct vb2_dc_buf *buf = buf_priv;
+ struct sg_table *sgt = buf->dma_sgt;
+
+ /* DMABUF exporter will flush the cache for us */
+ if (!sgt || buf->db_attach)
+ return;
+
+ dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
+}
+
+/*********************************************/
+/* callbacks for MMAP buffers */
+/*********************************************/
+
+static void vb2_dc_put(void *buf_priv)
+{
+ struct vb2_dc_buf *buf = buf_priv;
+
+ if (!atomic_dec_and_test(&buf->refcount))
+ return;
+
+ if (buf->sgt_base) {
+ sg_free_table(buf->sgt_base);
+ kfree(buf->sgt_base);
+ }
+ dma_free_coherent(buf->dev, buf->size, buf->vaddr, buf->dma_addr);
+ kfree(buf);
+}
-static void *vb2_dma_contig_alloc(void *alloc_ctx, unsigned long size)
+static void *vb2_dc_alloc(void *alloc_ctx, unsigned long size)
{
struct vb2_dc_conf *conf = alloc_ctx;
+ struct device *dev = conf->dev;
struct vb2_dc_buf *buf;
buf = kzalloc(sizeof *buf, GFP_KERNEL);
if (!buf)
return ERR_PTR(-ENOMEM);
- buf->vaddr = dma_alloc_coherent(conf->dev, size, &buf->dma_addr,
- GFP_KERNEL);
+ buf->vaddr = dma_alloc_coherent(dev, size, &buf->dma_addr, GFP_KERNEL);
if (!buf->vaddr) {
- dev_err(conf->dev, "dma_alloc_coherent of size %ld failed\n",
- size);
+ dev_err(dev, "dma_alloc_coherent of size %ld failed\n", size);
kfree(buf);
return ERR_PTR(-ENOMEM);
}
- buf->conf = conf;
+ buf->dev = dev;
buf->size = size;
buf->handler.refcount = &buf->refcount;
- buf->handler.put = vb2_dma_contig_put;
+ buf->handler.put = vb2_dc_put;
buf->handler.arg = buf;
atomic_inc(&buf->refcount);
return buf;
}
-static void vb2_dma_contig_put(void *buf_priv)
+static int vb2_dc_mmap(void *buf_priv, struct vm_area_struct *vma)
{
struct vb2_dc_buf *buf = buf_priv;
+ int ret;
- if (atomic_dec_and_test(&buf->refcount)) {
- dma_free_coherent(buf->conf->dev, buf->size, buf->vaddr,
- buf->dma_addr);
- kfree(buf);
+ if (!buf) {
+ printk(KERN_ERR "No buffer to map\n");
+ return -EINVAL;
+ }
+
+ /*
+ * dma_mmap_* uses vm_pgoff as in-buffer offset, but we want to
+	 * map the whole buffer
+ */
+ vma->vm_pgoff = 0;
+
+ ret = dma_mmap_coherent(buf->dev, vma, buf->vaddr,
+ buf->dma_addr, buf->size);
+
+ if (ret) {
+ printk(KERN_ERR "Remapping memory failed, error: %d\n", ret);
+ return ret;
}
+
+ vma->vm_flags |= VM_DONTEXPAND | VM_RESERVED;
+ vma->vm_private_data = &buf->handler;
+ vma->vm_ops = &vb2_common_vm_ops;
+
+ vma->vm_ops->open(vma);
+
+ printk(KERN_DEBUG "%s: mapped dma addr 0x%08lx at 0x%08lx, size %ld\n",
+ __func__, (unsigned long)buf->dma_addr, vma->vm_start,
+ buf->size);
+
+ return 0;
}
-static void *vb2_dma_contig_cookie(void *buf_priv)
+/*********************************************/
+/* DMABUF ops for exporters */
+/*********************************************/
+
+struct vb2_dc_attachment {
+ struct sg_table sgt;
+ enum dma_data_direction dir;
+};
+
+static int vb2_dc_dmabuf_ops_attach(struct dma_buf *dbuf, struct device *dev,
+ struct dma_buf_attachment *dbuf_attach)
{
- struct vb2_dc_buf *buf = buf_priv;
+ struct vb2_dc_attachment *attach;
+ unsigned int i;
+ struct scatterlist *rd, *wr;
+ struct sg_table *sgt;
+ struct vb2_dc_buf *buf = dbuf->priv;
+ int ret;
- return &buf->dma_addr;
+ attach = kzalloc(sizeof *attach, GFP_KERNEL);
+ if (!attach)
+ return -ENOMEM;
+
+ sgt = &attach->sgt;
+	/* Copy the buf->sgt_base scatter list to the attachment, as we can't
+ * map the same scatter list to multiple attachments at the same time.
+ */
+ ret = sg_alloc_table(sgt, buf->sgt_base->orig_nents, GFP_KERNEL);
+ if (ret) {
+ kfree(attach);
+ return -ENOMEM;
+ }
+
+ rd = buf->sgt_base->sgl;
+ wr = sgt->sgl;
+ for (i = 0; i < sgt->orig_nents; ++i) {
+ sg_set_page(wr, sg_page(rd), rd->length, rd->offset);
+ rd = sg_next(rd);
+ wr = sg_next(wr);
+ }
+
+ attach->dir = DMA_NONE;
+ dbuf_attach->priv = attach;
+
+ return 0;
}
-static void *vb2_dma_contig_vaddr(void *buf_priv)
+static void vb2_dc_dmabuf_ops_detach(struct dma_buf *dbuf,
+ struct dma_buf_attachment *db_attach)
{
- struct vb2_dc_buf *buf = buf_priv;
- if (!buf)
- return NULL;
+ struct vb2_dc_attachment *attach = db_attach->priv;
+ struct sg_table *sgt;
+
+ if (!attach)
+ return;
+
+ sgt = &attach->sgt;
+
+ /* release the scatterlist cache */
+ if (attach->dir != DMA_NONE)
+ dma_unmap_sg(db_attach->dev, sgt->sgl, sgt->orig_nents,
+ attach->dir);
+ sg_free_table(sgt);
+ kfree(attach);
+ db_attach->priv = NULL;
+}
+
+static struct sg_table *vb2_dc_dmabuf_ops_map(
+ struct dma_buf_attachment *db_attach, enum dma_data_direction dir)
+{
+ struct vb2_dc_attachment *attach = db_attach->priv;
+ /* stealing dmabuf mutex to serialize map/unmap operations */
+ struct mutex *lock = &db_attach->dmabuf->lock;
+ struct sg_table *sgt;
+ int ret;
+
+ mutex_lock(lock);
+
+ sgt = &attach->sgt;
+ /* return previously mapped sg table */
+ if (attach->dir == dir) {
+ mutex_unlock(lock);
+ return sgt;
+ }
+
+ /* release any previous cache */
+ if (attach->dir != DMA_NONE) {
+ dma_unmap_sg(db_attach->dev, sgt->sgl, sgt->orig_nents,
+ attach->dir);
+ attach->dir = DMA_NONE;
+ }
+
+ /* mapping to the client with new direction */
+ ret = dma_map_sg(db_attach->dev, sgt->sgl, sgt->orig_nents, dir);
+ if (ret <= 0) {
+ printk(KERN_ERR "failed to map scatterlist\n");
+ mutex_unlock(lock);
+ return ERR_PTR(-EIO);
+ }
+
+ attach->dir = dir;
+
+ mutex_unlock(lock);
+
+ return sgt;
+}
+
+static void vb2_dc_dmabuf_ops_unmap(struct dma_buf_attachment *db_attach,
+ struct sg_table *sgt, enum dma_data_direction dir)
+{
+	/* nothing to be done here: the cached mapping is released either in
+	 * the detach callback or when the attachment is re-mapped with a
+	 * different direction */
+}
+
+static void vb2_dc_dmabuf_ops_release(struct dma_buf *dbuf)
+{
+ /* drop reference obtained in vb2_dc_get_dmabuf */
+ vb2_dc_put(dbuf->priv);
+}
+
+static void *vb2_dc_dmabuf_ops_kmap(struct dma_buf *dbuf, unsigned long pgnum)
+{
+ struct vb2_dc_buf *buf = dbuf->priv;
+
+ return buf->vaddr + pgnum * PAGE_SIZE;
+}
+
+static void *vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf)
+{
+ struct vb2_dc_buf *buf = dbuf->priv;
return buf->vaddr;
}
-static unsigned int vb2_dma_contig_num_users(void *buf_priv)
+/* a dummy function to support the mmap functionality for now; note that
+ * dma_buf_ops.mmap takes a vma and returns int */
+static int vb2_dc_dmabuf_ops_mmap(struct dma_buf *dbuf,
+	struct vm_area_struct *vma)
{
- struct vb2_dc_buf *buf = buf_priv;
+	/* mapping an exported buffer is not supported yet */
+	return -ENOTTY;
+}
- return atomic_read(&buf->refcount);
+static struct dma_buf_ops vb2_dc_dmabuf_ops = {
+ .attach = vb2_dc_dmabuf_ops_attach,
+ .detach = vb2_dc_dmabuf_ops_detach,
+ .map_dma_buf = vb2_dc_dmabuf_ops_map,
+ .unmap_dma_buf = vb2_dc_dmabuf_ops_unmap,
+ .kmap = vb2_dc_dmabuf_ops_kmap,
+ .kmap_atomic = vb2_dc_dmabuf_ops_kmap,
+ .vmap = vb2_dc_dmabuf_ops_vmap,
+ .mmap = vb2_dc_dmabuf_ops_mmap,
+ .release = vb2_dc_dmabuf_ops_release,
+};
+
+static struct sg_table *vb2_dc_get_base_sgt(struct vb2_dc_buf *buf)
+{
+ int ret;
+ struct sg_table *sgt;
+
+ sgt = kmalloc(sizeof *sgt, GFP_KERNEL);
+ if (!sgt) {
+ dev_err(buf->dev, "failed to alloc sg table\n");
+ return ERR_PTR(-ENOMEM);
+ }
+
+ ret = dma_get_sgtable(buf->dev, sgt, buf->vaddr, buf->dma_addr,
+ buf->size);
+ if (ret < 0) {
+ dev_err(buf->dev, "failed to get scatterlist from DMA API\n");
+ kfree(sgt);
+ return ERR_PTR(ret);
+ }
+
+ return sgt;
}
-static int vb2_dma_contig_mmap(void *buf_priv, struct vm_area_struct *vma)
+static struct dma_buf *vb2_dc_get_dmabuf(void *buf_priv)
{
struct vb2_dc_buf *buf = buf_priv;
+ struct dma_buf *dbuf;
+ struct sg_table *sgt = buf->sgt_base;
- if (!buf) {
- printk(KERN_ERR "No buffer to map\n");
- return -EINVAL;
+ if (!sgt)
+ sgt = vb2_dc_get_base_sgt(buf);
+ if (WARN_ON(IS_ERR(sgt)))
+ return NULL;
+
+ /* cache base sgt for future use */
+ buf->sgt_base = sgt;
+
+ dbuf = dma_buf_export(buf, &vb2_dc_dmabuf_ops, buf->size, 0);
+ if (IS_ERR(dbuf))
+ return NULL;
+
+ /* dmabuf keeps reference to vb2 buffer */
+ atomic_inc(&buf->refcount);
+
+ return dbuf;
+}
+
+/*********************************************/
+/* callbacks for USERPTR buffers */
+/*********************************************/
+
+static inline int vma_is_io(struct vm_area_struct *vma)
+{
+ return !!(vma->vm_flags & (VM_IO | VM_PFNMAP));
+}
+
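+/*
+ * vb2_dc_get_user_pages() - pin the pages backing a user range. For
+ * VM_IO/VM_PFNMAP vmas the pfns are resolved directly with follow_pfn()
+ * (such pages cannot be refcounted), otherwise get_user_pages() is used.
+ */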
+static int vb2_dc_get_user_pages(unsigned long start, struct page **pages,
+ int n_pages, struct vm_area_struct *vma, int write)
+{
+ if (vma_is_io(vma)) {
+ unsigned int i;
+
+ for (i = 0; i < n_pages; ++i, start += PAGE_SIZE) {
+ unsigned long pfn;
+ int ret = follow_pfn(vma, start, &pfn);
+
+ if (ret) {
+ printk(KERN_ERR "no page for address %lu\n",
+ start);
+ return ret;
+ }
+ pages[i] = pfn_to_page(pfn);
+ }
+ } else {
+ int n;
+
+ n = get_user_pages(current, current->mm, start & PAGE_MASK,
+ n_pages, write, 1, pages, NULL);
+ /* negative error means that no page was pinned */
+ n = max(n, 0);
+ if (n != n_pages) {
+ printk(KERN_ERR "got only %d of %d user pages\n",
+ n, n_pages);
+ while (n)
+ put_page(pages[--n]);
+ return -EFAULT;
+ }
}
- return vb2_mmap_pfn_range(vma, buf->dma_addr, buf->size,
- &vb2_common_vm_ops, &buf->handler);
+ return 0;
}
-static void *vb2_dma_contig_get_userptr(void *alloc_ctx, unsigned long vaddr,
- unsigned long size, int write)
+static void vb2_dc_put_dirty_page(struct page *page)
{
+ set_page_dirty_lock(page);
+ put_page(page);
+}
+
+static void vb2_dc_put_userptr(void *buf_priv)
+{
+ struct vb2_dc_buf *buf = buf_priv;
+ struct sg_table *sgt = buf->dma_sgt;
+
+ dma_unmap_sg(buf->dev, sgt->sgl, sgt->orig_nents, buf->dma_dir);
+ if (!vma_is_io(buf->vma))
+ vb2_dc_sgt_foreach_page(sgt, vb2_dc_put_dirty_page);
+
+ sg_free_table(sgt);
+ kfree(sgt);
+ vb2_put_vma(buf->vma);
+ kfree(buf);
+}
+
+static void *vb2_dc_get_userptr(void *alloc_ctx, unsigned long vaddr,
+ unsigned long size, int write)
+{
+ struct vb2_dc_conf *conf = alloc_ctx;
struct vb2_dc_buf *buf;
+ unsigned long start;
+ unsigned long end;
+ unsigned long offset;
+ struct page **pages;
+ int n_pages;
+ int ret = 0;
struct vm_area_struct *vma;
- dma_addr_t dma_addr = 0;
- int ret;
+ struct sg_table *sgt;
+ unsigned long contig_size;
buf = kzalloc(sizeof *buf, GFP_KERNEL);
if (!buf)
return ERR_PTR(-ENOMEM);
- ret = vb2_get_contig_userptr(vaddr, size, &vma, &dma_addr);
+ buf->dev = conf->dev;
+ buf->dma_dir = write ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
+
+ start = vaddr & PAGE_MASK;
+ offset = vaddr & ~PAGE_MASK;
+ end = PAGE_ALIGN(vaddr + size);
+ n_pages = (end - start) >> PAGE_SHIFT;
+
+ pages = kmalloc(n_pages * sizeof pages[0], GFP_KERNEL);
+ if (!pages) {
+ ret = -ENOMEM;
+ printk(KERN_ERR "failed to allocate pages table\n");
+ goto fail_buf;
+ }
+
+ /* current->mm->mmap_sem is taken by videobuf2 core */
+ vma = find_vma(current->mm, vaddr);
+ if (!vma) {
+ printk(KERN_ERR "no vma for address %lu\n", vaddr);
+ ret = -EFAULT;
+ goto fail_pages;
+ }
+
+ if (vma->vm_end < vaddr + size) {
+ printk(KERN_ERR "vma at %lu is too small for %lu bytes\n",
+ vaddr, size);
+ ret = -EFAULT;
+ goto fail_pages;
+ }
+
+ buf->vma = vb2_get_vma(vma);
+ if (!buf->vma) {
+ printk(KERN_ERR "failed to copy vma\n");
+ ret = -ENOMEM;
+ goto fail_pages;
+ }
+
+ /* extract page list from userspace mapping */
+ ret = vb2_dc_get_user_pages(start, pages, n_pages, vma, write);
if (ret) {
- printk(KERN_ERR "Failed acquiring VMA for vaddr 0x%08lx\n",
- vaddr);
- kfree(buf);
- return ERR_PTR(ret);
+ printk(KERN_ERR "failed to get user pages\n");
+ goto fail_vma;
+ }
+
+ sgt = kzalloc(sizeof *sgt, GFP_KERNEL);
+ if (!sgt) {
+ printk(KERN_ERR "failed to allocate sg table\n");
+ ret = -ENOMEM;
+ goto fail_get_user_pages;
+ }
+
+ ret = sg_alloc_table_from_pages(sgt, pages, n_pages,
+ offset, size, GFP_KERNEL);
+ if (ret) {
+ printk(KERN_ERR "failed to initialize sg table\n");
+ goto fail_sgt;
+ }
+
+ /* pages are no longer needed */
+ kfree(pages);
+ pages = NULL;
+
+ sgt->nents = dma_map_sg(buf->dev, sgt->sgl, sgt->orig_nents,
+ buf->dma_dir);
+ if (sgt->nents <= 0) {
+ printk(KERN_ERR "failed to map scatterlist\n");
+ ret = -EIO;
+ goto fail_sgt_init;
+ }
+
+ contig_size = vb2_dc_get_contiguous_size(sgt);
+ if (contig_size < size) {
+ printk(KERN_ERR "contiguous mapping is too small %lu/%lu\n",
+ contig_size, size);
+ ret = -EFAULT;
+ goto fail_map_sg;
}
+ buf->dma_addr = sg_dma_address(sgt->sgl);
buf->size = size;
- buf->dma_addr = dma_addr;
- buf->vma = vma;
+ buf->dma_sgt = sgt;
return buf;
+
+fail_map_sg:
+ dma_unmap_sg(buf->dev, sgt->sgl, sgt->orig_nents, buf->dma_dir);
+
+fail_sgt_init:
+ if (!vma_is_io(buf->vma))
+ vb2_dc_sgt_foreach_page(sgt, put_page);
+ sg_free_table(sgt);
+
+fail_sgt:
+ kfree(sgt);
+
+fail_get_user_pages:
+ if (pages && !vma_is_io(buf->vma))
+ while (n_pages)
+ put_page(pages[--n_pages]);
+
+fail_vma:
+ vb2_put_vma(buf->vma);
+
+fail_pages:
+ kfree(pages); /* kfree is NULL-proof */
+
+fail_buf:
+ kfree(buf);
+
+ return ERR_PTR(ret);
}
-static void vb2_dma_contig_put_userptr(void *mem_priv)
+/*********************************************/
+/* callbacks for DMABUF buffers */
+/*********************************************/
+
+static int vb2_dc_map_dmabuf(void *mem_priv)
{
struct vb2_dc_buf *buf = mem_priv;
+ struct sg_table *sgt;
+ unsigned long contig_size;
- if (!buf)
+ if (WARN_ON(!buf->db_attach)) {
+ printk(KERN_ERR "trying to pin a non attached buffer\n");
+ return -EINVAL;
+ }
+
+ if (WARN_ON(buf->dma_sgt)) {
+ printk(KERN_ERR "dmabuf buffer is already pinned\n");
+ return 0;
+ }
+
+ /* get the associated scatterlist for this buffer */
+ sgt = dma_buf_map_attachment(buf->db_attach, buf->dma_dir);
+ if (IS_ERR_OR_NULL(sgt)) {
+ printk(KERN_ERR "Error getting dmabuf scatterlist\n");
+ return -EINVAL;
+ }
+
+ /* checking if dmabuf is big enough to store contiguous chunk */
+ contig_size = vb2_dc_get_contiguous_size(sgt);
+ if (contig_size < buf->size) {
+ printk(KERN_ERR "contiguous chunk is too small %lu/%lu b\n",
+ contig_size, buf->size);
+ dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir);
+ return -EFAULT;
+ }
+
+ buf->dma_addr = sg_dma_address(sgt->sgl);
+ buf->dma_sgt = sgt;
+
+ return 0;
+}
+
+static void vb2_dc_unmap_dmabuf(void *mem_priv)
+{
+ struct vb2_dc_buf *buf = mem_priv;
+ struct sg_table *sgt = buf->dma_sgt;
+
+ if (WARN_ON(!buf->db_attach)) {
+ printk(KERN_ERR "trying to unpin a not attached buffer\n");
return;
+ }
- vb2_put_vma(buf->vma);
+ if (WARN_ON(!sgt)) {
+ printk(KERN_ERR "dmabuf buffer is already unpinned\n");
+ return;
+ }
+
+ dma_buf_unmap_attachment(buf->db_attach, sgt, buf->dma_dir);
+
+ buf->dma_addr = 0;
+ buf->dma_sgt = NULL;
+}
+
+static void vb2_dc_detach_dmabuf(void *mem_priv)
+{
+ struct vb2_dc_buf *buf = mem_priv;
+
+	/* if vb2 works correctly you should never detach a mapped buffer */
+ if (WARN_ON(buf->dma_addr))
+ vb2_dc_unmap_dmabuf(buf);
+
+ /* detach this attachment */
+ dma_buf_detach(buf->db_attach->dmabuf, buf->db_attach);
kfree(buf);
}
+static void *vb2_dc_attach_dmabuf(void *alloc_ctx, struct dma_buf *dbuf,
+ unsigned long size, int write)
+{
+ struct vb2_dc_conf *conf = alloc_ctx;
+ struct vb2_dc_buf *buf;
+ struct dma_buf_attachment *dba;
+
+ if (dbuf->size < size)
+ return ERR_PTR(-EFAULT);
+
+ buf = kzalloc(sizeof *buf, GFP_KERNEL);
+ if (!buf)
+ return ERR_PTR(-ENOMEM);
+
+ buf->dev = conf->dev;
+ /* create attachment for the dmabuf with the user device */
+ dba = dma_buf_attach(dbuf, buf->dev);
+ if (IS_ERR(dba)) {
+ printk(KERN_ERR "failed to attach dmabuf\n");
+ kfree(buf);
+ return dba;
+ }
+
+ buf->dma_dir = write ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
+ buf->size = size;
+ buf->db_attach = dba;
+
+ return buf;
+}
+
+/*********************************************/
+/* DMA CONTIG exported functions */
+/*********************************************/
+
const struct vb2_mem_ops vb2_dma_contig_memops = {
- .alloc = vb2_dma_contig_alloc,
- .put = vb2_dma_contig_put,
- .cookie = vb2_dma_contig_cookie,
- .vaddr = vb2_dma_contig_vaddr,
- .mmap = vb2_dma_contig_mmap,
- .get_userptr = vb2_dma_contig_get_userptr,
- .put_userptr = vb2_dma_contig_put_userptr,
- .num_users = vb2_dma_contig_num_users,
+ .alloc = vb2_dc_alloc,
+ .put = vb2_dc_put,
+ .get_dmabuf = vb2_dc_get_dmabuf,
+ .cookie = vb2_dc_cookie,
+ .vaddr = vb2_dc_vaddr,
+ .mmap = vb2_dc_mmap,
+ .get_userptr = vb2_dc_get_userptr,
+ .put_userptr = vb2_dc_put_userptr,
+ .prepare = vb2_dc_prepare,
+ .finish = vb2_dc_finish,
+ .map_dmabuf = vb2_dc_map_dmabuf,
+ .unmap_dmabuf = vb2_dc_unmap_dmabuf,
+ .attach_dmabuf = vb2_dc_attach_dmabuf,
+ .detach_dmabuf = vb2_dc_detach_dmabuf,
+ .num_users = vb2_dc_num_users,
};
EXPORT_SYMBOL_GPL(vb2_dma_contig_memops);
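
As an aside, a minimal sketch of the order in which the vb2 core is
expected to drive the new DMABUF memops for an imported buffer
('alloc_ctx', 'fd', 'priv' and 'write' are placeholders, not part of
the patch):

    struct dma_buf *dbuf = dma_buf_get(fd);   /* take a reference */
    priv = vb2_dma_contig_memops.attach_dmabuf(alloc_ctx, dbuf, size, write);
    vb2_dma_contig_memops.map_dmabuf(priv);   /* pin, fetch the sg table */
    /* the device may now DMA to/from the buffer's dma_addr */
    vb2_dma_contig_memops.unmap_dmabuf(priv); /* unpin on dequeue */
    vb2_dma_contig_memops.detach_dmabuf(priv);/* drop the attachment */
    dma_buf_put(dbuf);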
--- /dev/null
+/*
+ * videobuf2-fb.c - FrameBuffer API emulator on top of Videobuf2 framework
+ *
+ * Copyright (C) 2011 Samsung Electronics
+ *
+ * Author: Marek Szyprowski <m.szyprowski@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation.
+ */
+
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/poll.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/fb.h>
+
+#include <linux/videodev2.h>
+#include <media/v4l2-dev.h>
+#include <media/v4l2-ioctl.h>
+#include <media/videobuf2-core.h>
+#include <media/videobuf2-fb.h>
+
+static int debug = 1;
+module_param(debug, int, 0644);
+
+#define dprintk(level, fmt, arg...) \
+ do { \
+ if (debug >= level) \
+ printk(KERN_DEBUG "vb2: " fmt, ## arg); \
+ } while (0)
+
+struct vb2_fb_data {
+ struct video_device *vfd;
+ struct vb2_queue *q;
+ struct device *dev;
+ struct v4l2_requestbuffers req;
+ struct v4l2_buffer b;
+ struct v4l2_plane p;
+ void *vaddr;
+ unsigned int size;
+ int refcount;
+ int blank;
+ int streaming;
+
+ struct file fake_file;
+ struct dentry fake_dentry;
+ struct inode fake_inode;
+};
+
+static int vb2_fb_stop(struct fb_info *info);
+
+struct fmt_desc {
+ __u32 fourcc;
+ __u32 bits_per_pixel;
+ struct fb_bitfield red;
+ struct fb_bitfield green;
+ struct fb_bitfield blue;
+ struct fb_bitfield transp;
+};
+
+static struct fmt_desc fmt_conv_table[] = {
+ {
+ .fourcc = V4L2_PIX_FMT_RGB565,
+ .bits_per_pixel = 16,
+ .red = { .offset = 11, .length = 5, },
+ .green = { .offset = 5, .length = 6, },
+ .blue = { .offset = 0, .length = 5, },
+ }, {
+ .fourcc = V4L2_PIX_FMT_RGB555,
+ .bits_per_pixel = 16,
+ .red = { .offset = 11, .length = 5, },
+ .green = { .offset = 5, .length = 5, },
+ .blue = { .offset = 0, .length = 5, },
+ }, {
+ .fourcc = V4L2_PIX_FMT_RGB444,
+ .bits_per_pixel = 16,
+ .red = { .offset = 8, .length = 4, },
+ .green = { .offset = 4, .length = 4, },
+ .blue = { .offset = 0, .length = 4, },
+ .transp = { .offset = 12, .length = 4, },
+ }, {
+ .fourcc = V4L2_PIX_FMT_BGR32,
+ .bits_per_pixel = 32,
+ .red = { .offset = 16, .length = 8, },
+ .green = { .offset = 8, .length = 8, },
+ .blue = { .offset = 0, .length = 8, },
+ .transp = { .offset = 24, .length = 8, },
+ },
+ /* TODO: add more format descriptors */
+};
+
+/**
+ * vb2_drv_lock() - a shortcut to call driver specific lock()
+ * @q: videobuf2 queue
+ */
+static inline void vb2_drv_lock(struct vb2_queue *q)
+{
+ q->ops->wait_finish(q);
+}
+
+/**
+ * vb2_drv_unlock() - a shortcut to call driver specific unlock()
+ * @q: videobuf2 queue
+ */
+static inline void vb2_drv_unlock(struct vb2_queue *q)
+{
+ q->ops->wait_prepare(q);
+}
+
+static int vb2_fb_ioctl(struct fb_info *info, unsigned int cmd,
+ unsigned long arg)
+{
+ /* no framebuffer-specific ioctls are supported by the emulator yet */
+ return 0;
+}
+
+/**
+ * vb2_fb_activate() - activate framebuffer emulator
+ * @info: framebuffer vb2 emulator data
+ * This function activates the framebuffer emulator. The pixel format
+ * is acquired from the video node, memory is allocated and the
+ * framebuffer structures are filled with valid data.
+ */
+static int vb2_fb_activate(struct fb_info *info)
+{
+ struct vb2_fb_data *data = info->par;
+ struct vb2_queue *q = data->q;
+ struct fb_var_screeninfo *var;
+ struct v4l2_format fmt;
+ struct fmt_desc *conv = NULL;
+ int width, height, fourcc, bpl, size;
+ int i, ret = 0;
+ int (*g_fmt)(struct file *file, void *fh, struct v4l2_format *f);
+
+ /*
+ * Check that the streaming API has not already been activated.
+ */
+ if (q->streaming || q->num_buffers > 0)
+ return -EBUSY;
+
+ dprintk(3, "setting up framebuffer\n");
+
+ /*
+ * Open video node.
+ */
+ ret = data->vfd->fops->open(&data->fake_file);
+ if (ret)
+ return ret;
+
+ /*
+ * Get format from the video node.
+ */
+ memset(&fmt, 0, sizeof(fmt));
+ fmt.type = q->type;
+ if (data->vfd->ioctl_ops->vidioc_g_fmt_vid_out) {
+ g_fmt = data->vfd->ioctl_ops->vidioc_g_fmt_vid_out;
+ ret = g_fmt(&data->fake_file, data->fake_file.private_data, &fmt);
+ if (ret)
+ goto err;
+ width = fmt.fmt.pix.width;
+ height = fmt.fmt.pix.height;
+ fourcc = fmt.fmt.pix.pixelformat;
+ bpl = fmt.fmt.pix.bytesperline;
+ size = fmt.fmt.pix.sizeimage;
+ } else if (data->vfd->ioctl_ops->vidioc_g_fmt_vid_out_mplane) {
+ g_fmt = data->vfd->ioctl_ops->vidioc_g_fmt_vid_out_mplane;
+ ret = g_fmt(&data->fake_file, data->fake_file.private_data, &fmt);
+ if (ret)
+ goto err;
+ width = fmt.fmt.pix_mp.width;
+ height = fmt.fmt.pix_mp.height;
+ fourcc = fmt.fmt.pix_mp.pixelformat;
+ bpl = fmt.fmt.pix_mp.plane_fmt[0].bytesperline;
+ size = fmt.fmt.pix_mp.plane_fmt[0].sizeimage;
+ } else {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ dprintk(3, "fb emu: width %d height %d fourcc %08x size %d bpl %d\n",
+ width, height, fourcc, size, bpl);
+
+ /*
+ * Find format mapping with fourcc returned by g_fmt().
+ */
+ for (i = 0; i < ARRAY_SIZE(fmt_conv_table); i++) {
+ if (fmt_conv_table[i].fourcc == fourcc) {
+ conv = &fmt_conv_table[i];
+ break;
+ }
+ }
+
+ if (conv == NULL) {
+ ret = -EBUSY;
+ goto err;
+ }
+
+ /*
+ * Request buffers and use MMAP type to force driver
+ * to allocate buffers by itself.
+ */
+ data->req.count = 1;
+ data->req.memory = V4L2_MEMORY_MMAP;
+ data->req.type = q->type;
+ ret = vb2_reqbufs(q, &data->req);
+ if (ret)
+ goto err;
+
+ /*
+ * Check if plane_count is correct,
+ * multiplane buffers are not supported.
+ */
+ if (q->bufs[0]->num_planes != 1) {
+ data->req.count = 0;
+ ret = -EBUSY;
+ goto err;
+ }
+
+ /*
+ * Get kernel address of the buffer.
+ */
+ data->vaddr = vb2_plane_vaddr(q->bufs[0], 0);
+ if (data->vaddr == NULL) {
+ ret = -EINVAL;
+ goto err;
+ }
+ data->size = size = vb2_plane_size(q->bufs[0], 0);
+
+ /*
+ * Clear the buffer
+ */
+ memset(data->vaddr, 0, size);
+
+ /*
+ * Setup framebuffer parameters
+ */
+ info->screen_base = data->vaddr;
+ info->screen_size = size;
+ info->fix.line_length = bpl;
+ info->fix.smem_len = info->fix.mmio_len = size;
+
+ var = &info->var;
+ var->xres = var->xres_virtual = var->width = width;
+ var->yres = var->yres_virtual = var->height = height;
+ var->bits_per_pixel = conv->bits_per_pixel;
+ var->red = conv->red;
+ var->green = conv->green;
+ var->blue = conv->blue;
+ var->transp = conv->transp;
+
+ return 0;
+
+err:
+ data->vfd->fops->release(&data->fake_file);
+ return ret;
+}
+
+/**
+ * vb2_fb_deactivate() - deactivate framebuffer emulator
+ * @info: framebuffer vb2 emulator data
+ * Stop displaying video data and close framebuffer emulator.
+ */
+static int vb2_fb_deactivate(struct fb_info *info)
+{
+ struct vb2_fb_data *data = info->par;
+
+ info->screen_base = NULL;
+ info->screen_size = 0;
+ data->blank = 1;
+ data->streaming = 0;
+
+ vb2_fb_stop(info);
+ return data->vfd->fops->release(&data->fake_file);
+}
+
+/**
+ * vb2_fb_start() - start displaying the video buffer
+ * @info: framebuffer vb2 emulator data
+ * This function queues video buffer to the driver and starts streaming.
+ */
+static int vb2_fb_start(struct fb_info *info)
+{
+ struct vb2_fb_data *data = info->par;
+ struct v4l2_buffer *b = &data->b;
+ struct v4l2_plane *p = &data->p;
+ struct vb2_queue *q = data->q;
+ int ret;
+
+ if (data->streaming)
+ return 0;
+
+ /*
+ * Prepare the buffer and queue it.
+ */
+ memset(b, 0, sizeof(*b));
+ b->type = q->type;
+ b->memory = q->memory;
+ b->index = 0;
+
+ if (b->type == V4L2_BUF_TYPE_VIDEO_OUTPUT) {
+ b->bytesused = data->size;
+ b->length = data->size;
+ } else if (b->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) {
+ memset(p, 0, sizeof(*p));
+ b->m.planes = p;
+ b->length = 1;
+ p->bytesused = data->size;
+ p->length = data->size;
+ }
+ ret = vb2_qbuf(q, b);
+ if (ret)
+ return ret;
+
+ /*
+ * Start streaming.
+ */
+ ret = vb2_streamon(q, q->type);
+ if (ret == 0) {
+ data->streaming = 1;
+ dprintk(3, "fb emu: enabled streaming\n");
+ }
+ return ret;
+}
+
+/**
+ * vb2_fb_stop() - stop displaying the video buffer
+ * @info: framebuffer vb2 emulator data
+ * This function stops streaming on the video driver.
+ */
+static int vb2_fb_stop(struct fb_info *info)
+{
+ struct vb2_fb_data *data = info->par;
+ struct vb2_queue *q = data->q;
+ int ret = 0;
+
+ if (data->streaming) {
+ ret = vb2_streamoff(q, q->type);
+ data->streaming = 0;
+ dprintk(3, "fb emu: disabled streaming\n");
+ }
+
+ return ret;
+}
+
+/**
+ * vb2_fb_open() - open method for emulated framebuffer
+ * @info: framebuffer vb2 emulator data
+ * @user: client type (0 means kernel, 1 means userspace)
+ */
+static int vb2_fb_open(struct fb_info *info, int user)
+{
+ struct vb2_fb_data *data = info->par;
+ int ret = 0;
+ dprintk(3, "fb emu: open()\n");
+
+ /*
+ * Reject open() call from fb console.
+ */
+ if (user == 0)
+ return -ENODEV;
+
+ vb2_drv_lock(data->q);
+
+ /*
+ * Activate emulation on the first open.
+ */
+ if (data->refcount == 0)
+ ret = vb2_fb_activate(info);
+
+ if (ret == 0)
+ data->refcount++;
+
+ vb2_drv_unlock(data->q);
+
+ return ret;
+}
+
+/**
+ * vb2_fb_release() - release method for emulated framebuffer
+ * @info: framebuffer vb2 emulator data
+ * @user: client type (0 means kernel, 1 means userspace)
+ */
+static int vb2_fb_release(struct fb_info *info, int user)
+{
+ struct vb2_fb_data *data = info->par;
+ int ret = 0;
+
+ dprintk(3, "fb emu: release()\n");
+
+ vb2_drv_lock(data->q);
+
+ if (--data->refcount == 0)
+ ret = vb2_fb_deactivate(info);
+
+ vb2_drv_unlock(data->q);
+
+ return ret;
+}
+
+/**
+ * vb2_fb_mmap() - mmap method for emulated framebuffer
+ * @info: framebuffer vb2 emulator data
+ * @vma: memory area to map
+ */
+static int vb2_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+{
+ struct vb2_fb_data *data = info->par;
+ int ret = 0;
+
+ dprintk(3, "fb emu: mmap offset %ld\n", vma->vm_pgoff);
+
+ /*
+ * Add flags required by v4l2/vb2
+ */
+ vma->vm_flags |= VM_SHARED;
+
+ /*
+ * Only the most common case (mapping the whole framebuffer) is
+ * supported for now.
+ */
+ if (vma->vm_pgoff != 0 || (vma->vm_end - vma->vm_start) < data->size)
+ return -EINVAL;
+
+ vb2_drv_lock(data->q);
+ ret = vb2_mmap(data->q, vma);
+ vb2_drv_unlock(data->q);
+
+ return ret;
+}
+
+/**
+ * vb2_fb_blank() - blank method for emulated framebuffer
+ * @blank_mode: requested blank method
+ * @info: framebuffer vb2 emulator data
+ */
+static int vb2_fb_blank(int blank_mode, struct fb_info *info)
+{
+ struct vb2_fb_data *data = info->par;
+ int ret = -EBUSY;
+
+ dprintk(3, "fb emu: blank mode %d, blank %d, streaming %d\n",
+ blank_mode, data->blank, data->streaming);
+
+ /*
+ * If no blank mode change then return immediately
+ */
+ if ((data->blank && blank_mode != FB_BLANK_UNBLANK) ||
+ (!data->blank && blank_mode == FB_BLANK_UNBLANK))
+ return 0;
+
+ /*
+ * Currently blank works only if device has been opened first.
+ */
+ if (!data->refcount)
+ return -EBUSY;
+
+ vb2_drv_lock(data->q);
+
+ /*
+ * Start emulation if user requested mode == FB_BLANK_UNBLANK.
+ */
+ if (blank_mode == FB_BLANK_UNBLANK && data->blank) {
+ ret = vb2_fb_start(info);
+ if (ret == 0)
+ data->blank = 0;
+ }
+
+ /*
+ * Stop emulation if user requested mode != FB_BLANK_UNBLANK.
+ */
+ if (blank_mode != FB_BLANK_UNBLANK && !data->blank) {
+ ret = vb2_fb_stop(info);
+ if (ret == 0)
+ data->blank = 1;
+ }
+
+ vb2_drv_unlock(data->q);
+
+ return ret;
+}
+
+static struct fb_ops vb2_fb_ops = {
+ .owner = THIS_MODULE,
+ .fb_open = vb2_fb_open,
+ .fb_release = vb2_fb_release,
+ .fb_mmap = vb2_fb_mmap,
+ .fb_blank = vb2_fb_blank,
+ .fb_fillrect = cfb_fillrect,
+ .fb_copyarea = cfb_copyarea,
+ .fb_imageblit = cfb_imageblit,
+ .fb_ioctl = vb2_fb_ioctl,
+};
+
+/**
+ * vb2_fb_register() - register framebuffer emulation
+ * @q: videobuf2 queue
+ * @vfd: video node
+ * This function registers framebuffer emulation for specified
+ * videobuf2 queue and video node. It returns a pointer to the registered
+ * framebuffer device.
+ */
+void *vb2_fb_register(struct vb2_queue *q, struct video_device *vfd)
+{
+ struct vb2_fb_data *data;
+ struct fb_info *info;
+ int ret;
+
+ BUG_ON(q->type != V4L2_BUF_TYPE_VIDEO_OUTPUT &&
+ q->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE);
+ BUG_ON(!q->mem_ops->vaddr);
+ BUG_ON(!q->ops->wait_prepare || !q->ops->wait_finish);
+ BUG_ON(!vfd->ioctl_ops || !vfd->fops);
+
+ if (!try_module_get(vfd->fops->owner))
+ return ERR_PTR(-ENODEV);
+
+ info = framebuffer_alloc(sizeof(struct vb2_fb_data), &vfd->dev);
+ if (!info) {
+ module_put(vfd->fops->owner);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ data = info->par;
+
+ info->fix.type = FB_TYPE_PACKED_PIXELS;
+ info->fix.accel = FB_ACCEL_NONE;
+ info->fix.visual = FB_VISUAL_TRUECOLOR;
+ info->var.activate = FB_ACTIVATE_NOW;
+ info->var.vmode = FB_VMODE_NONINTERLACED;
+ info->fbops = &vb2_fb_ops;
+ info->flags = FBINFO_FLAG_DEFAULT;
+ info->screen_base = NULL;
+
+ ret = register_framebuffer(info);
+ if (ret) {
+ framebuffer_release(info);
+ module_put(vfd->fops->owner);
+ return ERR_PTR(ret);
+ }
+
+ printk(KERN_INFO "fb%d: registered frame buffer emulation for /dev/%s\n",
+ info->node, dev_name(&vfd->dev));
+
+ data->blank = 1;
+ data->vfd = vfd;
+ data->q = q;
+ data->fake_file.f_path.dentry = &data->fake_dentry;
+ data->fake_dentry.d_inode = &data->fake_inode;
+ data->fake_inode.i_rdev = vfd->cdev->dev;
+
+ return info;
+}
+EXPORT_SYMBOL_GPL(vb2_fb_register);
+
+/**
+ * vb2_fb_unregister() - unregister framebuffer emulation
+ * @fb_emu: emulated framebuffer device
+ */
+int vb2_fb_unregister(void *fb_emu)
+{
+ struct fb_info *info = fb_emu;
+ struct vb2_fb_data *data = info->par;
+ struct module *owner = data->vfd->fops->owner;
+
+ unregister_framebuffer(info);
+ module_put(owner);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(vb2_fb_unregister);
+
+MODULE_DESCRIPTION("FrameBuffer emulator for Videobuf2 and Video for Linux 2");
+MODULE_AUTHOR("Marek Szyprowski");
+MODULE_LICENSE("GPL");
}
EXPORT_SYMBOL_GPL(vb2_get_contig_userptr);
-/**
- * vb2_mmap_pfn_range() - map physical pages to userspace
- * @vma: virtual memory region for the mapping
- * @paddr: starting physical address of the memory to be mapped
- * @size: size of the memory to be mapped
- * @vm_ops: vm operations to be assigned to the created area
- * @priv: private data to be associated with the area
- *
- * Returns 0 on success.
- */
-int vb2_mmap_pfn_range(struct vm_area_struct *vma, unsigned long paddr,
- unsigned long size,
- const struct vm_operations_struct *vm_ops,
- void *priv)
-{
- int ret;
-
- size = min_t(unsigned long, vma->vm_end - vma->vm_start, size);
-
- vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
- ret = remap_pfn_range(vma, vma->vm_start, paddr >> PAGE_SHIFT,
- size, vma->vm_page_prot);
- if (ret) {
- printk(KERN_ERR "Remapping memory failed, error: %d\n", ret);
- return ret;
- }
-
- vma->vm_flags |= VM_DONTEXPAND | VM_RESERVED;
- vma->vm_private_data = priv;
- vma->vm_ops = vm_ops;
-
- vma->vm_ops->open(vma);
-
- pr_debug("%s: mapped paddr 0x%08lx at 0x%08lx, size %ld\n",
- __func__, paddr, vma->vm_start, size);
-
- return 0;
-}
-EXPORT_SYMBOL_GPL(vb2_mmap_pfn_range);
-
/**
* vb2_common_vm_open() - increase refcount of the vma
* @vma: virtual memory region for the mapping
unsigned int n_pages;
atomic_t refcount;
struct vb2_vmarea_handler handler;
+ struct dma_buf *dbuf;
};
static void vb2_vmalloc_put(void *buf_priv);
return 0;
}
+/*********************************************/
+/* callbacks for DMABUF buffers */
+/*********************************************/
+
+static int vb2_vmalloc_map_dmabuf(void *mem_priv)
+{
+ struct vb2_vmalloc_buf *buf = mem_priv;
+
+ buf->vaddr = dma_buf_vmap(buf->dbuf);
+
+ return buf->vaddr ? 0 : -EFAULT;
+}
+
+static void vb2_vmalloc_unmap_dmabuf(void *mem_priv)
+{
+ struct vb2_vmalloc_buf *buf = mem_priv;
+
+ dma_buf_vunmap(buf->dbuf, buf->vaddr);
+ buf->vaddr = NULL;
+}
+
+static void vb2_vmalloc_detach_dmabuf(void *mem_priv)
+{
+ struct vb2_vmalloc_buf *buf = mem_priv;
+
+ if (buf->vaddr)
+ dma_buf_vunmap(buf->dbuf, buf->vaddr);
+
+ kfree(buf);
+}
+
+static void *vb2_vmalloc_attach_dmabuf(void *alloc_ctx, struct dma_buf *dbuf,
+ unsigned long size, int write)
+{
+ struct vb2_vmalloc_buf *buf;
+
+ if (dbuf->size < size)
+ return ERR_PTR(-EFAULT);
+
+ buf = kzalloc(sizeof *buf, GFP_KERNEL);
+ if (!buf)
+ return ERR_PTR(-ENOMEM);
+
+ buf->dbuf = dbuf;
+ buf->write = write;
+ buf->size = size;
+
+ return buf;
+}
+
+
const struct vb2_mem_ops vb2_vmalloc_memops = {
.alloc = vb2_vmalloc_alloc,
.put = vb2_vmalloc_put,
.get_userptr = vb2_vmalloc_get_userptr,
.put_userptr = vb2_vmalloc_put_userptr,
+ .map_dmabuf = vb2_vmalloc_map_dmabuf,
+ .unmap_dmabuf = vb2_vmalloc_unmap_dmabuf,
+ .attach_dmabuf = vb2_vmalloc_attach_dmabuf,
+ .detach_dmabuf = vb2_vmalloc_detach_dmabuf,
.vaddr = vb2_vmalloc_vaddr,
.mmap = vb2_vmalloc_mmap,
.num_users = vb2_vmalloc_num_users,
q = &dev->vb_vidq;
memset(q, 0, sizeof(dev->vb_vidq));
q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
- q->io_modes = VB2_MMAP | VB2_USERPTR | VB2_READ;
+ q->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF | VB2_READ;
q->drv_priv = dev;
q->buf_struct_size = sizeof(struct vivi_buffer);
q->ops = &vivi_video_qops;
select individual components like voltage regulators, RTC and
battery-charger under the corresponding menus.
+config MFD_CHROMEOS_EC
+ tristate "ChromeOS Embedded Controller"
+ select MFD_CORE
+ depends on I2C
+ help
+ If you say yes here you get support for the ChromeOS Embedded
+ Controller, which provides keyboard, battery and power services
+ over the I2C bus.
+
config MFD_SM501
tristate "Support for Silicon Motion SM501"
---help---
additional drivers must be enabled in order to use the functionality
of the device.
+config MFD_MAX77686
+ bool "Maxim Semiconductor MAX77686 PMIC Support"
+ depends on I2C=y && GENERIC_HARDIRQS
+ select MFD_CORE
+ help
+ Say yes here to add support for the Maxim Semiconductor MAX77686.
+ This is a Power Management IC with RTC on chip.
+ This driver provides common support for accessing the device;
+ additional drivers must be enabled in order to use the functionality
+ of the device.
+
+config DEBUG_MAX77686
+ bool "MAX77686 PMIC debugging"
+ depends on MFD_MAX77686
+ help
+ Say yes if you need to enable debug messages in the
+ MFD_MAX77686 driver.
+ To enable or disable particular types of debug messages,
+ set max77686_debug_mask accordingly.
+
config MFD_S5M_CORE
bool "SAMSUNG S5M Series Support"
depends on I2C=y && GENERIC_HARDIRQS
obj-$(CONFIG_MFD_88PM860X) += 88pm860x.o
obj-$(CONFIG_MFD_SM501) += sm501.o
obj-$(CONFIG_MFD_ASIC3) += asic3.o tmio_core.o
+obj-$(CONFIG_MFD_CHROMEOS_EC) += chromeos_ec.o
obj-$(CONFIG_HTC_EGPIO) += htc-egpio.o
obj-$(CONFIG_HTC_PASIC3) += htc-pasic3.o
obj-$(CONFIG_MFD_MAX8925) += max8925.o
obj-$(CONFIG_MFD_MAX8997) += max8997.o max8997-irq.o
obj-$(CONFIG_MFD_MAX8998) += max8998.o max8998-irq.o
+obj-$(CONFIG_MFD_MAX77686) += max77686.o max77686-irq.o
pcf50633-objs := pcf50633-core.o pcf50633-irq.o
obj-$(CONFIG_MFD_PCF50633) += pcf50633.o
--- /dev/null
+/*
+ * Copyright (C) 2012 Google, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ *
+ * The ChromeOS EC multi function device is used to mux all the requests
+ * to the EC device for its multiple features : keyboard controller,
+ * battery charging and regulator control, firmware update.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/i2c.h>
+#include <linux/interrupt.h>
+#include <linux/mfd/core.h>
+#include <linux/mfd/chromeos_ec.h>
+#include <linux/mfd/chromeos_ec_commands.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+
+#define COMMAND_MAX_TRIES 3
+
+static int cros_ec_command_xfer_noretry(struct chromeos_ec_device *ec_dev,
+ struct chromeos_ec_msg *msg)
+{
+ int ret;
+ int i;
+ int packet_len;
+ u8 res_code;
+ u8 *out_buf = NULL;
+ u8 *in_buf = NULL;
+ u8 sum;
+ struct i2c_msg i2c_msg[2];
+
+ i2c_msg[0].addr = ec_dev->client->addr;
+ i2c_msg[0].flags = 0;
+ i2c_msg[1].addr = ec_dev->client->addr;
+ i2c_msg[1].flags = I2C_M_RD;
+
+ if (msg->in_len) {
+ /* allocate larger packet
+ * (one byte for checksum, one for result code)
+ */
+ packet_len = msg->in_len + 2;
+ in_buf = kzalloc(packet_len, GFP_KERNEL);
+ if (!in_buf) {
+ ret = -ENOMEM;
+ goto done;
+ }
+ i2c_msg[1].len = packet_len;
+ i2c_msg[1].buf = (char *)in_buf;
+ } else {
+ i2c_msg[1].len = 1;
+ i2c_msg[1].buf = (char *)&res_code;
+ }
+
+ if (msg->out_len) {
+ /* allocate larger packet
+ * (one byte for checksum, one for command code)
+ */
+ packet_len = msg->out_len + 2;
+ out_buf = kzalloc(packet_len, GFP_KERNEL);
+ if (!out_buf) {
+ ret = -ENOMEM;
+ goto done;
+ }
+ i2c_msg[0].len = packet_len;
+ i2c_msg[0].buf = (char *)out_buf;
+ out_buf[0] = msg->cmd;
+
+ /* copy message payload and compute checksum */
+ for (i = 0, sum = 0; i < msg->out_len; i++) {
+ out_buf[i + 1] = msg->out_buf[i];
+ sum += out_buf[i + 1];
+ }
+ out_buf[msg->out_len + 1] = sum;
+ } else {
+ i2c_msg[0].len = 1;
+ i2c_msg[0].buf = (char *)&msg->cmd;
+ }
+
+ /* send command to EC and read answer */
+ ret = i2c_transfer(ec_dev->client->adapter, i2c_msg, 2);
+ if (ret < 0) {
+ dev_err(ec_dev->dev, "i2c transfer failed: %d\n", ret);
+ goto done;
+ } else if (ret != 2) {
+ dev_err(ec_dev->dev, "failed to get response: %d\n", ret);
+ ret = -EIO;
+ goto done;
+ }
+
+ /* check response error code */
+ if (i2c_msg[1].buf[0]) {
+ dev_warn(ec_dev->dev, "command 0x%02x returned an error %d\n",
+ msg->cmd, i2c_msg[1].buf[0]);
+ ret = -EINVAL;
+ goto done;
+ }
+ if (msg->in_len) {
+ /* copy response packet payload and compute checksum */
+ for (i = 0, sum = 0; i < msg->in_len; i++) {
+ msg->in_buf[i] = in_buf[i + 1];
+ sum += in_buf[i + 1];
+ }
+#ifdef DEBUG
+ dev_dbg(ec_dev->dev, "packet: ");
+ for (i = 0; i < i2c_msg[1].len; i++)
+ printk(" %02x", in_buf[i]);
+ printk(", sum = %02x\n", sum);
+#endif
+ if (sum != in_buf[msg->in_len + 1]) {
+ dev_err(ec_dev->dev, "bad packet checksum\n");
+ ret = -EBADMSG;
+ goto done;
+ }
+ }
+
+ ret = 0;
+ done:
+ kfree(in_buf);
+ kfree(out_buf);
+ return ret;
+}
+
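+/*
+ * Informative sketch of the wire format handled above (derived from the
+ * buffer handling in cros_ec_command_xfer_noretry(), not a separate
+ * specification):
+ *
+ *   write: | cmd | out_buf[0] .. out_buf[out_len - 1] | sum of payload |
+ *   read:  | result | in_buf[0] .. in_buf[in_len - 1] | sum of payload |
+ *
+ * The trailing byte is an 8-bit sum over the payload only; the leading
+ * command/result byte is not included. A non-zero result byte marks a
+ * failed command.
+ */
+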
+static int cros_ec_command_xfer(struct chromeos_ec_device *ec_dev,
+ struct chromeos_ec_msg *msg)
+{
+ int tries;
+ int ret;
+ /*
+ * Try the command a few times in case there are transmission errors.
+ * It is possible that this is overkill, but we don't completely trust
+ * i2c.
+ */
+ for (tries = 0; tries < COMMAND_MAX_TRIES; tries++) {
+ ret = cros_ec_command_xfer_noretry(ec_dev, msg);
+ if (ret >= 0)
+ return ret;
+ }
+ dev_err(ec_dev->dev, "mkbp_command failed with %d (%d tries)\n",
+ ret, tries);
+ return ret;
+}
+
+static int cros_ec_command_raw(struct chromeos_ec_device *ec_dev,
+ struct i2c_msg *msgs, int num)
+{
+ return i2c_transfer(ec_dev->client->adapter, msgs, num);
+}
+
+static int cros_ec_command_recv(struct chromeos_ec_device *ec_dev,
+ char cmd, void *buf, int buf_len)
+{
+ struct chromeos_ec_msg msg;
+
+ msg.cmd = cmd;
+ msg.in_buf = buf;
+ msg.in_len = buf_len;
+ msg.out_buf = NULL;
+ msg.out_len = 0;
+
+ return cros_ec_command_xfer(ec_dev, &msg);
+}
+
+static int cros_ec_command_send(struct chromeos_ec_device *ec_dev,
+ char cmd, void *buf, int buf_len)
+{
+ struct chromeos_ec_msg msg;
+
+ msg.cmd = cmd;
+ msg.out_buf = buf;
+ msg.out_len = buf_len;
+ msg.in_buf = NULL;
+ msg.in_len = 0;
+
+ return cros_ec_command_xfer(ec_dev, &msg);
+}
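+
+/*
+ * Usage sketch: a sub-driver issues a read-only command through the
+ * helper above, mirroring how check_protocol_version() below consumes
+ * it:
+ *
+ *   struct ec_response_proto_version ver;
+ *   int ret = ec_dev->command_recv(ec_dev, EC_CMD_PROTO_VERSION,
+ *                                  &ver, sizeof(ver));
+ */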
+
+static irqreturn_t ec_irq_thread(int irq, void *data)
+{
+ struct chromeos_ec_device *ec = data;
+
+ blocking_notifier_call_chain(&ec->event_notifier, 1, ec);
+
+ return IRQ_HANDLED;
+}
+
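+/*
+ * Sub-devices receive EC events by registering on the notifier chain
+ * invoked from ec_irq_thread() above. A sketch (the callback and its
+ * notifier_block are hypothetical):
+ *
+ *   static struct notifier_block nb = { .notifier_call = my_ec_event };
+ *   blocking_notifier_chain_register(&ec_dev->event_notifier, &nb);
+ */
+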
+static int __devinit check_protocol_version(struct chromeos_ec_device *ec)
+{
+ int ret;
+ struct ec_response_proto_version data;
+
+ ret = cros_ec_command_recv(ec, EC_CMD_PROTO_VERSION,
+ &data, sizeof(data));
+ if (ret < 0)
+ return ret;
+ dev_info(ec->dev, "protocol version: %d\n", data.version);
+ if (data.version != EC_PROTO_VERSION)
+ return -EPROTONOSUPPORT;
+
+ return 0;
+}
+
+static struct mfd_cell cros_devs[] = {
+ {
+ .name = "mkbp",
+ .id = 1,
+ },
+ {
+ .name = "cros_ec-fw",
+ .id = 2,
+ },
+ {
+ .name = "cros_ec-i2c",
+ .id = 3,
+ },
+};
+
+static int __devinit cros_ec_probe(struct i2c_client *client,
+ const struct i2c_device_id *dev_id)
+{
+ struct device *dev = &client->dev;
+ struct chromeos_ec_device *ec_dev = NULL;
+ int err;
+
+ dev_dbg(dev, "probing\n");
+
+ ec_dev = kzalloc(sizeof(*ec_dev), GFP_KERNEL);
+ if (ec_dev == NULL) {
+ err = -ENOMEM;
+ dev_err(dev, "cannot allocate\n");
+ goto fail;
+ }
+
+ ec_dev->client = client;
+ ec_dev->dev = dev;
+ i2c_set_clientdata(client, ec_dev);
+ ec_dev->irq = client->irq;
+ ec_dev->command_send = cros_ec_command_send;
+ ec_dev->command_recv = cros_ec_command_recv;
+ ec_dev->command_xfer = cros_ec_command_xfer;
+ ec_dev->command_raw = cros_ec_command_raw;
+
+ BLOCKING_INIT_NOTIFIER_HEAD(&ec_dev->event_notifier);
+
+ err = request_threaded_irq(ec_dev->irq, NULL, ec_irq_thread,
+ IRQF_TRIGGER_LOW | IRQF_ONESHOT,
+ "chromeos-ec", ec_dev);
+ if (err) {
+ dev_err(dev, "request irq %d: error %d\n", ec_dev->irq, err);
+ goto fail;
+ }
+
+ err = check_protocol_version(ec_dev);
+ if (err < 0) {
+ dev_err(dev, "protocol version check failed: %d\n", err);
+ goto fail_irq;
+ }
+
+ err = mfd_add_devices(dev, 0, cros_devs,
+ ARRAY_SIZE(cros_devs),
+ NULL, ec_dev->irq);
+ if (err)
+ goto fail_irq;
+
+ return 0;
+fail_irq:
+ free_irq(ec_dev->irq, ec_dev);
+fail:
+ kfree(ec_dev);
+ return err;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int cros_ec_suspend(struct device *dev)
+{
+ return 0;
+}
+
+static int cros_ec_resume(struct device *dev)
+{
+ return 0;
+}
+#endif
+
+static SIMPLE_DEV_PM_OPS(cros_ec_pm_ops, cros_ec_suspend, cros_ec_resume);
+
+static const struct i2c_device_id cros_ec_i2c_id[] = {
+ { "chromeos-ec", 0 },
+ { }
+};
+MODULE_DEVICE_TABLE(i2c, cros_ec_i2c_id);
+
+static struct i2c_driver cros_ec_driver = {
+ .driver = {
+ .name = "chromeos-ec",
+ .owner = THIS_MODULE,
+ .pm = &cros_ec_pm_ops,
+ },
+ .probe = cros_ec_probe,
+ .id_table = cros_ec_i2c_id,
+};
+
+module_i2c_driver(cros_ec_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ChromeOS EC multi function device");
--- /dev/null
+/*
+ * max77686-irq.c - Interrupt controller support for MAX77686
+ *
+ * Copyright (C) 2012 Samsung Electronics Co.Ltd
+ * Chiwoong Byun <woong.byun@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * This driver is based on max8997-irq.c
+ */
+
+#include <linux/err.h>
+#include <linux/irq.h>
+#include <linux/interrupt.h>
+#include <linux/mfd/max77686.h>
+#include <linux/mfd/max77686-private.h>
+#include <linux/irqdomain.h>
+
+int max77686_debug_mask = MAX77686_DEBUG_INFO; /* enable debug prints */
+
+static const u8 max77686_mask_reg[] = {
+ [PMIC_INT1] = MAX77686_REG_INT1MSK,
+ [PMIC_INT2] = MAX77686_REG_INT2MSK,
+ [RTC_INT] = MAX77686_RTC_INTM,
+};
+
+static struct i2c_client *max77686_get_i2c(struct max77686_dev *max77686,
+ enum max77686_irq_source src)
+{
+ switch (src) {
+ case PMIC_INT1...PMIC_INT2:
+ return max77686->i2c;
+ case RTC_INT:
+ return max77686->rtc;
+ default:
+ return ERR_PTR(-EINVAL);
+ }
+}
+
+struct max77686_irq_data {
+ int mask;
+ enum max77686_irq_source group;
+};
+
+static const struct max77686_irq_data max77686_irqs[] = {
+ [MAX77686_PMICIRQ_PWRONF] = { .group = PMIC_INT1,
+ .mask = 1 << 0 },
+ [MAX77686_PMICIRQ_PWRONR] = { .group = PMIC_INT1,
+ .mask = 1 << 1 },
+ [MAX77686_PMICIRQ_JIGONBF] = { .group = PMIC_INT1,
+ .mask = 1 << 2 },
+ [MAX77686_PMICIRQ_JIGONBR] = { .group = PMIC_INT1,
+ .mask = 1 << 3 },
+ [MAX77686_PMICIRQ_ACOKBF] = { .group = PMIC_INT1,
+ .mask = 1 << 4 },
+ [MAX77686_PMICIRQ_ACOKBR] = { .group = PMIC_INT1,
+ .mask = 1 << 5 },
+ [MAX77686_PMICIRQ_ONKEY1S] = { .group = PMIC_INT1,
+ .mask = 1 << 6 },
+ [MAX77686_PMICIRQ_MRSTB] = { .group = PMIC_INT1,
+ .mask = 1 << 7 },
+ [MAX77686_PMICIRQ_140C] = { .group = PMIC_INT2,
+ .mask = 1 << 0 },
+ [MAX77686_PMICIRQ_120C] = { .group = PMIC_INT2,
+ .mask = 1 << 1 },
+ [MAX77686_RTCIRQ_RTC60S] = { .group = RTC_INT,
+ .mask = 1 << 0 },
+ [MAX77686_RTCIRQ_RTCA1] = { .group = RTC_INT,
+ .mask = 1 << 1 },
+ [MAX77686_RTCIRQ_RTCA2] = { .group = RTC_INT,
+ .mask = 1 << 2 },
+ [MAX77686_RTCIRQ_SMPL] = { .group = RTC_INT,
+ .mask = 1 << 3 },
+ [MAX77686_RTCIRQ_RTC1S] = { .group = RTC_INT,
+ .mask = 1 << 4 },
+ [MAX77686_RTCIRQ_WTSR] = { .group = RTC_INT,
+ .mask = 1 << 5 },
+};
+
+static void max77686_irq_lock(struct irq_data *data)
+{
+ struct max77686_dev *max77686 = irq_get_chip_data(data->irq);
+
+ mutex_lock(&max77686->irqlock);
+}
+
+static void max77686_irq_sync_unlock(struct irq_data *data)
+{
+ struct max77686_dev *max77686 = irq_get_chip_data(data->irq);
+ int i;
+
+ for (i = 0; i < MAX77686_IRQ_GROUP_NR; i++) {
+ u8 mask_reg = max77686_mask_reg[i];
+ struct i2c_client *i2c = max77686_get_i2c(max77686, i);
+
+ dbg_mask("%s: mask_reg[%d]=0x%x, cur=0x%x\n",
+ __func__, i, mask_reg, max77686->irq_masks_cur[i]);
+
+ if (mask_reg == MAX77686_REG_INVALID || IS_ERR_OR_NULL(i2c))
+ continue;
+
+ max77686->irq_masks_cache[i] = max77686->irq_masks_cur[i];
+
+ max77686_write_reg(i2c, max77686_mask_reg[i],
+ max77686->irq_masks_cur[i]);
+ }
+
+ mutex_unlock(&max77686->irqlock);
+}
+
+static void max77686_irq_mask(struct irq_data *data)
+{
+ struct max77686_dev *max77686 = irq_get_chip_data(data->irq);
+ const struct max77686_irq_data *irq_data = &max77686_irqs[data->hwirq];
+
+ max77686->irq_masks_cur[irq_data->group] |= irq_data->mask;
+ dbg_mask("%s: group=%d, cur=0x%x\n",
+ __func__, irq_data->group,
+ max77686->irq_masks_cur[irq_data->group]);
+
+}
+
+static void max77686_irq_unmask(struct irq_data *data)
+{
+ struct max77686_dev *max77686 = irq_get_chip_data(data->irq);
+ const struct max77686_irq_data *irq_data = &max77686_irqs[data->hwirq];
+
+ max77686->irq_masks_cur[irq_data->group] &= ~irq_data->mask;
+ dbg_mask("%s: group=%d, cur=0x%x\n",
+ __func__, irq_data->group,
+ max77686->irq_masks_cur[irq_data->group]);
+
+}
+
+static struct irq_chip max77686_irq_chip = {
+ .name = "max77686",
+ .irq_bus_lock = max77686_irq_lock,
+ .irq_bus_sync_unlock = max77686_irq_sync_unlock,
+ .irq_mask = max77686_irq_mask,
+ .irq_unmask = max77686_irq_unmask,
+};
+
+static irqreturn_t max77686_irq_thread(int irq, void *data)
+{
+ struct max77686_dev *max77686 = data;
+ u8 irq_reg[MAX77686_IRQ_GROUP_NR] = { };
+ u8 irq_src;
+ int ret, i, cur_irq;
+
+ ret = max77686_read_reg(max77686->i2c, MAX77686_REG_INTSRC, &irq_src);
+ if (ret < 0) {
+ dev_err(max77686->dev, "Failed to read interrupt source: %d\n",
+ ret);
+ return IRQ_NONE;
+ }
+
+ dbg_int("%s: irq_src=0x%x\n", __func__, irq_src);
+
+ if (irq_src & MAX77686_IRQSRC_PMIC) {
+ ret = max77686_bulk_read(max77686->i2c, MAX77686_REG_INT1,
+ 2, irq_reg);
+ if (ret < 0) {
+ dev_err(max77686->dev,
+ "Failed to read pmic interrupt: %d\n", ret);
+ return IRQ_NONE;
+ }
+
+ dbg_int("%s: int1=0x%x, int2=0x%x\n", __func__,
+ irq_reg[PMIC_INT1], irq_reg[PMIC_INT2]);
+ }
+
+ if (irq_src & MAX77686_IRQSRC_RTC) {
+ ret = max77686_read_reg(max77686->rtc, MAX77686_RTC_INT,
+ &irq_reg[RTC_INT]);
+ if (ret < 0) {
+ dev_err(max77686->dev,
+ "Failed to read rtc interrupt: %d\n", ret);
+ return IRQ_NONE;
+ }
+ dbg_int("%s: rtc int=0x%x\n", __func__,
+ irq_reg[RTC_INT]);
+
+ }
+
+ for (i = 0; i < MAX77686_IRQ_GROUP_NR; i++)
+ irq_reg[i] &= ~max77686->irq_masks_cur[i];
+
+ for (i = 0; i < MAX77686_IRQ_NR; i++) {
+ if (irq_reg[max77686_irqs[i].group] & max77686_irqs[i].mask) {
+ cur_irq = irq_find_mapping(max77686->irq_domain, i);
+ if (cur_irq)
+ handle_nested_irq(cur_irq);
+ }
+ }
+
+ dbg_info("%s returning\n", __func__);
+
+ return IRQ_HANDLED;
+}
+
+int max77686_irq_resume(struct max77686_dev *max77686)
+{
+ if (max77686->irq && max77686->irq_domain)
+ max77686_irq_thread(0, max77686);
+
+ return 0;
+}
+
+static int max77686_irq_domain_map(struct irq_domain *d, unsigned int irq,
+ irq_hw_number_t hw)
+{
+ struct max77686_dev *max77686 = d->host_data;
+
+ irq_set_chip_data(irq, max77686);
+ irq_set_chip_and_handler(irq, &max77686_irq_chip, handle_edge_irq);
+ irq_set_nested_thread(irq, 1);
+#ifdef CONFIG_ARM
+ set_irq_flags(irq, IRQF_VALID);
+#else
+ irq_set_noprobe(irq);
+#endif
+ return 0;
+}
+
+static struct irq_domain_ops max77686_irq_domain_ops = {
+ .map = max77686_irq_domain_map,
+};
+
+int max77686_irq_init(struct max77686_dev *max77686)
+{
+ int i;
+ int ret;
+ struct irq_domain *domain;
+
+ if (!max77686->irq) {
+ dev_warn(max77686->dev,
+ "No interrupt specified.\n");
+ return 0;
+ }
+
+ mutex_init(&max77686->irqlock);
+
+ /* Mask individual interrupt sources */
+ for (i = 0; i < MAX77686_IRQ_GROUP_NR; i++) {
+ struct i2c_client *i2c;
+
+ max77686->irq_masks_cur[i] = 0xff;
+ max77686->irq_masks_cache[i] = 0xff;
+ i2c = max77686_get_i2c(max77686, i);
+
+ if (IS_ERR_OR_NULL(i2c))
+ continue;
+ if (max77686_mask_reg[i] == MAX77686_REG_INVALID)
+ continue;
+
+ max77686_write_reg(i2c, max77686_mask_reg[i], 0xff);
+ }
+
+ domain = irq_domain_add_linear(NULL, MAX77686_IRQ_NR,
+ &max77686_irq_domain_ops, max77686);
+ if (!domain) {
+ dev_err(max77686->dev, "could not create irq domain\n");
+ return -ENODEV;
+ }
+ max77686->irq_domain = domain;
+
+ ret = request_threaded_irq(max77686->irq, NULL, max77686_irq_thread,
+ IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ "max77686-irq", max77686);
+
+ if (ret) {
+ dev_err(max77686->dev, "Failed to request IRQ %d: %d\n",
+ max77686->irq, ret);
+ return ret;
+ }
+
+ dbg_info("%s : returning\n", __func__);
+
+ return 0;
+}
+
+void max77686_irq_exit(struct max77686_dev *max77686)
+{
+ if (max77686->irq)
+ free_irq(max77686->irq, max77686);
+}
--- /dev/null
+/*
+ * max77686.c - mfd core driver for the Maxim 77686
+ *
+ * Copyright (C) 2012 Samsung Electronics
+ * Chiwoong Byun <woong.byun@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * This driver is based on max8997.c
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/i2c.h>
+#include <linux/interrupt.h>
+#include <linux/err.h>
+#include <linux/pm_runtime.h>
+#include <linux/mutex.h>
+#include <linux/mfd/core.h>
+#include <linux/mfd/max77686.h>
+#include <linux/mfd/max77686-private.h>
+
+#define I2C_ADDR_RTC (0x0C >> 1)
+
+#ifdef CONFIG_OF
+static struct of_device_id __devinitdata max77686_pmic_dt_match[] = {
+ {.compatible = "maxim,max77686-pmic", .data = TYPE_MAX77686},
+ {},
+};
+#endif
+
+static struct mfd_cell max77686_devs[] = {
+ {.name = "max77686-pmic",},
+ {.name = "max77686-rtc",},
+};
+
+int max77686_read_reg(struct i2c_client *i2c, u8 reg, u8 *dest)
+{
+ struct max77686_dev *max77686 = i2c_get_clientdata(i2c);
+ int ret;
+
+ mutex_lock(&max77686->iolock);
+ ret = i2c_smbus_read_byte_data(i2c, reg);
+ mutex_unlock(&max77686->iolock);
+ if (ret < 0)
+ return ret;
+
+ ret &= 0xff;
+ *dest = ret;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(max77686_read_reg);
+
+int max77686_bulk_read(struct i2c_client *i2c, u8 reg, int count, u8 *buf)
+{
+ struct max77686_dev *max77686 = i2c_get_clientdata(i2c);
+ int ret;
+
+ mutex_lock(&max77686->iolock);
+ ret = i2c_smbus_read_i2c_block_data(i2c, reg, count, buf);
+ mutex_unlock(&max77686->iolock);
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(max77686_bulk_read);
+
+int max77686_write_reg(struct i2c_client *i2c, u8 reg, u8 value)
+{
+ struct max77686_dev *max77686 = i2c_get_clientdata(i2c);
+ int ret;
+
+ mutex_lock(&max77686->iolock);
+ ret = i2c_smbus_write_byte_data(i2c, reg, value);
+ mutex_unlock(&max77686->iolock);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(max77686_write_reg);
+
+int max77686_bulk_write(struct i2c_client *i2c, u8 reg, int count, u8 *buf)
+{
+ struct max77686_dev *max77686 = i2c_get_clientdata(i2c);
+ int ret;
+
+ mutex_lock(&max77686->iolock);
+ ret = i2c_smbus_write_i2c_block_data(i2c, reg, count, buf);
+ mutex_unlock(&max77686->iolock);
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(max77686_bulk_write);
+
+int max77686_update_reg(struct i2c_client *i2c, u8 reg, u8 val, u8 mask)
+{
+ struct max77686_dev *max77686 = i2c_get_clientdata(i2c);
+ int ret;
+
+ mutex_lock(&max77686->iolock);
+ ret = i2c_smbus_read_byte_data(i2c, reg);
+ if (ret >= 0) {
+ u8 old_val = ret & 0xff;
+ u8 new_val = (val & mask) | (old_val & (~mask));
+ ret = i2c_smbus_write_byte_data(i2c, reg, new_val);
+ }
+ mutex_unlock(&max77686->iolock);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(max77686_update_reg);
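+
+/*
+ * Usage sketch (the register and field here are assumptions, for
+ * illustration only): update bits [5:4] of a control register while
+ * leaving the remaining bits untouched:
+ *
+ *   ret = max77686_update_reg(max77686->i2c, MAX77686_REG_BUCK2CTRL1,
+ *                             0x1 << 4, 0x3 << 4);
+ */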
+
+#ifdef CONFIG_OF
+static struct max77686_platform_data *max77686_i2c_parse_dt_pdata(struct device
+ *dev)
+{
+ struct max77686_platform_data *pd;
+
+ pd = devm_kzalloc(dev, sizeof(*pd), GFP_KERNEL);
+ if (!pd) {
+ dev_err(dev, "could not allocate memory for pdata\n");
+ return ERR_PTR(-ENOMEM);
+ }
+
+ if (of_get_property(dev->of_node, "max77686,wakeup", NULL))
+ pd->wakeup = true;
+
+ return pd;
+}
+#else
+static struct max77686_platform_data *max77686_i2c_parse_dt_pdata(struct device
+ *dev)
+{
+ return NULL;
+}
+#endif
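+
+/*
+ * The parser above expects a node of the following shape (illustrative
+ * values; the compatible string and the "max77686,wakeup" property are
+ * the ones the code actually matches):
+ *
+ *   max77686@09 {
+ *           compatible = "maxim,max77686-pmic";
+ *           max77686,wakeup;
+ *   };
+ */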
+
+static inline int max77686_i2c_get_driver_data(struct i2c_client *i2c,
+ const struct i2c_device_id *id)
+{
+#ifdef CONFIG_OF
+ if (i2c->dev.of_node) {
+ const struct of_device_id *match;
+ match = of_match_node(max77686_pmic_dt_match,
+ i2c->dev.of_node);
+ return (int)match->data;
+ }
+#endif
+ return (int)id->driver_data;
+}
+
+static int max77686_i2c_probe(struct i2c_client *i2c,
+ const struct i2c_device_id *id)
+{
+ struct max77686_dev *max77686;
+ struct max77686_platform_data *pdata = i2c->dev.platform_data;
+ int ret = 0;
+ u8 data;
+
+ max77686 = kzalloc(sizeof(struct max77686_dev), GFP_KERNEL);
+ if (max77686 == NULL) {
+ dev_err(max77686->dev, "could not allocate memory\n");
+ return -ENOMEM;
+ }
+
+ max77686->dev = &i2c->dev;
+
+ if (max77686->dev->of_node) {
+ pdata = max77686_i2c_parse_dt_pdata(max77686->dev);
+ if (IS_ERR(pdata)) {
+ ret = PTR_ERR(pdata);
+ goto err;
+ }
+ }
+
+ if (!pdata) {
+ ret = -ENODEV;
+ dbg_info("%s : No platform data found\n", __func__);
+ goto err;
+ }
+
+ i2c_set_clientdata(i2c, max77686);
+ max77686->i2c = i2c;
+ max77686->irq = i2c->irq;
+ max77686->type = max77686_i2c_get_driver_data(i2c, id);
+
+ max77686->pdata = pdata;
+ max77686->wakeup = pdata->wakeup;
+
+ mutex_init(&max77686->iolock);
+
+ max77686->rtc = i2c_new_dummy(i2c->adapter, I2C_ADDR_RTC);
+ i2c_set_clientdata(max77686->rtc, max77686);
+ max77686_irq_init(max77686);
+
+ ret = mfd_add_devices(max77686->dev, -1, max77686_devs,
+ ARRAY_SIZE(max77686_devs), NULL, 0);
+
+ if (ret < 0) {
+ dbg_info("%s : mfd_add_devices failed\n", __func__);
+ goto err_mfd;
+ }
+
+ pm_runtime_set_active(max77686->dev);
+ device_init_wakeup(max77686->dev, max77686->wakeup);
+
+ if (max77686_read_reg(i2c, MAX77686_REG_DEVICE_ID, &data) < 0) {
+ ret = -EIO;
+ dbg_info("%s : device not found on this channel\n", __func__);
+ goto err_mfd;
+ } else {
+ dev_info(max77686->dev, "device found\n");
+ }
+
+ return ret;
+
+ err_mfd:
+ mfd_remove_devices(max77686->dev);
+ max77686_irq_exit(max77686);
+ i2c_unregister_device(max77686->rtc);
+ err:
+ dev_err(&i2c->dev, "device probe failed: %d\n", ret);
+ kfree(max77686);
+ return ret;
+}
+
+static int max77686_i2c_remove(struct i2c_client *i2c)
+{
+ struct max77686_dev *max77686 = i2c_get_clientdata(i2c);
+
+ device_init_wakeup(max77686->dev, 0);
+ pm_runtime_set_suspended(max77686->dev);
+ mfd_remove_devices(max77686->dev);
+ max77686_irq_exit(max77686);
+ i2c_unregister_device(max77686->rtc);
+ kfree(max77686);
+ return 0;
+}
+
+static const struct i2c_device_id max77686_i2c_id[] = {
+ {"max77686", TYPE_MAX77686},
+ {}
+};
+
+MODULE_DEVICE_TABLE(i2c, max77686_i2c_id);
+
+static int max77686_suspend(struct device *dev)
+{
+ struct i2c_client *i2c = container_of(dev, struct i2c_client, dev);
+ struct max77686_dev *max77686 = i2c_get_clientdata(i2c);
+
+ if (device_may_wakeup(dev))
+ enable_irq_wake(max77686->irq);
+
+ return 0;
+}
+
+static int max77686_resume(struct device *dev)
+{
+ struct i2c_client *i2c = container_of(dev, struct i2c_client, dev);
+ struct max77686_dev *max77686 = i2c_get_clientdata(i2c);
+
+ if (device_may_wakeup(dev))
+ disable_irq_wake(max77686->irq);
+
+ max77686_irq_resume(max77686);
+ return 0;
+}
+
+const struct dev_pm_ops max77686_pm = {
+ .suspend = max77686_suspend,
+ .resume = max77686_resume,
+};
+
+static struct i2c_driver max77686_i2c_driver = {
+ .driver = {
+ .name = "max77686",
+ .owner = THIS_MODULE,
+ .pm = &max77686_pm,
+ .of_match_table = of_match_ptr(max77686_pmic_dt_match),
+ },
+ .probe = max77686_i2c_probe,
+ .remove = max77686_i2c_remove,
+ .id_table = max77686_i2c_id,
+};
+
+static int __init max77686_i2c_init(void)
+{
+ return i2c_add_driver(&max77686_i2c_driver);
+}
+
+subsys_initcall(max77686_i2c_init);
+
+static void __exit max77686_i2c_exit(void)
+{
+ i2c_del_driver(&max77686_i2c_driver);
+}
+
+module_exit(max77686_i2c_exit);
+
+MODULE_DESCRIPTION("MAXIM 77686 multi-function core driver");
+MODULE_AUTHOR("Chiwoong Byun <woong.byun@samsung.com>");
+MODULE_LICENSE("GPL");
#include <linux/mfd/tps65090.h>
#include <linux/regmap.h>
#include <linux/err.h>
+#include <linux/irqdomain.h>
#define NUM_INT_REG 2
#define TOTAL_NUM_REG 0x18
static struct mfd_cell tps65090s[] = {
{
- .name = "tps65910-pmic",
+ .name = "tps65090-pmic",
},
{
- .name = "tps65910-regulator",
+ .name = "tps65090-regulator",
},
};
return acks ? IRQ_HANDLED : IRQ_NONE;
}
-static int __devinit tps65090_irq_init(struct tps65090 *tps65090, int irq,
- int irq_base)
+static int __devinit tps65090_irq_init(struct tps65090 *tps65090, int irq)
{
int i, ret;
+ int irq_base;
+ int nr_irqs = ARRAY_SIZE(tps65090_irqs);
- if (!irq_base) {
- dev_err(tps65090->dev, "IRQ base not set\n");
- return -EINVAL;
+ irq_base = irq_alloc_descs(-1, 0, nr_irqs, 0);
+ if (IS_ERR_VALUE(irq_base)) {
+ dev_err(tps65090->dev, "Fail to allocate IRQ descs\n");
+ return irq_base;
}
+ irq_domain_add_legacy(tps65090->dev->of_node, nr_irqs, irq_base, 0,
+ &irq_domain_simple_ops, NULL);
+
mutex_init(&tps65090->irq_lock);
for (i = 0; i < NUM_INT_REG; i++)
ret = request_threaded_irq(irq, NULL, tps65090_irq, IRQF_ONESHOT,
"tps65090", tps65090);
- if (!ret) {
- device_init_wakeup(tps65090->dev, 1);
- enable_irq_wake(irq);
+ if (ret) {
+ dev_err(tps65090->dev, "failed to request threaded irq\n");
+ irq_free_descs(tps65090->irq_base, nr_irqs);
+ return ret;
}
- return ret;
+ device_init_wakeup(tps65090->dev, 1);
+ enable_irq_wake(irq);
+
+ return 0;
}
static bool is_volatile_reg(struct device *dev, unsigned int reg)
static int __devinit tps65090_i2c_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
- struct tps65090_platform_data *pdata = client->dev.platform_data;
struct tps65090 *tps65090;
int ret;
- if (!pdata) {
- dev_err(&client->dev, "tps65090 requires platform data\n");
- return -EINVAL;
- }
-
tps65090 = devm_kzalloc(&client->dev, sizeof(struct tps65090),
GFP_KERNEL);
if (tps65090 == NULL)
mutex_init(&tps65090->lock);
if (client->irq) {
- ret = tps65090_irq_init(tps65090, client->irq, pdata->irq_base);
+ ret = tps65090_irq_init(tps65090, client->irq);
if (ret) {
dev_err(&client->dev, "IRQ init failed with err: %d\n",
ret);
regmap_exit(tps65090->rmap);
err_irq_exit:
- if (client->irq)
+ if (client->irq) {
free_irq(client->irq, tps65090);
+ irq_free_descs(tps65090->irq_base, ARRAY_SIZE(tps65090_irqs));
+ }
err_exit:
return ret;
}
mfd_remove_devices(tps65090->dev);
regmap_exit(tps65090->rmap);
- if (client->irq)
+ if (client->irq) {
free_irq(client->irq, tps65090);
+ irq_free_descs(tps65090->irq_base, ARRAY_SIZE(tps65090_irqs));
+ }
return 0;
}
case PM_POST_RESTORE:
spin_lock_irqsave(&host->lock, flags);
- host->rescan_disable = 0;
+ host->rescan_disable = 1;
host->power_notify_type = MMC_HOST_PW_NOTIFY_LONG;
spin_unlock_irqrestore(&host->lock, flags);
mmc_detect_change(host, 0);
#include <linux/mmc/host.h>
#include <linux/mmc/mmc.h>
#include <linux/mmc/dw_mmc.h>
+#include <linux/of.h>
#include "dw_mmc.h"
+#ifdef CONFIG_OF
+static struct dw_mci_drv_data synopsis_drv_data = {
+ .ctrl_type = DW_MCI_TYPE_SYNOPSIS,
+};
+
+static unsigned long exynos5250_dwmmc_caps[4] = {
+ MMC_CAP_UHS_DDR50 | MMC_CAP_1_8V_DDR |
+ MMC_CAP_8_BIT_DATA | MMC_CAP_CMD23,
+ MMC_CAP_CMD23,
+ MMC_CAP_CMD23,
+ MMC_CAP_CMD23,
+};
+
+static struct dw_mci_drv_data exynos5250_drv_data = {
+ .ctrl_type = DW_MCI_TYPE_EXYNOS5250,
+ .caps = exynos5250_dwmmc_caps,
+};
+
+static const struct of_device_id dw_mci_pltfm_match[] = {
+ { .compatible = "synopsis,dw-mshc",
+ .data = (void *)&synopsis_drv_data, },
+ { .compatible = "synopsis,dw-mshc-exynos5250",
+ .data = (void *)&exynos5250_drv_data, },
+ {},
+};
+MODULE_DEVICE_TABLE(of, dw_mci_pltfm_match);
+#else
+static const struct of_device_id dw_mci_pltfm_match[];
+#endif
+
static int dw_mci_pltfm_probe(struct platform_device *pdev)
{
struct dw_mci *host;
if (!host->regs)
goto err_free;
platform_set_drvdata(pdev, host);
+
+ if (pdev->dev.of_node) {
+ const struct of_device_id *match;
+ match = of_match_node(dw_mci_pltfm_match, pdev->dev.of_node);
+ host->drv_data = match->data;
+ }
+
ret = dw_mci_probe(host);
if (ret)
goto err_out;
.remove = __exit_p(dw_mci_pltfm_remove),
.driver = {
.name = "dw_mmc",
+ .of_match_table = of_match_ptr(dw_mci_pltfm_match),
.pm = &dw_mci_pltfm_pmops,
},
};
#include <linux/bitops.h>
#include <linux/regulator/consumer.h>
#include <linux/workqueue.h>
+#include <linux/of.h>
+#include <linux/of_gpio.h>
#include "dw_mmc.h"
+#define NUM_PINS(x) ((x) + 2)
+
/* Common flag combinations */
#define DW_MCI_DATA_ERROR_FLAGS (SDMMC_INT_DTO | SDMMC_INT_DCRC | \
SDMMC_INT_HTO | SDMMC_INT_SBE | \
struct dw_mci_slot {
struct mmc_host *mmc;
struct dw_mci *host;
+ int wp_gpio;
+ int cd_gpio;
u32 ctype;
int last_detect_state;
};
-static struct workqueue_struct *dw_mci_card_workqueue;
-
#if defined(CONFIG_DEBUG_FS)
static int dw_mci_req_show(struct seq_file *s, void *v)
{
static u32 dw_mci_prepare_command(struct mmc_host *mmc, struct mmc_command *cmd)
{
struct mmc_data *data;
+ struct dw_mci_slot *slot = mmc_priv(mmc);
u32 cmdr;
cmd->error = -EINPROGRESS;
cmdr |= SDMMC_CMD_DAT_WR;
}
+ if (slot->host->drv_data->ctrl_type == DW_MCI_TYPE_EXYNOS5250)
+ if (SDMMC_CLKSEL_GET_SELCLK_DRV(mci_readl(slot->host, CLKSEL)))
+ cmdr |= SDMMC_USE_HOLD_REG;
+
return cmdr;
}
regs = mci_readl(slot->host, UHS_REG);
/* DDR mode set */
- if (ios->timing == MMC_TIMING_UHS_DDR50)
+ if (ios->timing == MMC_TIMING_UHS_DDR50) {
regs |= (0x1 << slot->id) << 16;
- else
+ mci_writel(slot->host, CLKSEL, slot->host->ddr_timing);
+ } else {
regs &= ~(0x1 << slot->id) << 16;
+ mci_writel(slot->host, CLKSEL, slot->host->sdr_timing);
+ }
+
+ if (slot->host->drv_data->ctrl_type == DW_MCI_TYPE_EXYNOS5250) {
+ slot->host->bus_hz = clk_get_rate(slot->host->ciu_clk);
+ slot->host->bus_hz /= SDMMC_CLKSEL_GET_DIVRATIO(
+ mci_readl(slot->host, CLKSEL));
+ }
mci_writel(slot->host, UHS_REG, regs);
struct dw_mci_board *brd = slot->host->pdata;
/* Use platform get_ro function, else try on board write protect */
- if (brd->get_ro)
+ if (brd->quirks & DW_MCI_QUIRK_NO_WRITE_PROTECT)
+ read_only = 0;
+ else if (brd->get_ro)
read_only = brd->get_ro(slot->id);
+ else if (gpio_is_valid(slot->wp_gpio))
+ read_only = gpio_get_value(slot->wp_gpio);
else
read_only =
mci_readl(slot->host, WRTPRT) & (1 << slot->id) ? 1 : 0;
present = 1;
else if (brd->get_cd)
present = !brd->get_cd(slot->id);
+ else if (gpio_is_valid(slot->cd_gpio))
+ present = gpio_get_value(slot->cd_gpio);
else
present = (mci_readl(slot->host, CDETECT) & (1 << slot->id))
== 0 ? 1 : 0;
if (pending & SDMMC_INT_CD) {
mci_writel(host, RINTSTS, SDMMC_INT_CD);
- queue_work(dw_mci_card_workqueue, &host->card_work);
+ queue_work(host->card_workqueue, &host->card_work);
}
/* Handle SDIO Interrupts */
if (pending & (SDMMC_IDMAC_INT_TI | SDMMC_IDMAC_INT_RI)) {
mci_writel(host, IDSTS, SDMMC_IDMAC_INT_TI | SDMMC_IDMAC_INT_RI);
mci_writel(host, IDSTS, SDMMC_IDMAC_INT_NI);
- set_bit(EVENT_DATA_COMPLETE, &host->pending_events);
host->dma_ops->complete(host);
}
#endif
}
}
+#ifdef CONFIG_OF
+static struct device_node *dw_mci_of_find_slot_node(struct device *dev, u8 slot)
+{
+ struct device_node *np;
+ char name[9];
+
+ if (!dev || !dev->of_node)
+ return NULL;
+
+ for_each_child_of_node(dev->of_node, np) {
+ sprintf(name, "slot%d", slot);
+ if (!strcmp(name, np->name))
+ return np;
+ }
+ return NULL;
+}
+
+static u32 dw_mci_of_get_bus_wd(struct device *dev, u8 slot)
+{
+ struct device_node *np = dw_mci_of_find_slot_node(dev, slot);
+ u32 bus_wd = 1;
+
+ if (!np)
+ return 1;
+
+ if (of_property_read_u32(np, "bus-width", &bus_wd))
+ dev_err(dev, "bus-width property not found, assuming width"
+ " as 1\n");
+ return bus_wd;
+}
+
+static int dw_mci_of_setup_bus(struct dw_mci *host, u8 slot, u32 bus_wd)
+{
+ struct device_node *np = dw_mci_of_find_slot_node(&host->dev, slot);
+ int idx, gpio, ret;
+
+ for (idx = 0; idx < NUM_PINS(bus_wd); idx++) {
+ gpio = of_get_gpio(np, idx);
+ if (!gpio_is_valid(gpio)) {
+ dev_err(&host->dev, "invalid gpio: %d\n", gpio);
+ return -EINVAL;
+ }
+
+ ret = devm_gpio_request(&host->dev, gpio, "dw-mci-bus");
+ if (ret) {
+ dev_err(&host->dev, "gpio [%d] request failed\n", gpio);
+ return -EBUSY;
+ }
+ }
+
+ host->slot[slot]->wp_gpio = -1;
+ gpio = of_get_named_gpio(np, "wp_gpios", 0);
+ if (!gpio_is_valid(gpio)) {
+ dev_info(&host->dev, "wp gpio not available");
+ } else {
+ ret = devm_gpio_request(&host->dev, gpio, "dw-mci-wp");
+ if (ret)
+ dev_info(&host->dev, "gpio [%d] request failed\n",
+ gpio);
+ else
+ host->slot[slot]->wp_gpio = gpio;
+ }
+
+ host->slot[slot]->cd_gpio = -1;
+ gpio = of_get_named_gpio(np, "cd-gpios", 0);
+ if (!gpio_is_valid(gpio)) {
+ dev_info(&host->dev, "cd gpio not available");
+ } else {
+ ret = devm_gpio_request(&host->dev, gpio, "dw-mci-cd");
+ if (ret)
+ dev_err(&host->dev, "gpio [%d] request failed\n", gpio);
+ else
+ host->slot[slot]->cd_gpio = gpio;
+ }
+
+ return 0;
+}
+
+#else /* CONFIG_OF */
+static u32 dw_mci_of_get_bus_wd(struct device *dev, u8 slot)
+{
+ return 1;
+}
+
+static int dw_mci_of_setup_bus(struct dw_mci *host, u8 slot, u32 bus_wd)
+{
+ return -EINVAL;
+}
+#endif /* CONFIG_OF */
+
static int __init dw_mci_init_slot(struct dw_mci *host, unsigned int id)
{
struct mmc_host *mmc;
struct dw_mci_slot *slot;
+ int ctrl_id;
mmc = mmc_alloc_host(sizeof(struct dw_mci_slot), &host->dev);
if (!mmc)
slot->id = id;
slot->mmc = mmc;
slot->host = host;
+ host->slot[id] = slot;
mmc->ops = &dw_mci_ops;
mmc->f_min = DIV_ROUND_UP(host->bus_hz, 510);
if (host->pdata->caps)
mmc->caps = host->pdata->caps;
+ if (host->dev.of_node) {
+ ctrl_id = of_alias_get_id(host->dev.of_node, "mshc");
+ if (ctrl_id < 0)
+ ctrl_id = 0;
+ mmc->caps |= host->drv_data->caps[ctrl_id];
+ }
+
if (host->pdata->caps2)
mmc->caps2 = host->pdata->caps2;
- if (host->pdata->get_bus_wd)
+ if (host->pdata->get_bus_wd) {
if (host->pdata->get_bus_wd(slot->id) >= 4)
mmc->caps |= MMC_CAP_4_BIT_DATA;
+ } else if (host->dev.of_node) {
+ unsigned int bus_width;
+ bus_width = dw_mci_of_get_bus_wd(&host->dev, slot->id);
+ if (bus_width >= 4)
+ mmc->caps |= MMC_CAP_4_BIT_DATA;
+ dw_mci_of_setup_bus(host, slot->id, bus_width);
+ }
if (host->pdata->quirks & DW_MCI_QUIRK_HIGHSPEED)
mmc->caps |= MMC_CAP_SD_HIGHSPEED | MMC_CAP_MMC_HIGHSPEED;
else
clear_bit(DW_MMC_CARD_PRESENT, &slot->flags);
- host->slot[id] = slot;
mmc_add_host(mmc);
#if defined(CONFIG_DEBUG_FS)
* Card may have been plugged in prior to boot so we
* need to run the detect tasklet
*/
- queue_work(dw_mci_card_workqueue, &host->card_work);
+ queue_work(host->card_workqueue, &host->card_work);
return 0;
}
return false;
}
+#ifdef CONFIG_OF
+static struct dw_mci_of_quirks {
+ char *quirk;
+ int id;
+} of_quirks[] = {
+ {
+ .quirk = "supports-highspeed",
+ .id = DW_MCI_QUIRK_HIGHSPEED,
+ }, {
+ .quirk = "card-detection-broken",
+ .id = DW_MCI_QUIRK_BROKEN_CARD_DETECTION,
+ }, {
+ .quirk = "no-write-protect",
+ .id = DW_MCI_QUIRK_NO_WRITE_PROTECT,
+ }
+};
+
+static struct dw_mci_board *dw_mci_parse_dt(struct dw_mci *host)
+{
+ struct dw_mci_board *pdata;
+ struct device *dev = &host->dev;
+ struct device_node *np = dev->of_node, *slot;
+ u32 timing[3];
+ int idx, cnt;
+
+ pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
+ if (!pdata) {
+ dev_err(dev, "could not allocate memory for pdata\n");
+ return ERR_PTR(-ENOMEM);
+ }
+
+ /* find out number of slots supported */
+ for_each_child_of_node(np, slot)
+ pdata->num_slots++;
+
+ /* get quirks */
+ cnt = ARRAY_SIZE(of_quirks);
+ for (idx = 0; idx < cnt; idx++)
+ if (of_get_property(np, of_quirks[idx].quirk, NULL))
+ pdata->quirks |= of_quirks[idx].id;
+
+ if (of_property_read_u32_array(dev->of_node,
+ "samsung,dw-mshc-sdr-timing", timing, 3))
+ host->sdr_timing = DW_MCI_DEF_SDR_TIMING;
+ else
+ host->sdr_timing = SDMMC_CLKSEL_TIMING(timing[0],
+ timing[1], timing[2]);
+
+ if (of_property_read_u32_array(dev->of_node,
+ "samsung,dw-mshc-ddr-timing", timing, 3))
+ host->ddr_timing = DW_MCI_DEF_DDR_TIMING;
+ else
+ host->ddr_timing = SDMMC_CLKSEL_TIMING(timing[0],
+ timing[1], timing[2]);
+
+ if (of_property_read_u32(np, "fifo-depth", &pdata->fifo_depth))
+ dev_info(dev, "fifo-depth property not found, using "
+ "value of FIFOTH register as default\n");
+
+ of_property_read_u32(np, "card-detect-delay", &pdata->detect_delay_ms);
+
+ return pdata;
+}
+
+#else /* CONFIG_OF */
+static struct dw_mci_board *dw_mci_parse_dt(struct dw_mci *host)
+{
+ return ERR_PTR(-EINVAL);
+}
+#endif /* CONFIG_OF */
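+
+/*
+ * Illustrative device tree node for the parser above (property names
+ * are taken from the code; the values are examples only):
+ *
+ *   dwmmc0 {
+ *           compatible = "synopsis,dw-mshc-exynos5250";
+ *           supports-highspeed;
+ *           fifo-depth = <0x80>;
+ *           card-detect-delay = <200>;
+ *           samsung,dw-mshc-sdr-timing = <2 3 3>;
+ *           samsung,dw-mshc-ddr-timing = <1 2 3>;
+ *           slot0 {
+ *                   bus-width = <4>;
+ *           };
+ *   };
+ */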
+
int dw_mci_probe(struct dw_mci *host)
{
int width, i, ret = 0;
u32 fifo_size;
- if (!host->pdata || !host->pdata->init) {
- dev_err(&host->dev,
- "Platform data must supply init function\n");
- return -ENODEV;
+ if (!host->pdata) {
+ host->pdata = dw_mci_parse_dt(host);
+ if (IS_ERR(host->pdata)) {
+ dev_err(&host->dev, "platform data not available\n");
+ return -EINVAL;
+ }
}
if (!host->pdata->select_slot && host->pdata->num_slots > 1) {
return -ENODEV;
}
- if (!host->pdata->bus_hz) {
+ host->biu_clk = clk_get(&host->dev, "biu");
+ if (IS_ERR(host->biu_clk))
+ dev_info(&host->dev, "biu clock not available\n");
+ else
+ clk_enable(host->biu_clk);
+
+ host->ciu_clk = clk_get(&host->dev, "ciu");
+ if (IS_ERR(host->ciu_clk))
+ dev_info(&host->dev, "ciu clock not available\n");
+ else
+ clk_enable(host->ciu_clk);
+
+ if (IS_ERR(host->ciu_clk))
+ host->bus_hz = host->pdata->bus_hz;
+ else
+ host->bus_hz = clk_get_rate(host->ciu_clk);
+
+ if (!host->bus_hz) {
dev_err(&host->dev,
"Platform data must supply bus speed\n");
- return -ENODEV;
+ ret = -ENODEV;
+ goto err_clk;
}
- host->bus_hz = host->pdata->bus_hz;
host->quirks = host->pdata->quirks;
spin_lock_init(&host->lock);
INIT_LIST_HEAD(&host->queue);
-
host->dma_ops = host->pdata->dma_ops;
dw_mci_init_dma(host);
mci_writel(host, CLKSRC, 0);
tasklet_init(&host->tasklet, dw_mci_tasklet_func, (unsigned long)host);
- dw_mci_card_workqueue = alloc_workqueue("dw-mci-card",
+ host->card_workqueue = alloc_workqueue("dw-mci-card",
WQ_MEM_RECLAIM | WQ_NON_REENTRANT, 1);
- if (!dw_mci_card_workqueue)
+ if (!host->card_workqueue)
goto err_dmaunmap;
INIT_WORK(&host->card_work, dw_mci_work_routine_card);
ret = request_irq(host->irq, dw_mci_interrupt, host->irq_flags, "dw-mci", host);
free_irq(host->irq, host);
err_workqueue:
- destroy_workqueue(dw_mci_card_workqueue);
+ destroy_workqueue(host->card_workqueue);
err_dmaunmap:
if (host->use_dma && host->dma_ops->exit)
regulator_disable(host->vmmc);
regulator_put(host->vmmc);
}
+
+err_clk:
+ clk_disable(host->ciu_clk);
+ clk_disable(host->biu_clk);
+ clk_put(host->ciu_clk);
+ clk_put(host->biu_clk);
return ret;
}
EXPORT_SYMBOL(dw_mci_probe);
mci_writel(host, CLKSRC, 0);
free_irq(host->irq, host);
- destroy_workqueue(dw_mci_card_workqueue);
+ destroy_workqueue(host->card_workqueue);
dma_free_coherent(&host->dev, PAGE_SIZE, host->sg_cpu, host->sg_dma);
if (host->use_dma && host->dma_ops->exit)
regulator_put(host->vmmc);
}
+ clk_disable(host->ciu_clk);
+ clk_disable(host->biu_clk);
+ clk_put(host->ciu_clk);
+ clk_put(host->biu_clk);
}
EXPORT_SYMBOL(dw_mci_remove);
if (host->vmmc)
regulator_enable(host->vmmc);
- if (host->dma_ops->init)
+ if (host->use_dma && host->dma_ops->init)
host->dma_ops->init(host);
if (!mci_wait_reset(&host->dev, host)) {
#define SDMMC_IDINTEN 0x090
#define SDMMC_DSCADDR 0x094
#define SDMMC_BUFADDR 0x098
+#define SDMMC_CLKSEL 0x09C /* specific to Samsung Exynos5250 */
#define SDMMC_DATA(x) (x)
/*
#define SDMMC_INT_ERROR 0xbfc2
/* Command register defines */
#define SDMMC_CMD_START BIT(31)
+#define SDMMC_USE_HOLD_REG BIT(29)
#define SDMMC_CMD_CCS_EXP BIT(23)
#define SDMMC_CMD_CEATA_RD BIT(22)
#define SDMMC_CMD_UPD_CLK BIT(21)
/* Version ID register define */
#define SDMMC_GET_VERID(x) ((x) & 0xFFFF)
+#define DW_MCI_DEF_SDR_TIMING 0x03030002
+#define DW_MCI_DEF_DDR_TIMING 0x03020001
+#define SDMMC_CLKSEL_CCLK_SAMPLE(x) (((x) & 3) << 0)
+#define SDMMC_CLKSEL_CCLK_DRIVE(x) (((x) & 3) << 16)
+#define SDMMC_CLKSEL_CCLK_DIVIDER(x) (((x) & 3) << 24)
+#define SDMMC_CLKSEL_TIMING(x, y, z) (SDMMC_CLKSEL_CCLK_SAMPLE(x) | \
+ SDMMC_CLKSEL_CCLK_DRIVE(y) | \
+ SDMMC_CLKSEL_CCLK_DIVIDER(z))
+#define SDMMC_CLKSEL_GET_DIVRATIO(x) ((((x) >> 24) & 0x7) + 1)
+#define SDMMC_CLKSEL_GET_SELCLK_DRV(x) (((x) >> 16) & 0x7)
+
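+/*
+ * A quick worked decode (illustrative, not part of this patch): with the
+ * helpers above,
+ *
+ *	SDMMC_CLKSEL_TIMING(2, 3, 3) == 0x03030002 == DW_MCI_DEF_SDR_TIMING
+ *	SDMMC_CLKSEL_TIMING(1, 2, 3) == 0x03020001 == DW_MCI_DEF_DDR_TIMING
+ *
+ * i.e. the CCLK sample phase sits in bits [1:0], the drive phase in
+ * bits [17:16] and the divider field in bits [25:24].
+ */
+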
/* Register access macros */
#define mci_readl(dev, reg) \
__raw_readl((dev)->regs + SDMMC_##reg)
extern int dw_mci_resume(struct dw_mci *host);
#endif
+/* Variations in the dw_mci controller */
+#define DW_MCI_TYPE_SYNOPSIS 0
+#define DW_MCI_TYPE_EXYNOS5250 1 /* Samsung Exynos5250 Extensions */
+
+/* dw_mci platform driver data */
+struct dw_mci_drv_data {
+ unsigned long ctrl_type;
+ unsigned long *caps;
+};
+
#endif /* _DW_MMC_H_ */
{ "w25q32", INFO(0xef4016, 0, 64 * 1024, 64, SECT_4K) },
{ "w25x64", INFO(0xef3017, 0, 64 * 1024, 128, SECT_4K) },
{ "w25q64", INFO(0xef4017, 0, 64 * 1024, 128, SECT_4K) },
+ { "w25q80", INFO(0xef5014, 0, 64 * 1024, 16, SECT_4K) },
/* Catalyst / On Semiconductor -- non-JEDEC */
{ "cat25c11", CAT25_INFO( 16, 8, 16, 1) },
mwifiex_dump_station_info(struct mwifiex_private *priv,
struct station_info *sinfo)
{
- struct mwifiex_ds_get_signal signal;
struct mwifiex_rate_cfg rate;
- int ret = 0;
sinfo->filled = STATION_INFO_RX_BYTES | STATION_INFO_TX_BYTES |
- STATION_INFO_RX_PACKETS |
- STATION_INFO_TX_PACKETS
- | STATION_INFO_SIGNAL | STATION_INFO_TX_BITRATE;
+ STATION_INFO_RX_PACKETS | STATION_INFO_TX_PACKETS |
+ STATION_INFO_TX_BITRATE |
+ STATION_INFO_SIGNAL | STATION_INFO_SIGNAL_AVG;
/* Get signal information from the firmware */
- memset(&signal, 0, sizeof(struct mwifiex_ds_get_signal));
- if (mwifiex_get_signal_info(priv, &signal)) {
- dev_err(priv->adapter->dev, "getting signal information\n");
- ret = -EFAULT;
+ if (mwifiex_send_cmd_sync(priv, HostCmd_CMD_RSSI_INFO,
+ HostCmd_ACT_GEN_GET, 0, NULL)) {
+ dev_err(priv->adapter->dev, "failed to get signal information\n");
+ return -EFAULT;
}
if (mwifiex_drv_get_data_rate(priv, &rate)) {
dev_err(priv->adapter->dev, "getting data rate\n");
- ret = -EFAULT;
+ return -EFAULT;
}
/* Get DTIM period information from firmware */
sinfo->txrate.flags |= RATE_INFO_FLAGS_SHORT_GI;
}
+ sinfo->signal_avg = priv->bcn_rssi_avg;
sinfo->rx_bytes = priv->stats.rx_bytes;
sinfo->tx_bytes = priv->stats.tx_bytes;
sinfo->rx_packets = priv->stats.rx_packets;
sinfo->tx_packets = priv->stats.tx_packets;
- sinfo->signal = priv->qual_level;
+ sinfo->signal = priv->bcn_rssi_avg;
/* bit rate is in 500 kb/s units. Convert it to 100kb/s units */
sinfo->txrate.legacy = rate.rate * 5;
priv->curr_bss_params.bss_descriptor.beacon_period;
}
- return ret;
+ return 0;
}
/*
return mwifiex_dump_station_info(priv, sinfo);
}
+/*
+ * CFG802.11 operation handler to dump station information.
+ */
+static int
+mwifiex_cfg80211_dump_station(struct wiphy *wiphy, struct net_device *dev,
+ int idx, u8 *mac, struct station_info *sinfo)
+{
+ struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev);
+
+ if (!priv->media_connected || idx)
+ return -ENOENT;
+
+ memcpy(mac, priv->cfg_bssid, ETH_ALEN);
+
+ return mwifiex_dump_station_info(priv, sinfo);
+}
+
/* Supported rates to be advertised to the cfg80211 */
static struct ieee80211_rate mwifiex_rates[] = {
return 0;
}
+/*
+ * CFG802.11 operation handler for connection quality monitoring.
+ *
+ * This function subscribes/unsubscribes HIGH_RSSI and LOW_RSSI
+ * events to FW.
+ */
+static int mwifiex_cfg80211_set_cqm_rssi_config(struct wiphy *wiphy,
+ struct net_device *dev,
+ s32 rssi_thold, u32 rssi_hyst)
+{
+ struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev);
+ struct mwifiex_ds_misc_subsc_evt subsc_evt;
+
+ priv->cqm_rssi_thold = rssi_thold;
+ priv->cqm_rssi_hyst = rssi_hyst;
+
+ memset(&subsc_evt, 0x00, sizeof(struct mwifiex_ds_misc_subsc_evt));
+ subsc_evt.events = BITMASK_BCN_RSSI_LOW | BITMASK_BCN_RSSI_HIGH;
+
+ /* Subscribe/unsubscribe low and high rssi events */
+ if (rssi_thold && rssi_hyst) {
+ subsc_evt.action = HostCmd_ACT_BITWISE_SET;
+ subsc_evt.bcn_l_rssi_cfg.abs_value = abs(rssi_thold);
+ subsc_evt.bcn_h_rssi_cfg.abs_value = abs(rssi_thold);
+ subsc_evt.bcn_l_rssi_cfg.evt_freq = 1;
+ subsc_evt.bcn_h_rssi_cfg.evt_freq = 1;
+ return mwifiex_send_cmd_sync(priv,
+ HostCmd_CMD_802_11_SUBSCRIBE_EVENT,
+ 0, 0, &subsc_evt);
+ } else {
+ subsc_evt.action = HostCmd_ACT_BITWISE_CLR;
+ return mwifiex_send_cmd_sync(priv,
+ HostCmd_CMD_802_11_SUBSCRIBE_EVENT,
+ 0, 0, &subsc_evt);
+ }
+}
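+
+/* Illustrative trigger (not part of this patch): userspace configures these
+ * thresholds over nl80211, e.g. with "iw dev wlan0 cqm rssi -60 5", which
+ * reaches this handler as rssi_thold = -60 and rssi_hyst = 5.
+ */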
+
/*
* CFG802.11 operation handler for disconnection request.
*
priv->user_scan_cfg->num_ssids = request->n_ssids;
priv->user_scan_cfg->ssid_list = request->ssids;
+ if (request->ie && request->ie_len) {
+ for (i = 0; i < MWIFIEX_MAX_VSIE_NUM; i++) {
+ if (priv->vs_ie[i].mask != MWIFIEX_VSIE_MASK_CLEAR)
+ continue;
+ priv->vs_ie[i].mask = MWIFIEX_VSIE_MASK_SCAN;
+ memcpy(&priv->vs_ie[i].ie, request->ie,
+ request->ie_len);
+ break;
+ }
+ }
+
for (i = 0; i < request->n_channels; i++) {
chan = request->channels[i];
priv->user_scan_cfg->chan_list[i].chan_number = chan->hw_value;
if (mwifiex_set_user_scan_ioctl(priv, priv->user_scan_cfg))
return -EFAULT;
+ if (request->ie && request->ie_len) {
+ for (i = 0; i < MWIFIEX_MAX_VSIE_NUM; i++) {
+ if (priv->vs_ie[i].mask == MWIFIEX_VSIE_MASK_SCAN) {
+ priv->vs_ie[i].mask = MWIFIEX_VSIE_MASK_CLEAR;
+ memset(&priv->vs_ie[i].ie, 0,
+ MWIFIEX_MAX_VSIE_LEN);
+ }
+ }
+ }
return 0;
}
.connect = mwifiex_cfg80211_connect,
.disconnect = mwifiex_cfg80211_disconnect,
.get_station = mwifiex_cfg80211_get_station,
+ .dump_station = mwifiex_cfg80211_dump_station,
.set_wiphy_params = mwifiex_cfg80211_set_wiphy_params,
.set_channel = mwifiex_cfg80211_set_channel,
.join_ibss = mwifiex_cfg80211_join_ibss,
.set_power_mgmt = mwifiex_cfg80211_set_power_mgmt,
.set_tx_power = mwifiex_cfg80211_set_tx_power,
.set_bitrate_mask = mwifiex_cfg80211_set_bitrate_mask,
+ .set_cqm_rssi_config = mwifiex_cfg80211_set_cqm_rssi_config,
};
/*
void *wdev_priv;
struct wireless_dev *wdev;
struct ieee80211_sta_ht_cap *ht_info;
+ u8 *country_code;
wdev = kzalloc(sizeof(struct wireless_dev), GFP_KERNEL);
if (!wdev) {
}
wdev->iftype = NL80211_IFTYPE_STATION;
wdev->wiphy->max_scan_ssids = 10;
+ wdev->wiphy->max_scan_ie_len = MWIFIEX_MAX_VSIE_LEN;
wdev->wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) |
BIT(NL80211_IFTYPE_ADHOC);
memcpy(wdev->wiphy->perm_addr, priv->curr_addr, ETH_ALEN);
wdev->wiphy->signal_type = CFG80211_SIGNAL_TYPE_MBM;
- /* Reserve space for bss band information */
- wdev->wiphy->bss_priv_size = sizeof(u8);
+ /* Reserve space for mwifiex specific private data for BSS */
+ wdev->wiphy->bss_priv_size = sizeof(struct mwifiex_bss_priv);
wdev->wiphy->reg_notifier = mwifiex_reg_notifier;
"info: successfully registered wiphy device\n");
}
+ country_code = mwifiex_11d_code_2_region(priv->adapter->region_code);
+ if (country_code && regulatory_hint(wdev->wiphy, country_code))
+ dev_err(priv->adapter->dev,
+ "%s: regulatory_hint failed\n", __func__);
+
priv->wdev = wdev;
return ret;
static u8 supported_rates_n[N_SUPPORTED_RATES] = { 0x02, 0x04, 0 };
+struct region_code_mapping {
+ u8 code;
+ u8 region[IEEE80211_COUNTRY_STRING_LEN];
+};
+
+static struct region_code_mapping region_code_mapping_t[] = {
+ { 0x10, "US " }, /* US FCC */
+ { 0x20, "CA " }, /* IC Canada */
+ { 0x30, "EU " }, /* ETSI */
+ { 0x31, "ES " }, /* Spain */
+ { 0x32, "FR " }, /* France */
+ { 0x40, "JP " }, /* Japan */
+ { 0x41, "JP " }, /* Japan */
+ { 0x50, "CN " }, /* China */
+};
+
+/* This function converts integer code to region string */
+u8 *mwifiex_11d_code_2_region(u8 code)
+{
+ u8 i;
+ u8 size = ARRAY_SIZE(region_code_mapping_t);
+
+ /* Look for code in mapping table */
+ for (i = 0; i < size; i++)
+ if (region_code_mapping_t[i].code == code)
+ return region_code_mapping_t[i].region;
+
+ return NULL;
+}
+
/*
* This function maps an index in supported rates table into
* the corresponding data rate.
p += sprintf(p, "essid=\"%s\"\n", info.ssid.ssid);
p += sprintf(p, "bssid=\"%pM\"\n", info.bssid);
p += sprintf(p, "channel=\"%d\"\n", (int) info.bss_chan);
- p += sprintf(p, "region_code = \"%02x\"\n", info.region_code);
+ p += sprintf(p, "country_code = \"%s\"\n", info.country_code);
netdev_for_each_mc_addr(ha, netdev)
p += sprintf(p, "multicast_address[%d]=\"%pM\"\n",
#define TLV_TYPE_KEY_MATERIAL (PROPRIETARY_TLV_BASE_ID + 0)
#define TLV_TYPE_CHANLIST (PROPRIETARY_TLV_BASE_ID + 1)
#define TLV_TYPE_NUMPROBES (PROPRIETARY_TLV_BASE_ID + 2)
+#define TLV_TYPE_RSSI_LOW (PROPRIETARY_TLV_BASE_ID + 4)
#define TLV_TYPE_PASSTHROUGH (PROPRIETARY_TLV_BASE_ID + 10)
#define TLV_TYPE_WMMQSTATUS (PROPRIETARY_TLV_BASE_ID + 16)
#define TLV_TYPE_WILDCARDSSID (PROPRIETARY_TLV_BASE_ID + 18)
#define TLV_TYPE_TSFTIMESTAMP (PROPRIETARY_TLV_BASE_ID + 19)
+#define TLV_TYPE_RSSI_HIGH (PROPRIETARY_TLV_BASE_ID + 22)
#define TLV_TYPE_AUTH_TYPE (PROPRIETARY_TLV_BASE_ID + 31)
#define TLV_TYPE_CHANNELBANDLIST (PROPRIETARY_TLV_BASE_ID + 42)
#define TLV_TYPE_RATE_DROP_CONTROL (PROPRIETARY_TLV_BASE_ID + 82)
#define TLV_TYPE_RATE_SCOPE (PROPRIETARY_TLV_BASE_ID + 83)
#define TLV_TYPE_POWER_GROUP (PROPRIETARY_TLV_BASE_ID + 84)
#define TLV_TYPE_WAPI_IE (PROPRIETARY_TLV_BASE_ID + 94)
+#define TLV_TYPE_MGMT_IE (PROPRIETARY_TLV_BASE_ID + 105)
#define TLV_TYPE_AUTO_DS_PARAM (PROPRIETARY_TLV_BASE_ID + 113)
#define TLV_TYPE_PS_PARAM (PROPRIETARY_TLV_BASE_ID + 114)
#define HostCmd_CMD_802_11_KEY_MATERIAL 0x005e
#define HostCmd_CMD_802_11_BG_SCAN_QUERY 0x006c
#define HostCmd_CMD_WMM_GET_STATUS 0x0071
+#define HostCmd_CMD_802_11_SUBSCRIBE_EVENT 0x0075
#define HostCmd_CMD_802_11_TX_RATE_QUERY 0x007f
#define HostCmd_CMD_802_11_IBSS_COALESCING_STATUS 0x0083
#define HostCmd_CMD_VERSION_EXT 0x0097
#define HostCmd_RET_BIT 0x8000
#define HostCmd_ACT_GEN_GET 0x0000
#define HostCmd_ACT_GEN_SET 0x0001
+#define HostCmd_ACT_BITWISE_SET 0x0002
+#define HostCmd_ACT_BITWISE_CLR 0x0003
#define HostCmd_RESULT_OK 0x0000
#define HostCmd_ACT_MAC_RX_ON 0x0001
struct mwifiex_bcn_param {
u8 bssid[ETH_ALEN];
u8 rssi;
- __le32 timestamp[2];
+ __le64 timestamp;
__le16 beacon_period;
__le16 cap_info_bitmap;
} __packed;
struct ieee_types_vendor_header vend_hdr;
u8 qos_info_bitmap;
u8 reserved;
- struct ieee_types_wmm_ac_parameters ac_params[IEEE80211_MAX_QUEUES];
+ struct ieee_types_wmm_ac_parameters ac_params[IEEE80211_NUM_ACS];
} __packed;
struct ieee_types_wmm_info {
struct host_cmd_ds_wmm_get_status {
u8 queue_status_tlv[sizeof(struct mwifiex_ie_types_wmm_queue_status) *
- IEEE80211_MAX_QUEUES];
+ IEEE80211_NUM_ACS];
u8 wmm_param_tlv[sizeof(struct ieee_types_wmm_parameter) + 2];
} __packed;
u32 sleep_cookie_addr_hi;
} __packed;
+struct mwifiex_ie_types_rssi_threshold {
+ struct mwifiex_ie_types_header header;
+ u8 abs_value;
+ u8 evt_freq;
+} __packed;
+
+struct host_cmd_ds_802_11_subsc_evt {
+ __le16 action;
+ __le16 events;
+} __packed;
+
struct host_cmd_ds_command {
__le16 command;
__le16 size;
struct host_cmd_ds_set_bss_mode bss_mode;
struct host_cmd_ds_pcie_details pcie_host_spec;
struct host_cmd_ds_802_11_eeprom_access eeprom;
+ struct host_cmd_ds_802_11_subsc_evt subsc_evt;
} params;
} __packed;
return 0;
}
+static void scan_delay_timer_fn(unsigned long data)
+{
+ struct mwifiex_private *priv = (struct mwifiex_private *)data;
+ struct mwifiex_adapter *adapter = priv->adapter;
+ struct cmd_ctrl_node *cmd_node, *tmp_node;
+ unsigned long flags;
+
+ if (!mwifiex_wmm_lists_empty(adapter)) {
+ if (adapter->scan_delay_cnt == MWIFIEX_MAX_SCAN_DELAY_CNT) {
+ /*
+ * Abort the scan operation by cancelling all pending
+ * scan commands
+ */
+ spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
+ list_for_each_entry_safe(cmd_node, tmp_node,
+ &adapter->scan_pending_q,
+ list) {
+ list_del(&cmd_node->list);
+ cmd_node->wait_q_enabled = false;
+ mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
+ }
+ spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
+ flags);
+
+ spin_lock_irqsave(&adapter->mwifiex_cmd_lock, flags);
+ adapter->scan_processing = false;
+ spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock,
+ flags);
+
+ if (priv->user_scan_cfg) {
+ dev_dbg(priv->adapter->dev,
+ "info: %s: scan aborted\n", __func__);
+ cfg80211_scan_done(priv->scan_request, 1);
+ priv->scan_request = NULL;
+ kfree(priv->user_scan_cfg);
+ priv->user_scan_cfg = NULL;
+ }
+ } else {
+ /*
+ * Tx data queue is still not empty, delay the scan
+ * operation further by MWIFIEX_SCAN_DELAY_MSEC.
+ */
+ mod_timer(&priv->scan_delay_timer, jiffies +
+ msecs_to_jiffies(MWIFIEX_SCAN_DELAY_MSEC));
+ adapter->scan_delay_cnt++;
+ }
+ } else {
+ /*
+ * Tx data queue is empty. Get scan command from scan_pending_q
+ * and put to cmd_pending_q to resume scan operation
+ */
+ adapter->scan_delay_cnt = 0;
+ spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
+ cmd_node = list_first_entry(&adapter->scan_pending_q,
+ struct cmd_ctrl_node, list);
+ list_del(&cmd_node->list);
+ spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
+
+ mwifiex_insert_cmd_to_pending_q(adapter, cmd_node, true);
+ }
+}
+
/*
* This function initializes the private structure and sets default
* values to the members.
priv->wmm_qosinfo = 0;
priv->curr_bcn_buf = NULL;
priv->curr_bcn_size = 0;
+ priv->wps_ie = NULL;
+ priv->wps_ie_len = 0;
priv->scan_block = false;
+ setup_timer(&priv->scan_delay_timer, scan_delay_timer_fn,
+ (unsigned long)priv);
+
return mwifiex_add_bss_prio_tbl(priv);
}
u32 wep_icv_error[4];
};
-#define BCN_RSSI_AVG_MASK 0x00000002
-#define BCN_NF_AVG_MASK 0x00000200
-#define ALL_RSSI_INFO_MASK 0x00000fff
-
-struct mwifiex_ds_get_signal {
- /*
- * Bit0: Last Beacon RSSI, Bit1: Average Beacon RSSI,
- * Bit2: Last Data RSSI, Bit3: Average Data RSSI,
- * Bit4: Last Beacon SNR, Bit5: Average Beacon SNR,
- * Bit6: Last Data SNR, Bit7: Average Data SNR,
- * Bit8: Last Beacon NF, Bit9: Average Beacon NF,
- * Bit10: Last Data NF, Bit11: Average Data NF
- */
- u16 selector;
- s16 bcn_rssi_last;
- s16 bcn_rssi_avg;
- s16 data_rssi_last;
- s16 data_rssi_avg;
- s16 bcn_snr_last;
- s16 bcn_snr_avg;
- s16 data_snr_last;
- s16 data_snr_avg;
- s16 bcn_nf_last;
- s16 bcn_nf_avg;
- s16 data_nf_last;
- s16 data_nf_avg;
-};
-
#define MWIFIEX_MAX_VER_STR_LEN 128
struct mwifiex_ver_ext {
u32 bss_mode;
struct cfg80211_ssid ssid;
u32 bss_chan;
- u32 region_code;
+ u8 country_code[3];
u32 media_connected;
u32 max_power_level;
u32 min_power_level;
u8 cmd[MWIFIEX_SIZE_OF_CMD_BUFFER];
};
+#define BITMASK_BCN_RSSI_LOW BIT(0)
+#define BITMASK_BCN_RSSI_HIGH BIT(4)
+
+enum subsc_evt_rssi_state {
+ EVENT_HANDLED,
+ RSSI_LOW_RECVD,
+ RSSI_HIGH_RECVD
+};
+
+struct subsc_evt_cfg {
+ u8 abs_value;
+ u8 evt_freq;
+};
+
+struct mwifiex_ds_misc_subsc_evt {
+ u16 action;
+ u16 events;
+ struct subsc_evt_cfg bcn_l_rssi_cfg;
+ struct subsc_evt_cfg bcn_h_rssi_cfg;
+};
+
#define MWIFIEX_MAX_VSIE_LEN (256)
#define MWIFIEX_MAX_VSIE_NUM (8)
+#define MWIFIEX_VSIE_MASK_CLEAR 0x00
#define MWIFIEX_VSIE_MASK_SCAN 0x01
#define MWIFIEX_VSIE_MASK_ASSOC 0x02
#define MWIFIEX_VSIE_MASK_ADHOC 0x04
*buffer += sizeof(tsf_tlv.header);
/* TSF at the time when beacon/probe_response was received */
- tsf_val = cpu_to_le64(bss_desc->network_tsf);
+ tsf_val = cpu_to_le64(bss_desc->fw_tsf);
memcpy(*buffer, &tsf_val, sizeof(tsf_val));
*buffer += sizeof(tsf_val);
- memcpy(&tsf_val, bss_desc->time_stamp, sizeof(tsf_val));
+ tsf_val = cpu_to_le64(bss_desc->timestamp);
dev_dbg(priv->adapter->dev,
"info: %s: TSF offset calc: %016llx - %016llx\n",
- __func__, tsf_val, bss_desc->network_tsf);
+ __func__, bss_desc->timestamp, bss_desc->fw_tsf);
memcpy(*buffer, &tsf_val, sizeof(tsf_val));
*buffer += sizeof(tsf_val);
return 0;
}
+/*
+ * This function appends a WPS IE. It is called from the network join command
+ * preparation routine.
+ *
+ * If the IE buffer has been set up by the application, this routine appends
+ * the buffer as a WPS TLV type to the request.
+ */
+static int
+mwifiex_cmd_append_wps_ie(struct mwifiex_private *priv, u8 **buffer)
+{
+ int ret_len = 0;
+ struct mwifiex_ie_types_header ie_header;
+
+ if (!buffer || !*buffer)
+ return 0;
+
+ /*
+ * If a WPS IE buffer has been set up, append it to the return
+ * parameter buffer pointer.
+ */
+ if (priv->wps_ie_len) {
+ dev_dbg(priv->adapter->dev, "cmd: append wps ie %d to %p\n",
+ priv->wps_ie_len, *buffer);
+
+ /* Wrap the generic IE buffer with a pass through TLV type */
+ ie_header.type = cpu_to_le16(TLV_TYPE_MGMT_IE);
+ ie_header.len = cpu_to_le16(priv->wps_ie_len);
+ memcpy(*buffer, &ie_header, sizeof(ie_header));
+ *buffer += sizeof(ie_header);
+ ret_len += sizeof(ie_header);
+
+ memcpy(*buffer, priv->wps_ie, priv->wps_ie_len);
+ *buffer += priv->wps_ie_len;
+ ret_len += priv->wps_ie_len;
+
+ }
+
+ kfree(priv->wps_ie);
+ priv->wps_ie = NULL;
+ priv->wps_ie_len = 0;
+ return ret_len;
+}
+
/*
* This function appends a WAPI IE.
*
if (priv->sec_info.wapi_enabled && priv->wapi_ie_len)
mwifiex_cmd_append_wapi_ie(priv, &pos);
+ if (priv->wps.session_enable && priv->wps_ie_len)
+ mwifiex_cmd_append_wps_ie(priv, &pos);
mwifiex_cmd_append_generic_ie(priv, &pos);
}
}
- if (!adapter->scan_processing && !adapter->data_sent &&
- !mwifiex_wmm_lists_empty(adapter)) {
+ if ((!adapter->scan_processing || adapter->scan_delay_cnt) &&
+ !adapter->data_sent && !mwifiex_wmm_lists_empty(adapter)) {
mwifiex_wmm_process_tx(adapter);
if (adapter->hs_activated) {
adapter->is_hs_configured = false;
}
/*
- * This function initializes the hardware and firmware.
+ * This function gets firmware and initializes it.
*
* The main initialization steps followed are -
* - Download the correct firmware to card
- * - Allocate and initialize the adapter structure
- * - Initialize the private structures
* - Issue the init commands to firmware
*/
-static int mwifiex_init_hw_fw(struct mwifiex_adapter *adapter)
+static void mwifiex_fw_dpc(const struct firmware *firmware, void *context)
{
- int ret, err;
+ int ret;
+ char fmt[64];
+ struct mwifiex_private *priv;
+ struct mwifiex_adapter *adapter = context;
struct mwifiex_fw_image fw;
- memset(&fw, 0, sizeof(struct mwifiex_fw_image));
-
- err = request_firmware(&adapter->firmware, adapter->fw_name,
- adapter->dev);
- if (err < 0) {
- dev_err(adapter->dev, "request_firmware() returned"
- " error code %#x\n", err);
- ret = -1;
+ if (!firmware) {
+ dev_err(adapter->dev,
+ "Failed to get firmware %s\n", adapter->fw_name);
goto done;
}
+
+ memset(&fw, 0, sizeof(struct mwifiex_fw_image));
+ adapter->firmware = firmware;
fw.fw_buf = (u8 *) adapter->firmware->data;
fw.fw_len = adapter->firmware->size;
/* Wait for mwifiex_init to complete */
wait_event_interruptible(adapter->init_wait_q,
adapter->init_wait_q_woken);
- if (adapter->hw_status != MWIFIEX_HW_STATUS_READY) {
- ret = -1;
+ if (adapter->hw_status != MWIFIEX_HW_STATUS_READY)
goto done;
+
+ priv = adapter->priv[0];
+ if (mwifiex_register_cfg80211(priv) != 0) {
+ dev_err(adapter->dev, "cannot register with cfg80211\n");
+ goto err_init_fw;
+ }
+
+ rtnl_lock();
+ /* Create station interface by default */
+ if (!mwifiex_add_virtual_intf(priv->wdev->wiphy, "mlan%d",
+ NL80211_IFTYPE_STATION, NULL, NULL)) {
+ dev_err(adapter->dev, "cannot create default STA interface\n");
+ goto err_add_intf;
}
- ret = 0;
+ rtnl_unlock();
+
+ mwifiex_drv_get_driver_version(adapter, fmt, sizeof(fmt) - 1);
+ dev_notice(adapter->dev, "driver_version = %s\n", fmt);
+ goto done;
+err_add_intf:
+ mwifiex_del_virtual_intf(priv->wdev->wiphy, priv->netdev);
+ rtnl_unlock();
+err_init_fw:
+ pr_debug("info: %s: unregister device\n", __func__);
+ adapter->if_ops.unregister_dev(adapter);
done:
- if (adapter->firmware)
- release_firmware(adapter->firmware);
- if (ret)
- ret = -1;
+ release_firmware(adapter->firmware);
+ complete(&adapter->fw_load);
+ return;
+}
+
+/*
+ * This function initializes the hardware and gets firmware.
+ */
+static int mwifiex_init_hw_fw(struct mwifiex_adapter *adapter)
+{
+ int ret;
+
+ init_completion(&adapter->fw_load);
+ ret = request_firmware_nowait(THIS_MODULE, 1, adapter->fw_name,
+ adapter->dev, GFP_KERNEL, adapter,
+ mwifiex_fw_dpc);
+ if (ret < 0)
+ dev_err(adapter->dev,
+ "request_firmware_nowait() returned error %d\n", ret);
return ret;
}
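+
+/*
+ * Note: request_firmware_nowait() makes firmware loading asynchronous, so
+ * mwifiex_fw_dpc() completes adapter->fw_load when it finishes, and the
+ * module unload paths wait_for_completion(&adapter->fw_load) before
+ * unregistering, in case the driver is removed while the load is running.
+ */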
struct mwifiex_if_ops *if_ops, u8 iface_type)
{
struct mwifiex_adapter *adapter;
- char fmt[64];
- struct mwifiex_private *priv;
if (down_interruptible(sem))
goto exit_sem_err;
goto err_init_fw;
}
- priv = adapter->priv[0];
-
- if (mwifiex_register_cfg80211(priv) != 0) {
- dev_err(adapter->dev, "cannot register netdevice"
- " with cfg80211\n");
- goto err_init_fw;
- }
-
- rtnl_lock();
- /* Create station interface by default */
- if (!mwifiex_add_virtual_intf(priv->wdev->wiphy, "mlan%d",
- NL80211_IFTYPE_STATION, NULL, NULL)) {
- rtnl_unlock();
- dev_err(adapter->dev, "cannot create default station"
- " interface\n");
- goto err_add_intf;
- }
-
- rtnl_unlock();
-
up(sem);
-
- mwifiex_drv_get_driver_version(adapter, fmt, sizeof(fmt) - 1);
- dev_notice(adapter->dev, "driver_version = %s\n", fmt);
-
return 0;
-err_add_intf:
- rtnl_lock();
- mwifiex_del_virtual_intf(priv->wdev->wiphy, priv->netdev);
- rtnl_unlock();
err_init_fw:
pr_debug("info: %s: unregister device\n", __func__);
adapter->if_ops.unregister_dev(adapter);
#define SCAN_BEACON_ENTRY_PAD 6
-#define MWIFIEX_PASSIVE_SCAN_CHAN_TIME 200
-#define MWIFIEX_ACTIVE_SCAN_CHAN_TIME 200
-#define MWIFIEX_SPECIFIC_SCAN_CHAN_TIME 110
+#define MWIFIEX_PASSIVE_SCAN_CHAN_TIME 110
+#define MWIFIEX_ACTIVE_SCAN_CHAN_TIME 30
+#define MWIFIEX_SPECIFIC_SCAN_CHAN_TIME 30
#define SCAN_RSSI(RSSI) (0x100 - ((u8)(RSSI)))
#define MWIFIEX_MAX_TOTAL_SCAN_TIME (MWIFIEX_TIMER_10S - MWIFIEX_TIMER_1S)
+#define MWIFIEX_MAX_SCAN_DELAY_CNT 50
+#define MWIFIEX_SCAN_DELAY_MSEC 20
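+/* With these defaults, a scan backs off for at most
+ * MWIFIEX_MAX_SCAN_DELAY_CNT * MWIFIEX_SCAN_DELAY_MSEC = 50 * 20 ms = 1 s
+ * of continuous Tx traffic before scan_delay_timer_fn() aborts the pending
+ * scan commands.
+ */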
+
#define RSN_GTK_OUI_OFFSET 2
#define MWIFIEX_OUI_NOT_PRESENT 0
u32 packets_out[MAX_NUM_TID];
/* spin lock to protect ra_list */
spinlock_t ra_list_spinlock;
- struct mwifiex_wmm_ac_status ac_status[IEEE80211_MAX_QUEUES];
- enum mwifiex_wmm_ac_e ac_down_graded_vals[IEEE80211_MAX_QUEUES];
+ struct mwifiex_wmm_ac_status ac_status[IEEE80211_NUM_ACS];
+ enum mwifiex_wmm_ac_e ac_down_graded_vals[IEEE80211_NUM_ACS];
u32 drv_pkt_delay_max;
- u8 queue_priority[IEEE80211_MAX_QUEUES];
+ u8 queue_priority[IEEE80211_NUM_ACS];
u32 user_pri_pkt_tx_ctrl[WMM_HIGHEST_PRIORITY + 1]; /* UP: 0 to 7 */
/* Number of transmit packets queued */
atomic_t tx_pkts_queued;
* BAND_A(0X04): 'a' band
*/
u16 bss_band;
- u64 network_tsf;
- u8 time_stamp[8];
+ u64 fw_tsf;
+ u64 timestamp;
union ieee_types_phy_param_set phy_param_set;
union ieee_types_ss_param_set ss_param_set;
u16 cap_info_bitmap;
struct host_cmd_ds_802_11_key_material aes_key;
u8 wapi_ie[256];
u8 wapi_ie_len;
+ u8 *wps_ie;
+ u8 wps_ie_len;
u8 wmm_required;
u8 wmm_enabled;
u8 wmm_qosinfo;
struct dentry *dfs_dev_dir;
#endif
u8 nick_name[16];
- u8 qual_level, qual_noise;
u16 current_key_index;
struct semaphore async_sem;
u8 scan_pending_on_block;
u8 country_code[IEEE80211_COUNTRY_STRING_LEN];
struct wps wps;
u8 scan_block;
+ s32 cqm_rssi_thold;
+ u32 cqm_rssi_hyst;
+ u8 subsc_evt_rssi_state;
+ struct timer_list scan_delay_timer;
};
enum mwifiex_ba_status {
u8 cmd_wait_q_woken;
};
+struct mwifiex_bss_priv {
+ u8 band;
+ u64 fw_tsf;
+};
+
struct mwifiex_if_ops {
int (*init_if) (struct mwifiex_adapter *);
void (*cleanup_if) (struct mwifiex_adapter *);
u8 scan_wait_q_woken;
struct cmd_ctrl_node *cmd_queued;
spinlock_t queue_lock; /* lock for tx queues */
+ struct completion fw_load;
+ u8 scan_delay_cnt;
};
int mwifiex_init_lock_list(struct mwifiex_adapter *adapter);
int mwifiex_cancel_hs(struct mwifiex_private *priv, int cmd_type);
int mwifiex_enable_hs(struct mwifiex_adapter *adapter);
int mwifiex_disable_auto_ds(struct mwifiex_private *priv);
-int mwifiex_get_signal_info(struct mwifiex_private *priv,
- struct mwifiex_ds_get_signal *signal);
int mwifiex_drv_get_data_rate(struct mwifiex_private *priv,
struct mwifiex_rate_cfg *rate);
int mwifiex_request_scan(struct mwifiex_private *priv,
int mwifiex_get_bss_info(struct mwifiex_private *,
struct mwifiex_bss_info *);
int mwifiex_fill_new_bss_desc(struct mwifiex_private *priv,
- u8 *bssid, s32 rssi, u8 *ie_buf,
- size_t ie_len, u16 beacon_period,
- u16 cap_info_bitmap, u8 band,
+ struct cfg80211_bss *bss,
struct mwifiex_bssdescriptor *bss_desc);
int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
- struct mwifiex_bssdescriptor *bss_entry,
- u8 *ie_buf, u32 ie_len);
+ struct mwifiex_bssdescriptor *bss_entry);
int mwifiex_check_network_compatibility(struct mwifiex_private *priv,
struct mwifiex_bssdescriptor *bss_desc);
u32 *flags, struct vif_params *params);
int mwifiex_del_virtual_intf(struct wiphy *wiphy, struct net_device *dev);
+u8 *mwifiex_11d_code_2_region(u8 code);
#ifdef CONFIG_DEBUG_FS
void mwifiex_debugfs_init(void);
if (!adapter || !adapter->priv_num)
return;
+ /* In case driver is removed when asynchronous FW load is in progress */
+ wait_for_completion(&adapter->fw_load);
+
if (user_rmmod) {
#ifdef CONFIG_PM
if (adapter->is_suspended)
/* The maximum number of channels the firmware can scan per command */
#define MWIFIEX_MAX_CHANNELS_PER_SPECIFIC_SCAN 14
-#define MWIFIEX_CHANNELS_PER_SCAN_CMD 4
+#define MWIFIEX_DEF_CHANNELS_PER_SCAN_CMD 4
+#define MWIFIEX_LIMIT_1_CHANNEL_PER_SCAN_CMD 15
+#define MWIFIEX_LIMIT_2_CHANNELS_PER_SCAN_CMD 27
+#define MWIFIEX_LIMIT_3_CHANNELS_PER_SCAN_CMD 35
/* Memory needed to store a max sized Channel List TLV for a firmware scan */
#define CHAN_TLV_MAX_SIZE (sizeof(struct mwifiex_ie_types_header) \
* This routine is used for any scan that is not provided with a
* specific channel list to scan.
*/
-static void
+static int
mwifiex_scan_create_channel_list(struct mwifiex_private *priv,
const struct mwifiex_user_scan_cfg
*user_scan_in,
}
}
+ return chan_idx;
}
/*
u32 num_probes;
u32 ssid_len;
u32 chan_idx;
+ u32 chan_num;
u32 scan_type;
u16 scan_dur;
u8 channel;
if (*filtered_scan)
*max_chan_per_scan = MWIFIEX_MAX_CHANNELS_PER_SPECIFIC_SCAN;
else
- *max_chan_per_scan = MWIFIEX_CHANNELS_PER_SCAN_CMD;
+ *max_chan_per_scan = MWIFIEX_DEF_CHANNELS_PER_SCAN_CMD;
/* If the input config or adapter has the number of Probes set,
add tlv */
dev_dbg(adapter->dev,
"info: Scan: Scanning current channel only\n");
}
-
+ chan_num = chan_idx;
} else {
dev_dbg(adapter->dev,
"info: Scan: Creating full region channel list\n");
- mwifiex_scan_create_channel_list(priv, user_scan_in,
- scan_chan_list,
- *filtered_scan);
+ chan_num = mwifiex_scan_create_channel_list(priv, user_scan_in,
+ scan_chan_list,
+ *filtered_scan);
+ }
+
+ /*
+ * In the associated state, reduce the number of channels scanned
+ * per scan command to avoid traffic delay/loss. The number is
+ * chosen from the total channel count, constrained by the size of
+ * the command buffers.
+ */
+ if (priv->media_connected) {
+ if (chan_num < MWIFIEX_LIMIT_1_CHANNEL_PER_SCAN_CMD)
+ *max_chan_per_scan = 1;
+ else if (chan_num < MWIFIEX_LIMIT_2_CHANNELS_PER_SCAN_CMD)
+ *max_chan_per_scan = 2;
+ else if (chan_num < MWIFIEX_LIMIT_3_CHANNELS_PER_SCAN_CMD)
+ *max_chan_per_scan = 3;
}
}
* This function parses provided beacon buffer and updates
* respective fields in bss descriptor structure.
*/
-int
-mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
- struct mwifiex_bssdescriptor *bss_entry,
- u8 *ie_buf, u32 ie_len)
+int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
+ struct mwifiex_bssdescriptor *bss_entry)
{
int ret = 0;
u8 element_id;
found_data_rate_ie = false;
rate_size = 0;
- current_ptr = ie_buf;
- bytes_left = ie_len;
- bss_entry->beacon_buf = ie_buf;
- bss_entry->beacon_buf_size = ie_len;
+ current_ptr = bss_entry->beacon_buf;
+ bytes_left = bss_entry->beacon_buf_size;
/* Process variable IE */
while (bytes_left >= 2) {
return ret;
}
-static int
-mwifiex_update_curr_bss_params(struct mwifiex_private *priv, u8 *bssid,
- s32 rssi, const u8 *ie_buf, size_t ie_len,
- u16 beacon_period, u16 cap_info_bitmap, u8 band)
+static int mwifiex_update_curr_bss_params(struct mwifiex_private *priv,
+ struct cfg80211_bss *bss)
{
struct mwifiex_bssdescriptor *bss_desc;
int ret;
unsigned long flags;
- u8 *beacon_ie;
/* Allocate and fill new bss descriptor */
bss_desc = kzalloc(sizeof(struct mwifiex_bssdescriptor),
return -ENOMEM;
}
- beacon_ie = kmemdup(ie_buf, ie_len, GFP_KERNEL);
- if (!beacon_ie) {
- kfree(bss_desc);
- dev_err(priv->adapter->dev, " failed to alloc beacon_ie\n");
- return -ENOMEM;
- }
-
- ret = mwifiex_fill_new_bss_desc(priv, bssid, rssi, beacon_ie,
- ie_len, beacon_period,
- cap_info_bitmap, band, bss_desc);
+ ret = mwifiex_fill_new_bss_desc(priv, bss, bss_desc);
if (ret)
goto done;
done:
kfree(bss_desc);
- kfree(beacon_ie);
return 0;
}
const u8 *ie_buf;
size_t ie_len;
u16 channel = 0;
- u64 network_tsf = 0;
+ u64 fw_tsf = 0;
u16 beacon_size = 0;
u32 curr_bcn_bytes;
u32 freq;
u16 beacon_period;
u16 cap_info_bitmap;
u8 *current_ptr;
+ u64 timestamp;
struct mwifiex_bcn_param *bcn_param;
+ struct mwifiex_bss_priv *bss_priv;
if (bytes_left >= sizeof(beacon_size)) {
/* Extract & convert beacon size from command buffer */
memcpy(bssid, bcn_param->bssid, ETH_ALEN);
- rssi = (s32) (bcn_param->rssi);
- dev_dbg(adapter->dev, "info: InterpretIE: RSSI=%02X\n", rssi);
+ rssi = (s32) bcn_param->rssi;
+ rssi = (-rssi) * 100; /* Convert dBm to mBm */
+ dev_dbg(adapter->dev, "info: InterpretIE: RSSI=%d\n", rssi);
+ timestamp = le64_to_cpu(bcn_param->timestamp);
beacon_period = le16_to_cpu(bcn_param->beacon_period);
cap_info_bitmap = le16_to_cpu(bcn_param->cap_info_bitmap);
/*
* If the TSF TLV was appended to the scan results, save this
- * entry's TSF value in the networkTSF field.The networkTSF is
- * the firmware's TSF value at the time the beacon or probe
- * response was received.
+ * entry's TSF value in the fw_tsf field. It is the firmware's
+ * TSF value at the time the beacon or probe response was
+ * received.
*/
if (tsf_tlv)
- memcpy(&network_tsf,
- &tsf_tlv->tsf_data[idx * TSF_DATA_SIZE],
- sizeof(network_tsf));
+ memcpy(&fw_tsf, &tsf_tlv->tsf_data[idx * TSF_DATA_SIZE],
+ sizeof(fw_tsf));
if (channel) {
struct ieee80211_channel *chan;
if (chan && !(chan->flags & IEEE80211_CHAN_DISABLED)) {
bss = cfg80211_inform_bss(priv->wdev->wiphy,
- chan, bssid, network_tsf,
+ chan, bssid, timestamp,
cap_info_bitmap, beacon_period,
ie_buf, ie_len, rssi, GFP_KERNEL);
- *(u8 *)bss->priv = band;
- cfg80211_put_bss(bss);
-
+ bss_priv = (struct mwifiex_bss_priv *)bss->priv;
+ bss_priv->band = band;
+ bss_priv->fw_tsf = fw_tsf;
if (priv->media_connected &&
!memcmp(bssid,
priv->curr_bss_params.bss_descriptor
.mac_address, ETH_ALEN))
- mwifiex_update_curr_bss_params
- (priv, bssid, rssi,
- ie_buf, ie_len,
- beacon_period,
- cap_info_bitmap, band);
+ mwifiex_update_curr_bss_params(priv,
+ bss);
+ cfg80211_put_bss(bss);
}
} else {
dev_dbg(adapter->dev, "missing BSS channel IE\n");
priv->user_scan_cfg = NULL;
}
} else {
- /* Get scan command from scan_pending_q and put to
- cmd_pending_q */
- cmd_node = list_first_entry(&adapter->scan_pending_q,
- struct cmd_ctrl_node, list);
- list_del(&cmd_node->list);
- spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
-
- mwifiex_insert_cmd_to_pending_q(adapter, cmd_node, true);
+ if (!mwifiex_wmm_lists_empty(adapter)) {
+ spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
+ flags);
+ adapter->scan_delay_cnt = 1;
+ mod_timer(&priv->scan_delay_timer, jiffies +
+ msecs_to_jiffies(MWIFIEX_SCAN_DELAY_MSEC));
+ } else {
+ /* Get scan command from scan_pending_q and put to
+ cmd_pending_q */
+ cmd_node = list_first_entry(&adapter->scan_pending_q,
+ struct cmd_ctrl_node, list);
+ list_del(&cmd_node->list);
+ spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
+ flags);
+ mwifiex_insert_cmd_to_pending_q(adapter, cmd_node,
+ true);
+ }
}
done:
if (!adapter || !adapter->priv_num)
return;
+ /* In case driver is removed when asynchronous FW load is in progress */
+ wait_for_completion(&adapter->fw_load);
+
if (user_rmmod) {
if (adapter->is_suspended)
mwifiex_sdio_resume(adapter->dev);
a->mpa_tx.ports |= (1<<(a->mpa_tx.pkt_cnt+1+(MAX_PORT - \
a->mp_end_port))); \
a->mpa_tx.pkt_cnt++; \
-} while (0);
+} while (0)
/* SDIO Tx aggregation limit ? */
#define MP_TX_AGGR_PKT_LIMIT_REACHED(a) \
a->mpa_tx.buf_len = 0; \
a->mpa_tx.ports = 0; \
a->mpa_tx.start_port = 0; \
-} while (0);
+} while (0)
/* SDIO Rx aggregation limit ? */
#define MP_RX_AGGR_PKT_LIMIT_REACHED(a) \
a->mpa_rx.skb_arr[a->mpa_rx.pkt_cnt] = skb; \
a->mpa_rx.len_arr[a->mpa_rx.pkt_cnt] = skb->len; \
a->mpa_rx.pkt_cnt++; \
-} while (0);
+} while (0)
/* Reset SDIO Rx aggregation buffer parameters */
#define MP_RX_AGGR_BUF_RESET(a) do { \
a->mpa_rx.buf_len = 0; \
a->mpa_rx.ports = 0; \
a->mpa_rx.start_port = 0; \
-} while (0);
+} while (0)
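+
+/*
+ * Why the trailing semicolons were dropped (illustrative, not driver code):
+ * a do-while macro must expand without its own ';' so that the call site can
+ * supply one. With "} while (0);" the expansion leaves a stray ';' that
+ * terminates the if-branch, so the else below would no longer parse:
+ *
+ *	if (cond)
+ *		MP_RX_AGGR_BUF_RESET(a);
+ *	else
+ *		something_else();
+ *
+ * ("cond" and "something_else" are hypothetical placeholders.)
+ */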
/* data structure for SDIO MPA TX */
return 0;
}
+/*
+ * This function prepares command for event subscription, configuration
+ * and query. Events can be subscribed or unsubscribed. Current subscribed
+ * events can be queried. Also, current subscribed events are reported in
+ * every FW response.
+ */
+static int
+mwifiex_cmd_802_11_subsc_evt(struct mwifiex_private *priv,
+ struct host_cmd_ds_command *cmd,
+ struct mwifiex_ds_misc_subsc_evt *subsc_evt_cfg)
+{
+ struct host_cmd_ds_802_11_subsc_evt *subsc_evt = &cmd->params.subsc_evt;
+ struct mwifiex_ie_types_rssi_threshold *rssi_tlv;
+ u16 event_bitmap;
+ u8 *pos;
+
+ cmd->command = cpu_to_le16(HostCmd_CMD_802_11_SUBSCRIBE_EVENT);
+ cmd->size = cpu_to_le16(sizeof(struct host_cmd_ds_802_11_subsc_evt) +
+ S_DS_GEN);
+
+ subsc_evt->action = cpu_to_le16(subsc_evt_cfg->action);
+ dev_dbg(priv->adapter->dev, "cmd: action: %d\n", subsc_evt_cfg->action);
+
+ /* For query requests, no configuration TLV structures are to be added. */
+ if (subsc_evt_cfg->action == HostCmd_ACT_GEN_GET)
+ return 0;
+
+ subsc_evt->events = cpu_to_le16(subsc_evt_cfg->events);
+
+ event_bitmap = subsc_evt_cfg->events;
+ dev_dbg(priv->adapter->dev, "cmd: event bitmap : %16x\n",
+ event_bitmap);
+
+ if (((subsc_evt_cfg->action == HostCmd_ACT_BITWISE_CLR) ||
+ (subsc_evt_cfg->action == HostCmd_ACT_BITWISE_SET)) &&
+ (event_bitmap == 0)) {
+ dev_dbg(priv->adapter->dev, "Error: No event specified "
+ "for bitwise action type\n");
+ return -EINVAL;
+ }
+
+ /*
+ * Append TLV structures for each of the specified events for
+ * subscribing or re-configuring. This is not required for a
+ * bitwise unsubscribe request.
+ */
+ if (subsc_evt_cfg->action == HostCmd_ACT_BITWISE_CLR)
+ return 0;
+
+ pos = ((u8 *)subsc_evt) +
+ sizeof(struct host_cmd_ds_802_11_subsc_evt);
+
+ if (event_bitmap & BITMASK_BCN_RSSI_LOW) {
+ rssi_tlv = (struct mwifiex_ie_types_rssi_threshold *) pos;
+
+ rssi_tlv->header.type = cpu_to_le16(TLV_TYPE_RSSI_LOW);
+ rssi_tlv->header.len =
+ cpu_to_le16(sizeof(struct mwifiex_ie_types_rssi_threshold) -
+ sizeof(struct mwifiex_ie_types_header));
+ rssi_tlv->abs_value = subsc_evt_cfg->bcn_l_rssi_cfg.abs_value;
+ rssi_tlv->evt_freq = subsc_evt_cfg->bcn_l_rssi_cfg.evt_freq;
+
+ dev_dbg(priv->adapter->dev, "Cfg Beacon Low Rssi event, "
+ "RSSI:-%d dBm, Freq:%d\n",
+ subsc_evt_cfg->bcn_l_rssi_cfg.abs_value,
+ subsc_evt_cfg->bcn_l_rssi_cfg.evt_freq);
+
+ pos += sizeof(struct mwifiex_ie_types_rssi_threshold);
+ le16_add_cpu(&cmd->size,
+ sizeof(struct mwifiex_ie_types_rssi_threshold));
+ }
+
+ if (event_bitmap & BITMASK_BCN_RSSI_HIGH) {
+ rssi_tlv = (struct mwifiex_ie_types_rssi_threshold *) pos;
+
+ rssi_tlv->header.type = cpu_to_le16(TLV_TYPE_RSSI_HIGH);
+ rssi_tlv->header.len =
+ cpu_to_le16(sizeof(struct mwifiex_ie_types_rssi_threshold) -
+ sizeof(struct mwifiex_ie_types_header));
+ rssi_tlv->abs_value = subsc_evt_cfg->bcn_h_rssi_cfg.abs_value;
+ rssi_tlv->evt_freq = subsc_evt_cfg->bcn_h_rssi_cfg.evt_freq;
+
+ dev_dbg(priv->adapter->dev, "Cfg Beacon High Rssi event, "
+ "RSSI:-%d dBm, Freq:%d\n",
+ subsc_evt_cfg->bcn_h_rssi_cfg.abs_value,
+ subsc_evt_cfg->bcn_h_rssi_cfg.evt_freq);
+
+ pos += sizeof(struct mwifiex_ie_types_rssi_threshold);
+ le16_add_cpu(&cmd->size,
+ sizeof(struct mwifiex_ie_types_rssi_threshold));
+ }
+
+ return 0;
+}
+
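+/*
+ * Illustrative result (values made up): a set request with
+ * action = HostCmd_ACT_BITWISE_SET and
+ * events = BITMASK_BCN_RSSI_LOW | BITMASK_BCN_RSSI_HIGH ends up as the
+ * fixed host_cmd_ds_802_11_subsc_evt header followed by one
+ * TLV_TYPE_RSSI_LOW and one TLV_TYPE_RSSI_HIGH TLV, each carrying an
+ * abs_value/evt_freq pair, with cmd->size grown by
+ * sizeof(struct mwifiex_ie_types_rssi_threshold) per appended TLV.
+ */
+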
/*
* This function prepares the commands before sending them to the firmware.
*
case HostCmd_CMD_PCIE_DESC_DETAILS:
ret = mwifiex_cmd_pcie_host_spec(priv, cmd_ptr, cmd_action);
break;
+ case HostCmd_CMD_802_11_SUBSCRIBE_EVENT:
+ ret = mwifiex_cmd_802_11_subsc_evt(priv, cmd_ptr, data_buf);
+ break;
default:
dev_err(priv->adapter->dev,
"PREP_CMD: unknown cmd- %#x\n", cmd_no);
* calculated SNR values.
*/
static int mwifiex_ret_802_11_rssi_info(struct mwifiex_private *priv,
- struct host_cmd_ds_command *resp,
- struct mwifiex_ds_get_signal *signal)
+ struct host_cmd_ds_command *resp)
{
struct host_cmd_ds_802_11_rssi_info_rsp *rssi_info_rsp =
&resp->params.rssi_info_rsp;
+ struct mwifiex_ds_misc_subsc_evt subsc_evt;
priv->data_rssi_last = le16_to_cpu(rssi_info_rsp->data_rssi_last);
priv->data_nf_last = le16_to_cpu(rssi_info_rsp->data_nf_last);
priv->bcn_rssi_avg = le16_to_cpu(rssi_info_rsp->bcn_rssi_avg);
priv->bcn_nf_avg = le16_to_cpu(rssi_info_rsp->bcn_nf_avg);
- /* Need to indicate IOCTL complete */
- if (signal) {
- memset(signal, 0, sizeof(*signal));
-
- signal->selector = ALL_RSSI_INFO_MASK;
-
- /* RSSI */
- signal->bcn_rssi_last = priv->bcn_rssi_last;
- signal->bcn_rssi_avg = priv->bcn_rssi_avg;
- signal->data_rssi_last = priv->data_rssi_last;
- signal->data_rssi_avg = priv->data_rssi_avg;
-
- /* SNR */
- signal->bcn_snr_last =
- CAL_SNR(priv->bcn_rssi_last, priv->bcn_nf_last);
- signal->bcn_snr_avg =
- CAL_SNR(priv->bcn_rssi_avg, priv->bcn_nf_avg);
- signal->data_snr_last =
- CAL_SNR(priv->data_rssi_last, priv->data_nf_last);
- signal->data_snr_avg =
- CAL_SNR(priv->data_rssi_avg, priv->data_nf_avg);
-
- /* NF */
- signal->bcn_nf_last = priv->bcn_nf_last;
- signal->bcn_nf_avg = priv->bcn_nf_avg;
- signal->data_nf_last = priv->data_nf_last;
- signal->data_nf_avg = priv->data_nf_avg;
+ if (priv->subsc_evt_rssi_state == EVENT_HANDLED)
+ return 0;
+
+ /* Resubscribe low and high rssi events with new thresholds */
+ memset(&subsc_evt, 0x00, sizeof(struct mwifiex_ds_misc_subsc_evt));
+ subsc_evt.events = BITMASK_BCN_RSSI_LOW | BITMASK_BCN_RSSI_HIGH;
+ subsc_evt.action = HostCmd_ACT_BITWISE_SET;
+ if (priv->subsc_evt_rssi_state == RSSI_LOW_RECVD) {
+ subsc_evt.bcn_l_rssi_cfg.abs_value = abs(priv->bcn_rssi_avg -
+ priv->cqm_rssi_hyst);
+ subsc_evt.bcn_h_rssi_cfg.abs_value = abs(priv->cqm_rssi_thold);
+ } else if (priv->subsc_evt_rssi_state == RSSI_HIGH_RECVD) {
+ subsc_evt.bcn_l_rssi_cfg.abs_value = abs(priv->cqm_rssi_thold);
+ subsc_evt.bcn_h_rssi_cfg.abs_value = abs(priv->bcn_rssi_avg +
+ priv->cqm_rssi_hyst);
}
+ subsc_evt.bcn_l_rssi_cfg.evt_freq = 1;
+ subsc_evt.bcn_h_rssi_cfg.evt_freq = 1;
+
+ priv->subsc_evt_rssi_state = EVENT_HANDLED;
+
+ mwifiex_send_cmd_async(priv, HostCmd_CMD_802_11_SUBSCRIBE_EVENT,
+ 0, 0, &subsc_evt);
return 0;
}
return 0;
}
+/*
+ * This function handles the command response for subscribe event command.
+ */
+static int mwifiex_ret_subsc_evt(struct mwifiex_private *priv,
+ struct host_cmd_ds_command *resp,
+ struct mwifiex_ds_misc_subsc_evt *sub_event)
+{
+ struct host_cmd_ds_802_11_subsc_evt *cmd_sub_event =
+ (struct host_cmd_ds_802_11_subsc_evt *)&resp->params.subsc_evt;
+
+ /* For every subscribe event command (Get/Set/Clear), FW reports the
+ * current set of subscribed events. */
+ dev_dbg(priv->adapter->dev, "Bitmap of currently subscribed events: %16x\n",
+ le16_to_cpu(cmd_sub_event->events));
+
+ /* Return the subscribed event info for a Get request. */
+ if (sub_event)
+ sub_event->events = le16_to_cpu(cmd_sub_event->events);
+
+ return 0;
+}
+
/*
* This function handles the command responses.
*
ret = mwifiex_ret_get_log(priv, resp, data_buf);
break;
case HostCmd_CMD_RSSI_INFO:
- ret = mwifiex_ret_802_11_rssi_info(priv, resp, data_buf);
+ ret = mwifiex_ret_802_11_rssi_info(priv, resp);
break;
case HostCmd_CMD_802_11_SNMP_MIB:
ret = mwifiex_ret_802_11_snmp_mib(priv, resp, data_buf);
break;
case HostCmd_CMD_PCIE_DESC_DETAILS:
break;
+ case HostCmd_CMD_802_11_SUBSCRIBE_EVENT:
+ ret = mwifiex_ret_subsc_evt(priv, resp, data_buf);
+ break;
default:
dev_err(adapter->dev, "CMD_RESP: unknown cmd response %#x\n",
resp->command);
mwifiex_stop_net_dev_queue(priv->netdev, adapter);
if (netif_carrier_ok(priv->netdev))
netif_carrier_off(priv->netdev);
- /* Reset wireless stats signal info */
- priv->qual_level = 0;
- priv->qual_noise = 0;
}
/*
break;
case EVENT_RSSI_LOW:
+ cfg80211_cqm_rssi_notify(priv->netdev,
+ NL80211_CQM_RSSI_THRESHOLD_EVENT_LOW,
+ GFP_KERNEL);
+ mwifiex_send_cmd_async(priv, HostCmd_CMD_RSSI_INFO,
+ HostCmd_ACT_GEN_GET, 0, NULL);
+ priv->subsc_evt_rssi_state = RSSI_LOW_RECVD;
dev_dbg(adapter->dev, "event: Beacon RSSI_LOW\n");
break;
case EVENT_SNR_LOW:
dev_dbg(adapter->dev, "event: MAX_FAIL\n");
break;
case EVENT_RSSI_HIGH:
+ cfg80211_cqm_rssi_notify(priv->netdev,
+ NL80211_CQM_RSSI_THRESHOLD_EVENT_HIGH,
+ GFP_KERNEL);
+ mwifiex_send_cmd_async(priv, HostCmd_CMD_RSSI_INFO,
+ HostCmd_ACT_GEN_GET, 0, NULL);
+ priv->subsc_evt_rssi_state = RSSI_HIGH_RECVD;
dev_dbg(adapter->dev, "event: Beacon RSSI_HIGH\n");
break;
case EVENT_SNR_HIGH:
* information.
*/
int mwifiex_fill_new_bss_desc(struct mwifiex_private *priv,
- u8 *bssid, s32 rssi, u8 *ie_buf,
- size_t ie_len, u16 beacon_period,
- u16 cap_info_bitmap, u8 band,
+ struct cfg80211_bss *bss,
struct mwifiex_bssdescriptor *bss_desc)
{
int ret;
+ u8 *beacon_ie;
+ struct mwifiex_bss_priv *bss_priv = (void *)bss->priv;
- memcpy(bss_desc->mac_address, bssid, ETH_ALEN);
- bss_desc->rssi = rssi;
- bss_desc->beacon_buf = ie_buf;
- bss_desc->beacon_buf_size = ie_len;
- bss_desc->beacon_period = beacon_period;
- bss_desc->cap_info_bitmap = cap_info_bitmap;
- bss_desc->bss_band = band;
+ beacon_ie = kmemdup(bss->information_elements, bss->len_beacon_ies,
+ GFP_KERNEL);
+ if (!beacon_ie) {
+ dev_err(priv->adapter->dev, " failed to alloc beacon_ie\n");
+ return -ENOMEM;
+ }
+
+ memcpy(bss_desc->mac_address, bss->bssid, ETH_ALEN);
+ bss_desc->rssi = bss->signal;
+ bss_desc->beacon_buf = beacon_ie;
+ bss_desc->beacon_buf_size = bss->len_beacon_ies;
+ bss_desc->beacon_period = bss->beacon_interval;
+ bss_desc->cap_info_bitmap = bss->capability;
+ bss_desc->bss_band = bss_priv->band;
+ bss_desc->fw_tsf = bss_priv->fw_tsf;
+ bss_desc->timestamp = bss->tsf;
if (bss_desc->cap_info_bitmap & WLAN_CAPABILITY_PRIVACY) {
dev_dbg(priv->adapter->dev, "info: InterpretIE: AP WEP enabled\n");
bss_desc->privacy = MWIFIEX_802_11_PRIV_FILTER_8021X_WEP;
else
bss_desc->bss_mode = NL80211_IFTYPE_STATION;
- ret = mwifiex_update_bss_desc_with_ie(priv->adapter, bss_desc,
- ie_buf, ie_len);
+ ret = mwifiex_update_bss_desc_with_ie(priv->adapter, bss_desc);
+ kfree(beacon_ie);
return ret;
}
int ret;
struct mwifiex_adapter *adapter = priv->adapter;
struct mwifiex_bssdescriptor *bss_desc = NULL;
- u8 *beacon_ie = NULL;
priv->scan_block = false;
return -ENOMEM;
}
- beacon_ie = kmemdup(bss->information_elements,
- bss->len_beacon_ies, GFP_KERNEL);
- if (!beacon_ie) {
- kfree(bss_desc);
- dev_err(priv->adapter->dev, " failed to alloc beacon_ie\n");
- return -ENOMEM;
- }
-
- ret = mwifiex_fill_new_bss_desc(priv, bss->bssid, bss->signal,
- beacon_ie, bss->len_beacon_ies,
- bss->beacon_interval,
- bss->capability,
- *(u8 *)bss->priv, bss_desc);
+ ret = mwifiex_fill_new_bss_desc(priv, bss, bss_desc);
if (ret)
goto done;
}
(!mwifiex_ssid_cmp(&priv->curr_bss_params.bss_descriptor.
ssid, &bss_desc->ssid))) {
kfree(bss_desc);
- kfree(beacon_ie);
return 0;
}
done:
kfree(bss_desc);
- kfree(beacon_ie);
return ret;
}
info->bss_chan = bss_desc->channel;
- info->region_code = adapter->region_code;
+ memcpy(info->country_code, priv->country_code,
+ IEEE80211_COUNTRY_STRING_LEN);
info->media_connected = priv->media_connected;
return 0;
}
+/*
+ * IOCTL request handler to set/reset WPS IE.
+ *
+ * The supplied WPS IE is treated as an opaque buffer. Only the first field
+ * is checked to internally enable WPS. If buffer length is zero, the existing
+ * WPS IE is reset.
+ */
+static int mwifiex_set_wps_ie(struct mwifiex_private *priv,
+ u8 *ie_data_ptr, u16 ie_len)
+{
+ if (ie_len) {
+ if (ie_len > MWIFIEX_MAX_VSIE_LEN) {
+ dev_dbg(priv->adapter->dev,
+ "info: failed to copy WPS IE, too big\n");
+ return -1;
+ }
+ priv->wps_ie = kzalloc(MWIFIEX_MAX_VSIE_LEN, GFP_KERNEL);
+ if (!priv->wps_ie)
+ return -ENOMEM;
+ memcpy(priv->wps_ie, ie_data_ptr, ie_len);
+ priv->wps_ie_len = ie_len;
+ dev_dbg(priv->adapter->dev, "cmd: Set wps_ie_len=%d IE=%#x\n",
+ priv->wps_ie_len, priv->wps_ie[0]);
+ } else {
+ kfree(priv->wps_ie);
+ priv->wps_ie = NULL;
+ priv->wps_ie_len = 0;
+ dev_dbg(priv->adapter->dev,
+ "info: Reset wps_ie_len=%d\n", priv->wps_ie_len);
+ }
+ return 0;
+}
+
/*
* IOCTL request handler to set WAPI key.
*
return 0;
}
-/*
- * Sends IOCTL request to get signal information.
- *
- * This function allocates the IOCTL request buffer, fills it
- * with requisite parameters and calls the IOCTL handler.
- */
-int mwifiex_get_signal_info(struct mwifiex_private *priv,
- struct mwifiex_ds_get_signal *signal)
-{
- int status;
-
- signal->selector = ALL_RSSI_INFO_MASK;
-
- /* Signal info can be obtained only if connected */
- if (!priv->media_connected) {
- dev_dbg(priv->adapter->dev,
- "info: Can not get signal in disconnected state\n");
- return -1;
- }
-
- status = mwifiex_send_cmd_sync(priv, HostCmd_CMD_RSSI_INFO,
- HostCmd_ACT_GEN_GET, 0, signal);
-
- if (!status) {
- if (signal->selector & BCN_RSSI_AVG_MASK)
- priv->qual_level = signal->bcn_rssi_avg;
- if (signal->selector & BCN_NF_AVG_MASK)
- priv->qual_noise = signal->bcn_nf_avg;
- }
-
- return status;
-}
-
/*
* Sends IOCTL request to set encoding parameters.
*
priv->wps.session_enable = true;
dev_dbg(priv->adapter->dev,
"info: WPS Session Enabled.\n");
+ ret = mwifiex_set_wps_ie(priv, ie_data_ptr, ie_len);
}
/* Append the passed data to the end of the
if (parp == NULL)
p = of_get_parent(child);
else {
+ of_node_put(child);
if (of_irq_workarounds & OF_IMAP_NO_PHANDLE)
p = of_node_get(of_irq_dflt_pic);
else
p = of_find_node_by_phandle(be32_to_cpup(parp));
+ return p;
}
of_node_put(child);
child = p;
- } while (p && of_get_property(p, "#interrupt-cells", NULL) == NULL);
+ } while (p);
return p;
}
INIT_LIST_HEAD(&intc_parent_list);
for_each_matching_node(np, matches) {
- if (!of_find_property(np, "interrupt-controller", NULL))
- continue;
/*
* Here, we allocate and populate an intc_desc with the node
* pointer, interrupt-parent device_node etc.
if (desc->interrupt_parent != parent)
continue;
- list_del(&desc->list);
match = of_match_node(matches, desc->dev);
if (WARN(!match->data,
"of_irq_init: no init function for %s\n",
desc->dev, desc->interrupt_parent);
irq_init_cb = match->data;
ret = irq_init_cb(desc->dev, desc->interrupt_parent);
+ if (ret == -EAGAIN)
+ /*
+ * The interrupt controller's initialization did not
+ * complete yet and should be retried later, so leave
+ * its intc_desc on the list.
+ */
+ continue;
+ list_del(&desc->list);
+
if (ret) {
kfree(desc);
continue;
desc = list_first_entry(&intc_parent_list, typeof(*desc), list);
if (list_empty(&intc_parent_list) || !desc) {
pr_err("of_irq_init: children remain, but no parents\n");
- break;
+ /*
+ * If a pass with a NULL parent found no new parents,
+ * the scan for matching interrupt controller nodes is
+ * complete. Otherwise, retry the entries still pending
+ * on the list with NULL as the parent.
+ */
+ if (!parent)
+ break;
+ parent = NULL;
+ continue;
}
list_del(&desc->list);
parent = desc->dev;
obj-$(CONFIG_ARM) += arm/
obj-$(CONFIG_X86) += x86/
obj-$(CONFIG_CHROMEOS) += chromeos.o
+obj-$(CONFIG_MFD_CHROMEOS_EC) += chromeos_ec-fw.o
--- /dev/null
+/*
+ * Copyright (C) 2012 Google, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ *
+ * Expose the ChromeOS EC firmware information.
+ */
+
+#include <linux/module.h>
+#include <linux/mfd/chromeos_ec.h>
+#include <linux/mfd/chromeos_ec_commands.h>
+#include <linux/platform_device.h>
+
+static ssize_t ec_fw_version_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct chromeos_ec_device *ec = dev_get_drvdata(dev);
+ struct ec_response_get_version info;
+ const char * const copy_name[] = {"?", "RO", "A", "B"};
+ int ret;
+
+ ret = ec->command_recv(ec, EC_CMD_GET_VERSION, &info,
+ sizeof(struct ec_response_get_version));
+ if (ret < 0)
+ return ret;
+
+ if (info.current_image > EC_IMAGE_RW_B)
+ info.current_image = EC_IMAGE_UNKNOWN;
+
+ return scnprintf(buf, PAGE_SIZE, "Current: %s\nRO: %s\nA: %s\nB: %s\n",
+ copy_name[info.current_image], info.version_string_ro,
+ info.version_string_rw_a, info.version_string_rw_b);
+}
+
+static ssize_t ec_build_info_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct chromeos_ec_device *ec = dev_get_drvdata(dev);
+ struct ec_response_get_build_info info;
+ int ret;
+
+ ret = ec->command_recv(ec, EC_CMD_GET_BUILD_INFO, &info,
+ sizeof(struct ec_response_get_build_info));
+ if (ret < 0)
+ return ret;
+
+ return scnprintf(buf, PAGE_SIZE, "%s\n", info.build_string);
+}
+
+static ssize_t ec_chip_info_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct chromeos_ec_device *ec = dev_get_drvdata(dev);
+ struct ec_response_get_chip_info info;
+ int ret;
+
+ ret = ec->command_recv(ec, EC_CMD_GET_CHIP_INFO, &info,
+ sizeof(struct ec_response_get_chip_info));
+ if (ret < 0)
+ return ret;
+
+ return scnprintf(buf, PAGE_SIZE, "%s %s %s\n",
+ info.vendor, info.name, info.revision);
+}
+
+static DEVICE_ATTR(fw_version, S_IRUGO, ec_fw_version_show, NULL);
+static DEVICE_ATTR(build_info, S_IRUGO, ec_build_info_show, NULL);
+static DEVICE_ATTR(chip_info, S_IRUGO, ec_chip_info_show, NULL);
+
+static struct attribute *ec_fw_attrs[] = {
+ &dev_attr_fw_version.attr,
+ &dev_attr_build_info.attr,
+ &dev_attr_chip_info.attr,
+ NULL
+};
+
+static const struct attribute_group ec_fw_attr_group = {
+ .attrs = ec_fw_attrs,
+};
+
+static int __devinit ec_fw_probe(struct platform_device *pdev)
+{
+ struct chromeos_ec_device *ec = dev_get_drvdata(pdev->dev.parent);
+ struct device *dev = ec->dev;
+ int err;
+
+ err = sysfs_create_group(&dev->kobj, &ec_fw_attr_group);
+ if (err)
+ dev_warn(dev, "error creating sysfs entries.\n");
+ return err;
+}
+
+static int __devexit ec_fw_remove(struct platform_device *pdev)
+{
+ struct chromeos_ec_device *ec = dev_get_drvdata(pdev->dev.parent);
+
+ sysfs_remove_group(&ec->dev->kobj, &ec_fw_attr_group);
+
+ return 0;
+}
+
+static struct platform_driver ec_fw_driver = {
+ .probe = ec_fw_probe,
+ .remove = __devexit_p(ec_fw_remove),
+ .driver = {
+ .name = "cros_ec-fw",
+ },
+};
+
+module_platform_driver(ec_fw_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("ChromeOS EC firmware");
+MODULE_ALIAS("platform:cros_ec-fw");
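+
+/*
+ * A minimal usage sketch (assumptions, not part of this patch): the parent
+ * ChromeOS EC MFD driver would instantiate this sub-device with an mfd_cell
+ * whose name matches the platform driver above, e.g.
+ *
+ *	static const struct mfd_cell cros_ec_fw_cell = {
+ *		.name = "cros_ec-fw",
+ *	};
+ *
+ *	ret = mfd_add_devices(ec->dev, -1, &cros_ec_fw_cell, 1, NULL, 0);
+ *
+ * after which the attributes appear in sysfs, e.g. .../cros_ec-fw/fw_version.
+ */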
via I2C bus. The provided regulator is suitable for S3C6410
and S5PC1XX chips to control VCC_CORE and VCC_USIM voltages.
+config REGULATOR_MAX77686
+ tristate "Maxim 77686 regulator"
+ depends on MFD_MAX77686
+ help
+ This driver controls a Maxim 77686 voltage regulator via I2C
+ bus. The provided regulator is suitable for Exynos5 chips to
+ control VDD_ARM and VDD_INT voltages. It supports LDOs 1-26
+ and BUCKs 1-9.
+
config REGULATOR_PCAP
tristate "Motorola PCAP2 regulator driver"
depends on EZX_PCAP
config REGULATOR_TPS65023
tristate "TI TPS65023 Power regulators"
depends on I2C
+ depends on OF
select REGMAP_I2C
help
This driver supports TPS65023 voltage regulator chips. TPS65023 provides
three step-down converters and two general-purpose LDO voltage regulators.
It supports TI's software based Class-2 SmartReflex implementation.
+config REGULATOR_TPS65090
+ tristate "TI TPS65090 Power regulator"
+ depends on MFD_TPS65090
+ help
+ This driver provides support for the voltage regulators on the
+ TI TPS65090 PMIC.
+
config REGULATOR_TPS65217
tristate "TI TPS65217 Power regulators"
depends on MFD_TPS65217
obj-$(CONFIG_REGULATOR_MAX8952) += max8952.o
obj-$(CONFIG_REGULATOR_MAX8997) += max8997.o
obj-$(CONFIG_REGULATOR_MAX8998) += max8998.o
+obj-$(CONFIG_REGULATOR_MAX77686) += max77686.o
obj-$(CONFIG_REGULATOR_MC13783) += mc13783-regulator.o
obj-$(CONFIG_REGULATOR_MC13892) += mc13892-regulator.o
obj-$(CONFIG_REGULATOR_MC13XXX_CORE) += mc13xxx-regulator-core.o
obj-$(CONFIG_REGULATOR_TPS62360) += tps62360-regulator.o
obj-$(CONFIG_REGULATOR_TPS65023) += tps65023-regulator.o
obj-$(CONFIG_REGULATOR_TPS6507X) += tps6507x-regulator.o
+obj-$(CONFIG_REGULATOR_TPS65090) += tps65090-regulator.o
obj-$(CONFIG_REGULATOR_TPS65217) += tps65217-regulator.o
obj-$(CONFIG_REGULATOR_TPS6524X) += tps6524x-regulator.o
obj-$(CONFIG_REGULATOR_TPS6586X) += tps6586x-regulator.o
--- /dev/null
+/*
+ * max77686.c - Regulator driver for the Maxim 77686
+ *
+ * Copyright (C) 2012 Samsung Electronics
+ * Chiwoong Byun <woong.byun@smasung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * This driver is based on max8997.c
+ */
+
+#include <linux/module.h>
+#include <linux/bug.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/gpio.h>
+#include <linux/slab.h>
+#include <linux/platform_device.h>
+#include <linux/regulator/driver.h>
+#include <linux/regulator/machine.h>
+#include <linux/regulator/of_regulator.h>
+#include <linux/mfd/max77686.h>
+#include <linux/mfd/max77686-private.h>
+
+struct max77686_data {
+ struct device *dev;
+ struct max77686_dev *iodev;
+ int num_regulators;
+ struct regulator_dev **rdev;
+ int ramp_delay; /* index of ramp_delay */
+
+ /*
+ * GPIO-DVS feature is not enabled with the
+ * current version of MAX77686 driver.
+ */
+};
+
+struct voltage_map_desc {
+ int min;
+ int max;
+ int step;
+ unsigned int n_bits;
+};
+
+/* Voltage maps in mV (the BUCK2..4 table is in uV, as noted below) */
+static const struct voltage_map_desc ldo_voltage_map_desc = {
+ .min = 800, .max = 3950, .step = 50, .n_bits = 6,
+}; /* LDO3 ~ 5, 9 ~ 14, 16 ~ 26 */
+
+static const struct voltage_map_desc ldo_low_voltage_map_desc = {
+ .min = 800, .max = 2375, .step = 25, .n_bits = 6,
+}; /* LDO1 ~ 2, 6 ~ 8, 15 */
+
+static const struct voltage_map_desc buck_dvs_voltage_map_desc = {
+ .min = 600000, .max = 3787500, .step = 12500, .n_bits = 8,
+}; /* Buck2, 3, 4 (uV) */
+
+static const struct voltage_map_desc buck_voltage_map_desc = {
+ .min = 750, .max = 3900, .step = 50, .n_bits = 6,
+}; /* Buck1, 5 ~ 9 */
+
+static const struct voltage_map_desc *reg_voltage_map[] = {
+ [MAX77686_LDO1] = &ldo_low_voltage_map_desc,
+ [MAX77686_LDO2] = &ldo_low_voltage_map_desc,
+ [MAX77686_LDO3] = &ldo_voltage_map_desc,
+ [MAX77686_LDO4] = &ldo_voltage_map_desc,
+ [MAX77686_LDO5] = &ldo_voltage_map_desc,
+ [MAX77686_LDO6] = &ldo_low_voltage_map_desc,
+ [MAX77686_LDO7] = &ldo_low_voltage_map_desc,
+ [MAX77686_LDO8] = &ldo_low_voltage_map_desc,
+ [MAX77686_LDO9] = &ldo_voltage_map_desc,
+ [MAX77686_LDO10] = &ldo_voltage_map_desc,
+ [MAX77686_LDO11] = &ldo_voltage_map_desc,
+ [MAX77686_LDO12] = &ldo_voltage_map_desc,
+ [MAX77686_LDO13] = &ldo_voltage_map_desc,
+ [MAX77686_LDO14] = &ldo_voltage_map_desc,
+ [MAX77686_LDO15] = &ldo_low_voltage_map_desc,
+ [MAX77686_LDO16] = &ldo_voltage_map_desc,
+ [MAX77686_LDO17] = &ldo_voltage_map_desc,
+ [MAX77686_LDO18] = &ldo_voltage_map_desc,
+ [MAX77686_LDO19] = &ldo_voltage_map_desc,
+ [MAX77686_LDO20] = &ldo_voltage_map_desc,
+ [MAX77686_LDO21] = &ldo_voltage_map_desc,
+ [MAX77686_LDO22] = &ldo_voltage_map_desc,
+ [MAX77686_LDO23] = &ldo_voltage_map_desc,
+ [MAX77686_LDO24] = &ldo_voltage_map_desc,
+ [MAX77686_LDO25] = &ldo_voltage_map_desc,
+ [MAX77686_LDO26] = &ldo_voltage_map_desc,
+ [MAX77686_BUCK1] = &buck_voltage_map_desc,
+ [MAX77686_BUCK2] = &buck_dvs_voltage_map_desc,
+ [MAX77686_BUCK3] = &buck_dvs_voltage_map_desc,
+ [MAX77686_BUCK4] = &buck_dvs_voltage_map_desc,
+ [MAX77686_BUCK5] = &buck_voltage_map_desc,
+ [MAX77686_BUCK6] = &buck_voltage_map_desc,
+ [MAX77686_BUCK7] = &buck_voltage_map_desc,
+ [MAX77686_BUCK8] = &buck_voltage_map_desc,
+ [MAX77686_BUCK9] = &buck_voltage_map_desc,
+ [MAX77686_EN32KHZ_AP] = NULL,
+ [MAX77686_EN32KHZ_CP] = NULL,
+ [MAX77686_P32KH] = NULL,
+};
+
+static int max77686_get_voltage_unit(int rid)
+{
+ int unit = 0;
+
+ switch (rid) {
+ case MAX77686_BUCK2 ... MAX77686_BUCK4:
+ unit = 1; /* BUCK2, 3 and 4 are in uV */
+ break;
+ default:
+ unit = 1000;
+ break;
+ }
+
+ return unit;
+}
+
+static int max77686_list_voltage(struct regulator_dev *rdev,
+ unsigned int selector)
+{
+ const struct voltage_map_desc *desc;
+ int rid = rdev_get_id(rdev);
+ int val;
+
+ if (rid >= ARRAY_SIZE(reg_voltage_map) || rid < 0)
+ return -EINVAL;
+
+ desc = reg_voltage_map[rid];
+ if (desc == NULL)
+ return -EINVAL;
+
+ val = desc->min + desc->step * selector;
+ if (val > desc->max)
+ return -EINVAL;
+
+ return val * max77686_get_voltage_unit(rid);
+}
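For a quick sanity check of the selector arithmetic above, here is an illustrative sketch (not part of the patch) that mirrors max77686_list_voltage() for a plain LDO, using the ldo_voltage_map_desc values:

/* Illustrative only: selector -> microvolts for ldo_voltage_map_desc. */
static int example_ldo_selector_to_uV(unsigned int selector)
{
 int mV = 800 + 50 * selector; /* table minimum 800 mV, step 50 mV */

 if (mV > 3950) /* table maximum */
 return -EINVAL;
 return mV * 1000; /* unit is 1000 for the mV-based tables */
}

Selector 10, for example, yields 800 + 50 * 10 = 1300 mV, i.e. 1300000 uV; the BUCK2..4 table is already in uV, so its unit is 1.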
+
+static int max77686_get_enable_register(struct regulator_dev *rdev,
+ int *reg, int *mask, int *pattern)
+{
+ int rid = rdev_get_id(rdev);
+
+ switch (rid) {
+ case MAX77686_LDO1 ... MAX77686_LDO26:
+ *reg = MAX77686_REG_LDO1CTRL1 + (rid - MAX77686_LDO1);
+ *mask = 0xC0;
+ *pattern = 0xC0;
+ break;
+ case MAX77686_BUCK1:
+ *reg = MAX77686_REG_BUCK1CTRL;
+ *mask = 0x03;
+ *pattern = 0x03;
+ break;
+ case MAX77686_BUCK2:
+ *reg = MAX77686_REG_BUCK2CTRL1;
+ *mask = 0x30;
+ *pattern = 0x10;
+ break;
+ case MAX77686_BUCK3:
+ *reg = MAX77686_REG_BUCK3CTRL1;
+ *mask = 0x30;
+ *pattern = 0x10;
+ break;
+ case MAX77686_BUCK4:
+ *reg = MAX77686_REG_BUCK4CTRL1;
+ *mask = 0x30;
+ *pattern = 0x10;
+ break;
+ case MAX77686_BUCK5 ... MAX77686_BUCK9:
+ *reg = MAX77686_REG_BUCK5CTRL + (rid - MAX77686_BUCK5) * 2;
+ *mask = 0x03;
+ *pattern = 0x03;
+ break;
+ case MAX77686_EN32KHZ_AP ... MAX77686_P32KH:
+ *reg = MAX77686_REG_32KHZ;
+ *mask = 0x01 << (rid - MAX77686_EN32KHZ_AP);
+ *pattern = 0x01 << (rid - MAX77686_EN32KHZ_AP);
+ break;
+ default:
+ /* Not controllable or does not exist */
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int max77686_reg_is_enabled(struct regulator_dev *rdev)
+{
+ struct max77686_data *max77686 = rdev_get_drvdata(rdev);
+ struct i2c_client *i2c = max77686->iodev->i2c;
+ int ret, reg, mask, pattern;
+ u8 val;
+
+ ret = max77686_get_enable_register(rdev, &reg, &mask, &pattern);
+ if (ret)
+ return ret;
+
+ ret = max77686_read_reg(i2c, reg, &val);
+ if (ret)
+ return ret;
+
+ return (val & mask) == pattern;
+}
+
+static int max77686_reg_enable(struct regulator_dev *rdev)
+{
+ struct max77686_data *max77686 = rdev_get_drvdata(rdev);
+ struct i2c_client *i2c = max77686->iodev->i2c;
+ int ret, reg, mask, pattern;
+
+ ret = max77686_get_enable_register(rdev, &reg, &mask, &pattern);
+ if (ret)
+ return ret;
+
+ return max77686_update_reg(i2c, reg, pattern, mask);
+}
+
+static int max77686_reg_disable(struct regulator_dev *rdev)
+{
+ struct max77686_data *max77686 = rdev_get_drvdata(rdev);
+ struct i2c_client *i2c = max77686->iodev->i2c;
+ int ret, reg, mask, pattern;
+
+ ret = max77686_get_enable_register(rdev, &reg, &mask, &pattern);
+ if (ret)
+ return ret;
+
+ return max77686_update_reg(i2c, reg, ~mask, mask);
+}
+
+static int max77686_get_voltage_register(struct regulator_dev *rdev,
+ int *_reg, int *_shift, int *_mask)
+{
+ int rid = rdev_get_id(rdev);
+ int reg, shift = 0, mask = 0x3f;
+
+ switch (rid) {
+ case MAX77686_LDO1 ... MAX77686_LDO26:
+ reg = MAX77686_REG_LDO1CTRL1 + (rid - MAX77686_LDO1);
+ break;
+ case MAX77686_BUCK1:
+ reg = MAX77686_REG_BUCK1OUT;
+ break;
+ case MAX77686_BUCK2:
+ reg = MAX77686_REG_BUCK2DVS1;
+ mask = 0xff;
+ break;
+ case MAX77686_BUCK3:
+ reg = MAX77686_REG_BUCK3DVS1;
+ mask = 0xff;
+ break;
+ case MAX77686_BUCK4:
+ reg = MAX77686_REG_BUCK4DVS1;
+ mask = 0xff;
+ break;
+ case MAX77686_BUCK5 ... MAX77686_BUCK9:
+ reg = MAX77686_REG_BUCK5OUT + (rid - MAX77686_BUCK5) * 2;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ *_reg = reg;
+ *_shift = shift;
+ *_mask = mask;
+
+ return 0;
+}
+
+static int max77686_get_voltage(struct regulator_dev *rdev)
+{
+ struct max77686_data *max77686 = rdev_get_drvdata(rdev);
+ struct i2c_client *i2c = max77686->iodev->i2c;
+ int reg, shift, mask, ret;
+ u8 val;
+
+ ret = max77686_get_voltage_register(rdev, &reg, &shift, &mask);
+ if (ret)
+ return ret;
+
+ ret = max77686_read_reg(i2c, reg, &val);
+ if (ret)
+ return ret;
+
+ val >>= shift;
+ val &= mask;
+
+ return max77686_list_voltage(rdev, val);
+}
+
+static inline int max77686_get_voltage_proper_val(const struct voltage_map_desc
+ *desc, int min_vol,
+ int max_vol)
+{
+ int i = 0;
+
+ if (desc == NULL)
+ return -EINVAL;
+
+ if (max_vol < desc->min || min_vol > desc->max)
+ return -EINVAL;
+
+ while (desc->min + desc->step * i < min_vol &&
+ desc->min + desc->step * i < desc->max)
+ i++;
+
+ if (desc->min + desc->step * i > max_vol)
+ return -EINVAL;
+
+ if (i >= (1 << desc->n_bits))
+ return -EINVAL;
+
+ return i;
+}
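To see this selection logic with concrete numbers: with ldo_low_voltage_map_desc (min 800, step 25, in mV), a request for 1100..1200 mV increments i until 800 + 25 * i first reaches 1100, returning i = 12; -EINVAL results only when that first reachable value overshoots max_vol or when i no longer fits the 6-bit selector.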
+
+static int max77686_set_voltage(struct regulator_dev *rdev,
+ int min_uV, int max_uV, unsigned *selector)
+{
+ struct max77686_data *max77686 = rdev_get_drvdata(rdev);
+ struct i2c_client *i2c = max77686->iodev->i2c;
+ int min_vol = min_uV, max_vol = max_uV, unit = 0;
+ const struct voltage_map_desc *desc;
+ int rid = rdev_get_id(rdev);
+ int reg, shift = 0, mask, ret;
+ int i;
+ int ramp[] = {13, 27, 57, 100}; /* ramp_rate in mV/us */
+ u8 org;
+
+ unit = max77686_get_voltage_unit(rid);
+ min_vol /= unit;
+ max_vol /= unit;
+
+ desc = reg_voltage_map[rid];
+
+ i = max77686_get_voltage_proper_val(desc, min_vol, max_vol);
+ if (i < 0)
+ return i;
+
+ ret = max77686_get_voltage_register(rdev, &reg, &shift, &mask);
+ if (ret)
+ return ret;
+
+ max77686_read_reg(i2c, reg, &org);
+ org = (org & mask) >> shift;
+
+ ret = max77686_update_reg(i2c, reg, i << shift, mask << shift);
+ *selector = i;
+
+ if (rid == MAX77686_BUCK2 || rid == MAX77686_BUCK3 ||
+ rid == MAX77686_BUCK4) {
+ /* If the voltage is increasing */
+ if (org < i)
+ /* step is in uV here, ramp[] in mV/us */
+ udelay(DIV_ROUND_UP(desc->step * (i - org),
+ ramp[max77686->ramp_delay] * 1000));
+ }
+
+ return ret;
+}
+
+static struct regulator_ops max77686_ops = {
+ .list_voltage = max77686_list_voltage,
+ .is_enabled = max77686_reg_is_enabled,
+ .enable = max77686_reg_enable,
+ .disable = max77686_reg_disable,
+ .get_voltage = max77686_get_voltage,
+ .set_voltage = max77686_set_voltage,
+ .set_suspend_enable = max77686_reg_enable,
+ .set_suspend_disable = max77686_reg_disable,
+};
+
+static struct regulator_ops max77686_fixedvolt_ops = {
+ .list_voltage = max77686_list_voltage,
+ .is_enabled = max77686_reg_is_enabled,
+ .enable = max77686_reg_enable,
+ .disable = max77686_reg_disable,
+ .set_suspend_enable = max77686_reg_enable,
+ .set_suspend_disable = max77686_reg_disable,
+};
+
+#define regulator_desc_ldo(num) { \
+ .name = "LDO"#num, \
+ .id = MAX77686_LDO##num, \
+ .ops = &max77686_ops, \
+ .type = REGULATOR_VOLTAGE, \
+ .owner = THIS_MODULE, \
+}
+#define regulator_desc_buck(num) { \
+ .name = "BUCK"#num, \
+ .id = MAX77686_BUCK##num, \
+ .ops = &max77686_ops, \
+ .type = REGULATOR_VOLTAGE, \
+ .owner = THIS_MODULE, \
+}
+
+static struct regulator_desc regulators[] = {
+ regulator_desc_ldo(1),
+ regulator_desc_ldo(2),
+ regulator_desc_ldo(3),
+ regulator_desc_ldo(4),
+ regulator_desc_ldo(5),
+ regulator_desc_ldo(6),
+ regulator_desc_ldo(7),
+ regulator_desc_ldo(8),
+ regulator_desc_ldo(9),
+ regulator_desc_ldo(10),
+ regulator_desc_ldo(11),
+ regulator_desc_ldo(12),
+ regulator_desc_ldo(13),
+ regulator_desc_ldo(14),
+ regulator_desc_ldo(15),
+ regulator_desc_ldo(16),
+ regulator_desc_ldo(17),
+ regulator_desc_ldo(18),
+ regulator_desc_ldo(19),
+ regulator_desc_ldo(20),
+ regulator_desc_ldo(21),
+ regulator_desc_ldo(22),
+ regulator_desc_ldo(23),
+ regulator_desc_ldo(24),
+ regulator_desc_ldo(25),
+ regulator_desc_ldo(26),
+ regulator_desc_buck(1),
+ regulator_desc_buck(2),
+ regulator_desc_buck(3),
+ regulator_desc_buck(4),
+ regulator_desc_buck(5),
+ regulator_desc_buck(6),
+ regulator_desc_buck(7),
+ regulator_desc_buck(8),
+ regulator_desc_buck(9),
+ {
+ .name = "EN32KHz_AP",
+ .id = MAX77686_EN32KHZ_AP,
+ .ops = &max77686_fixedvolt_ops,
+ .type = REGULATOR_VOLTAGE,
+ .owner = THIS_MODULE,
+ }, {
+ .name = "EN32KHz_CP",
+ .id = MAX77686_EN32KHZ_CP,
+ .ops = &max77686_fixedvolt_ops,
+ .type = REGULATOR_VOLTAGE,
+ .owner = THIS_MODULE,
+ }, {
+ .name = "ENP32KHz",
+ .id = MAX77686_P32KH,
+ .ops = &max77686_fixedvolt_ops,
+ .type = REGULATOR_VOLTAGE,
+ .owner = THIS_MODULE,
+ },
+};
+
+#ifdef CONFIG_OF
+static int max77686_pmic_dt_parse_pdata(struct max77686_dev *iodev,
+ struct max77686_platform_data *pdata)
+{
+ struct device_node *pmic_np, *regulators_np, *reg_np;
+ struct max77686_regulator_data *rdata;
+ unsigned int i;
+
+ pmic_np = iodev->dev->of_node;
+ if (!pmic_np) {
+ dev_err(iodev->dev, "could not find pmic sub-node\n");
+ return -ENODEV;
+ }
+
+ regulators_np = of_find_node_by_name(pmic_np, "voltage-regulators");
+ if (!regulators_np) {
+ dev_err(iodev->dev, "could not find regulators sub-node\n");
+ return -EINVAL;
+ }
+
+ /* Count the number of regulators to be supported by the PMIC */
+ pdata->num_regulators = 0;
+ for_each_child_of_node(regulators_np, reg_np)
+ pdata->num_regulators++;
+
+ rdata = devm_kzalloc(iodev->dev, sizeof(*rdata) *
+ pdata->num_regulators, GFP_KERNEL);
+ if (!rdata) {
+ dev_err(iodev->dev,
+ "could not allocate memory for regulator data\n");
+ return -ENOMEM;
+ }
+
+ pdata->regulators = rdata;
+ for_each_child_of_node(regulators_np, reg_np) {
+ for (i = 0; i < ARRAY_SIZE(regulators); i++)
+ if (!of_node_cmp(reg_np->name, regulators[i].name))
+ break;
+
+ if (i == ARRAY_SIZE(regulators)) {
+ dev_warn(iodev->dev,
+ "No configuration data for regulator %s\n",
+ reg_np->name);
+ continue;
+ }
+
+ rdata->id = i;
+ rdata->initdata = of_get_regulator_init_data(
+ iodev->dev, reg_np);
+ rdata->reg_node = reg_np;
+ rdata++;
+ }
+
+ if (!of_property_read_u32(pmic_np,
+ "max77686,buck_ramp_delay", &i))
+ pdata->ramp_delay = i & 0xff;
+
+ return 0;
+}
+#else
+static int max77686_pmic_dt_parse_pdata(struct max77686_dev *iodev,
+ struct max77686_platform_data *pdata)
+{
+ return 0;
+}
+#endif /* CONFIG_OF */
+
+#define RAMP_VALUE (max77686->ramp_delay << 6)
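For reference, RAMP_VALUE places the two-bit ramp-rate index (0..3, matching the four-entry ramp[] table above) into bits [7:6] of the BUCKxCTRL1 registers; RAMP_MASK is presumably 0xC0 from the max77686 MFD headers, which are outside this hunk.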
+
+static __devinit int max77686_pmic_probe(struct platform_device *pdev)
+{
+ struct max77686_dev *iodev = dev_get_drvdata(pdev->dev.parent);
+ struct max77686_platform_data *pdata = iodev->pdata;
+ struct regulator_dev **rdev;
+ struct max77686_data *max77686;
+ struct i2c_client *i2c = iodev->i2c;
+ int i, ret, size;
+
+ if (iodev->dev->of_node) {
+ ret = max77686_pmic_dt_parse_pdata(iodev, pdata);
+ if (ret)
+ return ret;
+ }
+
+ if (!pdata) {
+ dev_err(&pdev->dev, "platform data not found\n");
+ return -ENODEV;
+ }
+
+ max77686 = kzalloc(sizeof(struct max77686_data), GFP_KERNEL);
+ if (!max77686)
+ return -ENOMEM;
+
+ size = sizeof(struct regulator_dev *) * pdata->num_regulators;
+ max77686->rdev = kzalloc(size, GFP_KERNEL);
+ if (!max77686->rdev) {
+ kfree(max77686);
+ return -ENOMEM;
+ }
+
+ rdev = max77686->rdev;
+
+ max77686->dev = &pdev->dev;
+ max77686->iodev = iodev;
+ max77686->num_regulators = pdata->num_regulators;
+
+ if (pdata->ramp_delay) {
+ max77686->ramp_delay = pdata->ramp_delay;
+ max77686_update_reg(i2c, MAX77686_REG_BUCK2CTRL1,
+ RAMP_VALUE, RAMP_MASK);
+ max77686_update_reg(i2c, MAX77686_REG_BUCK3CTRL1,
+ RAMP_VALUE, RAMP_MASK);
+ max77686_update_reg(i2c, MAX77686_REG_BUCK4CTRL1,
+ RAMP_VALUE, RAMP_MASK);
+ } else {
+ /* Default/Reset value is 27.5 mV/uS */
+ max77686->ramp_delay = MAX77686_RAMP_RATE_27MV;
+ }
+
+ platform_set_drvdata(pdev, max77686);
+
+ for (i = 0; i < pdata->num_regulators; i++) {
+ const struct voltage_map_desc *desc;
+ int id = pdata->regulators[i].id;
+
+ desc = reg_voltage_map[id];
+ if (desc)
+ regulators[id].n_voltages =
+ (desc->max - desc->min) / desc->step + 1;
+
+ rdev[i] = regulator_register(&regulators[id], max77686->dev,
+ pdata->regulators[i].initdata,
+ max77686, NULL);
+ if (IS_ERR(rdev[i])) {
+ ret = PTR_ERR(rdev[i]);
+ dev_err(max77686->dev,
+ "regulator init failed for id : %d\n", id);
+ rdev[i] = NULL;
+ goto err;
+ }
+ }
+
+ return 0;
+ err:
+ for (i = 0; i < max77686->num_regulators; i++)
+ if (rdev[i])
+ regulator_unregister(rdev[i]);
+
+ kfree(max77686->rdev);
+ kfree(max77686);
+
+ return ret;
+}
+
+static int __devexit max77686_pmic_remove(struct platform_device *pdev)
+{
+ struct max77686_data *max77686 = platform_get_drvdata(pdev);
+ struct regulator_dev **rdev = max77686->rdev;
+ int i;
+
+ for (i = 0; i < max77686->num_regulators; i++)
+ if (rdev[i])
+ regulator_unregister(rdev[i]);
+
+ kfree(max77686->rdev);
+ kfree(max77686);
+
+ return 0;
+}
+
+static const struct platform_device_id max77686_pmic_id[] = {
+ {"max77686-pmic", 0},
+ {},
+};
+
+MODULE_DEVICE_TABLE(platform, max77686_pmic_id);
+
+static struct platform_driver max77686_pmic_driver = {
+ .driver = {
+ .name = "max77686-pmic",
+ .owner = THIS_MODULE,
+ },
+ .probe = max77686_pmic_probe,
+ .remove = __devexit_p(max77686_pmic_remove),
+ .id_table = max77686_pmic_id,
+};
+
+static int __init max77686_pmic_init(void)
+{
+ return platform_driver_register(&max77686_pmic_driver);
+}
+
+subsys_initcall(max77686_pmic_init);
+
+static void __exit max77686_pmic_cleanup(void)
+{
+ platform_driver_unregister(&max77686_pmic_driver);
+}
+
+module_exit(max77686_pmic_cleanup);
+
+MODULE_DESCRIPTION("MAXIM 77686 Regulator Driver");
+MODULE_AUTHOR("Chiwoong Byun <woong.byun@samsung.com>");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * Regulator driver for tps65090 power management chip.
+ *
+ * Copyright (c) 2012, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>
+ */
+
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/err.h>
+#include <linux/platform_device.h>
+#include <linux/regulator/driver.h>
+#include <linux/regulator/of_regulator.h>
+#include <linux/regulator/machine.h>
+#include <linux/mfd/tps65090.h>
+#include <linux/of.h>
+
+/* TPS65090 has 3 DCDC-regulators and 7 FETs. */
+
+#define MAX_REGULATORS 10
+
+struct tps65090_regulator {
+ /* Regulator register address.*/
+ u32 reg_en_reg;
+ u32 en_bit;
+
+ /* used by regulator core */
+ struct regulator_desc desc;
+
+ struct regulator_dev *rdev;
+};
+
+struct tps65090_regulator_drvdata {
+ struct tps65090_regulator *regulators[MAX_REGULATORS];
+};
+
+static inline struct device *to_tps65090_dev(struct regulator_dev *rdev)
+{
+ return rdev_get_dev(rdev)->parent->parent;
+}
+
+static int tps65090_reg_is_enabled(struct regulator_dev *rdev)
+{
+ struct tps65090_regulator *ri = rdev_get_drvdata(rdev);
+ struct device *parent = to_tps65090_dev(rdev);
+ uint8_t control;
+ int ret;
+
+ ret = tps65090_read(parent, ri->reg_en_reg, &control);
+ if (ret < 0) {
+ dev_err(&rdev->dev, "Error in reading reg 0x%x\n",
+ ri->reg_en_reg);
+ return ret;
+ }
+ return (((control >> ri->en_bit) & 1) == 1);
+}
+
+static int tps65090_reg_enable(struct regulator_dev *rdev)
+{
+ struct tps65090_regulator *ri = rdev_get_drvdata(rdev);
+ struct device *parent = to_tps65090_dev(rdev);
+ int ret;
+
+ ret = tps65090_set_bits(parent, ri->reg_en_reg, ri->en_bit);
+ if (ret < 0)
+ dev_err(&rdev->dev, "Error in updating reg 0x%x\n",
+ ri->reg_en_reg);
+ return ret;
+}
+
+static int tps65090_reg_disable(struct regulator_dev *rdev)
+{
+ struct tps65090_regulator *ri = rdev_get_drvdata(rdev);
+ struct device *parent = to_tps65090_dev(rdev);
+ int ret;
+
+ ret = tps65090_clr_bits(parent, ri->reg_en_reg, ri->en_bit);
+ if (ret < 0)
+ dev_err(&rdev->dev, "Error in updating reg 0x%x\n",
+ ri->reg_en_reg);
+
+ return ret;
+}
+
+static int tps65090_set_voltage(struct regulator_dev *rdev, int min,
+ int max, unsigned *sel)
+{
+ /*
+ * Only needed for the core code to set constraints; the voltage
+ * isn't actually adjustable on tps65090.
+ */
+ return 0;
+}
+
+static struct regulator_ops tps65090_ops = {
+ .enable = tps65090_reg_enable,
+ .disable = tps65090_reg_disable,
+ .set_voltage = tps65090_set_voltage,
+ .is_enabled = tps65090_reg_is_enabled,
+};
+
+static void tps65090_unregister_regulators(struct tps65090_regulator *regs[])
+{
+ int i;
+
+ for (i = 0; i < MAX_REGULATORS; i++)
+ if (regs[i] && regs[i]->rdev) {
+ /*
+ * desc.name was kstrdup()'d at probe time; the
+ * regulator core frees the rdev itself on
+ * unregister, so only the name is freed here.
+ */
+ regulator_unregister(regs[i]->rdev);
+ kfree(regs[i]->desc.name);
+ }
+}
+
+
+static int __devinit tps65090_regulator_probe(struct platform_device *pdev)
+{
+ struct tps65090_regulator_drvdata *drvdata;
+ struct tps65090_regulator *reg;
+ struct device_node *mfdnp, *regnp, *np;
+ struct regulator_init_data *ri;
+ u32 id;
+
+ mfdnp = pdev->dev.parent->of_node;
+
+ if (!mfdnp) {
+ dev_err(&pdev->dev, "no device tree data available\n");
+ return -EINVAL;
+ }
+
+ regnp = of_find_node_by_name(mfdnp, "voltage-regulators");
+ if (!regnp) {
+ dev_err(&pdev->dev, "no OF regulator data found at %s\n",
+ mfdnp->full_name);
+ return -EINVAL;
+ }
+
+ drvdata = devm_kzalloc(&pdev->dev, sizeof(*drvdata), GFP_KERNEL);
+ if (!drvdata) {
+ of_node_put(regnp);
+ return -ENOMEM;
+ }
+
+ id = 0;
+ for_each_child_of_node(regnp, np) {
+ ri = of_get_regulator_init_data(&pdev->dev, np);
+ if (!ri) {
+ dev_err(&pdev->dev, "regulator_init_data failed for %s\n",
+ np->full_name);
+ goto out;
+ }
+
+ reg = devm_kzalloc(&pdev->dev,
+ sizeof(struct tps65090_regulator),
+ GFP_KERNEL);
+ if (!reg)
+ goto out;
+
+ reg->desc.name = kstrdup(of_get_property(np, "regulator-name",
+ NULL), GFP_KERNEL);
+ if (!reg->desc.name) {
+ dev_err(&pdev->dev,
+ "no regulator-name specified at %s\n", np->full_name);
+ goto out;
+ }
+
+ if (of_property_read_u32(np, "tps65090-control-reg-offset",
+ &reg->reg_en_reg)) {
+ dev_err(&pdev->dev,
+ "no control-reg-offset property at %s\n",
+ np->full_name);
+ kfree(reg->desc.name);
+ goto out;
+ }
+
+ reg->desc.id = id;
+ reg->desc.ops = &tps65090_ops;
+ reg->desc.type = REGULATOR_VOLTAGE;
+ reg->desc.owner = THIS_MODULE;
+ reg->rdev = regulator_register(&reg->desc, &pdev->dev,
+ ri, reg, np);
+ if (IS_ERR(reg->rdev)) {
+ kfree(reg->desc.name);
+ goto out;
+ }
+ drvdata->regulators[id] = reg;
+ id++;
+ }
+
+ platform_set_drvdata(pdev, drvdata);
+ of_node_put(regnp);
+
+ return 0;
+
+out:
+ dev_err(&pdev->dev, "bad OF regulator data in %s\n", regnp->full_name);
+ tps65090_unregister_regulators(drvdata->regulators);
+ of_node_put(regnp);
+ return -EINVAL;
+}
+
+static int __devexit tps65090_regulator_remove(struct platform_device *pdev)
+{
+ struct tps65090_regulator_drvdata *drvdata = platform_get_drvdata(pdev);
+
+ tps65090_unregister_regulators(drvdata->regulators);
+
+ return 0;
+}
+
+static struct platform_driver tps65090_regulator_driver = {
+ .driver = {
+ .name = "tps65090-regulator",
+ .owner = THIS_MODULE,
+ },
+ .probe = tps65090_regulator_probe,
+ .remove = __devexit_p(tps65090_regulator_remove),
+};
+
+static int __init tps65090_regulator_init(void)
+{
+ return platform_driver_register(&tps65090_regulator_driver);
+}
+subsys_initcall(tps65090_regulator_init);
+
+static void __exit tps65090_regulator_exit(void)
+{
+ platform_driver_unregister(&tps65090_regulator_driver);
+}
+module_exit(tps65090_regulator_exit);
+
+MODULE_DESCRIPTION("tps65090 regulator driver");
+MODULE_AUTHOR("Venu Byravarasu <vbyravarasu@nvidia.com>");
+MODULE_LICENSE("GPL v2");
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/spi/spi.h>
+#include <linux/of.h>
+#include <linux/of_gpio.h>
#include <mach/dma.h>
#include <plat/s3c64xx-spi.h>
+#define MAX_SPI_PORTS 3
+
/* Registers and bit-fields */
#define S3C64XX_SPI_CH_CFG 0x00
#define S3C64XX_SPI_FBCLK_MSK (3<<0)
-#define S3C64XX_SPI_ST_TRLCNTZ(v, i) ((((v) >> (i)->rx_lvl_offset) & \
- (((i)->fifo_lvl_mask + 1))) \
- ? 1 : 0)
-
-#define S3C64XX_SPI_ST_TX_DONE(v, i) (((v) & (1 << (i)->tx_st_done)) ? 1 : 0)
-#define TX_FIFO_LVL(v, i) (((v) >> 6) & (i)->fifo_lvl_mask)
-#define RX_FIFO_LVL(v, i) (((v) >> (i)->rx_lvl_offset) & (i)->fifo_lvl_mask)
+#define FIFO_LVL_MASK(i) ((i)->port_conf->fifo_lvl_mask[i->port_id])
+#define S3C64XX_SPI_ST_TX_DONE(v, i) (((v) & \
+ (1 << (i)->port_conf->tx_st_done)) ? 1 : 0)
+#define TX_FIFO_LVL(v, i) (((v) >> 6) & FIFO_LVL_MASK(i))
+#define RX_FIFO_LVL(v, i) (((v) >> (i)->port_conf->rx_lvl_offset) & \
+ FIFO_LVL_MASK(i))
#define S3C64XX_SPI_MAX_TRAILCNT 0x3ff
#define S3C64XX_SPI_TRAILCNT_OFF 19
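Concretely, with the exynos4 port configuration added later in this patch (fifo_lvl_mask 0x1ff for port 0, rx_lvl_offset 15), RX_FIFO_LVL(v, i) reduces to (v >> 15) & 0x1ff, while on a port whose mask is 0x7f the same macro reads a 7-bit level instead.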
unsigned ch;
enum dma_data_direction direction;
enum dma_ch dmach;
+ struct property *dma_prop;
+};
+
+/**
+ * struct s3c64xx_spi_info - SPI Controller hardware info
+ * @fifo_lvl_mask: Bit-mask for {TX|RX}_FIFO_LVL bits in SPI_STATUS register.
+ * @rx_lvl_offset: Bit offset of RX_FIFO_LVL bits in SPI_STATUS register.
+ * @tx_st_done: Bit offset of TX_DONE bit in SPI_STATUS register.
+ * @high_speed: True, if the controller supports HIGH_SPEED_EN bit.
+ * @clk_from_cmu: True, if the controller does not include a clock mux and
+ * prescaler unit.
+ *
+ * The Samsung s3c64xx SPI controller is used on various Samsung SoCs but
+ * differs in some aspects such as the size of the FIFO and the SPI bus
+ * clock setup. Such differences are described by this structure, which is
+ * provided to the driver as driver data.
+ */
+struct s3c64xx_spi_port_config {
+ int fifo_lvl_mask[MAX_SPI_PORTS];
+ int rx_lvl_offset;
+ int tx_st_done;
+ bool high_speed;
+ bool clk_from_cmu;
};
/**
struct s3c64xx_spi_dma_data rx_dma;
struct s3c64xx_spi_dma_data tx_dma;
struct samsung_dma_ops *ops;
+ struct s3c64xx_spi_port_config *port_conf;
+ unsigned int port_id;
+ unsigned long gpios[4];
};
static struct s3c2410_dma_client s3c64xx_spi_dma_client = {
static void flush_fifo(struct s3c64xx_spi_driver_data *sdd)
{
- struct s3c64xx_spi_info *sci = sdd->cntrlr_info;
void __iomem *regs = sdd->regs;
unsigned long loops;
u32 val;
loops = msecs_to_loops(1);
do {
val = readl(regs + S3C64XX_SPI_STATUS);
- } while (TX_FIFO_LVL(val, sci) && loops--);
+ } while (TX_FIFO_LVL(val, sdd) && loops--);
if (loops == 0)
dev_warn(&sdd->pdev->dev, "Timed out flushing TX FIFO\n");
loops = msecs_to_loops(1);
do {
val = readl(regs + S3C64XX_SPI_STATUS);
- if (RX_FIFO_LVL(val, sci))
+ if (RX_FIFO_LVL(val, sdd))
readl(regs + S3C64XX_SPI_RX_DATA);
else
break;
info.direction = sdd->rx_dma.direction;
info.fifo = sdd->sfr_start + S3C64XX_SPI_RX_DATA;
+ info.dt_dmach_prop = sdd->rx_dma.dma_prop;
sdd->rx_dma.ch = sdd->ops->request(sdd->rx_dma.dmach, &info);
info.direction = sdd->tx_dma.direction;
info.fifo = sdd->sfr_start + S3C64XX_SPI_TX_DATA;
+ info.dt_dmach_prop = sdd->tx_dma.dma_prop;
sdd->tx_dma.ch = sdd->ops->request(sdd->tx_dma.dmach, &info);
return 1;
struct spi_device *spi,
struct spi_transfer *xfer, int dma_mode)
{
- struct s3c64xx_spi_info *sci = sdd->cntrlr_info;
void __iomem *regs = sdd->regs;
u32 modecfg, chcfg;
if (xfer->rx_buf != NULL) {
sdd->state |= RXBUSY;
- if (sci->high_speed && sdd->cur_speed >= 30000000UL
+ if (sdd->port_conf->high_speed && sdd->cur_speed >= 30000000UL
&& !(sdd->cur_mode & SPI_CPHA))
chcfg |= S3C64XX_SPI_CH_HS_EN;
if (sdd->tgl_spi != spi) { /* if last mssg on diff device */
/* Deselect the last toggled device */
cs = sdd->tgl_spi->controller_data;
- cs->set_level(cs->line,
- spi->mode & SPI_CS_HIGH ? 0 : 1);
+ gpio_set_value(cs->line,
+ spi->mode & SPI_CS_HIGH ? 0 : 1);
}
sdd->tgl_spi = NULL;
}
cs = spi->controller_data;
- cs->set_level(cs->line, spi->mode & SPI_CS_HIGH ? 1 : 0);
+ gpio_set_value(cs->line, spi->mode & SPI_CS_HIGH ? 1 : 0);
}
static int wait_for_xfer(struct s3c64xx_spi_driver_data *sdd,
struct spi_transfer *xfer, int dma_mode)
{
- struct s3c64xx_spi_info *sci = sdd->cntrlr_info;
void __iomem *regs = sdd->regs;
unsigned long val;
int ms;
val = msecs_to_loops(ms);
do {
status = readl(regs + S3C64XX_SPI_STATUS);
- } while (RX_FIFO_LVL(status, sci) < xfer->len && --val);
+ } while (RX_FIFO_LVL(status, sdd) < xfer->len && --val);
}
if (!val)
if (xfer->rx_buf == NULL) {
val = msecs_to_loops(10);
status = readl(regs + S3C64XX_SPI_STATUS);
- while ((TX_FIFO_LVL(status, sci)
- || !S3C64XX_SPI_ST_TX_DONE(status, sci))
+ while ((TX_FIFO_LVL(status, sdd)
+ || !S3C64XX_SPI_ST_TX_DONE(status, sdd))
&& --val) {
cpu_relax();
status = readl(regs + S3C64XX_SPI_STATUS);
if (sdd->tgl_spi == spi)
sdd->tgl_spi = NULL;
- cs->set_level(cs->line, spi->mode & SPI_CS_HIGH ? 0 : 1);
+ gpio_set_value(cs->line, spi->mode & SPI_CS_HIGH ? 0 : 1);
}
static void s3c64xx_spi_config(struct s3c64xx_spi_driver_data *sdd)
{
- struct s3c64xx_spi_info *sci = sdd->cntrlr_info;
void __iomem *regs = sdd->regs;
u32 val;
/* Disable Clock */
- if (sci->clk_from_cmu) {
+ if (sdd->port_conf->clk_from_cmu) {
clk_disable(sdd->src_clk);
} else {
val = readl(regs + S3C64XX_SPI_CLK_CFG);
writel(val, regs + S3C64XX_SPI_MODE_CFG);
- if (sci->clk_from_cmu) {
+ if (sdd->port_conf->clk_from_cmu) {
/* Configure Clock */
/* There is half-multiplier before the SPI */
clk_set_rate(sdd->src_clk, sdd->cur_speed * 2);
static int s3c64xx_spi_map_mssg(struct s3c64xx_spi_driver_data *sdd,
struct spi_message *msg)
{
- struct s3c64xx_spi_info *sci = sdd->cntrlr_info;
struct device *dev = &sdd->pdev->dev;
struct spi_transfer *xfer;
/* Map until end or first fail */
list_for_each_entry(xfer, &msg->transfers, transfer_list) {
- if (xfer->len <= ((sci->fifo_lvl_mask >> 1) + 1))
+ if (xfer->len <= ((FIFO_LVL_MASK(sdd) >> 1) + 1))
continue;
if (xfer->tx_buf != NULL) {
static void s3c64xx_spi_unmap_mssg(struct s3c64xx_spi_driver_data *sdd,
struct spi_message *msg)
{
- struct s3c64xx_spi_info *sci = sdd->cntrlr_info;
struct device *dev = &sdd->pdev->dev;
struct spi_transfer *xfer;
list_for_each_entry(xfer, &msg->transfers, transfer_list) {
- if (xfer->len <= ((sci->fifo_lvl_mask >> 1) + 1))
+ if (xfer->len <= ((FIFO_LVL_MASK(sdd) >> 1) + 1))
continue;
if (xfer->rx_buf != NULL
struct spi_message *msg)
{
struct s3c64xx_spi_driver_data *sdd = spi_master_get_devdata(master);
- struct s3c64xx_spi_info *sci = sdd->cntrlr_info;
struct spi_device *spi = msg->spi;
struct s3c64xx_spi_csinfo *cs = spi->controller_data;
struct spi_transfer *xfer;
}
/* Polling method for xfers not bigger than FIFO capacity */
- if (xfer->len <= ((sci->fifo_lvl_mask >> 1) + 1))
+ if (xfer->len <= ((FIFO_LVL_MASK(sdd) >> 1) + 1))
use_dma = 0;
else
use_dma = 1;
return 0;
}
+static struct s3c64xx_spi_csinfo *s3c64xx_get_slave_ctrldata(
+ struct s3c64xx_spi_driver_data *sdd,
+ struct spi_device *spi)
+{
+ struct s3c64xx_spi_csinfo *cs;
+ struct device_node *slave_np, *data_np;
+ u32 fb_delay = 0;
+
+ slave_np = spi->dev.of_node;
+ if (!slave_np) {
+ dev_err(&spi->dev, "device node not found\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+ for_each_child_of_node(slave_np, data_np)
+ if (!strcmp(data_np->name, "controller-data"))
+ break;
+ if (!data_np) {
+ dev_err(&spi->dev, "child node 'controller-data' not found\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+ cs = devm_kzalloc(&sdd->pdev->dev, sizeof(*cs), GFP_KERNEL);
+ if (!cs) {
+ dev_err(&spi->dev, "could not allocate memory for controller"
+ " data\n");
+ return ERR_PTR(-ENOMEM);
+ }
+
+ cs->line = of_get_named_gpio(data_np, "cs-gpio", 0);
+ if (!gpio_is_valid(cs->line)) {
+ dev_err(&spi->dev, "chip select gpio is invalid\n");
+ return ERR_PTR(-EINVAL);
+ }
+ if (devm_gpio_request(&sdd->pdev->dev, cs->line, "spi-cs")) {
+ dev_err(&spi->dev, "gpio [%d] request failed\n", cs->line);
+ return ERR_PTR(-EBUSY);
+ }
+
+ of_property_read_u32(data_np, "samsung,spi-feedback-delay", &fb_delay);
+ cs->fb_delay = fb_delay;
+ return cs;
+}
+
/*
* Here we only check the validity of requested configuration
* and save the configuration in a local data-structure.
unsigned long flags;
int err = 0;
- if (cs == NULL || cs->set_level == NULL) {
+ sdd = spi_master_get_devdata(spi->master);
+ if (!cs && spi->dev.of_node) {
+ cs = s3c64xx_get_slave_ctrldata(sdd, spi);
+ spi->controller_data = cs;
+ }
+
+ if (IS_ERR_OR_NULL(cs)) {
dev_err(&spi->dev, "No CS for SPI(%d)\n", spi->chip_select);
return -ENODEV;
}
- sdd = spi_master_get_devdata(spi->master);
sci = sdd->cntrlr_info;
spin_lock_irqsave(&sdd->lock, flags);
pm_runtime_get_sync(&sdd->pdev->dev);
/* Check if we can provide the requested rate */
- if (!sci->clk_from_cmu) {
+ if (!sdd->port_conf->clk_from_cmu) {
u32 psr, speed;
/* Max possible */
/* Disable Interrupts - we use Polling if not DMA mode */
writel(0, regs + S3C64XX_SPI_INT_EN);
- if (!sci->clk_from_cmu)
+ if (!sdd->port_conf->clk_from_cmu)
writel(sci->src_clk_nr << S3C64XX_SPI_CLKSEL_SRCSHFT,
regs + S3C64XX_SPI_CLK_CFG);
writel(0, regs + S3C64XX_SPI_MODE_CFG);
flush_fifo(sdd);
}
-static int __init s3c64xx_spi_probe(struct platform_device *pdev)
+static int __devinit s3c64xx_spi_get_dmares(
+ struct s3c64xx_spi_driver_data *sdd, bool tx)
+{
+ struct platform_device *pdev = sdd->pdev;
+ struct s3c64xx_spi_dma_data *dma_data;
+ struct property *prop;
+ struct resource *res;
+ char prop_name[15], *chan_str;
+
+ if (tx) {
+ dma_data = &sdd->tx_dma;
+ dma_data->direction = DMA_TO_DEVICE;
+ chan_str = "tx";
+ } else {
+ dma_data = &sdd->rx_dma;
+ dma_data->direction = DMA_FROM_DEVICE;
+ chan_str = "rx";
+ }
+
+ if (!sdd->pdev->dev.of_node) {
+ res = platform_get_resource(pdev, IORESOURCE_DMA, tx ? 0 : 1);
+ if (!res) {
+ dev_err(&pdev->dev, "Unable to get SPI-%s dma "
+ "resource\n", chan_str);
+ return -ENXIO;
+ }
+ dma_data->dmach = res->start;
+ return 0;
+ }
+
+ sprintf(prop_name, "%s-dma-channel", chan_str);
+ prop = of_find_property(pdev->dev.of_node, prop_name, NULL);
+ if (!prop) {
+ dev_err(&pdev->dev, "%s dma channel property not specified\n",
+ chan_str);
+ return -ENXIO;
+ }
+
+ dma_data->dmach = DMACH_DT_PROP;
+ dma_data->dma_prop = prop;
+ return 0;
+}
+
+#ifdef CONFIG_OF
+static int s3c64xx_spi_parse_dt_gpio(struct s3c64xx_spi_driver_data *sdd)
+{
+ struct device *dev = &sdd->pdev->dev;
+ int idx, gpio, ret;
+
+ /* find gpios for mosi, miso and clock lines */
+ for (idx = 0; idx < 3; idx++) {
+ gpio = of_get_gpio(dev->of_node, idx);
+ if (!gpio_is_valid(gpio)) {
+ dev_err(dev, "invalid gpio[%d]: %d\n", idx, gpio);
+ goto free_gpio;
+ }
+ sdd->gpios[idx] = gpio;
+ ret = gpio_request(gpio, "spi-bus");
+ if (ret) {
+ dev_err(dev, "gpio [%d] request failed\n", gpio);
+ goto free_gpio;
+ }
+ }
+ return 0;
+
+free_gpio:
+ while (--idx >= 0)
+ gpio_free(sdd->gpios[idx]);
+ return -EINVAL;
+}
+
+static void s3c64xx_spi_dt_gpio_free(struct s3c64xx_spi_driver_data *sdd)
+{
+ unsigned int idx;
+ for (idx = 0; idx < 3; idx++)
+ gpio_free(sdd->gpios[idx]);
+}
+
+static struct s3c64xx_spi_info * __devinit s3c64xx_spi_parse_dt(
+ struct device *dev)
{
- struct resource *mem_res, *dmatx_res, *dmarx_res;
- struct s3c64xx_spi_driver_data *sdd;
struct s3c64xx_spi_info *sci;
- struct spi_master *master;
- int ret, irq;
- char clk_name[16];
+ u32 temp;
- if (pdev->id < 0) {
- dev_err(&pdev->dev,
- "Invalid platform device id-%d\n", pdev->id);
- return -ENODEV;
+ sci = devm_kzalloc(dev, sizeof(*sci), GFP_KERNEL);
+ if (!sci) {
+ dev_err(dev, "memory allocation for spi_info failed\n");
+ return ERR_PTR(-ENOMEM);
}
- if (pdev->dev.platform_data == NULL) {
- dev_err(&pdev->dev, "platform_data missing!\n");
- return -ENODEV;
+ if (of_property_read_u32(dev->of_node, "samsung,spi-src-clk", &temp)) {
+ dev_warn(dev, "spi bus clock parent not specified, using "
+ "clock at index 0 as parent\n");
+ sci->src_clk_nr = 0;
+ } else {
+ sci->src_clk_nr = temp;
+ }
+
+ if (of_property_read_u32(dev->of_node, "num-cs", &temp)) {
+ dev_warn(dev, "number of chip select lines not specified, "
+ "assuming 1 chip select line\n");
+ sci->num_cs = 1;
+ } else {
+ sci->num_cs = temp;
}
- sci = pdev->dev.platform_data;
+ return sci;
+}
+#else
+static struct s3c64xx_spi_info *s3c64xx_spi_parse_dt(struct device *dev)
+{
+ return dev->platform_data;
+}
- /* Check for availability of necessary resource */
+static int s3c64xx_spi_parse_dt_gpio(struct s3c64xx_spi_driver_data *sdd)
+{
+ return -EINVAL;
+}
- dmatx_res = platform_get_resource(pdev, IORESOURCE_DMA, 0);
- if (dmatx_res == NULL) {
- dev_err(&pdev->dev, "Unable to get SPI-Tx dma resource\n");
- return -ENXIO;
+static void s3c64xx_spi_dt_gpio_free(struct s3c64xx_spi_driver_data *sdd)
+{
+}
+#endif
+
+static const struct of_device_id s3c64xx_spi_dt_match[];
+
+static inline struct s3c64xx_spi_port_config *s3c64xx_spi_get_port_config(
+ struct platform_device *pdev)
+{
+#ifdef CONFIG_OF
+ if (pdev->dev.of_node) {
+ const struct of_device_id *match;
+ match = of_match_node(s3c64xx_spi_dt_match, pdev->dev.of_node);
+ return (struct s3c64xx_spi_port_config *)match->data;
}
+#endif
+ return (struct s3c64xx_spi_port_config *)
+ platform_get_device_id(pdev)->driver_data;
+}
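In short, s3c64xx_spi_get_port_config() resolves the per-port configuration from the OF match table when the device was probed from the device tree, and from the platform_device_id driver_data otherwise; both tables near the end of this file point at the same s3c64xx_spi_port_config instances.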
- dmarx_res = platform_get_resource(pdev, IORESOURCE_DMA, 1);
- if (dmarx_res == NULL) {
- dev_err(&pdev->dev, "Unable to get SPI-Rx dma resource\n");
- return -ENXIO;
+static int __init s3c64xx_spi_probe(struct platform_device *pdev)
+{
+ struct resource *mem_res;
+ struct s3c64xx_spi_driver_data *sdd;
+ struct s3c64xx_spi_info *sci = pdev->dev.platform_data;
+ struct spi_master *master;
+ int ret, irq;
+ char clk_name[16];
+
+ if (!sci && pdev->dev.of_node) {
+ sci = s3c64xx_spi_parse_dt(&pdev->dev);
+ if (IS_ERR(sci))
+ return PTR_ERR(sci);
+ }
+
+ if (!sci) {
+ dev_err(&pdev->dev, "platform_data missing!\n");
+ return -ENODEV;
}
mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
platform_set_drvdata(pdev, master);
sdd = spi_master_get_devdata(master);
+ sdd->port_conf = s3c64xx_spi_get_port_config(pdev);
+
sdd->master = master;
sdd->cntrlr_info = sci;
sdd->pdev = pdev;
sdd->sfr_start = mem_res->start;
- sdd->tx_dma.dmach = dmatx_res->start;
- sdd->tx_dma.direction = DMA_MEM_TO_DEV;
- sdd->rx_dma.dmach = dmarx_res->start;
- sdd->rx_dma.direction = DMA_DEV_TO_MEM;
+ if (pdev->dev.of_node) {
+ ret = of_alias_get_id(pdev->dev.of_node, "spi");
+ if (ret < 0) {
+ dev_err(&pdev->dev, "failed to get alias id, "
+ "errno %d\n", ret);
+ goto err0;
+ }
+ sdd->port_id = ret;
+ } else {
+ sdd->port_id = pdev->id;
+ }
sdd->cur_bpw = 8;
- master->bus_num = pdev->id;
+ ret = s3c64xx_spi_get_dmares(sdd, true);
+ if (ret)
+ goto err0;
+
+ ret = s3c64xx_spi_get_dmares(sdd, false);
+ if (ret)
+ goto err0;
+
+ master->dev.of_node = pdev->dev.of_node;
+ master->bus_num = sdd->port_id;
master->setup = s3c64xx_spi_setup;
master->prepare_transfer_hardware = s3c64xx_spi_prepare_transfer;
master->transfer_one_message = s3c64xx_spi_transfer_one_message;
goto err1;
}
- if (sci->cfg_gpio == NULL || sci->cfg_gpio(pdev)) {
+ if (!sci->cfg_gpio && pdev->dev.of_node) {
+ if (s3c64xx_spi_parse_dt_gpio(sdd))
+ return -EBUSY;
+ } else if (sci->cfg_gpio == NULL || sci->cfg_gpio()) {
dev_err(&pdev->dev, "Unable to config gpio\n");
ret = -EBUSY;
goto err2;
}
 /* Setup Default Mode */
- s3c64xx_spi_hwinit(sdd, pdev->id);
+ s3c64xx_spi_hwinit(sdd, sdd->port_id);
spin_lock_init(&sdd->lock);
init_completion(&sdd->xfer_completion);
dev_dbg(&pdev->dev, "Samsung SoC SPI Driver loaded for Bus SPI-%d "
"with %d Slaves attached\n",
- pdev->id, master->num_chipselect);
+ sdd->port_id, master->num_chipselect);
dev_dbg(&pdev->dev, "\tIOmem=[0x%x-0x%x]\tDMA=[Rx-%d, Tx-%d]\n",
mem_res->end, mem_res->start,
sdd->rx_dma.dmach, sdd->tx_dma.dmach);
err4:
clk_put(sdd->clk);
err3:
+ if (!sdd->cntrlr_info->cfg_gpio && pdev->dev.of_node)
+ s3c64xx_spi_dt_gpio_free(sdd);
err2:
iounmap((void *) sdd->regs);
err1:
clk_disable(sdd->clk);
clk_put(sdd->clk);
+ if (!sdd->cntrlr_info->cfg_gpio && pdev->dev.of_node)
+ s3c64xx_spi_dt_gpio_free(sdd);
+
iounmap((void *) sdd->regs);
mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
clk_disable(sdd->src_clk);
clk_disable(sdd->clk);
+ if (!sdd->cntrlr_info->cfg_gpio && dev->of_node)
+ s3c64xx_spi_dt_gpio_free(sdd);
+
sdd->cur_speed = 0; /* Output Clock is stopped */
return 0;
static int s3c64xx_spi_resume(struct device *dev)
{
- struct platform_device *pdev = to_platform_device(dev);
struct spi_master *master = spi_master_get(dev_get_drvdata(dev));
struct s3c64xx_spi_driver_data *sdd = spi_master_get_devdata(master);
struct s3c64xx_spi_info *sci = sdd->cntrlr_info;
- sci->cfg_gpio(pdev);
+ if (!sci->cfg_gpio && dev->of_node)
+ s3c64xx_spi_parse_dt_gpio(sdd);
+ else
+ sci->cfg_gpio();
+
/* Enable the clock */
clk_enable(sdd->src_clk);
clk_enable(sdd->clk);
- s3c64xx_spi_hwinit(sdd, pdev->id);
+ s3c64xx_spi_hwinit(sdd, sdd->port_id);
spi_master_resume(master);
s3c64xx_spi_runtime_resume, NULL)
};
+struct s3c64xx_spi_port_config s3c2443_spi_port_config = {
+ .fifo_lvl_mask = { 0x7f },
+ .rx_lvl_offset = 13,
+ .tx_st_done = 21,
+ .high_speed = true,
+};
+
+struct s3c64xx_spi_port_config s3c6410_spi_port_config = {
+ .fifo_lvl_mask = { 0x7f, 0x7f },
+ .rx_lvl_offset = 13,
+ .tx_st_done = 21,
+};
+
+struct s3c64xx_spi_port_config s5p64x0_spi_port_config = {
+ .fifo_lvl_mask = { 0x1ff, 0x7f },
+ .rx_lvl_offset = 15,
+ .tx_st_done = 25,
+};
+
+struct s3c64xx_spi_port_config s5pc100_spi_port_config = {
+ .fifo_lvl_mask = { 0x7f, 0x7f },
+ .rx_lvl_offset = 13,
+ .tx_st_done = 21,
+ .high_speed = true,
+};
+
+struct s3c64xx_spi_port_config s5pv210_spi_port_config = {
+ .fifo_lvl_mask = { 0x1ff, 0x7f },
+ .rx_lvl_offset = 15,
+ .tx_st_done = 25,
+ .high_speed = true,
+};
+
+struct s3c64xx_spi_port_config exynos4_spi_port_config = {
+ .fifo_lvl_mask = { 0x1ff, 0x7f, 0x7f },
+ .rx_lvl_offset = 15,
+ .tx_st_done = 25,
+ .high_speed = true,
+ .clk_from_cmu = true,
+};
+
+static struct platform_device_id s3c64xx_spi_driver_ids[] = {
+ {
+ .name = "s3c2443-spi",
+ .driver_data = (kernel_ulong_t)&s3c2443_spi_port_config,
+ }, {
+ .name = "s3c6410-spi",
+ .driver_data = (kernel_ulong_t)&s3c6410_spi_port_config,
+ }, {
+ .name = "s5p64x0-spi",
+ .driver_data = (kernel_ulong_t)&s5p64x0_spi_port_config,
+ }, {
+ .name = "s5pc100-spi",
+ .driver_data = (kernel_ulong_t)&s5pc100_spi_port_config,
+ }, {
+ .name = "s5pv210-spi",
+ .driver_data = (kernel_ulong_t)&s5pv210_spi_port_config,
+ }, {
+ .name = "exynos4210-spi",
+ .driver_data = (kernel_ulong_t)&exynos4_spi_port_config,
+ },
+ { },
+};
+
+#ifdef CONFIG_OF
+static const struct of_device_id s3c64xx_spi_dt_match[] = {
+ { .compatible = "samsung,exynos4210-spi",
+ .data = (void *)&exynos4_spi_port_config,
+ },
+ { },
+};
+MODULE_DEVICE_TABLE(of, s3c64xx_spi_dt_match);
+#endif /* CONFIG_OF */
+
static struct platform_driver s3c64xx_spi_driver = {
.driver = {
.name = "s3c64xx-spi",
.owner = THIS_MODULE,
.pm = &s3c64xx_spi_pm,
+ .of_match_table = of_match_ptr(s3c64xx_spi_dt_match),
},
.remove = s3c64xx_spi_remove,
+ .id_table = s3c64xx_spi_driver_ids,
};
MODULE_ALIAS("platform:s3c64xx-spi");
/* Lock queue and check for queue work */
spin_lock_irqsave(&master->queue_lock, flags);
if (list_empty(&master->queue) || !master->running) {
- if (master->busy) {
- ret = master->unprepare_transfer_hardware(master);
- if (ret) {
- spin_unlock_irqrestore(&master->queue_lock, flags);
- dev_err(&master->dev,
- "failed to unprepare transfer hardware\n");
- return;
- }
+ if (!master->busy) {
+ spin_unlock_irqrestore(&master->queue_lock, flags);
+ return;
}
master->busy = false;
spin_unlock_irqrestore(&master->queue_lock, flags);
+ ret = master->unprepare_transfer_hardware(master);
+ if (ret)
+ dev_err(&master->dev,
+ "failed to unprepare transfer hardware\n");
return;
}
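The rework above also corrects the locking: unprepare_transfer_hardware() is now called only after the queue lock has been dropped, since the callback may sleep, and it is skipped entirely when the hardware was never marked busy.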
depends on HWMON=y || HWMON=THERMAL
default y
+config CPU_THERMAL
+ bool "generic cpu cooling support"
+ depends on THERMAL && CPU_FREQ
+ help
+ This implements the generic cpu cooling mechanism through frequency
+ reduction, cpu hotplug and other ways of reducing temperature. An
+ ACPI version of this already exists (drivers/acpi/processor_thermal.c).
+ This will be useful for platforms using the generic thermal interface
+ and not the ACPI interface.
+ If you want this support, you should say Y or M here.
+
config SPEAR_THERMAL
bool "SPEAr thermal sensor driver"
depends on THERMAL
help
Enable this to plug the SPEAr thermal sensor driver into the Linux
thermal framework
+
+config EXYNOS_THERMAL
+ tristate "Temperature sensor on Samsung EXYNOS"
+ depends on (ARCH_EXYNOS4 || ARCH_EXYNOS5) && THERMAL
+ help
+ If you say yes here you get support for the TMU (Thermal Management
+ Unit) on the Samsung EXYNOS series of SoCs.
+ This driver can also be built as a module. If so, the module
+ will be called exynos4-tmu.
#
obj-$(CONFIG_THERMAL) += thermal_sys.o
-obj-$(CONFIG_SPEAR_THERMAL) += spear_thermal.o
\ No newline at end of file
+obj-$(CONFIG_CPU_THERMAL) += cpu_cooling.o
+obj-$(CONFIG_SPEAR_THERMAL) += spear_thermal.o
+obj-$(CONFIG_EXYNOS_THERMAL) += exynos_thermal.o
--- /dev/null
+/*
+ * linux/drivers/thermal/cpu_cooling.c
+ *
+ * Copyright (C) 2012 Samsung Electronics Co., Ltd(http://www.samsung.com)
+ * Copyright (C) 2012 Amit Daniel <amit.kachhap@linaro.org>
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/thermal.h>
+#include <linux/platform_device.h>
+#include <linux/cpufreq.h>
+#include <linux/err.h>
+#include <linux/slab.h>
+#include <linux/cpu.h>
+#include <linux/cpu_cooling.h>
+
+/**
+ * struct cpufreq_cooling_device
+ * @id: unique integer value corresponding to each cpufreq_cooling_device
+ * registered.
+ * @cool_dev: thermal_cooling_device pointer to keep track of the
+ * registered cooling device.
+ * @tab_ptr: freq_clip_table table containing the maximum value of frequency
+ * to be set for each cooling state.
+ * @tab_size: integer value representing the number of entries in the above
+ * table.
+ * @cpufreq_state: integer value representing the current state of cpufreq
+ * cooling devices.
+ * @allowed_cpus: all the cpus involved for this cpufreq_cooling_device.
+ * @node: list_head to link all cpufreq_cooling_device together.
+ *
+ * This structure is required for keeping information of each
+ * cpufreq_cooling_device registered as a list whose head is represented by
+ * cooling_cpufreq_list. In order to prevent corruption of this list a
+ * mutex lock cooling_cpufreq_lock is used.
+ */
+struct cpufreq_cooling_device {
+ int id;
+ struct thermal_cooling_device *cool_dev;
+ struct freq_clip_table *tab_ptr;
+ unsigned int tab_size;
+ unsigned int cpufreq_state;
+ struct cpumask allowed_cpus;
+ struct list_head node;
+};
+static LIST_HEAD(cooling_cpufreq_list);
+static DEFINE_MUTEX(cooling_cpufreq_lock);
+static DEFINE_IDR(cpufreq_idr);
+
+/* per-cpu variable to store the previous max frequency allowed */
+static DEFINE_PER_CPU(unsigned int, max_policy_freq);
+
+/* notify_table passes value to the CPUFREQ_ADJUST callback function. */
+#define NOTIFY_INVALID NULL
+static struct freq_clip_table *notify_table;
+
+/* Head of the blocking notifier chain to inform about frequency clamping */
+static BLOCKING_NOTIFIER_HEAD(cputherm_state_notifier_list);
+
+/**
+ * get_idr - function to get a unique id.
+ * @idr: struct idr * handle used to create a id.
+ * @id: int * value generated by this function.
+ */
+static int get_idr(struct idr *idr, int *id)
+{
+ int err;
+again:
+ if (unlikely(idr_pre_get(idr, GFP_KERNEL) == 0))
+ return -ENOMEM;
+
+ mutex_lock(&cooling_cpufreq_lock);
+ err = idr_get_new(idr, NULL, id);
+ mutex_unlock(&cooling_cpufreq_lock);
+
+ if (unlikely(err == -EAGAIN))
+ goto again;
+ else if (unlikely(err))
+ return err;
+
+ *id = *id & MAX_ID_MASK;
+ return 0;
+}
+
+/**
+ * release_idr - function to free the unique id.
+ * @idr: struct idr * handle used for creating the id.
+ * @id: int value representing the unique id.
+ */
+static void release_idr(struct idr *idr, int id)
+{
+ mutex_lock(&cooling_cpufreq_lock);
+ idr_remove(idr, id);
+ mutex_unlock(&cooling_cpufreq_lock);
+}
+
+/**
+ * cputherm_register_notifier - Register a notifier with cpu cooling interface.
+ * @nb: struct notifier_block * with callback info.
+ * @list: integer value for which notification is needed. Possible values
+ * are CPUFREQ_COOLING_START and CPUFREQ_COOLING_STOP.
+ *
+ * This exported function registers a driver with cpu cooling layer. The driver
+ * will be notified when any cpu cooling action is called.
+ */
+int cputherm_register_notifier(struct notifier_block *nb, unsigned int list)
+{
+ int ret = 0;
+
+ switch (list) {
+ case CPUFREQ_COOLING_START:
+ case CPUFREQ_COOLING_STOP:
+ ret = blocking_notifier_chain_register(
+ &cputherm_state_notifier_list, nb);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+ return ret;
+}
+EXPORT_SYMBOL(cputherm_register_notifier);
+
+/**
+ * cputherm_unregister_notifier - Un-register a notifier.
+ * @nb: struct notifier_block * with callback info.
+ * @list: integer value for which notification is needed. Possible values
+ * are CPUFREQ_COOLING_START and CPUFREQ_COOLING_STOP.
+ *
+ * This exported function un-registers a driver with cpu cooling layer.
+ */
+int cputherm_unregister_notifier(struct notifier_block *nb, unsigned int list)
+{
+ int ret = 0;
+
+ switch (list) {
+ case CPUFREQ_COOLING_START:
+ case CPUFREQ_COOLING_STOP:
+ ret = blocking_notifier_chain_unregister(
+ &cputherm_state_notifier_list, nb);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+ return ret;
+}
+EXPORT_SYMBOL(cputherm_unregister_notifier);
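As a usage sketch (hypothetical client code; only cputherm_register_notifier(), the CPUFREQ_COOLING_* events and struct freq_clip_table come from this patch), a driver interested in clamp events could register like this:

#include <linux/kernel.h>
#include <linux/notifier.h>
#include <linux/cpu_cooling.h>

/* Hypothetical callback: data is the active freq_clip_table entry,
 * or NULL once clamping stops. */
static int my_cooling_event(struct notifier_block *nb,
 unsigned long event, void *data)
{
 struct freq_clip_table *clip = data;

 if (event == CPUFREQ_COOLING_START && clip)
 pr_info("clamping cpufreq to %u kHz\n", clip->freq_clip_max);
 return NOTIFY_OK;
}

static struct notifier_block my_cooling_nb = {
 .notifier_call = my_cooling_event,
};

/* e.g. from probe:
 * cputherm_register_notifier(&my_cooling_nb, CPUFREQ_COOLING_START);
 */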
+
+/* The code below defines functions to use cpufreq as a cooling device */
+
+/**
+ * is_cpufreq_valid - check if a cpu has a frequency transition policy.
+ * @cpu: cpu for which the check is needed.
+ */
+static int is_cpufreq_valid(int cpu)
+{
+ struct cpufreq_policy policy;
+ return !cpufreq_get_policy(&policy, cpu);
+}
+
+/**
+ * cpufreq_apply_cooling - function to apply frequency clipping.
+ * @cpufreq_device: cpufreq_cooling_device pointer containing frequency
+ * clipping data.
+ * @cooling_state: value of the cooling state.
+ */
+static int cpufreq_apply_cooling(struct cpufreq_cooling_device *cpufreq_device,
+ unsigned long cooling_state)
+{
+ unsigned int event, cpuid, state;
+ struct freq_clip_table *th_table, *table_ptr;
+ const struct cpumask *maskPtr = &cpufreq_device->allowed_cpus;
+ struct cpufreq_cooling_device *cpufreq_ptr;
+
+ if (cooling_state > cpufreq_device->tab_size)
+ return -EINVAL;
+
+ /* Check if the old cooling action is the same as the new one */
+ if (cpufreq_device->cpufreq_state == cooling_state)
+ return 0;
+
+ /* Pass cooling table info to the cpufreq_thermal_notifier callback */
+ notify_table = NOTIFY_INVALID;
+
+ if (cooling_state > 0) {
+ th_table = &(cpufreq_device->tab_ptr[cooling_state - 1]);
+ notify_table = th_table;
+ }
+
+ /* Check if any lower clip frequency is active in other cpufreq_devices */
+ list_for_each_entry(cpufreq_ptr, &cooling_cpufreq_list, node) {
+
+ state = cpufreq_ptr->cpufreq_state;
+ if (state == 0 || cpufreq_ptr == cpufreq_device)
+ continue;
+
+ if (!cpumask_equal(&cpufreq_ptr->allowed_cpus,
+ &cpufreq_device->allowed_cpus))
+ continue;
+
+ table_ptr = &(cpufreq_ptr->tab_ptr[state - 1]);
+ if (notify_table == NULL ||
+ (table_ptr->freq_clip_max <
+ notify_table->freq_clip_max))
+ notify_table = table_ptr;
+ }
+
+ cpufreq_device->cpufreq_state = cooling_state;
+
+ if (notify_table != NOTIFY_INVALID) {
+ event = CPUFREQ_COOLING_START;
+ maskPtr = notify_table->mask_val;
+ } else {
+ event = CPUFREQ_COOLING_STOP;
+ }
+
+ blocking_notifier_call_chain(&cputherm_state_notifier_list,
+ event, notify_table);
+
+ for_each_cpu(cpuid, maskPtr) {
+ if (is_cpufreq_valid(cpuid))
+ cpufreq_update_policy(cpuid);
+ }
+
+ notify_table = NOTIFY_INVALID;
+
+ return 0;
+}
+
+/**
+ * cpufreq_thermal_notifier - notifier callback for cpufreq policy change.
+ * @nb: struct notifier_block * with callback info.
+ * @event: value showing cpufreq event for which this function invoked.
+ * @data: callback-specific data
+ */
+static int cpufreq_thermal_notifier(struct notifier_block *nb,
+ unsigned long event, void *data)
+{
+ struct cpufreq_policy *policy = data;
+ unsigned long max_freq = 0;
+
+ if (event != CPUFREQ_ADJUST)
+ return 0;
+
+ if (notify_table != NOTIFY_INVALID) {
+ max_freq = notify_table->freq_clip_max;
+
+ if (!per_cpu(max_policy_freq, policy->cpu))
+ per_cpu(max_policy_freq, policy->cpu) = policy->max;
+ } else {
+ if (per_cpu(max_policy_freq, policy->cpu)) {
+ max_freq = per_cpu(max_policy_freq, policy->cpu);
+ per_cpu(max_policy_freq, policy->cpu) = 0;
+ } else {
+ max_freq = policy->max;
+ }
+ }
+
+ /* Never exceed user_policy.max */
+ if (max_freq > policy->user_policy.max)
+ max_freq = policy->user_policy.max;
+
+ if (policy->max != max_freq)
+ cpufreq_verify_within_limits(policy, 0, max_freq);
+
+ return 0;
+}
+
+/*
+ * cpufreq cooling device callback functions are defined below
+ */
+
+/**
+ * cpufreq_get_max_state - callback function to get the max cooling state.
+ * @cdev: thermal cooling device pointer.
+ * @state: fill this variable with the max cooling state.
+ */
+static int cpufreq_get_max_state(struct thermal_cooling_device *cdev,
+ unsigned long *state)
+{
+ int ret = -EINVAL;
+ struct cpufreq_cooling_device *cpufreq_device;
+
+ mutex_lock(&cooling_cpufreq_lock);
+ list_for_each_entry(cpufreq_device, &cooling_cpufreq_list, node) {
+ if (cpufreq_device && cpufreq_device->cool_dev == cdev) {
+ *state = cpufreq_device->tab_size;
+ ret = 0;
+ break;
+ }
+ }
+ mutex_unlock(&cooling_cpufreq_lock);
+ return ret;
+}
+
+/**
+ * cpufreq_get_cur_state - callback function to get the current cooling state.
+ * @cdev: thermal cooling device pointer.
+ * @state: fill this variable with the current cooling state.
+ */
+static int cpufreq_get_cur_state(struct thermal_cooling_device *cdev,
+ unsigned long *state)
+{
+ int ret = -EINVAL;
+ struct cpufreq_cooling_device *cpufreq_device;
+
+ mutex_lock(&cooling_cpufreq_lock);
+ list_for_each_entry(cpufreq_device, &cooling_cpufreq_list, node) {
+ if (cpufreq_device && cpufreq_device->cool_dev == cdev) {
+ *state = cpufreq_device->cpufreq_state;
+ ret = 0;
+ break;
+ }
+ }
+ mutex_unlock(&cooling_cpufreq_lock);
+ return ret;
+}
+
+/**
+ * cpufreq_set_cur_state - callback function to set the current cooling state.
+ * @cdev: thermal cooling device pointer.
+ * @state: set this variable to the current cooling state.
+ */
+static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
+ unsigned long state)
+{
+ int ret = -EINVAL;
+ struct cpufreq_cooling_device *cpufreq_device;
+
+ mutex_lock(&cooling_cpufreq_lock);
+ list_for_each_entry(cpufreq_device, &cooling_cpufreq_list, node) {
+ if (cpufreq_device && cpufreq_device->cool_dev == cdev) {
+ ret = 0;
+ break;
+ }
+ }
+ if (!ret)
+ ret = cpufreq_apply_cooling(cpufreq_device, state);
+
+ mutex_unlock(&cooling_cpufreq_lock);
+
+ return ret;
+}
+
+/* Bind cpufreq callbacks to thermal cooling device ops */
+static struct thermal_cooling_device_ops const cpufreq_cooling_ops = {
+ .get_max_state = cpufreq_get_max_state,
+ .get_cur_state = cpufreq_get_cur_state,
+ .set_cur_state = cpufreq_set_cur_state,
+};
+
+/* Notifier for cpufreq policy change */
+static struct notifier_block thermal_cpufreq_notifier_block = {
+ .notifier_call = cpufreq_thermal_notifier,
+};
+
+/**
+ * cpufreq_cooling_register - function to create cpufreq cooling device.
+ * @tab_ptr: table ptr containing the maximum value of frequency to be clipped
+ * for each cooling state.
+ * @tab_size: count of entries in the above table.
+ */
+struct thermal_cooling_device *cpufreq_cooling_register(
+ struct freq_clip_table *tab_ptr, unsigned int tab_size)
+{
+ struct thermal_cooling_device *cool_dev;
+ struct cpufreq_cooling_device *cpufreq_dev = NULL;
+ struct freq_clip_table *clip_tab;
+ unsigned int cpufreq_dev_count = 0;
+ char dev_name[THERMAL_NAME_LENGTH];
+ int ret = 0, i;
+
+ if (tab_ptr == NULL || tab_size == 0)
+ return ERR_PTR(-EINVAL);
+
+ list_for_each_entry(cpufreq_dev, &cooling_cpufreq_list, node)
+ cpufreq_dev_count++;
+
+ cpufreq_dev = kzalloc(sizeof(struct cpufreq_cooling_device),
+ GFP_KERNEL);
+ if (!cpufreq_dev)
+ return ERR_PTR(-ENOMEM);
+
+ /* Verify that all entries of the freq_clip_table are present */
+ for (i = 0; i < tab_size; i++) {
+ clip_tab = ((struct freq_clip_table *)&tab_ptr[i]);
+ if (!clip_tab->freq_clip_max || !clip_tab->mask_val
+ || !clip_tab->temp_level) {
+ kfree(cpufreq_dev);
+ return ERR_PTR(-EINVAL);
+ }
+ /*
+ * Consolidate the cpumasks of all the individual entries
+ * of the trip table. This is useful in resetting all the
+ * clipped frequencies to the normal level for each cpufreq
+ * cooling device.
+ */
+ cpumask_or(&cpufreq_dev->allowed_cpus,
+ &cpufreq_dev->allowed_cpus, clip_tab->mask_val);
+ }
+
+ cpufreq_dev->tab_ptr = tab_ptr;
+ cpufreq_dev->tab_size = tab_size;
+
+ ret = get_idr(&cpufreq_idr, &cpufreq_dev->id);
+ if (ret) {
+ kfree(cpufreq_dev);
+ return ERR_PTR(-EINVAL);
+ }
+
+ sprintf(dev_name, "thermal-cpufreq-%d", cpufreq_dev->id);
+
+ cool_dev = thermal_cooling_device_register(dev_name, cpufreq_dev,
+ &cpufreq_cooling_ops);
+ if (!cool_dev) {
+ release_idr(&cpufreq_idr, cpufreq_dev->id);
+ kfree(cpufreq_dev);
+ return ERR_PTR(-EINVAL);
+ }
+ cpufreq_dev->cool_dev = cool_dev;
+ mutex_lock(&cooling_cpufreq_lock);
+ list_add_tail(&cpufreq_dev->node, &cooling_cpufreq_list);
+
+ /* Register the notifier for the first cpufreq cooling device */
+ if (cpufreq_dev_count == 0)
+ cpufreq_register_notifier(&thermal_cpufreq_notifier_block,
+ CPUFREQ_POLICY_NOTIFIER);
+
+ mutex_unlock(&cooling_cpufreq_lock);
+ return cool_dev;
+}
+EXPORT_SYMBOL(cpufreq_cooling_register);
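+
+/*
+ * Usage sketch (hypothetical caller, modelled on the Exynos TMU code later
+ * in this series): register a single cooling state that clips CPU0 to
+ * 800 MHz (freq_clip_max is in kHz).
+ *
+ * static struct freq_clip_table clip = {
+ * .freq_clip_max = 800 * 1000,
+ * .temp_level = 85,
+ * .mask_val = cpumask_of(0),
+ * };
+ * struct thermal_cooling_device *cdev;
+ *
+ * cdev = cpufreq_cooling_register(&clip, 1);
+ * if (IS_ERR(cdev))
+ * return PTR_ERR(cdev);
+ */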
+
+/**
+ * cpufreq_cooling_unregister - function to remove cpufreq cooling device.
+ * @cdev: thermal cooling device pointer.
+ */
+void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
+{
+ struct cpufreq_cooling_device *cpufreq_dev = NULL, *tmp;
+ unsigned int cpufreq_dev_count = 0;
+
+ mutex_lock(&cooling_cpufreq_lock);
+ list_for_each_entry(tmp, &cooling_cpufreq_list, node) {
+ if (tmp->cool_dev == cdev)
+ cpufreq_dev = tmp;
+ cpufreq_dev_count++;
+ }
+
+ if (!cpufreq_dev) {
+ mutex_unlock(&cooling_cpufreq_lock);
+ return;
+ }
+
+ list_del(&cpufreq_dev->node);
+
+ /* Unregister the notifier for the last cpufreq cooling device */
+ if (cpufreq_dev_count == 1) {
+ cpufreq_unregister_notifier(&thermal_cpufreq_notifier_block,
+ CPUFREQ_POLICY_NOTIFIER);
+ }
+ mutex_unlock(&cooling_cpufreq_lock);
+ thermal_cooling_device_unregister(cpufreq_dev->cool_dev);
+ release_idr(&cpufreq_idr, cpufreq_dev->id);
+ kfree(cpufreq_dev);
+}
+EXPORT_SYMBOL(cpufreq_cooling_unregister);
--- /dev/null
+/*
+ * exynos_thermal.c - Samsung EXYNOS TMU (Thermal Management Unit)
+ *
+ * Copyright (C) 2011 Samsung Electronics
+ * Donggeun Kim <dg77.kim@samsung.com>
+ * Amit Daniel Kachhap <amit.kachhap@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/platform_device.h>
+#include <linux/interrupt.h>
+#include <linux/clk.h>
+#include <linux/workqueue.h>
+#include <linux/sysfs.h>
+#include <linux/kobject.h>
+#include <linux/io.h>
+#include <linux/mutex.h>
+#include <linux/platform_data/exynos_thermal.h>
+#include <linux/thermal.h>
+#include <linux/cpufreq.h>
+#include <linux/cpu_cooling.h>
+#include <linux/of.h>
+
+#include <plat/cpu.h>
+
+/* Exynos generic registers */
+#define EXYNOS_TMU_REG_TRIMINFO 0x0
+#define EXYNOS_TMU_REG_CONTROL 0x20
+#define EXYNOS_TMU_REG_STATUS 0x28
+#define EXYNOS_TMU_REG_CURRENT_TEMP 0x40
+#define EXYNOS_TMU_REG_INTEN 0x70
+#define EXYNOS_TMU_REG_INTSTAT 0x74
+#define EXYNOS_TMU_REG_INTCLEAR 0x78
+
+#define EXYNOS_TMU_TRIM_TEMP_MASK 0xff
+#define EXYNOS_TMU_GAIN_SHIFT 8
+#define EXYNOS_TMU_REF_VOLTAGE_SHIFT 24
+#define EXYNOS_TMU_CORE_ON 3
+#define EXYNOS_TMU_CORE_OFF 2
+#define EXYNOS_TMU_DEF_CODE_TO_TEMP_OFFSET 50
+
+/* Exynos4 specific registers */
+#define EXYNOS4_TMU_REG_THRESHOLD_TEMP 0x44
+#define EXYNOS4_TMU_REG_TRIG_LEVEL0 0x50
+#define EXYNOS4_TMU_REG_TRIG_LEVEL1 0x54
+#define EXYNOS4_TMU_REG_TRIG_LEVEL2 0x58
+#define EXYNOS4_TMU_REG_TRIG_LEVEL3 0x5C
+#define EXYNOS4_TMU_REG_PAST_TEMP0 0x60
+#define EXYNOS4_TMU_REG_PAST_TEMP1 0x64
+#define EXYNOS4_TMU_REG_PAST_TEMP2 0x68
+#define EXYNOS4_TMU_REG_PAST_TEMP3 0x6C
+
+#define EXYNOS4_TMU_TRIG_LEVEL0_MASK 0x1
+#define EXYNOS4_TMU_TRIG_LEVEL1_MASK 0x10
+#define EXYNOS4_TMU_TRIG_LEVEL2_MASK 0x100
+#define EXYNOS4_TMU_TRIG_LEVEL3_MASK 0x1000
+#define EXYNOS4_TMU_INTCLEAR_VAL 0x1111
+
+/* Exynos5 specific registers */
+#define EXYNOS5_TMU_TRIMINFO_CON 0x14
+#define EXYNOS5_THD_TEMP_RISE 0x50
+#define EXYNOS5_THD_TEMP_FALL 0x54
+#define EXYNOS5_EMUL_CON 0x80
+
+#define EXYNOS5_TRIMINFO_RELOAD 0x1
+#define EXYNOS5_TMU_CLEAR_RISE_INT 0x111
+#define EXYNOS5_TMU_CLEAR_FALL_INT (0x111 << 16)
+#define EXYNOS5_MUX_ADDR_VALUE 6
+#define EXYNOS5_MUX_ADDR_SHIFT 20
+#define EXYNOS5_TMU_TRIP_MODE_SHIFT 13
+
+#define EFUSE_MIN_VALUE 40
+#define EFUSE_MAX_VALUE 100
+
+/* In-kernel thermal framework related macros & definitions */
+#define SENSOR_NAME_LEN 16
+#define MAX_TRIP_COUNT 8
+#define MAX_COOLING_DEVICE 4
+
+#define ACTIVE_INTERVAL 500
+#define IDLE_INTERVAL 10000
+#define MCELSIUS 1000
+
+/* CPU Zone information */
+#define PANIC_ZONE 4
+#define WARN_ZONE 3
+#define MONITOR_ZONE 2
+#define SAFE_ZONE 1
+
+#define GET_ZONE(trip) (trip + 2)
+#define GET_TRIP(zone) (zone - 2)
+
+#define EXYNOS_ZONE_COUNT 3
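+
+/*
+ * Illustration of the mapping above: trip 0 corresponds to MONITOR_ZONE
+ * (GET_ZONE(0) == 2), trip 1 to WARN_ZONE and trip 2 to PANIC_ZONE, so
+ * GET_TRIP(PANIC_ZONE) == 2 indexes the critical trip point.
+ */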
+
+struct exynos_tmu_data {
+ struct exynos_tmu_platform_data *pdata;
+ struct resource *mem;
+ void __iomem *base;
+ int irq;
+ enum soc_type soc;
+ struct work_struct irq_work;
+ struct mutex lock;
+ struct clk *clk;
+ u8 temp_error1, temp_error2;
+};
+
+struct thermal_trip_point_conf {
+ int trip_val[MAX_TRIP_COUNT];
+ int trip_count;
+};
+
+struct thermal_cooling_conf {
+ struct freq_clip_table freq_data[MAX_TRIP_COUNT];
+ int freq_clip_count;
+};
+
+struct thermal_sensor_conf {
+ char name[SENSOR_NAME_LEN];
+ int (*read_temperature)(void *data);
+ struct thermal_trip_point_conf trip_data;
+ struct thermal_cooling_conf cooling_data;
+ void *private_data;
+};
+
+struct exynos_thermal_zone {
+ enum thermal_device_mode mode;
+ struct thermal_zone_device *therm_dev;
+ struct thermal_cooling_device *cool_dev[MAX_COOLING_DEVICE];
+ unsigned int cool_dev_size;
+ struct platform_device *exynos4_dev;
+ struct thermal_sensor_conf *sensor_conf;
+};
+
+static struct exynos_thermal_zone *th_zone;
+static void exynos_unregister_thermal(void);
+static int exynos_register_thermal(struct thermal_sensor_conf *sensor_conf);
+
+/* Get mode callback functions for thermal zone */
+static int exynos_get_mode(struct thermal_zone_device *thermal,
+ enum thermal_device_mode *mode)
+{
+ if (th_zone)
+ *mode = th_zone->mode;
+ return 0;
+}
+
+/* Set mode callback functions for thermal zone */
+static int exynos_set_mode(struct thermal_zone_device *thermal,
+ enum thermal_device_mode mode)
+{
+ if (!th_zone->therm_dev) {
+ pr_notice("thermal zone not registered\n");
+ return 0;
+ }
+
+ mutex_lock(&th_zone->therm_dev->lock);
+
+ if (mode == THERMAL_DEVICE_ENABLED)
+ th_zone->therm_dev->polling_delay = IDLE_INTERVAL;
+ else
+ th_zone->therm_dev->polling_delay = 0;
+
+ mutex_unlock(&th_zone->therm_dev->lock);
+
+ th_zone->mode = mode;
+ thermal_zone_device_update(th_zone->therm_dev);
+ pr_info("thermal polling set for duration=%d msec\n",
+ th_zone->therm_dev->polling_delay);
+ return 0;
+}
+
+/*
+ * This function may be called from interrupt based temperature sensor
+ * when threshold is changed.
+ */
+static void exynos_report_trigger(void)
+{
+ unsigned int i;
+ char data[10];
+ char *envp[] = { data, NULL };
+
+ if (!th_zone || !th_zone->therm_dev)
+ return;
+
+ thermal_zone_device_update(th_zone->therm_dev);
+
+ mutex_lock(&th_zone->therm_dev->lock);
+ /* Find the level for which trip happened */
+ for (i = 0; i < th_zone->sensor_conf->trip_data.trip_count; i++) {
+ if (th_zone->therm_dev->last_temperature <
+ th_zone->sensor_conf->trip_data.trip_val[i] * MCELSIUS)
+ break;
+ }
+
+ if (th_zone->mode == THERMAL_DEVICE_ENABLED) {
+ if (i > 0)
+ th_zone->therm_dev->polling_delay = ACTIVE_INTERVAL;
+ else
+ th_zone->therm_dev->polling_delay = IDLE_INTERVAL;
+ }
+
+ snprintf(data, sizeof(data), "%u", i);
+ kobject_uevent_env(&th_zone->therm_dev->device.kobj, KOBJ_CHANGE, envp);
+ mutex_unlock(&th_zone->therm_dev->lock);
+}
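+
+/*
+ * Note: the uevent sent above carries only a bare decimal string with the
+ * index of the first trip point lying above the current temperature
+ * (e.g. "1" once the first trip has been crossed), so any userspace
+ * listener has to parse it accordingly.
+ */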
+
+/* Get trip type callback functions for thermal zone */
+static int exynos_get_trip_type(struct thermal_zone_device *thermal, int trip,
+ enum thermal_trip_type *type)
+{
+ switch (GET_ZONE(trip)) {
+ case MONITOR_ZONE:
+ case WARN_ZONE:
+ *type = THERMAL_TRIP_ACTIVE;
+ break;
+ case PANIC_ZONE:
+ *type = THERMAL_TRIP_CRITICAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+/* Get trip temperature callback functions for thermal zone */
+static int exynos_get_trip_temp(struct thermal_zone_device *thermal, int trip,
+ unsigned long *temp)
+{
+ if (trip < GET_TRIP(MONITOR_ZONE) || trip > GET_TRIP(PANIC_ZONE))
+ return -EINVAL;
+
+ *temp = th_zone->sensor_conf->trip_data.trip_val[trip];
+ /* convert the temperature into millicelsius */
+ *temp = *temp * MCELSIUS;
+
+ return 0;
+}
+
+/* Get critical temperature callback functions for thermal zone */
+static int exynos_get_crit_temp(struct thermal_zone_device *thermal,
+ unsigned long *temp)
+{
+ int ret;
+ /* Panic zone */
+ ret = exynos_get_trip_temp(thermal, GET_TRIP(PANIC_ZONE), temp);
+ return ret;
+}
+
+/* Bind callback functions for thermal zone */
+static int exynos_bind(struct thermal_zone_device *thermal,
+ struct thermal_cooling_device *cdev)
+{
+ int ret = 0, i;
+
+ /* find the cooling device registered */
+ for (i = 0; i < th_zone->cool_dev_size; i++)
+ if (cdev == th_zone->cool_dev[i])
+ break;
+
+ /* No matching cooling device */
+ if (i == th_zone->cool_dev_size)
+ return 0;
+
+ switch (GET_ZONE(i)) {
+ case MONITOR_ZONE:
+ case WARN_ZONE:
+ if (thermal_zone_bind_cooling_device(thermal, i, cdev)) {
+ pr_err("error binding cooling dev inst 0\n");
+ ret = -EINVAL;
+ }
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+/* Unbind callback functions for thermal zone */
+static int exynos_unbind(struct thermal_zone_device *thermal,
+ struct thermal_cooling_device *cdev)
+{
+ int ret = 0, i;
+
+ /* find the cooling device registered */
+ for (i = 0; i < th_zone->cool_dev_size; i++)
+ if (cdev == th_zone->cool_dev[i])
+ break;
+
+ /* No matching cooling device */
+ if (i == th_zone->cool_dev_size)
+ return 0;
+
+ switch (GET_ZONE(i)) {
+ case MONITOR_ZONE:
+ case WARN_ZONE:
+ if (thermal_zone_unbind_cooling_device(thermal, i, cdev)) {
+ pr_err("error unbinding cooling dev\n");
+ ret = -EINVAL;
+ }
+ break;
+ default:
+ ret = -EINVAL;
+ }
+ return ret;
+}
+
+/* Get temperature callback functions for thermal zone */
+static int exynos_get_temp(struct thermal_zone_device *thermal,
+ unsigned long *temp)
+{
+ void *data;
+
+ if (!th_zone->sensor_conf) {
+ pr_info("Temperature sensor not initialised\n");
+ return -EINVAL;
+ }
+ data = th_zone->sensor_conf->private_data;
+ *temp = th_zone->sensor_conf->read_temperature(data);
+ /* convert the temperature into millicelsius */
+ *temp = *temp * MCELSIUS;
+ return 0;
+}
+
+/* Operation callback functions for thermal zone */
+static struct thermal_zone_device_ops const exynos_dev_ops = {
+ .bind = exynos_bind,
+ .unbind = exynos_unbind,
+ .get_temp = exynos_get_temp,
+ .get_mode = exynos_get_mode,
+ .set_mode = exynos_set_mode,
+ .get_trip_type = exynos_get_trip_type,
+ .get_trip_temp = exynos_get_trip_temp,
+ .get_crit_temp = exynos_get_crit_temp,
+};
+
+/* Register with the in-kernel thermal management */
+static int exynos_register_thermal(struct thermal_sensor_conf *sensor_conf)
+{
+ int ret, count, tab_size;
+ struct freq_clip_table *tab_ptr, *clip_data;
+
+ if (!sensor_conf || !sensor_conf->read_temperature) {
+ pr_err("Temperature sensor not initialised\n");
+ return -EINVAL;
+ }
+
+ th_zone = kzalloc(sizeof(struct exynos_thermal_zone), GFP_KERNEL);
+ if (!th_zone)
+ return -ENOMEM;
+
+ th_zone->sensor_conf = sensor_conf;
+
+ tab_ptr = sensor_conf->cooling_data.freq_data;
+ tab_size = sensor_conf->cooling_data.freq_clip_count;
+
+ /* Register the cpufreq cooling device */
+ for (count = 0; count < tab_size; count++) {
+ clip_data = &tab_ptr[count];
+ clip_data->mask_val = cpumask_of(0);
+ th_zone->cool_dev[count] = cpufreq_cooling_register(
+ clip_data, 1);
+ if (IS_ERR(th_zone->cool_dev[count])) {
+ pr_err("Failed to register cpufreq cooling device\n");
+ ret = -EINVAL;
+ th_zone->cool_dev_size = count;
+ goto err_unregister;
+ }
+ }
+ th_zone->cool_dev_size = count;
+
+ th_zone->therm_dev = thermal_zone_device_register(sensor_conf->name,
+ EXYNOS_ZONE_COUNT, NULL, &exynos_dev_ops, 0, 0, 0,
+ IDLE_INTERVAL);
+
+ if (IS_ERR(th_zone->therm_dev)) {
+ pr_err("Failed to register thermal zone device\n");
+ ret = -EINVAL;
+ goto err_unregister;
+ }
+ th_zone->mode = THERMAL_DEVICE_ENABLED;
+
+ pr_info("Exynos: Kernel Thermal management registered\n");
+
+ return 0;
+
+err_unregister:
+ exynos_unregister_thermal();
+ return ret;
+}
+
+/* Unregister with the in-kernel thermal management */
+static void exynos_unregister_thermal(void)
+{
+ int i;
+
+ if (!th_zone)
+ return;
+
+ for (i = 0; i < th_zone->cool_dev_size; i++) {
+ if (th_zone->cool_dev[i])
+ cpufreq_cooling_unregister(th_zone->cool_dev[i]);
+ }
+
+ if (th_zone->therm_dev)
+ thermal_zone_device_unregister(th_zone->therm_dev);
+
+ kfree(th_zone);
+
+ pr_info("Exynos: Kernel Thermal management unregistered\n");
+}
+
+/*
+ * TMU treats temperature as a mapped temperature code.
+ * The temperature is converted differently depending on the calibration type.
+ */
+static int temp_to_code(struct exynos_tmu_data *data, u8 temp)
+{
+ struct exynos_tmu_platform_data *pdata = data->pdata;
+ int temp_code;
+
+ if (data->soc == SOC_ARCH_EXYNOS4) {
+ /* temp should range between 25 and 125 */
+ if (temp < 25 || temp > 125) {
+ temp_code = -EINVAL;
+ goto out;
+ }
+ }
+
+ switch (pdata->cal_type) {
+ case TYPE_TWO_POINT_TRIMMING:
+ temp_code = (temp - 25) *
+ (data->temp_error2 - data->temp_error1) /
+ (85 - 25) + data->temp_error1;
+ break;
+ case TYPE_ONE_POINT_TRIMMING:
+ temp_code = temp + data->temp_error1 - 25;
+ break;
+ default:
+ temp_code = temp + EXYNOS_TMU_DEF_CODE_TO_TEMP_OFFSET;
+ break;
+ }
+out:
+ return temp_code;
+}
+
+/*
+ * Calculate a temperature value from a temperature code.
+ * The unit of the temperature is degree Celsius.
+ */
+static int code_to_temp(struct exynos_tmu_data *data, u8 temp_code)
+{
+ struct exynos_tmu_platform_data *pdata = data->pdata;
+ int temp;
+
+ if (data->soc == SOC_ARCH_EXYNOS4) {
+ /* temp_code should range between 75 and 175 */
+ if (temp_code < 75 || temp_code > 175) {
+ temp = -ENODATA;
+ goto out;
+ }
+ }
+
+ switch (pdata->cal_type) {
+ case TYPE_TWO_POINT_TRIMMING:
+ temp = (temp_code - data->temp_error1) * (85 - 25) /
+ (data->temp_error2 - data->temp_error1) + 25;
+ break;
+ case TYPE_ONE_POINT_TRIMMING:
+ temp = temp_code - data->temp_error1 + 25;
+ break;
+ default:
+ temp = temp_code - EXYNOS_TMU_DEF_CODE_TO_TEMP_OFFSET;
+ break;
+ }
+out:
+ return temp;
+}
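+
+/*
+ * Worked example (one-point trimming, assuming temp_error1 == 55, the
+ * Exynos5 efuse default below): temp_to_code(55) = 55 + 55 - 25 = 85 and
+ * code_to_temp(85) = 85 - 55 + 25 = 55, so the two conversions are
+ * inverses of each other.
+ */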
+
+static int exynos_tmu_initialize(struct platform_device *pdev)
+{
+ struct exynos_tmu_data *data = platform_get_drvdata(pdev);
+ struct exynos_tmu_platform_data *pdata = data->pdata;
+ unsigned int status, trim_info, rising_threshold;
+ int ret = 0, threshold_code;
+
+ mutex_lock(&data->lock);
+ clk_enable(data->clk);
+
+ status = readb(data->base + EXYNOS_TMU_REG_STATUS);
+ if (!status) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ if (data->soc == SOC_ARCH_EXYNOS5) {
+ __raw_writel(EXYNOS5_TRIMINFO_RELOAD,
+ data->base + EXYNOS5_TMU_TRIMINFO_CON);
+ }
+ /* Save trimming info in order to perform calibration */
+ trim_info = readl(data->base + EXYNOS_TMU_REG_TRIMINFO);
+ data->temp_error1 = trim_info & EXYNOS_TMU_TRIM_TEMP_MASK;
+ data->temp_error2 = ((trim_info >> 8) & EXYNOS_TMU_TRIM_TEMP_MASK);
+
+ if ((EFUSE_MIN_VALUE > data->temp_error1) ||
+ (data->temp_error1 > EFUSE_MAX_VALUE) ||
+ (data->temp_error2 != 0))
+ data->temp_error1 = pdata->efuse_value;
+
+ if (data->soc == SOC_ARCH_EXYNOS4) {
+ /* Write temperature code for threshold */
+ threshold_code = temp_to_code(data, pdata->threshold);
+ if (threshold_code < 0) {
+ ret = threshold_code;
+ goto out;
+ }
+ writeb(threshold_code,
+ data->base + EXYNOS4_TMU_REG_THRESHOLD_TEMP);
+
+ writeb(pdata->trigger_levels[0],
+ data->base + EXYNOS4_TMU_REG_TRIG_LEVEL0);
+ writeb(pdata->trigger_levels[1],
+ data->base + EXYNOS4_TMU_REG_TRIG_LEVEL1);
+ writeb(pdata->trigger_levels[2],
+ data->base + EXYNOS4_TMU_REG_TRIG_LEVEL2);
+ writeb(pdata->trigger_levels[3],
+ data->base + EXYNOS4_TMU_REG_TRIG_LEVEL3);
+
+ writel(EXYNOS4_TMU_INTCLEAR_VAL,
+ data->base + EXYNOS_TMU_REG_INTCLEAR);
+ } else if (data->soc == SOC_ARCH_EXYNOS5) {
+ /* Write temperature code for threshold */
+ threshold_code = temp_to_code(data, pdata->trigger_levels[0]);
+ if (threshold_code < 0) {
+ ret = threshold_code;
+ goto out;
+ }
+ rising_threshold = threshold_code;
+ threshold_code = temp_to_code(data, pdata->trigger_levels[1]);
+ if (threshold_code < 0) {
+ ret = threshold_code;
+ goto out;
+ }
+ rising_threshold |= (threshold_code << 8);
+ threshold_code = temp_to_code(data, pdata->trigger_levels[2]);
+ if (threshold_code < 0) {
+ ret = threshold_code;
+ goto out;
+ }
+ rising_threshold |= (threshold_code << 16);
+
+ writel(rising_threshold,
+ data->base + EXYNOS5_THD_TEMP_RISE);
+ writel(0, data->base + EXYNOS5_THD_TEMP_FALL);
+
+ writel(EXYNOS5_TMU_CLEAR_RISE_INT|EXYNOS5_TMU_CLEAR_FALL_INT,
+ data->base + EXYNOS_TMU_REG_INTCLEAR);
+ }
+out:
+ clk_disable(data->clk);
+ mutex_unlock(&data->lock);
+
+ return ret;
+}
+
+static void exynos_tmu_control(struct platform_device *pdev, bool on)
+{
+ struct exynos_tmu_data *data = platform_get_drvdata(pdev);
+ struct exynos_tmu_platform_data *pdata = data->pdata;
+ unsigned int con, interrupt_en;
+
+ mutex_lock(&data->lock);
+ clk_enable(data->clk);
+
+ con = pdata->reference_voltage << EXYNOS_TMU_REF_VOLTAGE_SHIFT |
+ pdata->gain << EXYNOS_TMU_GAIN_SHIFT;
+
+ if (data->soc == SOC_ARCH_EXYNOS5) {
+ con |= pdata->noise_cancel_mode << EXYNOS5_TMU_TRIP_MODE_SHIFT;
+ con |= (EXYNOS5_MUX_ADDR_VALUE << EXYNOS5_MUX_ADDR_SHIFT);
+ }
+
+ if (on) {
+ con |= EXYNOS_TMU_CORE_ON;
+ interrupt_en = pdata->trigger_level3_en << 12 |
+ pdata->trigger_level2_en << 8 |
+ pdata->trigger_level1_en << 4 |
+ pdata->trigger_level0_en;
+ } else {
+ con |= EXYNOS_TMU_CORE_OFF;
+ interrupt_en = 0; /* Disable all interrupts */
+ }
+ writel(interrupt_en, data->base + EXYNOS_TMU_REG_INTEN);
+ writel(con, data->base + EXYNOS_TMU_REG_CONTROL);
+
+ clk_disable(data->clk);
+ mutex_unlock(&data->lock);
+}
+
+static int exynos_tmu_read(struct exynos_tmu_data *data)
+{
+ u8 temp_code;
+ int temp;
+
+ mutex_lock(&data->lock);
+ clk_enable(data->clk);
+
+ temp_code = readb(data->base + EXYNOS_TMU_REG_CURRENT_TEMP);
+ temp = code_to_temp(data, temp_code);
+
+ clk_disable(data->clk);
+ mutex_unlock(&data->lock);
+
+ return temp;
+}
+
+static void exynos_tmu_work(struct work_struct *work)
+{
+ struct exynos_tmu_data *data = container_of(work,
+ struct exynos_tmu_data, irq_work);
+
+ mutex_lock(&data->lock);
+ clk_enable(data->clk);
+
+
+ if (data->soc == SOC_ARCH_EXYNOS5)
+ writel(EXYNOS5_TMU_CLEAR_RISE_INT,
+ data->base + EXYNOS_TMU_REG_INTCLEAR);
+ else
+ writel(EXYNOS4_TMU_INTCLEAR_VAL,
+ data->base + EXYNOS_TMU_REG_INTCLEAR);
+
+ clk_disable(data->clk);
+ mutex_unlock(&data->lock);
+ exynos_report_trigger();
+ enable_irq(data->irq);
+}
+
+static irqreturn_t exynos_tmu_irq(int irq, void *id)
+{
+ struct exynos_tmu_data *data = id;
+
+ disable_irq_nosync(irq);
+ schedule_work(&data->irq_work);
+
+ return IRQ_HANDLED;
+}
+
+static struct thermal_sensor_conf exynos_sensor_conf = {
+ .name = "exynos-therm",
+ .read_temperature = (int (*)(void *))exynos_tmu_read,
+};
+
+#if defined(CONFIG_CPU_EXYNOS4210)
+static struct exynos_tmu_platform_data const exynos4_default_tmu_data = {
+ .threshold = 80,
+ .trigger_levels[0] = 5,
+ .trigger_levels[1] = 20,
+ .trigger_levels[2] = 30,
+ .trigger_level0_en = 1,
+ .trigger_level1_en = 1,
+ .trigger_level2_en = 1,
+ .trigger_level3_en = 0,
+ .gain = 15,
+ .reference_voltage = 7,
+ .cal_type = TYPE_ONE_POINT_TRIMMING,
+ .freq_tab[0] = {
+ .freq_clip_max = 800 * 1000,
+ .temp_level = 85,
+ },
+ .freq_tab[1] = {
+ .freq_clip_max = 200 * 1000,
+ .temp_level = 100,
+ },
+ .freq_tab_count = 2,
+ .type = SOC_ARCH_EXYNOS4,
+};
+#define EXYNOS4_TMU_DRV_DATA (&exynos4_default_tmu_data)
+#else
+#define EXYNOS4_TMU_DRV_DATA (NULL)
+#endif
+
+#if defined(CONFIG_SOC_EXYNOS5250)
+static struct exynos_tmu_platform_data const exynos5_default_tmu_data = {
+ .trigger_levels[0] = 85,
+ .trigger_levels[1] = 103,
+ .trigger_levels[2] = 110,
+ .trigger_level0_en = 1,
+ .trigger_level1_en = 1,
+ .trigger_level2_en = 1,
+ .trigger_level3_en = 0,
+ .gain = 8,
+ .reference_voltage = 16,
+ .noise_cancel_mode = 4,
+ .cal_type = TYPE_ONE_POINT_TRIMMING,
+ .efuse_value = 55,
+ .freq_tab[0] = {
+ .freq_clip_max = 800 * 1000,
+ .temp_level = 85,
+ },
+ .freq_tab[1] = {
+ .freq_clip_max = 200 * 1000,
+ .temp_level = 103,
+ },
+ .freq_tab_count = 2,
+ .type = SOC_ARCH_EXYNOS5,
+};
+#define EXYNOS5_TMU_DRV_DATA (&exynos5_default_tmu_data)
+#else
+#define EXYNOS5_TMU_DRV_DATA (NULL)
+#endif
+
+#ifdef CONFIG_OF
+static const struct of_device_id exynos_tmu_match[] = {
+ {
+ .compatible = "samsung,exynos4-tmu",
+ .data = (void *)EXYNOS4_TMU_DRV_DATA,
+ },
+ {
+ .compatible = "samsung,exynos5-tmu",
+ .data = (void *)EXYNOS5_TMU_DRV_DATA,
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(of, exynos_tmu_match);
+#else
+#define exynos_tmu_match NULL
+#endif
+
+static struct platform_device_id exynos_tmu_driver_ids[] = {
+ {
+ .name = "exynos4-tmu",
+ .driver_data = (kernel_ulong_t)EXYNOS4_TMU_DRV_DATA,
+ },
+ {
+ .name = "exynos5-tmu",
+ .driver_data = (kernel_ulong_t)EXYNOS5_TMU_DRV_DATA,
+ },
+ { },
+};
+MODULE_DEVICE_TABLE(platform, exynos_tmu_driver_ids);
+
+static inline struct exynos_tmu_platform_data *exynos_get_driver_data(
+ struct platform_device *pdev)
+{
+#ifdef CONFIG_OF
+ if (pdev->dev.of_node) {
+ const struct of_device_id *match;
+ match = of_match_node(exynos_tmu_match, pdev->dev.of_node);
+ if (!match)
+ return NULL;
+ return (struct exynos_tmu_platform_data *) match->data;
+ }
+#endif
+ return (struct exynos_tmu_platform_data *)
+ platform_get_device_id(pdev)->driver_data;
+}
+
+static int __devinit exynos_tmu_probe(struct platform_device *pdev)
+{
+ struct exynos_tmu_data *data;
+ struct exynos_tmu_platform_data *pdata = pdev->dev.platform_data;
+ int ret, i;
+
+ if (!pdata)
+ pdata = exynos_get_driver_data(pdev);
+
+ if (!pdata) {
+ dev_err(&pdev->dev, "No platform init data supplied.\n");
+ return -ENODEV;
+ }
+ data = kzalloc(sizeof(struct exynos_tmu_data), GFP_KERNEL);
+ if (!data) {
+ dev_err(&pdev->dev, "Failed to allocate driver structure\n");
+ return -ENOMEM;
+ }
+
+ data->irq = platform_get_irq(pdev, 0);
+ if (data->irq < 0) {
+ ret = data->irq;
+ dev_err(&pdev->dev, "Failed to get platform irq\n");
+ goto err_free;
+ }
+
+ INIT_WORK(&data->irq_work, exynos_tmu_work);
+
+ data->mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!data->mem) {
+ ret = -ENOENT;
+ dev_err(&pdev->dev, "Failed to get platform resource\n");
+ goto err_free;
+ }
+
+ data->mem = request_mem_region(data->mem->start,
+ resource_size(data->mem), pdev->name);
+ if (!data->mem) {
+ ret = -ENODEV;
+ dev_err(&pdev->dev, "Failed to request memory region\n");
+ goto err_free;
+ }
+
+ data->base = ioremap(data->mem->start, resource_size(data->mem));
+ if (!data->base) {
+ ret = -ENODEV;
+ dev_err(&pdev->dev, "Failed to ioremap memory\n");
+ goto err_mem_region;
+ }
+
+ ret = request_irq(data->irq, exynos_tmu_irq,
+ IRQF_TRIGGER_RISING, "exynos-tmu", data);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to request irq: %d\n", data->irq);
+ goto err_io_remap;
+ }
+
+ data->clk = clk_get(NULL, "tmu_apbif");
+ if (IS_ERR(data->clk)) {
+ ret = PTR_ERR(data->clk);
+ dev_err(&pdev->dev, "Failed to get clock\n");
+ goto err_irq;
+ }
+
+ if (pdata->type == SOC_ARCH_EXYNOS5 ||
+ pdata->type == SOC_ARCH_EXYNOS4)
+ data->soc = pdata->type;
+ else {
+ ret = -EINVAL;
+ dev_err(&pdev->dev, "Platform not supported\n");
+ goto err_clk;
+ }
+
+ data->pdata = pdata;
+ platform_set_drvdata(pdev, data);
+ mutex_init(&data->lock);
+
+ ret = exynos_tmu_initialize(pdev);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to initialize TMU\n");
+ goto err_clk;
+ }
+
+ exynos_tmu_control(pdev, true);
+
+ /* Register the sensor with the thermal management interface */
+ exynos_sensor_conf.private_data = data;
+ exynos_sensor_conf.trip_data.trip_count = pdata->trigger_level0_en +
+ pdata->trigger_level1_en + pdata->trigger_level2_en +
+ pdata->trigger_level3_en;
+
+ for (i = 0; i < exynos_sensor_conf.trip_data.trip_count; i++)
+ exynos_sensor_conf.trip_data.trip_val[i] =
+ pdata->threshold + pdata->trigger_levels[i];
+
+ exynos_sensor_conf.cooling_data.freq_clip_count =
+ pdata->freq_tab_count;
+ for (i = 0; i < pdata->freq_tab_count; i++) {
+ exynos_sensor_conf.cooling_data.freq_data[i].freq_clip_max =
+ pdata->freq_tab[i].freq_clip_max;
+ exynos_sensor_conf.cooling_data.freq_data[i].temp_level =
+ pdata->freq_tab[i].temp_level;
+ }
+
+ ret = exynos_register_thermal(&exynos_sensor_conf);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to register thermal interface\n");
+ goto err_clk;
+ }
+ return 0;
+err_clk:
+ platform_set_drvdata(pdev, NULL);
+ clk_put(data->clk);
+err_irq:
+ free_irq(data->irq, data);
+err_io_remap:
+ iounmap(data->base);
+err_mem_region:
+ release_mem_region(data->mem->start, resource_size(data->mem));
+err_free:
+ kfree(data);
+
+ return ret;
+}
+
+static int __devexit exynos_tmu_remove(struct platform_device *pdev)
+{
+ struct exynos_tmu_data *data = platform_get_drvdata(pdev);
+
+ exynos_tmu_control(pdev, false);
+
+ exynos_unregister_thermal();
+
+ clk_put(data->clk);
+
+ free_irq(data->irq, data);
+
+ iounmap(data->base);
+ release_mem_region(data->mem->start, resource_size(data->mem));
+
+ platform_set_drvdata(pdev, NULL);
+
+ kfree(data);
+
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static int exynos_tmu_suspend(struct platform_device *pdev, pm_message_t state)
+{
+ exynos_tmu_control(pdev, false);
+
+ return 0;
+}
+
+static int exynos_tmu_resume(struct platform_device *pdev)
+{
+ exynos_tmu_initialize(pdev);
+ exynos_tmu_control(pdev, true);
+
+ return 0;
+}
+#else
+#define exynos_tmu_suspend NULL
+#define exynos_tmu_resume NULL
+#endif
+
+static struct platform_driver exynos_tmu_driver = {
+ .driver = {
+ .name = "exynos-tmu",
+ .owner = THIS_MODULE,
+ .of_match_table = exynos_tmu_match,
+ },
+ .probe = exynos_tmu_probe,
+ .remove = __devexit_p(exynos_tmu_remove),
+ .suspend = exynos_tmu_suspend,
+ .resume = exynos_tmu_resume,
+ .id_table = exynos_tmu_driver_ids,
+};
+
+module_platform_driver(exynos_tmu_driver);
+
+MODULE_DESCRIPTION("EXYNOS TMU Driver");
+MODULE_AUTHOR("Donggeun Kim <dg77.kim@samsung.com>");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS("platform:exynos-tmu");
wr_regl(port, S3C2410_UFCON, cfg->ufcon | S3C2410_UFCON_RESETBOTH);
wr_regl(port, S3C2410_UFCON, cfg->ufcon);
+ wr_regl(port, S3C64XX_UINTM, 0xf);
+ wr_regl(port, S3C64XX_UINTP, 0xf);
+
/* some delay is required after fifo reset */
udelay(1);
}
return ret;
}
- mode = DWC3_MODE(dwc->hwparams.hwparams0);
+ /* Putting controller in Host mode here */
+ mode = DWC3_MODE_HOST; /* just a hack for the time being */
switch (mode) {
case DWC3_MODE_DEVICE:
#include <linux/dma-mapping.h>
#include <linux/module.h>
#include <linux/clk.h>
+#include <linux/of.h>
+#include <linux/of_gpio.h>
#include "core.h"
struct clk *clk;
};
+static int dwc3_setup_vbus_gpio(struct platform_device *pdev)
+{
+ int err = 0;
+ int gpio;
+
+ if (!pdev->dev.of_node)
+ return 0;
+
+ gpio = of_get_named_gpio(pdev->dev.of_node,
+ "samsung,vbus-gpio", 0);
+ if (!gpio_is_valid(gpio))
+ return 0;
+
+ err = gpio_request(gpio, "dwc3_vbus_gpio");
+ if (err) {
+ dev_err(&pdev->dev, "can't request dwc3 vbus gpio %d", gpio);
+ return err;
+ }
+ gpio_set_value(gpio, 1);
+
+ return err;
+}
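+
+/*
+ * Illustrative device-tree fragment (the "samsung,vbus-gpio" property name
+ * comes from the lookup above; the node name and GPIO specifier are
+ * made-up examples):
+ *
+ * dwc3 {
+ * samsung,vbus-gpio = <&gpx2 7 0>;
+ * };
+ */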
+
+static u64 dwc3_exynos_dma_mask = DMA_BIT_MASK(32);
+
static int __devinit dwc3_exynos_probe(struct platform_device *pdev)
{
struct dwc3_exynos_data *pdata = pdev->dev.platform_data;
goto err0;
}
+ if (!pdev->dev.dma_mask)
+ pdev->dev.dma_mask = &dwc3_exynos_dma_mask;
+
+ if (!pdev->dev.coherent_dma_mask)
+ pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+
+ dwc3_setup_vbus_gpio(pdev);
+
platform_set_drvdata(pdev, exynos);
devid = dwc3_get_device_id();
return 0;
}
+#ifdef CONFIG_OF
+static const struct of_device_id exynos_xhci_match[] = {
+ { .compatible = "samsung,exynos-xhci" },
+ {},
+};
+MODULE_DEVICE_TABLE(of, exynos_xhci_match);
+#endif
+
static struct platform_driver dwc3_exynos_driver = {
.probe = dwc3_exynos_probe,
.remove = __devexit_p(dwc3_exynos_remove),
.driver = {
.name = "exynos-dwc3",
+ .of_match_table = of_match_ptr(exynos_xhci_match),
},
};
*/
#include <linux/clk.h>
+#include <linux/of.h>
#include <linux/platform_device.h>
+#include <linux/of_gpio.h>
#include <plat/ehci.h>
#include <plat/usb-phy.h>
.clear_tt_buffer_complete = ehci_clear_tt_buffer_complete,
};
+static int s5p_ehci_setup_gpio(struct platform_device *pdev)
+{
+ int err = 0;
+ int gpio;
+
+ if (!pdev->dev.of_node)
+ return 0;
+
+ gpio = of_get_named_gpio(pdev->dev.of_node,
+ "samsung,vbus-gpio", 0);
+ if (!gpio_is_valid(gpio))
+ return 0;
+
+ err = gpio_request(gpio, "ehci_vbus_gpio");
+ if (err) {
+ dev_err(&pdev->dev, "can't request ehci vbus gpio %d", gpio);
+ return err;
+ }
+ gpio_set_value(gpio, 1);
+
+ return err;
+}
+
+static u64 ehci_s5p_dma_mask = DMA_BIT_MASK(32);
static int __devinit s5p_ehci_probe(struct platform_device *pdev)
{
struct s5p_ehci_platdata *pdata;
return -EINVAL;
}
+ pdev->dev.dma_mask = &ehci_s5p_dma_mask;
+ pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+
+ s5p_ehci_setup_gpio(pdev);
+
s5p_ehci = kzalloc(sizeof(struct s5p_ehci_hcd), GFP_KERNEL);
if (!s5p_ehci)
return -ENOMEM;
#define s5p_ehci_resume NULL
#endif
+#ifdef CONFIG_OF
+static const struct of_device_id exynos_ehci_match[] = {
+ { .compatible = "samsung,exynos-ehci" },
+ {},
+};
+MODULE_DEVICE_TABLE(of, exynos_ehci_match);
+#endif
+
static const struct dev_pm_ops s5p_ehci_pm_ops = {
.suspend = s5p_ehci_suspend,
.resume = s5p_ehci_resume,
.name = "s5p-ehci",
.owner = THIS_MODULE,
.pm = &s5p_ehci_pm_ops,
+ .of_match_table = of_match_ptr(exynos_ehci_match),
}
};
*/
#include <linux/clk.h>
+#include <linux/of.h>
#include <linux/platform_device.h>
#include <mach/ohci.h>
#include <plat/usb-phy.h>
.start_port_reset = ohci_start_port_reset,
};
+static u64 ohci_exynos_dma_mask = DMA_BIT_MASK(32);
static int __devinit exynos_ohci_probe(struct platform_device *pdev)
{
struct exynos4_ohci_platdata *pdata;
return -EINVAL;
}
+ pdev->dev.dma_mask = &ohci_exynos_dma_mask;
+ pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+
exynos_ohci = kzalloc(sizeof(struct exynos_ohci_hcd), GFP_KERNEL);
if (!exynos_ohci)
return -ENOMEM;
.resume = exynos_ohci_resume,
};
+#ifdef CONFIG_OF
+static const struct of_device_id exynos_ohci_match[] = {
+ { .compatible = "samsung,exynos-ohci" },
+ {},
+};
+MODULE_DEVICE_TABLE(of, exynos_ohci_match);
+#endif
+
static struct platform_driver exynos_ohci_driver = {
.probe = exynos_ohci_probe,
.remove = __devexit_p(exynos_ohci_remove),
.name = "exynos-ohci",
.owner = THIS_MODULE,
.pm = &exynos_ohci_pm_ops,
+ .of_match_table = of_match_ptr(exynos_ohci_match),
}
};
hcd->rsrc_start = res->start;
hcd->rsrc_len = resource_size(res);
- if (!request_mem_region(hcd->rsrc_start, hcd->rsrc_len,
- driver->description)) {
- dev_dbg(&pdev->dev, "controller already in use\n");
- ret = -EBUSY;
- goto put_hcd;
- }
+ /*
+ * Hack: request_mem_region is removed here to make the
+ * DWC3 host work on exynos5.
+ */
hcd->regs = ioremap(hcd->rsrc_start, hcd->rsrc_len);
if (!hcd->regs) {
dev_dbg(&pdev->dev, "error mapping memory\n");
ret = -EFAULT;
- goto release_mem_region;
+ goto put_hcd;
}
ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
unmap_registers:
iounmap(hcd->regs);
-release_mem_region:
- release_mem_region(hcd->rsrc_start, hcd->rsrc_len);
-
put_hcd:
usb_put_hcd(hcd);
config FB_S3C
tristate "Samsung S3C framebuffer support"
- depends on FB && (S3C_DEV_FB || S5P_DEV_FIMD0)
+ depends on FB
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
Currently the support is only for the S3C6400 and S3C6410 SoCs.
+config FB_EXYNOS_FIMD_V8
+ bool "register extensions for FIMD version 8"
+ depends on ARCH_EXYNOS5
+ ---help---
+ This enables the register extensions used by FIMD version 8.
config FB_S3C_DEBUG_REGWRITE
bool "Debug register writes"
depends on FB_S3C
Turn on debugging messages. Note that you can set/unset at run time
through sysfs
+config FB_MIPI_DSIM
+ bool "Samsung MIPI DSIM"
+ depends on FB_S3C || DRM_EXYNOS_FIMD
+ default n
+ ---help---
+ This enables support for the Samsung MIPI DSIM feature.
+
config FB_NUC900
bool "NUC900 LCD framebuffer support"
depends on FB && ARCH_W90X900
obj-$(CONFIG_FB_S1D13XXX) += s1d13xxxfb.o
obj-$(CONFIG_FB_SH7760) += sh7760fb.o
obj-$(CONFIG_FB_IMX) += imxfb.o
+obj-$(CONFIG_FB_MIPI_DSIM) += s5p_mipi_dsi.o s5p_mipi_dsi_lowlevel.o
obj-$(CONFIG_FB_S3C) += s3c-fb.o
obj-$(CONFIG_FB_S3C2410) += s3c2410fb.o
obj-$(CONFIG_FB_FSL_DIU) += fsl-diu-fb.o
If you have an S6E63M0 LCD Panel, say Y to enable its
LCD control driver.
+config LCD_MIPI_TC358764
+ tristate "1280 X 800 TC358764 AMOLED MIPI LCD Driver"
+ depends on BACKLIGHT_CLASS_DEVICE
+ default n
+ help
+ If you have a TC358764 MIPI LCD Panel, say Y to enable its
+ LCD control driver.
+
config LCD_LD9040
tristate "LD9040 AMOLED LCD Driver"
depends on SPI && BACKLIGHT_CLASS_DEVICE
obj-$(CONFIG_LCD_TDO24M) += tdo24m.o
obj-$(CONFIG_LCD_TOSA) += tosa_lcd.o
obj-$(CONFIG_LCD_S6E63M0) += s6e63m0.o
+obj-$(CONFIG_LCD_MIPI_TC358764) += tc358764_mipi_lcd.o
obj-$(CONFIG_LCD_LD9040) += ld9040.o
obj-$(CONFIG_LCD_AMS369FG06) += ams369fg06.o
--- /dev/null
+/* linux/drivers/video/backlight/tc358764_mipi_lcd.c
+ *
+ *
+ * Copyright (c) 2011 Samsung Electronics
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ *
+*/
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/mutex.h>
+#include <linux/wait.h>
+#include <linux/ctype.h>
+#include <linux/io.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/interrupt.h>
+#include <linux/gpio.h>
+#include <linux/workqueue.h>
+#include <linux/backlight.h>
+#include <linux/lcd.h>
+
+#include <video/mipi_display.h>
+
+#include <plat/gpio-cfg.h>
+#include <plat/regs-mipidsim.h>
+
+#include <plat/dsim.h>
+#include <plat/mipi_dsi.h>
+
+static int init_lcd(struct mipi_dsim_device *dsim)
+{
+ unsigned char initcode_013c[6] = {0x3c, 0x01, 0x03, 0x00, 0x02, 0x00};
+ unsigned char initcode_0114[6] = {0x14, 0x01, 0x02, 0x00, 0x00, 0x00};
+ unsigned char initcode_0164[6] = {0x64, 0x01, 0x05, 0x00, 0x00, 0x00};
+ unsigned char initcode_0168[6] = {0x68, 0x01, 0x05, 0x00, 0x00, 0x00};
+ unsigned char initcode_016c[6] = {0x6c, 0x01, 0x05, 0x00, 0x00, 0x00};
+ unsigned char initcode_0170[6] = {0x70, 0x01, 0x05, 0x00, 0x00, 0x00};
+ unsigned char initcode_0134[6] = {0x34, 0x01, 0x1f, 0x00, 0x00, 0x00};
+ unsigned char initcode_0210[6] = {0x10, 0x02, 0x1f, 0x00, 0x00, 0x00};
+ unsigned char initcode_0104[6] = {0x04, 0x01, 0x01, 0x00, 0x00, 0x00};
+ unsigned char initcode_0204[6] = {0x04, 0x02, 0x01, 0x00, 0x00, 0x00};
+ unsigned char initcode_0450[6] = {0x50, 0x04, 0x20, 0x01, 0xfa, 0x00};
+ unsigned char initcode_0454[6] = {0x54, 0x04, 0x20, 0x00, 0x50, 0x00};
+ unsigned char initcode_0458[6] = {0x58, 0x04, 0x00, 0x05, 0x30, 0x00};
+ unsigned char initcode_045c[6] = {0x5c, 0x04, 0x05, 0x00, 0x0a, 0x00};
+ unsigned char initcode_0460[6] = {0x60, 0x04, 0x20, 0x03, 0x0a, 0x00};
+ unsigned char initcode_0464[6] = {0x64, 0x04, 0x01, 0x00, 0x00, 0x00};
+ unsigned char initcode_04a0_1[6] = {0xa0, 0x04, 0x06, 0x80, 0x44, 0x00};
+ unsigned char initcode_04a0_2[6] = {0xa0, 0x04, 0x06, 0x80, 0x04, 0x00};
+ unsigned char initcode_0504[6] = {0x04, 0x05, 0x04, 0x00, 0x00, 0x00};
+ unsigned char initcode_049c[6] = {0x9c, 0x04, 0x0d, 0x00, 0x00, 0x00};
+
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_013c, sizeof(initcode_013c)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0114, sizeof(initcode_0114)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0164, sizeof(initcode_0164)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0168, sizeof(initcode_0168)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_016c, sizeof(initcode_016c)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0170, sizeof(initcode_0170)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0134, sizeof(initcode_0134)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0210, sizeof(initcode_0210)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0104, sizeof(initcode_0104)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0204, sizeof(initcode_0204)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0450, sizeof(initcode_0450)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0454, sizeof(initcode_0454)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0458, sizeof(initcode_0458)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_045c, sizeof(initcode_045c)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0460, sizeof(initcode_0460)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0464, sizeof(initcode_0464)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_04a0_1, sizeof(initcode_04a0_1)) == -1)
+ return 0;
+ mdelay(12);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_04a0_2, sizeof(initcode_04a0_2)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_0504, sizeof(initcode_0504)) == -1)
+ return 0;
+ mdelay(6);
+ if (s5p_mipi_dsi_wr_data(dsim, MIPI_DSI_GENERIC_LONG_WRITE,
+ (unsigned int) initcode_049c, sizeof(initcode_049c)) == -1)
+ return 0;
+ mdelay(800);
+
+ return 1;
+}
+
+void tc358764_mipi_lcd_off(struct mipi_dsim_device *dsim)
+{
+ mdelay(1);
+}
+
+static int tc358764_mipi_lcd_bl_update_status(struct backlight_device *bd)
+{
+ return 0;
+}
+
+static const struct backlight_ops tc358764_mipi_lcd_bl_ops = {
+ .update_status = tc358764_mipi_lcd_bl_update_status,
+};
+
+static int tc358764_mipi_lcd_probe(struct mipi_dsim_device *dsim)
+{
+ struct mipi_dsim_device *dsim_drv = dsim;
+ struct backlight_device *bd = NULL;
+ struct backlight_properties props = {};
+
+ props.max_brightness = 1;
+ props.type = BACKLIGHT_PLATFORM;
+
+ bd = backlight_device_register("pwm-backlight",
+ dsim_drv->dev, dsim_drv, &tc358764_mipi_lcd_bl_ops, &props);
+
+ return 0;
+}
+
+static int tc358764_mipi_lcd_suspend(struct mipi_dsim_device *dsim)
+{
+ tc358764_mipi_lcd_off(dsim);
+ return 0;
+}
+
+static int tc358764_mipi_lcd_displayon(struct mipi_dsim_device *dsim)
+{
+ return init_lcd(dsim);
+}
+
+static int tc358764_mipi_lcd_resume(struct mipi_dsim_device *dsim)
+{
+ return init_lcd(dsim);
+}
+
+struct mipi_dsim_lcd_driver tc358764_mipi_lcd_driver = {
+ .probe = tc358764_mipi_lcd_probe,
+ .suspend = tc358764_mipi_lcd_suspend,
+ .displayon = tc358764_mipi_lcd_displayon,
+ .resume = tc358764_mipi_lcd_resume,
+};
#include <linux/io.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
+#include <linux/of.h>
#include <video/exynos_dp.h>
/* SW defined function Normal operation */
exynos_dp_enable_sw_function(dp);
- exynos_dp_config_interrupt(dp);
exynos_dp_init_analog_func(dp);
exynos_dp_init_hpd(dp);
struct exynos_dp_device *dp,
int lane)
{
- u32 reg;
+ u32 reg = 0;
switch (lane) {
case 0:
int lane_count;
u8 buf[5];
- u8 *adjust_request;
- u8 voltage_swing;
+ u8 adjust_request[2];
+ u8 voltage_swing = 0;
u8 pre_emphasis;
u8 training_lane;
/* set training pattern 2 for EQ */
exynos_dp_set_training_pattern(dp, TRAINING_PTN2);
- adjust_request = link_status + (DPCD_ADDR_ADJUST_REQUEST_LANE0_1
- - DPCD_ADDR_LANE0_1_STATUS);
+ adjust_request[0] = link_status[4];
+ adjust_request[1] = link_status[5];
exynos_dp_get_adjust_train(dp, adjust_request);
u8 buf[5];
u32 reg;
- u8 *adjust_request;
+ u8 adjust_request[2];
udelay(400);
lane_count = dp->link_train.lane_count;
if (exynos_dp_clock_recovery_ok(link_status, lane_count) == 0) {
- adjust_request = link_status + (DPCD_ADDR_ADJUST_REQUEST_LANE0_1
- - DPCD_ADDR_LANE0_1_STATUS);
+ adjust_request[0] = link_status[4];
+ adjust_request[1] = link_status[5];
if (exynos_dp_channel_eq_ok(link_status, lane_count) == 0) {
/* training pattern set to Normal */
return retval;
}
+static int exynos_dp_set_hw_link_train(struct exynos_dp_device *dp,
+ u32 max_lane,
+ u32 max_rate)
+{
+ u32 status;
+ int lane;
+
+ exynos_dp_stop_video(dp);
+
+ if (exynos_dp_get_pll_lock_status(dp) == PLL_UNLOCKED) {
+ dev_err(dp->dev, "PLL is not locked yet.\n");
+ return -EINVAL;
+ }
+
+ exynos_dp_reset_macro(dp);
+
+ /* Set TX pre-emphasis to minimum */
+ for (lane = 0; lane < max_lane; lane++)
+ exynos_dp_set_lane_lane_pre_emphasis(dp,
+ PRE_EMPHASIS_LEVEL_0, lane);
+
+ /* All DP analog module power up */
+ exynos_dp_set_analog_power_down(dp, POWER_ALL, 0);
+
+ /* Initialize by reading RX's DPCD */
+ exynos_dp_get_max_rx_bandwidth(dp, &dp->link_train.link_rate);
+ exynos_dp_get_max_rx_lane_count(dp, &dp->link_train.lane_count);
+
+ if ((dp->link_train.link_rate != LINK_RATE_1_62GBPS) &&
+ (dp->link_train.link_rate != LINK_RATE_2_70GBPS)) {
+ dev_err(dp->dev, "Rx Max Link Rate is abnormal :%x !\n",
+ dp->link_train.link_rate);
+ dp->link_train.link_rate = LINK_RATE_1_62GBPS;
+ }
+
+ if (dp->link_train.lane_count == 0) {
+ dev_err(dp->dev, "Rx Max Lane count is abnormal :%x !\n",
+ dp->link_train.lane_count);
+ dp->link_train.lane_count = (u8)LANE_COUNT1;
+ }
+
+ /* Setup TX lane count & rate */
+ if (dp->link_train.lane_count > max_lane)
+ dp->link_train.lane_count = max_lane;
+ if (dp->link_train.link_rate > max_rate)
+ dp->link_train.link_rate = max_rate;
+
+ /* Set link rate and count as you want to establish*/
+ exynos_dp_set_lane_count(dp, dp->video_info->lane_count);
+ exynos_dp_set_link_bandwidth(dp, dp->video_info->link_rate);
+
+ /* Set sink to D0 (Sink Not Ready) mode. */
+ exynos_dp_write_byte_to_dpcd(dp, DPCD_ADDR_SINK_POWER_STATE,
+ DPCD_SET_POWER_STATE_D0);
+
+ /* Enable H/W Link Training */
+ status = exynos_dp_enable_hw_link_training(dp);
+
+ if (status != 0) {
+ dev_err(dp->dev, " H/W link training failure: 0x%x\n", status);
+ return -EINVAL;
+ }
+
+ exynos_dp_get_link_bandwidth(dp, &status);
+ dp->link_train.link_rate = status;
+ dev_dbg(dp->dev, "final bandwidth = %.2x\n",
+ dp->link_train.link_rate);
+
+ exynos_dp_get_lane_count(dp, &status);
+ dp->link_train.lane_count = status;
+ dev_dbg(dp->dev, "final lane count = %.2x\n",
+ dp->link_train.lane_count);
+
+ return 0;
+}
+
static int exynos_dp_set_link_train(struct exynos_dp_device *dp,
u32 count,
u32 bwtype)
goto err_ioremap;
}
+ dp->video_info = pdata->video_info;
+ if (pdata->phy_init)
+ pdata->phy_init();
+
+ exynos_dp_init_dp(dp);
+
ret = request_irq(dp->irq, exynos_dp_irq_handler, 0,
"exynos-dp", dp);
if (ret) {
goto err_ioremap;
}
- dp->video_info = pdata->video_info;
- if (pdata->phy_init)
- pdata->phy_init();
-
- exynos_dp_init_dp(dp);
-
ret = exynos_dp_detect_hpd(dp);
if (ret) {
dev_err(&pdev->dev, "unable to detect hpd\n");
exynos_dp_handle_edid(dp);
- ret = exynos_dp_set_link_train(dp, dp->video_info->lane_count,
- dp->video_info->link_rate);
+ if (pdata->training_type == SW_LINK_TRAINING)
+ ret = exynos_dp_set_link_train(dp, dp->video_info->lane_count,
+ dp->video_info->link_rate);
+ else
+ ret = exynos_dp_set_hw_link_train(dp,
+ dp->video_info->lane_count, dp->video_info->link_rate);
if (ret) {
dev_err(&pdev->dev, "unable to do link train\n");
goto err_irq;
exynos_dp_detect_hpd(dp);
exynos_dp_handle_edid(dp);
- exynos_dp_set_link_train(dp, dp->video_info->lane_count,
- dp->video_info->link_rate);
+ if (pdata->training_type == SW_LINK_TRAINING)
+ exynos_dp_set_link_train(dp, dp->video_info->lane_count,
+ dp->video_info->link_rate);
+ else
+ exynos_dp_set_hw_link_train(dp,
+ dp->video_info->lane_count, dp->video_info->link_rate);
exynos_dp_enable_scramble(dp, 1);
exynos_dp_enable_rx_to_enhanced_mode(dp, 1);
SET_SYSTEM_SLEEP_PM_OPS(exynos_dp_suspend, exynos_dp_resume)
};
+#ifdef CONFIG_OF
+static const struct of_device_id exynos_dp_match[] = {
+ { .compatible = "samsung,exynos5-dp" },
+ {},
+};
+MODULE_DEVICE_TABLE(of, exynos_dp_match);
+#endif
+
static struct platform_driver exynos_dp_driver = {
.probe = exynos_dp_probe,
.remove = __devexit_p(exynos_dp_remove),
.driver = {
- .name = "exynos-dp",
+ .name = "s5p-dp",
.owner = THIS_MODULE,
.pm = &exynos_dp_pm_ops,
+ .of_match_table = of_match_ptr(exynos_dp_match),
},
};
-module_platform_driver(exynos_dp_driver);
+static int __init exynos_dp_init(void)
+{
+ return platform_driver_probe(&exynos_dp_driver, exynos_dp_probe);
+}
+
+static void __exit exynos_dp_exit(void)
+{
+ platform_driver_unregister(&exynos_dp_driver);
+}
+/*
+ * TODO: register this driver with module_platform_driver. It is
+ * currently a late_initcall to make sure that s3c-fb is probed
+ * before the DP driver.
+ */
+late_initcall(exynos_dp_init);
+module_exit(exynos_dp_exit);
MODULE_AUTHOR("Jingoo Han <jg1.han@samsung.com>");
MODULE_DESCRIPTION("Samsung SoC DP Driver");
void exynos_dp_lane_swap(struct exynos_dp_device *dp, bool enable);
void exynos_dp_init_interrupt(struct exynos_dp_device *dp);
void exynos_dp_reset(struct exynos_dp_device *dp);
-void exynos_dp_config_interrupt(struct exynos_dp_device *dp);
u32 exynos_dp_get_pll_lock_status(struct exynos_dp_device *dp);
void exynos_dp_set_pll_power_down(struct exynos_dp_device *dp, bool enable);
void exynos_dp_set_analog_power_down(struct exynos_dp_device *dp,
void exynos_dp_enable_scrambling(struct exynos_dp_device *dp);
void exynos_dp_disable_scrambling(struct exynos_dp_device *dp);
+u32 exynos_dp_enable_hw_link_training(struct exynos_dp_device *dp);
+
/* I2C EDID Chip ID, Slave Address */
#define I2C_EDID_DEVICE_ADDR 0x50
#define I2C_E_EDID_DEVICE_ADDR 0x30
#include <linux/device.h>
#include <linux/io.h>
#include <linux/delay.h>
+#include <linux/jiffies.h>
#include <video/exynos_dp.h>
#include "exynos_dp_core.h"
#include "exynos_dp_reg.h"
-#define COMMON_INT_MASK_1 (0)
-#define COMMON_INT_MASK_2 (0)
-#define COMMON_INT_MASK_3 (0)
-#define COMMON_INT_MASK_4 (0)
-#define INT_STA_MASK (0)
-
void exynos_dp_enable_video_mute(struct exynos_dp_device *dp, bool enable)
{
u32 reg;
writel(reg, dp->reg_base + EXYNOS_DP_LANE_MAP);
}
+void exynos_dp_init_analog_param(struct exynos_dp_device *dp)
+{
+ /* Set analog parameters for Tx */
+ /* Set power source and terminal resistor values */
+ writel(0x10, dp->reg_base + EXYNOS_DP_ANALOG_CTL_1);
+ writel(0x0C, dp->reg_base + EXYNOS_DP_ANALOG_CTL_2);
+ writel(0x85, dp->reg_base + EXYNOS_DP_ANALOG_CTL_3);
+ writel(0x66, dp->reg_base + EXYNOS_DP_PLL_FILTER_CTL_1);
+ writel(0x0, dp->reg_base + EXYNOS_DP_TX_AMP_TUNING_CTL);
+}
+
void exynos_dp_init_interrupt(struct exynos_dp_device *dp)
{
/* Set interrupt pin assertion polarity as high */
writel(0x00000101, dp->reg_base + EXYNOS_DP_SOC_GENERAL_CTL);
+ exynos_dp_init_analog_param(dp);
exynos_dp_init_interrupt(dp);
}
-void exynos_dp_config_interrupt(struct exynos_dp_device *dp)
-{
- u32 reg;
-
- /* 0: mask, 1: unmask */
- reg = COMMON_INT_MASK_1;
- writel(reg, dp->reg_base + EXYNOS_DP_COMMON_INT_MASK_1);
-
- reg = COMMON_INT_MASK_2;
- writel(reg, dp->reg_base + EXYNOS_DP_COMMON_INT_MASK_2);
-
- reg = COMMON_INT_MASK_3;
- writel(reg, dp->reg_base + EXYNOS_DP_COMMON_INT_MASK_3);
-
- reg = COMMON_INT_MASK_4;
- writel(reg, dp->reg_base + EXYNOS_DP_COMMON_INT_MASK_4);
-
- reg = INT_STA_MASK;
- writel(reg, dp->reg_base + EXYNOS_DP_INT_STA_MASK);
-}
-
u32 exynos_dp_get_pll_lock_status(struct exynos_dp_device *dp)
{
u32 reg;
void exynos_dp_init_analog_func(struct exynos_dp_device *dp)
{
u32 reg;
+ int timeout_loop = 0;
exynos_dp_set_analog_power_down(dp, POWER_ALL, 0);
writel(reg, dp->reg_base + EXYNOS_DP_DEBUG_CTL);
/* Power up PLL */
- if (exynos_dp_get_pll_lock_status(dp) == PLL_UNLOCKED)
+ if (exynos_dp_get_pll_lock_status(dp) == PLL_UNLOCKED) {
exynos_dp_set_pll_power_down(dp, 0);
+ while (exynos_dp_get_pll_lock_status(dp) == PLL_UNLOCKED) {
+ timeout_loop++;
+ if (DP_TIMEOUT_LOOP_COUNT < timeout_loop) {
+ dev_err(dp->dev, "failed to get pll lock status\n");
+ return;
+ }
+ udelay(10);
+ }
+ }
+
/* Enable Serdes FIFO function and Link symbol clock domain module */
reg = readl(dp->reg_base + EXYNOS_DP_FUNC_EN_2);
reg &= ~(SERDES_FIFO_FUNC_EN_N | LS_CLK_DOMAIN_FUNC_EN_N
reg |= SCRAMBLING_DISABLE;
writel(reg, dp->reg_base + EXYNOS_DP_TRAINING_PTN_SET);
}
+
+u32 exynos_dp_enable_hw_link_training(struct exynos_dp_device *dp)
+{
+ u32 reg;
+ unsigned long timeout;
+
+ reg = HW_TRAINING_EN;
+ writel(reg, dp->reg_base + EXYNOS_DP_HW_LINK_TRAINING_CTL);
+
+ /* wait for maximum of 100 msec */
+ timeout = jiffies + msecs_to_jiffies(100);
+ do {
+ reg = readl(dp->reg_base + EXYNOS_DP_HW_LINK_TRAINING_CTL);
+ if (!(reg & HW_TRAINING_EN))
+ return 0;
+ udelay(10);
+ } while (time_before(jiffies, timeout));
+
+ dev_warn(dp->dev, "H/W Link training failed\n");
+ return -ETIMEDOUT;
+}
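+
+/*
+ * Rough arithmetic: a 100 ms budget polled every 10 us allows on the order
+ * of 10,000 reads of HW_TRAINING_EN before the warning fires (illustrative;
+ * the exact count depends on jiffies granularity and scheduling).
+ */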
#define EXYNOS_DP_LANE_MAP 0x35C
+#define EXYNOS_DP_ANALOG_CTL_1 0x370
+#define EXYNOS_DP_ANALOG_CTL_2 0x374
+#define EXYNOS_DP_ANALOG_CTL_3 0x378
+#define EXYNOS_DP_PLL_FILTER_CTL_1 0x37C
+#define EXYNOS_DP_TX_AMP_TUNING_CTL 0x380
+
#define EXYNOS_DP_AUX_HW_RETRY_CTL 0x390
#define EXYNOS_DP_COMMON_INT_STA_1 0x3C4
#define VIDEO_MODE_SLAVE_MODE (0x1 << 0)
#define VIDEO_MODE_MASTER_MODE (0x0 << 0)
+#define EXYNOS_DP_HW_LINK_TRAINING_CTL 0x6A0
+#define HW_TRAINING_EN (0x1<<0)
+
#endif /* _EXYNOS_DP_REG_H */
#include <mach/map.h>
#include <plat/regs-fb-v4.h>
#include <plat/fb.h>
+#include <mach/regs-pmu.h>
/* This driver will export a number of framebuffer interfaces depending
* on the configuration passed in via the platform data. Each fb instance
unsigned int has_prtcon:1;
unsigned int has_shadowcon:1;
unsigned int has_blendcon:1;
+ unsigned int has_alphacon:1;
unsigned int has_clksel:1;
unsigned int has_fixvclk:1;
};
-/**
+/*
* struct s3c_fb_win_variant
* @has_osd_c: Set if has OSD C register.
* @has_osd_d: Set if has OSD D register.
{
struct fb_var_screeninfo *var = &info->var;
struct s3c_fb_win *win = info->par;
+ struct s3c_fb_pd_win *windata = win->windata;
struct s3c_fb *sfb = win->parent;
void __iomem *regs = sfb->regs;
void __iomem *buf = regs;
break;
}
+ if (!win->index) {
+ var->xres_virtual = windata->virtual_x;
+ var->yres_virtual = windata->virtual_y;
+ }
info->fix.line_length = (var->xres_virtual * var->bits_per_pixel) / 8;
info->fix.xpanstep = info->var.xres_virtual > info->var.xres ? 1 : 0;
if (sfb->variant.is_2443)
data |= (1 << 5);
+ data |= VIDCON0_ENVID | VIDCON0_ENVID_F;
writel(data, regs + VIDCON0);
- s3c_fb_enable(sfb, 1);
-
data = VIDTCON0_VBPD(var->upper_margin - 1) |
VIDTCON0_VFPD(var->lower_margin - 1) |
VIDTCON0_VSPW(var->vsync_len - 1);
/* VIDTCON1 */
writel(data, regs + sfb->variant.vidtcon + 4);
- data = VIDTCON2_LINEVAL(var->yres - 1) |
- VIDTCON2_HOZVAL(var->xres - 1) |
- VIDTCON2_LINEVAL_E(var->yres - 1) |
- VIDTCON2_HOZVAL_E(var->xres - 1);
+ data = VIDTCON2_LINEVAL(windata->win_mode.yres - 1) |
+ VIDTCON2_HOZVAL(windata->win_mode.xres - 1) |
+ VIDTCON2_LINEVAL_E(windata->win_mode.yres - 1) |
+ VIDTCON2_HOZVAL_E(windata->win_mode.xres - 1);
writel(data, regs + sfb->variant.vidtcon + 8);
}
/* write 'OSD' registers to control position of framebuffer */
- data = VIDOSDxA_TOPLEFT_X(0) | VIDOSDxA_TOPLEFT_Y(0) |
- VIDOSDxA_TOPLEFT_X_E(0) | VIDOSDxA_TOPLEFT_Y_E(0);
+ data = VIDOSDxA_TOPLEFT_X(0) | VIDOSDxA_TOPLEFT_Y(0);
writel(data, regs + VIDOSD_A(win_no, sfb->variant));
data = VIDOSDxB_BOTRIGHT_X(s3c_fb_align_word(var->bits_per_pixel,
writel(data, sfb->regs + SHADOWCON);
}
- data = WINCONx_ENWIN;
- sfb->enabled |= (1 << win->index);
+ if (win_no == sfb->pdata->default_win) {
+ data = WINCONx_ENWIN;
+ sfb->enabled |= (1 << win->index);
+ } else {
+ data = 0;
+ }
/* note, since we have to round up the bits-per-pixel, we end up
* relying on the bitfield information for r/g/b/a to work out
/* we're stuck with this until we can do something about overriding
* the power control using the blanking event for a single fb.
*/
- if (index == sfb->pdata->default_win) {
- shadow_protect_win(win, 1);
- s3c_fb_enable(sfb, blank_mode != FB_BLANK_POWERDOWN ? 1 : 0);
- shadow_protect_win(win, 0);
- }
+ shadow_protect_win(win, 1);
+ s3c_fb_enable(sfb, sfb->enabled ? 1 : 0);
+ shadow_protect_win(win, 0);
pm_runtime_put_sync(sfb->dev);
return 0;
}
+struct s3c_fb_user_window {
+ int x;
+ int y;
+};
+
+struct s3c_fb_user_plane_alpha {
+ int channel;
+ unsigned char red;
+ unsigned char green;
+ unsigned char blue;
+};
+
+struct s3c_fb_user_chroma {
+ int enabled;
+ unsigned char red;
+ unsigned char green;
+ unsigned char blue;
+};
+
+struct s3c_fb_user_ion_client {
+ int fd;
+};
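+
+/*
+ * Illustrative userspace call (assumes the S3CFB_WIN_POSITION ioctl number
+ * from the matching UAPI header, which is not part of this hunk):
+ *
+ * struct s3c_fb_user_window w = { .x = 100, .y = 50 };
+ * if (ioctl(fb_fd, S3CFB_WIN_POSITION, &w) < 0)
+ * perror("S3CFB_WIN_POSITION");
+ */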
+
+int s3c_fb_set_window_position(struct fb_info *info,
+ struct s3c_fb_user_window user_window)
+{
+ struct s3c_fb_win *win = info->par;
+ struct s3c_fb *sfb = win->parent;
+ struct fb_var_screeninfo *var = &info->var;
+ int win_no = win->index;
+ void __iomem *regs = sfb->regs;
+ u32 data;
+
+ shadow_protect_win(win, 1);
+ /* write 'OSD' registers to control position of framebuffer */
+ data = VIDOSDxA_TOPLEFT_X(user_window.x) |
+ VIDOSDxA_TOPLEFT_Y(user_window.y) |
+ VIDOSDxA_TOPLEFT_X_E(user_window.x) |
+ VIDOSDxA_TOPLEFT_Y_E(user_window.y);
+ writel(data, regs + VIDOSD_A(win_no, sfb->variant));
+
+ data = VIDOSDxB_BOTRIGHT_X(s3c_fb_align_word(var->bits_per_pixel,
+ user_window.x + var->xres - 1)) |
+ VIDOSDxB_BOTRIGHT_Y(user_window.y + var->yres - 1) |
+ VIDOSDxB_BOTRIGHT_X_E(s3c_fb_align_word(var->bits_per_pixel,
+ user_window.x + var->xres - 1)) |
+ VIDOSDxB_BOTRIGHT_Y_E(user_window.y + var->yres - 1);
+ writel(data, regs + VIDOSD_B(win_no, sfb->variant));
+
+ shadow_protect_win(win, 0);
+ return 0;
+}
+
+int s3c_fb_set_plane_alpha_blending(struct fb_info *info,
+ struct s3c_fb_user_plane_alpha user_alpha)
+{
+ struct s3c_fb_win *win = info->par;
+ struct s3c_fb *sfb = win->parent;
+ int win_no = win->index;
+ void __iomem *regs = sfb->regs;
+ u32 data;
+
+ u32 alpha_high = 0;
+ u32 alpha_low = 0;
+
+ alpha_high = ((((user_alpha.red & 0xf0) >> 4) << 8) |
+ (((user_alpha.green & 0xf0) >> 4) << 4) |
+ (((user_alpha.blue & 0xf0) >> 4) << 0));
+
+ alpha_low = ((((user_alpha.red & 0xf)) << 16) |
+ (((user_alpha.green & 0xf)) << 8) |
+ (((user_alpha.blue & 0xf)) << 0));
+
+ shadow_protect_win(win, 1);
+
+ data = readl(regs + sfb->variant.wincon + (win_no * 4));
+ data &= ~(WINCON1_BLD_PIX | WINCON1_ALPHA_SEL);
+ data |= WINCON1_BLD_PLANE;
+
+ if (user_alpha.channel == 0)
+ alpha_high = alpha_high << 12;
+ else {
+ data |= WINCON1_ALPHA_SEL;
+ alpha_high = alpha_high << 0;
+ }
+
+ writel(data, regs + sfb->variant.wincon + (win_no * 4));
+ writel(alpha_high, regs + VIDOSD_C(win_no, sfb->variant));
+
+ if (sfb->variant.has_alphacon) {
+ if (user_alpha.channel == 0)
+ writel(alpha_low, regs + VIDW0ALPHA0 + (win_no * 8));
+ else
+ writel(alpha_low, regs + VIDW0ALPHA1 + (win_no * 8));
+ }
+
+ shadow_protect_win(win, 0);
+
+ return 0;
+}
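+
+/*
+ * Worked example: user_alpha.red = 0xAB is split above into a high nibble
+ * (0xA) written through VIDOSD_C and a low nibble (0xB) written to the
+ * VIDWxALPHA registers, which the hardware recombines into the full
+ * 8-bit alpha value.
+ */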
+
+int s3c_fb_set_chroma_key(struct fb_info *info,
+ struct s3c_fb_user_chroma user_chroma)
+{
+ struct s3c_fb_win *win = info->par;
+ struct s3c_fb *sfb = win->parent;
+ int win_no = win->index;
+ void __iomem *regs = sfb->regs;
+ void __iomem *keycon = regs + sfb->variant.keycon;
+
+ u32 data = 0;
+
+ u32 chroma_value;
+
+ chroma_value = (((user_chroma.red & 0xff) << 16) |
+ ((user_chroma.green & 0xff) << 8) |
+ ((user_chroma.blue & 0xff) << 0));
+
+ shadow_protect_win(win, 1);
+
+ if (user_chroma.enabled)
+ data |= WxKEYCON0_KEYEN_F;
+
+ keycon += (win_no - 1) * 8;
+ writel(data, keycon + WKEYCON0);
+
+ data = (chroma_value & 0xffffff);
+ writel(data, keycon + WKEYCON1);
+
+ shadow_protect_win(win, 0);
+
+ return 0;
+}
+
static int s3c_fb_ioctl(struct fb_info *info, unsigned int cmd,
unsigned long arg)
{
int ret;
u32 crtc;
+ union {
+ struct s3c_fb_user_window user_window;
+ struct s3c_fb_user_plane_alpha user_alpha;
+ struct s3c_fb_user_chroma user_chroma;
+ struct s3c_fb_user_ion_client user_ion_client;
+ } p;
switch (cmd) {
	case FBIO_WAITFORVSYNC:
		if (get_user(crtc, (u32 __user *)arg)) {
			ret = -EFAULT;
			break;
		}

		ret = s3c_fb_wait_for_vsync(sfb, crtc);
		break;
+
+ case S3CFB_WIN_POSITION:
+ if (copy_from_user(&p.user_window,
+ (struct s3c_fb_user_window __user *)arg,
+ sizeof(p.user_window))) {
+ ret = -EFAULT;
+ break;
+ }
+
+ if (p.user_window.x < 0)
+ p.user_window.x = 0;
+ if (p.user_window.y < 0)
+ p.user_window.y = 0;
+
+ ret = s3c_fb_set_window_position(info, p.user_window);
+ break;
+
+ case S3CFB_WIN_SET_PLANE_ALPHA:
+ if (copy_from_user(&p.user_alpha,
+ (struct s3c_fb_user_plane_alpha __user *)arg,
+ sizeof(p.user_alpha))) {
+ ret = -EFAULT;
+ break;
+ }
+
+ ret = s3c_fb_set_plane_alpha_blending(info, p.user_alpha);
+ break;
+
+ case S3CFB_WIN_SET_CHROMA:
+ if (copy_from_user(&p.user_chroma,
+ (struct s3c_fb_user_chroma __user *)arg,
+ sizeof(p.user_chroma))) {
+ ret = -EFAULT;
+ break;
+ }
+
+ ret = s3c_fb_set_chroma_key(info, p.user_chroma);
+ break;
+
+ case S3CFB_SET_VSYNC_INT:
+ /* unnecessary, but for compatibility */
+ ret = 0;
+ break;
+
default:
ret = -ENOTTY;
}
fbinfo->var.activate = FB_ACTIVATE_NOW;
fbinfo->var.vmode = FB_VMODE_NONINTERLACED;
fbinfo->var.bits_per_pixel = windata->default_bpp;
+ fbinfo->var.width = windata->width;
+ fbinfo->var.height = windata->height;
fbinfo->fbops = &s3c_fb_ops;
fbinfo->flags = FBINFO_FLAG_DEFAULT;
fbinfo->pseudo_palette = &win->pseudo_palette;
struct s3c_fb_platdata *pd;
struct s3c_fb *sfb;
struct resource *res;
- int win;
+ int i, win, default_win;
int ret = 0;
u32 reg;
+ struct clk *clk_parent;
+ struct clk *sclk;
+
+ pd = pdev->dev.platform_data;
+ if (!pd) {
+ dev_err(dev, "no platform data specified\n");
+ return -EINVAL;
+ }
+
+ /* HACK: This should be added from pdata/device tree */
+ sclk = clk_get(&pdev->dev, "sclk_fimd");
+ if (IS_ERR(sclk))
+ return PTR_ERR(sclk);
+
+ clk_parent = clk_get(NULL, "mout_mpll_user");
+ if (IS_ERR(clk_parent)) {
+ clk_put(sclk);
+ return PTR_ERR(clk_parent);
+ }
+
+	ret = clk_set_parent(sclk, clk_parent);
+	if (ret) {
+		clk_put(sclk);
+		clk_put(clk_parent);
+		return ret;
+	}
+
+	ret = clk_set_rate(sclk, pd->clock_rate);
+	if (ret) {
+		clk_put(sclk);
+		clk_put(clk_parent);
+		return ret;
+	}
+
+ clk_put(sclk);
+ clk_put(clk_parent);
	platid = platform_get_device_id(pdev);
	fbdrv = (struct s3c_fb_driverdata *)platid->driver_data;

	if (fbdrv->variant.nr_windows > S3C_FB_MAX_WIN) {
		dev_err(dev, "too many windows, cannot attach\n");
		return -EINVAL;
	}
- pd = pdev->dev.platform_data;
- if (!pd) {
- dev_err(dev, "no platform data specified\n");
- return -EINVAL;
- }
-
sfb = devm_kzalloc(dev, sizeof(struct s3c_fb), GFP_KERNEL);
if (!sfb) {
dev_err(dev, "no memory for framebuffers\n");
writel(reg, sfb->regs + VIDCON1);
}
+ /* disable auto-clock gate mode */
+ writel(REG_CLKGATE_MODE_NON_CLOCK_GATE, sfb->regs + REG_CLKGATE_MODE);
+
/* zero all windows before we do anything */
	for (win = 0; win < fbdrv->variant.nr_windows; win++)
		s3c_fb_clear_win(sfb, win);
}
/* we have the register setup, start allocating framebuffers */
-
- for (win = 0; win < fbdrv->variant.nr_windows; win++) {
+ default_win = sfb->pdata->default_win;
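+	/* register the default window first by swapping its slot with window 0 */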
+ for (i = 0; i < fbdrv->variant.nr_windows; i++) {
+ win = i;
+ if (i == 0)
+ win = default_win;
+ if (i == default_win)
+ win = 0;
if (!pd->win[win])
continue;
}
}
+ if (pd->panel_type == DP_LCD)
+ writel(DPCLKCON_ENABLE, sfb->regs + DPCLKCON);
+
platform_set_drvdata(pdev, sfb);
pm_runtime_put_sync(sfb->dev);
return 0;
}
+static inline void s3c_fb_enable_fimd_bypass_disp1(void)
+{
+ u32 reg;
+
+ reg = __raw_readl(EXYNOS5_SYS_DISP1BLK_CFG);
+ reg |= ENABLE_FIMDBYPASS_DISP1;
+ __raw_writel(reg, EXYNOS5_SYS_DISP1BLK_CFG);
+}
+
#ifdef CONFIG_PM_SLEEP
static int s3c_fb_suspend(struct device *dev)
{
{
struct platform_device *pdev = to_platform_device(dev);
struct s3c_fb *sfb = platform_get_drvdata(pdev);
- struct s3c_fb_platdata *pd = sfb->pdata;
+ struct s3c_fb_platdata *pd;
struct s3c_fb_win *win;
int win_no;
u32 reg;
if (!sfb->variant.has_clksel)
clk_enable(sfb->lcd_clk);
- /* setup gpio and output polarity controls */
- pd->setup_gpio();
- writel(pd->vidcon1, sfb->regs + VIDCON1);
+ s3c_fb_enable_fimd_bypass_disp1();
+
+ writel(VIDCON1_INV_VCLK, sfb->regs + VIDCON1);
/* set video clock running at under-run */
if (sfb->variant.has_fixvclk) {
writel(reg, sfb->regs + VIDCON1);
}
+ /* disable auto-clock gate mode */
+ writel(REG_CLKGATE_MODE_NON_CLOCK_GATE, sfb->regs + REG_CLKGATE_MODE);
/* zero all windows before we do anything */
for (win_no = 0; win_no < sfb->variant.nr_windows; win_no++)
s3c_fb_clear_win(sfb, win_no);
s3c_fb_set_par(win->fbinfo);
}
+ pd = pdev->dev.platform_data;
+
+ if (pd->panel_type == DP_LCD)
+ writel(DPCLKCON_ENABLE, sfb->regs + DPCLKCON);
+
return 0;
}
#endif
{
struct platform_device *pdev = to_platform_device(dev);
struct s3c_fb *sfb = platform_get_drvdata(pdev);
- struct s3c_fb_platdata *pd = sfb->pdata;
clk_enable(sfb->bus_clk);
clk_enable(sfb->lcd_clk);
/* setup gpio and output polarity controls */
- pd->setup_gpio();
- writel(pd->vidcon1, sfb->regs + VIDCON1);
+ s3c_fb_enable_fimd_bypass_disp1();
return 0;
}
NULL)
};
+#ifdef CONFIG_OF
+static const struct of_device_id exynos_fimd_match[] = {
+ { .compatible = "samsung,s3c-fb" },
+ {},
+};
+MODULE_DEVICE_TABLE(of, exynos_fimd_match);
+#endif
+
static struct platform_driver s3c_fb_driver = {
.probe = s3c_fb_probe,
	.remove		= __devexit_p(s3c_fb_remove),
	.driver		= {
		.name	= "s3c-fb",
.owner = THIS_MODULE,
.pm = &s3cfb_pm_ops,
+ .of_match_table = of_match_ptr(exynos_fimd_match),
},
};
--- /dev/null
+/* linux/drivers/video/s5p_mipi_dsi.c
+ *
+ * Samsung SoC MIPI-DSIM driver.
+ *
+ * Copyright (c) 2011 Samsung Electronics
+ *
+ * InKi Dae, <inki.dae@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/clk.h>
+#include <linux/mutex.h>
+#include <linux/wait.h>
+#include <linux/fs.h>
+#include <linux/mm.h>
+#include <linux/fb.h>
+#include <linux/ctype.h>
+#include <linux/platform_device.h>
+#include <linux/io.h>
+#include <linux/irq.h>
+#include <linux/memory.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/kthread.h>
+#include <linux/regulator/consumer.h>
+#include <linux/notifier.h>
+#include <linux/pm_runtime.h>
+
+#include <linux/gpio.h>
+
+#include <video/mipi_display.h>
+
+#include <plat/fb.h>
+#include <plat/regs-mipidsim.h>
+#include <plat/dsim.h>
+
+#include <mach/map.h>
+
+#include "s5p_mipi_dsi_lowlevel.h"
+#include "s5p_mipi_dsi.h"
+
+#ifdef CONFIG_HAS_EARLYSUSPEND
+#include <linux/earlysuspend.h>
+#endif
+
+static unsigned int dpll_table[15] = {
+ 100, 120, 170, 220, 270,
+ 320, 390, 450, 510, 560,
+ 640, 690, 770, 870, 950 };
+
+int s5p_mipi_dsi_wait_int_status(struct mipi_dsim_device *dsim,
+ unsigned int intSrc)
+{
+ while (1) {
+ if ((s5p_mipi_dsi_get_int_status(dsim) & intSrc)) {
+ s5p_mipi_dsi_clear_int_status(dsim, intSrc);
+ return 1;
+ } else if ((s5p_mipi_dsi_get_FIFOCTRL_status(dsim)
+ & 0xf00000) == 0)
+ return 0;
+ }
+}
+
+static int s5p_mipi_dsi_fb_notifier_callback(struct notifier_block *self,
+ unsigned long event, void *data)
+{
+ struct mipi_dsim_device *dsim;
+
+ if (event != FB_EVENT_RESUME)
+ return 0;
+
+ dsim = container_of(self, struct mipi_dsim_device, fb_notif);
+ s5p_mipi_dsi_func_reset(dsim);
+
+ return 0;
+}
+
+static int s5p_mipi_dsi_register_fb(struct mipi_dsim_device *dsim)
+{
+ memset(&dsim->fb_notif, 0, sizeof(dsim->fb_notif));
+ dsim->fb_notif.notifier_call = s5p_mipi_dsi_fb_notifier_callback;
+
+ return fb_register_client(&dsim->fb_notif);
+}
+
+static void s5p_mipi_dsi_long_data_wr(struct mipi_dsim_device *dsim,
+ unsigned int data0, unsigned int data1)
+{
+ unsigned int data_cnt = 0, payload = 0;
+
+	/* transfer the payload in 4-byte chunks */
+	for (data_cnt = 0; data_cnt < data1; data_cnt += 4) {
+		/*
+		 * after sending 4 bytes at a time, send the
+		 * remaining data (fewer than 4 bytes).
+		 */
+ if ((data1 - data_cnt) < 4) {
+ if ((data1 - data_cnt) == 3) {
+ payload = *(u8 *)(data0 + data_cnt) |
+ (*(u8 *)(data0 + (data_cnt + 1))) << 8 |
+ (*(u8 *)(data0 + (data_cnt + 2))) << 16;
+ dev_dbg(dsim->dev, "count = 3 payload = %x, %x %x %x\n",
+ payload, *(u8 *)(data0 + data_cnt),
+ *(u8 *)(data0 + (data_cnt + 1)),
+ *(u8 *)(data0 + (data_cnt + 2)));
+ } else if ((data1 - data_cnt) == 2) {
+ payload = *(u8 *)(data0 + data_cnt) |
+ (*(u8 *)(data0 + (data_cnt + 1))) << 8;
+ dev_dbg(dsim->dev,
+ "count = 2 payload = %x, %x %x\n", payload,
+ *(u8 *)(data0 + data_cnt),
+ *(u8 *)(data0 + (data_cnt + 1)));
+ } else if ((data1 - data_cnt) == 1) {
+ payload = *(u8 *)(data0 + data_cnt);
+ }
+
+ s5p_mipi_dsi_wr_tx_data(dsim, payload);
+		/* send 4 bytes at a time. */
+ } else {
+ payload = *(u8 *)(data0 + data_cnt) |
+ (*(u8 *)(data0 + (data_cnt + 1))) << 8 |
+ (*(u8 *)(data0 + (data_cnt + 2))) << 16 |
+ (*(u8 *)(data0 + (data_cnt + 3))) << 24;
+
+ dev_dbg(dsim->dev,
+ "count = 4 payload = %x, %x %x %x %x\n",
+ payload, *(u8 *)(data0 + data_cnt),
+ *(u8 *)(data0 + (data_cnt + 1)),
+ *(u8 *)(data0 + (data_cnt + 2)),
+ *(u8 *)(data0 + (data_cnt + 3)));
+
+ s5p_mipi_dsi_wr_tx_data(dsim, payload);
+ }
+ }
+}
+
+int s5p_mipi_dsi_wr_data(struct mipi_dsim_device *dsim, unsigned int data_id,
+ unsigned int data0, unsigned int data1)
+{
+	unsigned long delay_val, delay_ms;
+ unsigned int check_rx_ack = 0;
+
+ if (dsim->state == DSIM_STATE_ULPS) {
+ dev_err(dsim->dev, "state is ULPS.\n");
+
+ return -EINVAL;
+ }
+
+ delay_val = MHZ / dsim->dsim_config->esc_clk;
+	delay_ms = 10 * delay_val;
+
+	mdelay(delay_ms);
+
+ switch (data_id) {
+	/* short packet types for commands. */
+ case MIPI_DSI_GENERIC_SHORT_WRITE_0_PARAM:
+ case MIPI_DSI_GENERIC_SHORT_WRITE_1_PARAM:
+ case MIPI_DSI_GENERIC_SHORT_WRITE_2_PARAM:
+ case MIPI_DSI_DCS_SHORT_WRITE:
+ case MIPI_DSI_DCS_SHORT_WRITE_PARAM:
+ case MIPI_DSI_SET_MAXIMUM_RETURN_PACKET_SIZE:
+ s5p_mipi_dsi_wr_tx_header(dsim, data_id, data0, data1);
+ if (check_rx_ack)
+ /* process response func should be implemented */
+ return 0;
+ else
+ return -EINVAL;
+
+ /* general command */
+ case MIPI_DSI_COLOR_MODE_OFF:
+ case MIPI_DSI_COLOR_MODE_ON:
+ case MIPI_DSI_SHUTDOWN_PERIPHERAL:
+ case MIPI_DSI_TURN_ON_PERIPHERAL:
+ s5p_mipi_dsi_wr_tx_header(dsim, data_id, data0, data1);
+ if (check_rx_ack)
+ /* process response func should be implemented. */
+ return 0;
+ else
+ return -EINVAL;
+
+ /* packet types for video data */
+ case MIPI_DSI_V_SYNC_START:
+ case MIPI_DSI_V_SYNC_END:
+ case MIPI_DSI_H_SYNC_START:
+ case MIPI_DSI_H_SYNC_END:
+ case MIPI_DSI_END_OF_TRANSMISSION:
+ return 0;
+
+ /* short and response packet types for command */
+ case MIPI_DSI_GENERIC_READ_REQUEST_0_PARAM:
+ case MIPI_DSI_GENERIC_READ_REQUEST_1_PARAM:
+ case MIPI_DSI_GENERIC_READ_REQUEST_2_PARAM:
+ case MIPI_DSI_DCS_READ:
+ s5p_mipi_dsi_clear_all_interrupt(dsim);
+ s5p_mipi_dsi_wr_tx_header(dsim, data_id, data0, data1);
+ /* process response func should be implemented. */
+ return 0;
+
+ /* long packet type and null packet */
+ case MIPI_DSI_NULL_PACKET:
+ case MIPI_DSI_BLANKING_PACKET:
+ return 0;
+ case MIPI_DSI_GENERIC_LONG_WRITE:
+ case MIPI_DSI_DCS_LONG_WRITE:
+ {
+ unsigned int size, data_cnt = 0, payload = 0;
+
+ size = data1 * 4;
+
+		/* if the data count is less than 4, send up to 3 bytes of data. */
+ if (data1 < 4) {
+ payload = *(u8 *)(data0) |
+ *(u8 *)(data0 + 1) << 8 |
+ *(u8 *)(data0 + 2) << 16;
+
+ s5p_mipi_dsi_wr_tx_data(dsim, payload);
+
+ dev_dbg(dsim->dev, "count = %d payload = %x,%x %x %x\n",
+ data1, payload,
+ *(u8 *)(data0 + data_cnt),
+ *(u8 *)(data0 + (data_cnt + 1)),
+ *(u8 *)(data0 + (data_cnt + 2)));
+		/* the data count is 4 or more */
+ } else
+ s5p_mipi_dsi_long_data_wr(dsim, data0, data1);
+
+ /* put data into header fifo */
+ s5p_mipi_dsi_wr_tx_header(dsim, data_id, data1 & 0xff,
+ (data1 & 0xff00) >> 8);
+ }
+ if (s5p_mipi_dsi_wait_int_status(dsim, INTSRC_SFR_FIFO_EMPTY) == 0)
+ return -1;
+
+ if (check_rx_ack)
+ /* process response func should be implemented. */
+ return 0;
+ else
+ return -EINVAL;
+
+	/* packet types for video data */
+ case MIPI_DSI_PACKED_PIXEL_STREAM_16:
+ case MIPI_DSI_PACKED_PIXEL_STREAM_18:
+ case MIPI_DSI_PIXEL_STREAM_3BYTE_18:
+ case MIPI_DSI_PACKED_PIXEL_STREAM_24:
+ if (check_rx_ack)
+ /* process response func should be implemented. */
+ return 0;
+ else
+ return -EINVAL;
+ default:
+ dev_warn(dsim->dev,
+ "data id %x is not supported current DSI spec.\n",
+ data_id);
+
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int s5p_mipi_dsi_pll_on(struct mipi_dsim_device *dsim, unsigned int enable)
+{
+ int sw_timeout;
+
+ if (enable) {
+ sw_timeout = 1000;
+
+ s5p_mipi_dsi_clear_interrupt(dsim, INTSRC_PLL_STABLE);
+ s5p_mipi_dsi_enable_pll(dsim, 1);
+ while (1) {
+ sw_timeout--;
+ if (s5p_mipi_dsi_is_pll_stable(dsim))
+ return 0;
+ if (sw_timeout == 0)
+ return -EINVAL;
+ }
+ } else
+ s5p_mipi_dsi_enable_pll(dsim, 0);
+
+ return 0;
+}
+
+unsigned long s5p_mipi_dsi_change_pll(struct mipi_dsim_device *dsim,
+ unsigned int pre_divider, unsigned int main_divider,
+ unsigned int scaler)
+{
+ unsigned long dfin_pll, dfvco, dpll_out;
+ unsigned int i, freq_band = 0xf;
+
+ dfin_pll = (FIN_HZ / pre_divider);
+
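+	/* pick the AFC code for the 6..12 MHz input clock band */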
+ if (dfin_pll < DFIN_PLL_MIN_HZ || dfin_pll > DFIN_PLL_MAX_HZ) {
+ dev_warn(dsim->dev, "fin_pll range should be 6MHz ~ 12MHz\n");
+ s5p_mipi_dsi_enable_afc(dsim, 0, 0);
+ } else {
+ if (dfin_pll < 7 * MHZ)
+ s5p_mipi_dsi_enable_afc(dsim, 1, 0x1);
+ else if (dfin_pll < 8 * MHZ)
+ s5p_mipi_dsi_enable_afc(dsim, 1, 0x0);
+ else if (dfin_pll < 9 * MHZ)
+ s5p_mipi_dsi_enable_afc(dsim, 1, 0x3);
+ else if (dfin_pll < 10 * MHZ)
+ s5p_mipi_dsi_enable_afc(dsim, 1, 0x2);
+ else if (dfin_pll < 11 * MHZ)
+ s5p_mipi_dsi_enable_afc(dsim, 1, 0x5);
+ else
+ s5p_mipi_dsi_enable_afc(dsim, 1, 0x4);
+ }
+
+ dfvco = dfin_pll * main_divider;
+ dev_dbg(dsim->dev, "dfvco = %lu, dfin_pll = %lu, main_divider = %d\n",
+ dfvco, dfin_pll, main_divider);
+ if (dfvco < DFVCO_MIN_HZ || dfvco > DFVCO_MAX_HZ)
+ dev_warn(dsim->dev, "fvco range should be 500MHz ~ 1000MHz\n");
+
+ dpll_out = dfvco / (1 << scaler);
+ dev_dbg(dsim->dev, "dpll_out = %lu, dfvco = %lu, scaler = %d\n",
+ dpll_out, dfvco, scaler);
+
+ for (i = 0; i < ARRAY_SIZE(dpll_table); i++) {
+ if (dpll_out < dpll_table[i] * MHZ) {
+ freq_band = i;
+ break;
+ }
+ }
+
+ dev_dbg(dsim->dev, "freq_band = %d\n", freq_band);
+
+ s5p_mipi_dsi_pll_freq(dsim, pre_divider, main_divider, scaler);
+
+ s5p_mipi_dsi_hs_zero_ctrl(dsim, 0);
+ s5p_mipi_dsi_prep_ctrl(dsim, 0);
+
+ /* Freq Band */
+ s5p_mipi_dsi_pll_freq_band(dsim, freq_band);
+
+ /* Stable time */
+ s5p_mipi_dsi_pll_stable_time(dsim, dsim->dsim_config->pll_stable_time);
+
+ /* Enable PLL */
+ dev_dbg(dsim->dev, "FOUT of mipi dphy pll is %luMHz\n",
+ (dpll_out / MHZ));
+
+ return dpll_out;
+}
+
+int s5p_mipi_dsi_set_clock(struct mipi_dsim_device *dsim,
+ unsigned int byte_clk_sel, unsigned int enable)
+{
+ unsigned int esc_div;
+ unsigned long esc_clk_error_rate;
+
+ if (enable) {
+ dsim->e_clk_src = byte_clk_sel;
+
+ /* Escape mode clock and byte clock source */
+ s5p_mipi_dsi_set_byte_clock_src(dsim, byte_clk_sel);
+
+ /* DPHY, DSIM Link : D-PHY clock out */
+ if (byte_clk_sel == DSIM_PLL_OUT_DIV8) {
+ dsim->hs_clk = s5p_mipi_dsi_change_pll(dsim,
+ dsim->dsim_config->p, dsim->dsim_config->m,
+ dsim->dsim_config->s);
+ if (dsim->hs_clk == 0) {
+ dev_err(dsim->dev,
+ "failed to get hs clock.\n");
+ return -EINVAL;
+ }
+
+ dsim->byte_clk = dsim->hs_clk / 8;
+ s5p_mipi_dsi_enable_pll_bypass(dsim, 0);
+ s5p_mipi_dsi_pll_on(dsim, 1);
+ /* DPHY : D-PHY clock out, DSIM link : external clock out */
+ } else if (byte_clk_sel == DSIM_EXT_CLK_DIV8)
+			dev_warn(dsim->dev,
+				"external clock source is not supported for MIPI DSIM\n");
+		else if (byte_clk_sel == DSIM_EXT_CLK_BYPASS)
+			dev_warn(dsim->dev,
+				"external clock source is not supported for MIPI DSIM\n");
+
+ /* escape clock divider */
+ esc_div = dsim->byte_clk / (dsim->dsim_config->esc_clk);
+ dev_dbg(dsim->dev,
+ "esc_div = %d, byte_clk = %lu, esc_clk = %lu\n",
+ esc_div, dsim->byte_clk, dsim->dsim_config->esc_clk);
+ if ((dsim->byte_clk / esc_div) >= (20 * MHZ) ||
+ (dsim->byte_clk / esc_div) >
+ dsim->dsim_config->esc_clk)
+ esc_div += 1;
+
+ dsim->escape_clk = dsim->byte_clk / esc_div;
+ dev_dbg(dsim->dev,
+ "escape_clk = %lu, byte_clk = %lu, esc_div = %d\n",
+ dsim->escape_clk, dsim->byte_clk, esc_div);
+
+ /* enable escape clock. */
+ s5p_mipi_dsi_enable_byte_clock(dsim, DSIM_ESCCLK_ON);
+
+ /* enable byte clk and escape clock */
+ s5p_mipi_dsi_set_esc_clk_prs(dsim, 1, esc_div);
+ /* escape clock on lane */
+ s5p_mipi_dsi_enable_esc_clk_on_lane(dsim,
+ (DSIM_LANE_CLOCK | dsim->data_lane), 1);
+
+ dev_dbg(dsim->dev, "byte clock is %luMHz\n",
+ (dsim->byte_clk / MHZ));
+ dev_dbg(dsim->dev, "escape clock that user's need is %lu\n",
+ (dsim->dsim_config->esc_clk / MHZ));
+ dev_dbg(dsim->dev, "escape clock divider is %x\n", esc_div);
+ dev_dbg(dsim->dev, "escape clock is %luMHz\n",
+ ((dsim->byte_clk / esc_div) / MHZ));
+
+ if ((dsim->byte_clk / esc_div) > dsim->escape_clk) {
+ esc_clk_error_rate = dsim->escape_clk /
+ (dsim->byte_clk / esc_div);
+ dev_warn(dsim->dev, "error rate is %lu over.\n",
+ (esc_clk_error_rate / 100));
+ } else if ((dsim->byte_clk / esc_div) < (dsim->escape_clk)) {
+ esc_clk_error_rate = (dsim->byte_clk / esc_div) /
+ dsim->escape_clk;
+ dev_warn(dsim->dev, "error rate is %lu under.\n",
+ (esc_clk_error_rate / 100));
+ }
+ } else {
+ s5p_mipi_dsi_enable_esc_clk_on_lane(dsim,
+ (DSIM_LANE_CLOCK | dsim->data_lane), 0);
+ s5p_mipi_dsi_set_esc_clk_prs(dsim, 0, 0);
+
+ /* disable escape clock. */
+ s5p_mipi_dsi_enable_byte_clock(dsim, DSIM_ESCCLK_OFF);
+
+ if (byte_clk_sel == DSIM_PLL_OUT_DIV8)
+ s5p_mipi_dsi_pll_on(dsim, 0);
+ }
+
+ return 0;
+}
+
+void s5p_mipi_dsi_d_phy_onoff(struct mipi_dsim_device *dsim,
+ unsigned int enable)
+{
+ if (dsim->pd->init_d_phy)
+ dsim->pd->init_d_phy(dsim, enable);
+}
+
+int s5p_mipi_dsi_init_dsim(struct mipi_dsim_device *dsim)
+{
+ s5p_mipi_dsi_d_phy_onoff(dsim, 1);
+
+ dsim->state = DSIM_STATE_INIT;
+
+ switch (dsim->dsim_config->e_no_data_lane) {
+ case DSIM_DATA_LANE_1:
+ dsim->data_lane = DSIM_LANE_DATA0;
+ break;
+ case DSIM_DATA_LANE_2:
+ dsim->data_lane = DSIM_LANE_DATA0 | DSIM_LANE_DATA1;
+ break;
+ case DSIM_DATA_LANE_3:
+ dsim->data_lane = DSIM_LANE_DATA0 | DSIM_LANE_DATA1 |
+ DSIM_LANE_DATA2;
+ break;
+ case DSIM_DATA_LANE_4:
+ dsim->data_lane = DSIM_LANE_DATA0 | DSIM_LANE_DATA1 |
+ DSIM_LANE_DATA2 | DSIM_LANE_DATA3;
+ break;
+	default:
+		dev_err(dsim->dev, "data lane is invalid.\n");
+		return -EINVAL;
+	}
+
+ s5p_mipi_dsi_sw_reset(dsim);
+ s5p_mipi_dsi_dp_dn_swap(dsim, 0);
+
+ return 0;
+}
+
+int s5p_mipi_dsi_enable_frame_done_int(struct mipi_dsim_device *dsim,
+ unsigned int enable)
+{
+ /* enable only frame done interrupt */
+ s5p_mipi_dsi_set_interrupt_mask(dsim, INTMSK_FRAME_DONE, enable);
+
+ return 0;
+}
+
+int s5p_mipi_dsi_set_display_mode(struct mipi_dsim_device *dsim,
+ struct mipi_dsim_config *dsim_config)
+{
+	unsigned int width, height;
+
+	width = dsim->pd->dsim_lcd_config->lcd_size.width;
+	height = dsim->pd->dsim_lcd_config->lcd_size.height;
+
+ /* in case of VIDEO MODE (RGB INTERFACE) */
+ if (dsim->dsim_config->e_interface == (u32) DSIM_VIDEO) {
+		s5p_mipi_dsi_set_main_disp_vporch(dsim,
+			DSIM_CMD_LEN,
+			dsim->pd->dsim_lcd_config->rgb_timing.lower_margin,
+			dsim->pd->dsim_lcd_config->rgb_timing.upper_margin);
+		s5p_mipi_dsi_set_main_disp_hporch(dsim,
+			dsim->pd->dsim_lcd_config->rgb_timing.right_margin,
+			dsim->pd->dsim_lcd_config->rgb_timing.left_margin);
+ s5p_mipi_dsi_set_main_disp_sync_area(dsim,
+ dsim->pd->dsim_lcd_config->rgb_timing.vsync_len,
+ dsim->pd->dsim_lcd_config->rgb_timing.hsync_len);
+ }
+ s5p_mipi_dsi_set_main_disp_resol(dsim, height, width);
+ s5p_mipi_dsi_display_config(dsim);
+ return 0;
+}
+
+int s5p_mipi_dsi_init_link(struct mipi_dsim_device *dsim)
+{
+ unsigned int time_out = 100;
+
+ switch (dsim->state) {
+ case DSIM_STATE_INIT:
+ s5p_mipi_dsi_sw_reset(dsim);
+ s5p_mipi_dsi_init_fifo_pointer(dsim, 0x1f);
+
+ /* dsi configuration */
+ s5p_mipi_dsi_init_config(dsim);
+ s5p_mipi_dsi_enable_lane(dsim, DSIM_LANE_CLOCK, 1);
+ s5p_mipi_dsi_enable_lane(dsim, dsim->data_lane, 1);
+
+ /* set clock configuration */
+ s5p_mipi_dsi_set_clock(dsim, dsim->dsim_config->e_byte_clk, 1);
+
+ mdelay(100);
+
+ /* check clock and data lane state are stop state */
+ while (!(s5p_mipi_dsi_is_lane_state(dsim))) {
+ time_out--;
+ if (time_out == 0) {
+ dev_err(dsim->dev,
+ "DSI Master is not stop state.\n");
+ dev_err(dsim->dev,
+ "Check initialization process\n");
+
+ return -EINVAL;
+ }
+ }
+
+ if (time_out != 0) {
+			dev_info(dsim->dev,
+				"DSI Master initialization is complete.\n");
+			dev_info(dsim->dev, "DSI Master is in stop state\n");
+ }
+
+ dsim->state = DSIM_STATE_STOP;
+
+ /* BTA sequence counters */
+ s5p_mipi_dsi_set_stop_state_counter(dsim,
+ dsim->dsim_config->stop_holding_cnt);
+ s5p_mipi_dsi_set_bta_timeout(dsim,
+ dsim->dsim_config->bta_timeout);
+ s5p_mipi_dsi_set_lpdr_timeout(dsim,
+ dsim->dsim_config->rx_timeout);
+
+ return 0;
+ default:
+ dev_info(dsim->dev, "DSI Master is already init.\n");
+ return 0;
+ }
+
+ return 0;
+}
+
+int s5p_mipi_dsi_set_hs_enable(struct mipi_dsim_device *dsim)
+{
+ if (dsim->state == DSIM_STATE_STOP) {
+ if (dsim->e_clk_src != DSIM_EXT_CLK_BYPASS) {
+ dsim->state = DSIM_STATE_HSCLKEN;
+
+ /* set LCDC and CPU transfer mode to HS. */
+ s5p_mipi_dsi_set_lcdc_transfer_mode(dsim, 0);
+ s5p_mipi_dsi_set_cpu_transfer_mode(dsim, 0);
+
+ s5p_mipi_dsi_enable_hs_clock(dsim, 1);
+
+ return 0;
+ } else
+ dev_warn(dsim->dev,
+ "clock source is external bypass.\n");
+ } else
+ dev_warn(dsim->dev, "DSIM is not stop state.\n");
+
+ return 0;
+}
+
+int s5p_mipi_dsi_set_data_transfer_mode(struct mipi_dsim_device *dsim,
+ unsigned int mode)
+{
+ if (mode) {
+ if (dsim->state != DSIM_STATE_HSCLKEN) {
+ dev_err(dsim->dev, "HS Clock lane is not enabled.\n");
+ return -EINVAL;
+ }
+
+ s5p_mipi_dsi_set_lcdc_transfer_mode(dsim, 0);
+ } else {
+ if (dsim->state == DSIM_STATE_INIT || dsim->state ==
+ DSIM_STATE_ULPS) {
+ dev_err(dsim->dev,
+ "DSI Master is not STOP or HSDT state.\n");
+ return -EINVAL;
+ }
+
+ s5p_mipi_dsi_set_cpu_transfer_mode(dsim, 0);
+ }
+ return 0;
+}
+
+int s5p_mipi_dsi_get_frame_done_status(struct mipi_dsim_device *dsim)
+{
+ return _s5p_mipi_dsi_get_frame_done_status(dsim);
+}
+
+int s5p_mipi_dsi_clear_frame_done(struct mipi_dsim_device *dsim)
+{
+ _s5p_mipi_dsi_clear_frame_done(dsim);
+
+ return 0;
+}
+
+static irqreturn_t s5p_mipi_dsi_interrupt_handler(int irq, void *dev_id)
+{
+ unsigned int int_src;
+ struct mipi_dsim_device *dsim = dev_id;
+
+ s5p_mipi_dsi_set_interrupt_mask(dsim, 0xffffffff, 1);
+
+ int_src = readl(dsim->reg_base + S5P_DSIM_INTSRC);
+ s5p_mipi_dsi_clear_interrupt(dsim, int_src);
+
+ if (!(int_src & (INTSRC_PLL_STABLE | INTSRC_FRAME_DONE)))
+		dev_err(dsim->dev, "unexpected interrupt source (0x%x)\n",
+			int_src);
+
+ s5p_mipi_dsi_set_interrupt_mask(dsim, 0xffffffff, 0);
+ return IRQ_HANDLED;
+}
+
+static inline void s5p_mipi_initialize_mipi_client(struct device *dev,
+ struct mipi_dsim_device *dsim)
+{
+ int again = 1;
+ while (again == 1) {
+ pm_runtime_get_sync(dev);
+ clk_enable(dsim->clock);
+ s5p_mipi_dsi_init_dsim(dsim);
+ s5p_mipi_dsi_init_link(dsim);
+ s5p_mipi_dsi_enable_hs_clock(dsim, 1);
+ s5p_mipi_dsi_set_cpu_transfer_mode(dsim, 1);
+ s5p_mipi_dsi_set_display_mode(dsim, dsim->dsim_config);
+ s5p_mipi_dsi_clear_int_status(dsim, INTSRC_SFR_FIFO_EMPTY);
+		again = (dsim->dsim_lcd_drv->displayon(dsim) == 0);
+ s5p_mipi_dsi_set_cpu_transfer_mode(dsim, 0);
+ if (again == 1)
+ s5p_mipi_dsi_sw_reset(dsim);
+ }
+}
+
+#ifdef CONFIG_PM
+#ifdef CONFIG_HAS_EARLYSUSPEND
+static void s5p_mipi_dsi_early_suspend(struct early_suspend *handler)
+{
+ struct mipi_dsim_device *dsim =
+ container_of(handler, struct mipi_dsim_device, early_suspend);
+ struct platform_device *pdev = to_platform_device(dsim->dev);
+
+ dsim->dsim_lcd_drv->suspend(dsim);
+ s5p_mipi_dsi_d_phy_onoff(dsim, 0);
+ clk_disable(dsim->clock);
+ pm_runtime_put_sync(&pdev->dev);
+}
+
+static void s5p_mipi_dsi_late_resume(struct early_suspend *handler)
+{
+ struct mipi_dsim_device *dsim =
+ container_of(handler, struct mipi_dsim_device, early_suspend);
+ struct platform_device *pdev = to_platform_device(dsim->dev);
+ s5p_mipi_initialize_mipi_client(&pdev->dev, dsim);
+}
+#else
+static int s5p_mipi_dsi_suspend(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct mipi_dsim_device *dsim = platform_get_drvdata(pdev);
+
+ dsim->dsim_lcd_drv->suspend(dsim);
+ s5p_mipi_dsi_d_phy_onoff(dsim, 0);
+ clk_disable(dsim->clock);
+ pm_runtime_put_sync(dev);
+ return 0;
+}
+
+static int s5p_mipi_dsi_resume(struct device *dev)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct mipi_dsim_device *dsim = platform_get_drvdata(pdev);
+
+	s5p_mipi_initialize_mipi_client(dev, dsim);
+
+	return 0;
+}
+#endif
+static int s5p_mipi_dsi_runtime_suspend(struct device *dev)
+{
+ return 0;
+}
+
+static int s5p_mipi_dsi_runtime_resume(struct device *dev)
+{
+ return 0;
+}
+#else
+#define s5p_mipi_dsi_suspend NULL
+#define s5p_mipi_dsi_resume NULL
+#define s5p_mipi_dsi_runtime_suspend NULL
+#define s5p_mipi_dsi_runtime_resume NULL
+#endif
+
+
+static int s5p_mipi_dsi_probe(struct platform_device *pdev)
+{
+ struct resource *res;
+ struct mipi_dsim_device *dsim = NULL;
+ struct mipi_dsim_config *dsim_config;
+ struct s5p_platform_mipi_dsim *dsim_pd;
+ int ret = -1;
+
+	dsim = kzalloc(sizeof(struct mipi_dsim_device), GFP_KERNEL);
+	if (!dsim) {
+		dev_err(&pdev->dev, "failed to allocate dsim object.\n");
+		return -ENOMEM;
+	}
+
+ dsim->pd = to_dsim_plat(&pdev->dev);
+ dsim->dev = &pdev->dev;
+ dsim->id = pdev->id;
+
+ ret = s5p_mipi_dsi_register_fb(dsim);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to register fb notifier chain\n");
+		kfree(dsim);
+		return ret;
+ }
+
+ pm_runtime_enable(&pdev->dev);
+
+ /* get s5p_platform_mipi_dsim. */
+ dsim_pd = (struct s5p_platform_mipi_dsim *)dsim->pd;
+ /* get mipi_dsim_config. */
+ dsim_config = dsim_pd->dsim_config;
+ dsim->dsim_config = dsim_config;
+
+ dsim->clock = clk_get(&pdev->dev, dsim->pd->clk_name);
+ if (IS_ERR(dsim->clock)) {
+ dev_err(&pdev->dev, "failed to get dsim clock source\n");
+		ret = PTR_ERR(dsim->clock);
+		goto err_clock_get;
+ }
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ dev_err(&pdev->dev, "failed to get io memory region\n");
+ ret = -EINVAL;
+ goto err_platform_get;
+ }
+ res = request_mem_region(res->start, resource_size(res),
+ dev_name(&pdev->dev));
+ if (!res) {
+ dev_err(&pdev->dev, "failed to request io memory region\n");
+ ret = -EINVAL;
+ goto err_mem_region;
+ }
+
+ dsim->res = res;
+ dsim->reg_base = ioremap(res->start, resource_size(res));
+ if (!dsim->reg_base) {
+ dev_err(&pdev->dev, "failed to remap io region\n");
+ ret = -EINVAL;
+ goto err_mem_region;
+ }
+
+	/*
+	 * the frame done interrupt handler is used
+	 * only in MIPI video mode.
+	 */
+	if (dsim->pd->dsim_config->e_interface == DSIM_VIDEO) {
+		dsim->irq = platform_get_irq(pdev, 0);
+		ret = request_irq(dsim->irq, s5p_mipi_dsi_interrupt_handler,
+				IRQF_DISABLED, "mipi-dsi", dsim);
+		if (ret) {
+			dev_err(&pdev->dev, "request_irq failed.\n");
+			goto err_irq;
+		}
+	}
+
+	if (dsim->dsim_config == NULL) {
+		dev_err(&pdev->dev, "dsim_config is NULL.\n");
+		ret = -EINVAL;
+		goto err_dsim_config;
+	}
+
+	dsim->dsim_lcd_drv = dsim->dsim_config->dsim_ddi_pd;
+
+ dsim->dsim_lcd_drv->probe(dsim);
+
+ s5p_mipi_initialize_mipi_client(&pdev->dev, dsim);
+
+ dev_info(&pdev->dev, "mipi-dsi driver(%s mode) has been probed.\n",
+ (dsim_config->e_interface == DSIM_COMMAND) ?
+ "CPU" : "RGB");
+
+#ifdef CONFIG_HAS_EARLYSUSPEND
+ dsim->early_suspend.suspend = s5p_mipi_dsi_early_suspend;
+ dsim->early_suspend.resume = s5p_mipi_dsi_late_resume;
+ dsim->early_suspend.level = EARLY_SUSPEND_LEVEL_DISABLE_FB + 1;
+ register_early_suspend(&(dsim->early_suspend));
+#endif
+ platform_set_drvdata(pdev, dsim);
+
+ return 0;
+
+err_dsim_config:
+err_irq:
+ release_resource(dsim->res);
+ kfree(dsim->res);
+
+ iounmap((void __iomem *) dsim->reg_base);
+
+err_mem_region:
+err_platform_get:
+ clk_disable(dsim->clock);
+ clk_put(dsim->clock);
+
+err_clock_get:
+ kfree(dsim);
+ pm_runtime_put_sync(&pdev->dev);
+ return ret;
+
+}
+
+static int __devexit s5p_mipi_dsi_remove(struct platform_device *pdev)
+{
+ struct mipi_dsim_device *dsim = platform_get_drvdata(pdev);
+
+ if (dsim->dsim_config->e_interface == DSIM_VIDEO)
+ free_irq(dsim->irq, dsim);
+
+ iounmap(dsim->reg_base);
+
+ clk_disable(dsim->clock);
+ clk_put(dsim->clock);
+
+ release_resource(dsim->res);
+ kfree(dsim->res);
+
+ kfree(dsim);
+
+ return 0;
+}
+
+static const struct dev_pm_ops mipi_dsi_pm_ops = {
+#ifndef CONFIG_HAS_EARLYSUSPEND
+ .suspend = s5p_mipi_dsi_suspend,
+ .resume = s5p_mipi_dsi_resume,
+#endif
+ .runtime_suspend = s5p_mipi_dsi_runtime_suspend,
+ .runtime_resume = s5p_mipi_dsi_runtime_resume,
+};
+
+#ifdef CONFIG_OF
+static const struct of_device_id exynos_mipi_match[] = {
+	{ .compatible = "samsung,exynos-mipi" },
+	{},
+};
+MODULE_DEVICE_TABLE(of, exynos_mipi_match);
+#endif
+
+static struct platform_driver s5p_mipi_dsi_driver = {
+ .probe = s5p_mipi_dsi_probe,
+ .remove = __devexit_p(s5p_mipi_dsi_remove),
+ .driver = {
+ .name = "s5p-mipi-dsim",
+ .owner = THIS_MODULE,
+ .pm = &mipi_dsi_pm_ops,
+ .of_match_table = of_match_ptr(exynos_mipi_match),
+ },
+};
+
+static int s5p_mipi_dsi_register(void)
+{
+	return platform_driver_register(&s5p_mipi_dsi_driver);
+}
+
+static void s5p_mipi_dsi_unregister(void)
+{
+ platform_driver_unregister(&s5p_mipi_dsi_driver);
+}
+module_init(s5p_mipi_dsi_register);
+module_exit(s5p_mipi_dsi_unregister);
+
+MODULE_AUTHOR("InKi Dae <inki.dae@samsung.com>");
+MODULE_DESCRIPTION("Samusung MIPI-DSI driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+/* linux/drivers/video/s5p_mipi_dsi.h
+ *
+ * Header file for Samsung MIPI-DSI common driver.
+ *
+ * Copyright (c) 2011 Samsung Electronics
+ * InKi Dae <inki.dae@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#ifndef _S5P_MIPI_DSI_H
+#define _S5P_MIPI_DSI_H
+
+/* MIPI-DSIM status types. */
+enum {
+ DSIM_STATE_INIT, /* should be initialized. */
+ DSIM_STATE_STOP, /* CPU and LCDC are LP mode. */
+ DSIM_STATE_HSCLKEN, /* HS clock was enabled. */
+ DSIM_STATE_ULPS
+};
+
+/* define DSI lane types. */
+enum {
+ DSIM_LANE_CLOCK = (1 << 0),
+ DSIM_LANE_DATA0 = (1 << 1),
+ DSIM_LANE_DATA1 = (1 << 2),
+ DSIM_LANE_DATA2 = (1 << 3),
+ DSIM_LANE_DATA3 = (1 << 4),
+};
+
+#define MHZ (1000 * 1000)
+#define FIN_HZ (24 * MHZ)
+
+#define DFIN_PLL_MIN_HZ (6 * MHZ)
+#define DFIN_PLL_MAX_HZ (12 * MHZ)
+
+#define DFVCO_MIN_HZ (500 * MHZ)
+#define DFVCO_MAX_HZ (1000 * MHZ)
+
+#define TRY_GET_FIFO_TIMEOUT (5000 * 2)
+
+#define DSIM_ESCCLK_ON (0x1)
+#define DSIM_ESCCLK_OFF (0x0)
+
+#define DSIM_CMD_LEN (0xf)
+
+int s5p_mipi_dsi_wr_data(struct mipi_dsim_device *dsim, unsigned int data_id,
+	unsigned int data0, unsigned int data1);
+#endif /* _S5P_MIPI_DSI_H */
--- /dev/null
+/* linux/drivers/video/s5p_mipi_dsi_lowlevel.c
+ *
+ * Samsung MIPI-DSI lowlevel driver.
+ *
+ * Copyright (c) 2011 Samsung Electronics
+ *
+ * InKi Dae, <inki.dae@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/mutex.h>
+#include <linux/wait.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/mm.h>
+#include <linux/ctype.h>
+#include <linux/io.h>
+
+#include <mach/map.h>
+
+#include <plat/dsim.h>
+#include <plat/regs-mipidsim.h>
+
+void s5p_mipi_dsi_func_reset(struct mipi_dsim_device *dsim)
+{
+ unsigned int reg;
+
+ reg = readl(dsim->reg_base + S5P_DSIM_SWRST);
+
+ reg |= DSIM_FUNCRST;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_SWRST);
+}
+
+void s5p_mipi_dsi_sw_reset(struct mipi_dsim_device *dsim)
+{
+ unsigned int reg;
+
+ reg = readl(dsim->reg_base + S5P_DSIM_SWRST);
+
+ reg |= DSIM_SWRST;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_SWRST);
+}
+
+void s5p_mipi_dsi_set_interrupt_mask(struct mipi_dsim_device *dsim,
+ unsigned int mode, unsigned int mask)
+{
+ unsigned int reg = readl(dsim->reg_base + S5P_DSIM_INTMSK);
+
+ if (mask)
+ reg |= mode;
+ else
+ reg &= ~mode;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_INTMSK);
+}
+
+void s5p_mipi_dsi_init_fifo_pointer(struct mipi_dsim_device *dsim,
+ unsigned int cfg)
+{
+ unsigned int reg;
+
+ reg = readl(dsim->reg_base + S5P_DSIM_FIFOCTRL);
+
+ writel(reg & ~(cfg), dsim->reg_base + S5P_DSIM_FIFOCTRL);
+ mdelay(10);
+ reg |= cfg;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_FIFOCTRL);
+}
+
+/*
+ * this function sets the AFC control value in the D-PHY
+ */
+void s5p_mipi_dsi_set_phy_tunning(struct mipi_dsim_device *dsim,
+ unsigned int value)
+{
+ writel(DSIM_AFC_CTL(value), dsim->reg_base + S5P_DSIM_PHYACCHR);
+}
+
+void s5p_mipi_dsi_set_main_disp_resol(struct mipi_dsim_device *dsim,
+ unsigned int vert_resol, unsigned int hori_resol)
+{
+ unsigned int reg;
+
+	/* standby must be set after configuration, so clear it first */
+ reg = (readl(dsim->reg_base + S5P_DSIM_MDRESOL)) &
+ ~(DSIM_MAIN_STAND_BY);
+ writel(reg, dsim->reg_base + S5P_DSIM_MDRESOL);
+
+ reg &= ~(0x7ff << 16) & ~(0x7ff << 0);
+ reg |= DSIM_MAIN_VRESOL(vert_resol) | DSIM_MAIN_HRESOL(hori_resol);
+
+ reg |= DSIM_MAIN_STAND_BY;
+ writel(reg, dsim->reg_base + S5P_DSIM_MDRESOL);
+}
+
+void s5p_mipi_dsi_set_main_disp_vporch(struct mipi_dsim_device *dsim,
+ unsigned int cmd_allow, unsigned int vfront, unsigned int vback)
+{
+ unsigned int reg;
+
+ reg = (readl(dsim->reg_base + S5P_DSIM_MVPORCH)) &
+ ~(DSIM_CMD_ALLOW_MASK) & ~(DSIM_STABLE_VFP_MASK) &
+ ~(DSIM_MAIN_VBP_MASK);
+
+ reg |= ((cmd_allow & 0xf) << DSIM_CMD_ALLOW_SHIFT) |
+ ((vfront & 0x7ff) << DSIM_STABLE_VFP_SHIFT) |
+ ((vback & 0x7ff) << DSIM_MAIN_VBP_SHIFT);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_MVPORCH);
+}
+
+void s5p_mipi_dsi_set_main_disp_hporch(struct mipi_dsim_device *dsim,
+ unsigned int front, unsigned int back)
+{
+ unsigned int reg;
+
+ reg = (readl(dsim->reg_base + S5P_DSIM_MHPORCH)) &
+ ~(DSIM_MAIN_HFP_MASK) & ~(DSIM_MAIN_HBP_MASK);
+
+ reg |= (front << DSIM_MAIN_HFP_SHIFT) | (back << DSIM_MAIN_HBP_SHIFT);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_MHPORCH);
+}
+
+void s5p_mipi_dsi_set_main_disp_sync_area(struct mipi_dsim_device *dsim,
+ unsigned int vert, unsigned int hori)
+{
+ unsigned int reg;
+
+ reg = (readl(dsim->reg_base + S5P_DSIM_MSYNC)) &
+ ~(DSIM_MAIN_VSA_MASK) & ~(DSIM_MAIN_HSA_MASK);
+
+ reg |= ((vert & 0x3ff) << DSIM_MAIN_VSA_SHIFT) |
+ (hori << DSIM_MAIN_HSA_SHIFT);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_MSYNC);
+}
+
+void s5p_mipi_dsi_set_sub_disp_resol(struct mipi_dsim_device *dsim,
+ unsigned int vert, unsigned int hori)
+{
+ unsigned int reg;
+
+ reg = (readl(dsim->reg_base + S5P_DSIM_SDRESOL)) &
+ ~(DSIM_SUB_STANDY_MASK);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_SDRESOL);
+
+	reg &= ~(DSIM_SUB_VRESOL_MASK) & ~(DSIM_SUB_HRESOL_MASK);
+ reg |= ((vert & 0x7ff) << DSIM_SUB_VRESOL_SHIFT) |
+ ((hori & 0x7ff) << DSIM_SUB_HRESOL_SHIFT);
+ writel(reg, dsim->reg_base + S5P_DSIM_SDRESOL);
+
+ reg |= (1 << DSIM_SUB_STANDY_SHIFT);
+ writel(reg, dsim->reg_base + S5P_DSIM_SDRESOL);
+}
+
+void s5p_mipi_dsi_init_config(struct mipi_dsim_device *dsim)
+{
+ struct mipi_dsim_config *dsim_config = dsim->dsim_config;
+
+ unsigned int cfg = (readl(dsim->reg_base + S5P_DSIM_CONFIG)) &
+ ~(1 << 28) & ~(0x1f << 20) & ~(0x3 << 5);
+
+	cfg |= (dsim_config->auto_flush << 29) |
+ (dsim_config->eot_disable << 28) |
+ (dsim_config->auto_vertical_cnt << DSIM_AUTO_MODE_SHIFT) |
+ (dsim_config->hse << DSIM_HSE_MODE_SHIFT) |
+ (dsim_config->hfp << DSIM_HFP_MODE_SHIFT) |
+ (dsim_config->hbp << DSIM_HBP_MODE_SHIFT) |
+ (dsim_config->hsa << DSIM_HSA_MODE_SHIFT) |
+ (dsim_config->e_no_data_lane << DSIM_NUM_OF_DATALANE_SHIFT);
+
+ writel(cfg, dsim->reg_base + S5P_DSIM_CONFIG);
+}
+
+void s5p_mipi_dsi_display_config(struct mipi_dsim_device *dsim)
+{
+ u32 reg = (readl(dsim->reg_base + S5P_DSIM_CONFIG)) &
+ ~(0x3 << 26) & ~(1 << 25) & ~(0x3 << 18) & ~(0x7 << 12) &
+ ~(0x3 << 16) & ~(0x7 << 8);
+
+ if (dsim->pd->dsim_config->e_interface == DSIM_VIDEO)
+ reg |= (1 << 25);
+ else if (dsim->pd->dsim_config->e_interface == DSIM_COMMAND)
+ reg &= ~(1 << 25);
+ else {
+ dev_err(dsim->dev, "this ddi is not MIPI interface.\n");
+ return;
+ }
+
+ /* main lcd */
+ reg |= ((u8) (dsim->pd->dsim_config->e_burst_mode) & 0x3) << 26 |
+ ((u8) (dsim->pd->dsim_config->e_virtual_ch) & 0x3) << 18 |
+ ((u8) (dsim->pd->dsim_config->e_pixel_format) & 0x7) << 12;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_CONFIG);
+}
+
+void s5p_mipi_dsi_enable_lane(struct mipi_dsim_device *dsim, unsigned int lane,
+ unsigned int enable)
+{
+ unsigned int reg;
+
+ reg = readl(dsim->reg_base + S5P_DSIM_CONFIG);
+
+ if (enable)
+ reg |= DSIM_LANE_ENx(lane);
+ else
+ reg &= ~DSIM_LANE_ENx(lane);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_CONFIG);
+}
+
+
+void s5p_mipi_dsi_set_data_lane_number(struct mipi_dsim_device *dsim,
+ unsigned int count)
+{
+ unsigned int cfg;
+
+	/* set the number of data lanes. */
+ cfg = DSIM_NUM_OF_DATA_LANE(count);
+
+ writel(cfg, dsim->reg_base + S5P_DSIM_CONFIG);
+}
+
+void s5p_mipi_dsi_enable_afc(struct mipi_dsim_device *dsim, unsigned int enable,
+ unsigned int afc_code)
+{
+ unsigned int reg = readl(dsim->reg_base + S5P_DSIM_PHYACCHR);
+
+ if (enable) {
+ reg |= (1 << 14);
+ reg &= ~(0x7 << 5);
+ reg |= (afc_code & 0x7) << 5;
+ } else
+ reg &= ~(1 << 14);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_PHYACCHR);
+}
+
+void s5p_mipi_dsi_enable_pll_bypass(struct mipi_dsim_device *dsim,
+ unsigned int enable)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_CLKCTRL)) &
+ ~(DSIM_PLL_BYPASS_EXTERNAL);
+
+ reg |= enable << DSIM_PLL_BYPASS_SHIFT;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_CLKCTRL);
+}
+
+void s5p_mipi_dsi_set_pll_pms(struct mipi_dsim_device *dsim, unsigned int p,
+ unsigned int m, unsigned int s)
+{
+ unsigned int reg = readl(dsim->reg_base + S5P_DSIM_PLLCTRL);
+
+ reg |= ((p & 0x3f) << 13) | ((m & 0x1ff) << 4) | ((s & 0x7) << 1);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_PLLCTRL);
+}
+
+void s5p_mipi_dsi_pll_freq_band(struct mipi_dsim_device *dsim,
+ unsigned int freq_band)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_PLLCTRL)) &
+ ~(0x1f << DSIM_FREQ_BAND_SHIFT);
+
+ reg |= ((freq_band & 0x1f) << DSIM_FREQ_BAND_SHIFT);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_PLLCTRL);
+}
+
+void s5p_mipi_dsi_pll_freq(struct mipi_dsim_device *dsim,
+ unsigned int pre_divider, unsigned int main_divider,
+ unsigned int scaler)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_PLLCTRL)) &
+ ~(0x7ffff << 1);
+
+ reg |= (pre_divider & 0x3f) << 13 | (main_divider & 0x1ff) << 4 |
+ (scaler & 0x7) << 1;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_PLLCTRL);
+}
+
+void s5p_mipi_dsi_pll_stable_time(struct mipi_dsim_device *dsim,
+ unsigned int lock_time)
+{
+ writel(lock_time, dsim->reg_base + S5P_DSIM_PLLTMR);
+}
+
+void s5p_mipi_dsi_enable_pll(struct mipi_dsim_device *dsim, unsigned int enable)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_PLLCTRL)) &
+ ~(0x1 << DSIM_PLL_EN_SHIFT);
+
+ reg |= ((enable & 0x1) << DSIM_PLL_EN_SHIFT);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_PLLCTRL);
+}
+
+void s5p_mipi_dsi_set_byte_clock_src(struct mipi_dsim_device *dsim,
+ unsigned int src)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_CLKCTRL)) &
+ ~(0x3 << DSIM_BYTE_CLK_SRC_SHIFT);
+
+ reg |= ((unsigned int) src) << DSIM_BYTE_CLK_SRC_SHIFT;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_CLKCTRL);
+}
+
+void s5p_mipi_dsi_enable_byte_clock(struct mipi_dsim_device *dsim,
+ unsigned int enable)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_CLKCTRL)) &
+ ~(1 << DSIM_BYTE_CLKEN_SHIFT);
+
+ reg |= enable << DSIM_BYTE_CLKEN_SHIFT;
+ writel(reg, dsim->reg_base + S5P_DSIM_CLKCTRL);
+}
+
+void s5p_mipi_dsi_set_esc_clk_prs(struct mipi_dsim_device *dsim,
+ unsigned int enable, unsigned int prs_val)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_CLKCTRL)) &
+ ~(1 << DSIM_ESC_CLKEN_SHIFT) & ~(0xffff);
+
+ reg |= enable << DSIM_ESC_CLKEN_SHIFT;
+ if (enable)
+ reg |= prs_val;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_CLKCTRL);
+}
+
+void s5p_mipi_dsi_enable_esc_clk_on_lane(struct mipi_dsim_device *dsim,
+ unsigned int lane_sel, unsigned int enable)
+{
+ unsigned int reg = readl(dsim->reg_base + S5P_DSIM_CLKCTRL);
+
+	if (enable)
+		reg |= DSIM_LANE_ESC_CLKEN(lane_sel);
+	else
+		reg &= ~DSIM_LANE_ESC_CLKEN(lane_sel);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_CLKCTRL);
+}
+
+void s5p_mipi_dsi_force_dphy_stop_state(struct mipi_dsim_device *dsim,
+ unsigned int enable)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_ESCMODE)) &
+ ~(0x1 << DSIM_FORCE_STOP_STATE_SHIFT);
+
+ reg |= ((enable & 0x1) << DSIM_FORCE_STOP_STATE_SHIFT);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_ESCMODE);
+}
+
+unsigned int s5p_mipi_dsi_is_lane_state(struct mipi_dsim_device *dsim)
+{
+ unsigned int reg = readl(dsim->reg_base + S5P_DSIM_STATUS);
+	/*
+	 * check clock and data lane states.
+	 * if the MIPI-DSI controller was enabled by the bootloader then
+	 * TX_READY_HS_CLK is set, otherwise STOP_STATE_CLK is set,
+	 * so both cases must be checked.
+	 */
+	if ((reg & DSIM_STOP_STATE_DAT(0xf)) &&
+		((reg & DSIM_STOP_STATE_CLK) ||
+		(reg & DSIM_TX_READY_HS_CLK)))
+		return 1;
+
+	return 0;
+}
+
+void s5p_mipi_dsi_set_stop_state_counter(struct mipi_dsim_device *dsim,
+ unsigned int cnt_val)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_ESCMODE)) &
+ ~(0x7ff << DSIM_STOP_STATE_CNT_SHIFT);
+
+ reg |= ((cnt_val & 0x7ff) << DSIM_STOP_STATE_CNT_SHIFT);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_ESCMODE);
+}
+
+void s5p_mipi_dsi_set_bta_timeout(struct mipi_dsim_device *dsim,
+ unsigned int timeout)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_TIMEOUT)) &
+ ~(0xff << DSIM_BTA_TOUT_SHIFT);
+
+ reg |= (timeout << DSIM_BTA_TOUT_SHIFT);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_TIMEOUT);
+}
+
+void s5p_mipi_dsi_set_lpdr_timeout(struct mipi_dsim_device *dsim,
+ unsigned int timeout)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_TIMEOUT)) &
+ ~(0xffff << DSIM_LPDR_TOUT_SHIFT);
+
+ reg |= (timeout << DSIM_LPDR_TOUT_SHIFT);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_TIMEOUT);
+}
+
+void s5p_mipi_dsi_set_cpu_transfer_mode(struct mipi_dsim_device *dsim,
+ unsigned int lp)
+{
+ unsigned int reg = readl(dsim->reg_base + S5P_DSIM_ESCMODE);
+
+ reg &= ~DSIM_CMD_LPDT_LP;
+
+ reg |= lp << DSIM_CMD_LPDT_SHIFT;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_ESCMODE);
+}
+
+void s5p_mipi_dsi_set_lcdc_transfer_mode(struct mipi_dsim_device *dsim,
+ unsigned int lp)
+{
+ unsigned int reg = readl(dsim->reg_base + S5P_DSIM_ESCMODE);
+
+ reg &= ~DSIM_TX_LPDT_LP;
+
+ reg |= lp << DSIM_TX_LPDT_SHIFT;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_ESCMODE);
+}
+
+void s5p_mipi_dsi_enable_hs_clock(struct mipi_dsim_device *dsim,
+ unsigned int enable)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_CLKCTRL)) &
+ ~(1 << DSIM_TX_REQUEST_HSCLK_SHIFT);
+
+ reg |= enable << DSIM_TX_REQUEST_HSCLK_SHIFT;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_CLKCTRL);
+}
+
+void s5p_mipi_dsi_dp_dn_swap(struct mipi_dsim_device *dsim,
+ unsigned int swap_en)
+{
+ unsigned int reg = readl(dsim->reg_base + S5P_DSIM_PHYACCHR1);
+
+ reg &= ~(0x3 << 0);
+ reg |= (swap_en & 0x3) << 0;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_PHYACCHR1);
+}
+
+void s5p_mipi_dsi_hs_zero_ctrl(struct mipi_dsim_device *dsim,
+ unsigned int hs_zero)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_PLLCTRL)) &
+ ~(0xf << 28);
+
+ reg |= ((hs_zero & 0xf) << 28);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_PLLCTRL);
+}
+
+void s5p_mipi_dsi_prep_ctrl(struct mipi_dsim_device *dsim, unsigned int prep)
+{
+ unsigned int reg = (readl(dsim->reg_base + S5P_DSIM_PLLCTRL)) &
+ ~(0x7 << 20);
+
+ reg |= ((prep & 0x7) << 20);
+
+ writel(reg, dsim->reg_base + S5P_DSIM_PLLCTRL);
+}
+
+void s5p_mipi_dsi_clear_interrupt(struct mipi_dsim_device *dsim,
+ unsigned int int_src)
+{
+ writel(int_src, dsim->reg_base + S5P_DSIM_INTSRC);
+}
+
+void s5p_mipi_dsi_clear_all_interrupt(struct mipi_dsim_device *dsim)
+{
+ unsigned int reg = readl(dsim->reg_base + S5P_DSIM_INTSRC);
+
+ reg |= 0xffffffff;
+
+ writel(reg, dsim->reg_base + S5P_DSIM_INTSRC);
+}
+
+unsigned int s5p_mipi_dsi_is_pll_stable(struct mipi_dsim_device *dsim)
+{
+ unsigned int reg;
+
+ reg = readl(dsim->reg_base + S5P_DSIM_STATUS);
+
+	return (reg & (1u << 31)) ? 1 : 0;
+}
+
+unsigned int s5p_mipi_dsi_get_fifo_state(struct mipi_dsim_device *dsim)
+{
+ unsigned int ret;
+
+ ret = readl(dsim->reg_base + S5P_DSIM_FIFOCTRL) & ~(0x1f);
+
+ return ret;
+}
+
+void s5p_mipi_dsi_wr_tx_header(struct mipi_dsim_device *dsim,
+ unsigned int di, unsigned int data0, unsigned int data1)
+{
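+	/* packet header layout: DI in bits 5:0, data0 in 15:8, data1 in 23:16 */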
+ unsigned int reg = (data1 << 16) | (data0 << 8) | ((di & 0x3f) << 0);
+ writel(reg, dsim->reg_base + S5P_DSIM_PKTHDR);
+}
+
+unsigned int _s5p_mipi_dsi_get_frame_done_status(struct mipi_dsim_device *dsim)
+{
+ unsigned int reg = readl(dsim->reg_base + S5P_DSIM_INTSRC);
+
+ return (reg & INTSRC_FRAME_DONE) ? 1 : 0;
+}
+
+void _s5p_mipi_dsi_clear_frame_done(struct mipi_dsim_device *dsim)
+{
+ unsigned int reg = readl(dsim->reg_base + S5P_DSIM_INTSRC);
+
+ writel(reg | INTSRC_FRAME_DONE, dsim->reg_base +
+ S5P_DSIM_INTSRC);
+}
+
+void s5p_mipi_dsi_wr_tx_data(struct mipi_dsim_device *dsim,
+ unsigned int tx_data)
+{
+ writel(tx_data, dsim->reg_base + S5P_DSIM_PAYLOAD);
+}
+
+unsigned int s5p_mipi_dsi_get_int_status(struct mipi_dsim_device *dsim)
+{
+ return readl(dsim->reg_base + S5P_DSIM_INTSRC);
+}
+
+void s5p_mipi_dsi_clear_int_status(struct mipi_dsim_device *dsim,
+ unsigned int intSrc)
+{
+ writel(intSrc, dsim->reg_base + S5P_DSIM_INTSRC);
+}
+
+unsigned int s5p_mipi_dsi_get_FIFOCTRL_status(struct mipi_dsim_device *dsim)
+{
+ return readl(dsim->reg_base + S5P_DSIM_FIFOCTRL);
+}
--- /dev/null
+/* linux/drivers/video/s5p_mipi_dsi_lowlevel.h
+ *
+ * Header file for Samsung MIPI-DSI lowlevel driver.
+ *
+ * Copyright (c) 2011 Samsung Electronics
+ * InKi Dae <inki.dae@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#ifndef _S5P_MIPI_DSI_LOWLEVEL_H
+#define _S5P_MIPI_DSI_LOWLEVEL_H
+
+void s5p_mipi_dsi_func_reset(struct mipi_dsim_device *dsim);
+void s5p_mipi_dsi_sw_reset(struct mipi_dsim_device *dsim);
+void s5p_mipi_dsi_set_interrupt_mask(struct mipi_dsim_device *dsim,
+ unsigned int mode, unsigned int mask);
+void s5p_mipi_dsi_set_data_lane_number(struct mipi_dsim_device *dsim,
+ unsigned int count);
+void s5p_mipi_dsi_init_fifo_pointer(struct mipi_dsim_device *dsim,
+ unsigned int cfg);
+void s5p_mipi_dsi_set_phy_tunning(struct mipi_dsim_device *dsim,
+ unsigned int value);
+void s5p_mipi_dsi_set_main_disp_resol(struct mipi_dsim_device *dsim,
+ unsigned int vert_resol, unsigned int hori_resol);
+void s5p_mipi_dsi_set_main_disp_vporch(struct mipi_dsim_device *dsim,
+ unsigned int cmd_allow, unsigned int vfront, unsigned int vback);
+void s5p_mipi_dsi_set_main_disp_hporch(struct mipi_dsim_device *dsim,
+ unsigned int front, unsigned int back);
+void s5p_mipi_dsi_set_main_disp_sync_area(struct mipi_dsim_device *dsim,
+ unsigned int vert, unsigned int hori);
+void s5p_mipi_dsi_set_sub_disp_resol(struct mipi_dsim_device *dsim,
+ unsigned int vert, unsigned int hori);
+void s5p_mipi_dsi_init_config(struct mipi_dsim_device *dsim);
+void s5p_mipi_dsi_display_config(struct mipi_dsim_device *dsim);
+void s5p_mipi_dsi_enable_lane(struct mipi_dsim_device *dsim,
+ unsigned int lane, unsigned int enable);
+void s5p_mipi_dsi_enable_afc(struct mipi_dsim_device *dsim,
+ unsigned int enable, unsigned int afc_code);
+void s5p_mipi_dsi_enable_pll_bypass(struct mipi_dsim_device *dsim,
+ unsigned int enable);
+void s5p_mipi_dsi_set_pll_pms(struct mipi_dsim_device *dsim,
+ unsigned int p, unsigned int m, unsigned int s);
+void s5p_mipi_dsi_pll_freq_band(struct mipi_dsim_device *dsim,
+ unsigned int freq_band);
+void s5p_mipi_dsi_pll_freq(struct mipi_dsim_device *dsim,
+ unsigned int pre_divider, unsigned int main_divider,
+ unsigned int scaler);
+void s5p_mipi_dsi_pll_stable_time(struct mipi_dsim_device *dsim,
+ unsigned int lock_time);
+void s5p_mipi_dsi_enable_pll(struct mipi_dsim_device *dsim,
+ unsigned int enable);
+void s5p_mipi_dsi_set_byte_clock_src(struct mipi_dsim_device *dsim,
+ unsigned int src);
+void s5p_mipi_dsi_enable_byte_clock(struct mipi_dsim_device *dsim,
+ unsigned int enable);
+void s5p_mipi_dsi_set_esc_clk_prs(struct mipi_dsim_device *dsim,
+ unsigned int enable, unsigned int prs_val);
+void s5p_mipi_dsi_enable_esc_clk_on_lane(struct mipi_dsim_device *dsim,
+ unsigned int lane_sel, unsigned int enable);
+void s5p_mipi_dsi_force_dphy_stop_state(struct mipi_dsim_device *dsim,
+ unsigned int enable);
+unsigned int s5p_mipi_dsi_is_lane_state(struct mipi_dsim_device *dsim);
+void s5p_mipi_dsi_set_stop_state_counter(struct mipi_dsim_device *dsim,
+ unsigned int cnt_val);
+void s5p_mipi_dsi_set_bta_timeout(struct mipi_dsim_device *dsim,
+ unsigned int timeout);
+void s5p_mipi_dsi_set_lpdr_timeout(struct mipi_dsim_device *dsim,
+ unsigned int timeout);
+void s5p_mipi_dsi_set_lcdc_transfer_mode(struct mipi_dsim_device *dsim,
+ unsigned int lp);
+void s5p_mipi_dsi_set_cpu_transfer_mode(struct mipi_dsim_device *dsim,
+ unsigned int lp);
+void s5p_mipi_dsi_enable_hs_clock(struct mipi_dsim_device *dsim,
+ unsigned int enable);
+void s5p_mipi_dsi_dp_dn_swap(struct mipi_dsim_device *dsim,
+ unsigned int swap_en);
+void s5p_mipi_dsi_hs_zero_ctrl(struct mipi_dsim_device *dsim,
+ unsigned int hs_zero);
+void s5p_mipi_dsi_prep_ctrl(struct mipi_dsim_device *dsim,
+	unsigned int prep);
+void s5p_mipi_dsi_clear_interrupt(struct mipi_dsim_device *dsim,
+ unsigned int int_src);
+void s5p_mipi_dsi_clear_all_interrupt(struct mipi_dsim_device *dsim);
+unsigned int s5p_mipi_dsi_is_pll_stable(struct mipi_dsim_device *dsim);
+unsigned int s5p_mipi_dsi_get_fifo_state(struct mipi_dsim_device *dsim);
+unsigned int _s5p_mipi_dsi_get_frame_done_status(struct mipi_dsim_device *dsim);
+void _s5p_mipi_dsi_clear_frame_done(struct mipi_dsim_device *dsim);
+void s5p_mipi_dsi_wr_tx_header(struct mipi_dsim_device *dsim,
+ unsigned int di, unsigned int data0, unsigned int data1);
+void s5p_mipi_dsi_wr_tx_data(struct mipi_dsim_device *dsim,
+ unsigned int tx_data);
+unsigned int s5p_mipi_dsi_get_int_status(struct mipi_dsim_device *dsim);
+void s5p_mipi_dsi_clear_int_status(struct mipi_dsim_device *dsim,
+ unsigned int intSrc);
+unsigned int s5p_mipi_dsi_get_FIFOCTRL_status(struct mipi_dsim_device *dsim);
+
+#endif /* _S5P_MIPI_DSI_LOWLEVEL_H */
#ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT
/*
- * These two functions are only for dma allocator.
+ * These three functions are only for dma allocator.
* Don't use them in device drivers.
*/
int dma_alloc_from_coherent(struct device *dev, ssize_t size,
dma_addr_t *dma_handle, void **ret);
int dma_release_from_coherent(struct device *dev, int order, void *vaddr);
+int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
+ void *cpu_addr, size_t size, int *ret);
/*
* Standard interface
*/
--- /dev/null
+#ifndef ASM_DMA_CONTIGUOUS_H
+#define ASM_DMA_CONTIGUOUS_H
+
+#ifdef __KERNEL__
+#ifdef CONFIG_CMA
+
+#include <linux/device.h>
+#include <linux/dma-contiguous.h>
+
+static inline struct cma *dev_get_cma_area(struct device *dev)
+{
+ if (dev && dev->cma_area)
+ return dev->cma_area;
+ return dma_contiguous_default_area;
+}
+
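+/* a NULL device, or the very first call, also sets the global default area */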
+static inline void dev_set_cma_area(struct device *dev, struct cma *cma)
+{
+ if (dev)
+ dev->cma_area = cma;
+ if (!dev || !dma_contiguous_default_area)
+ dma_contiguous_default_area = cma;
+}
+
+#endif
+#endif
+
+#endif
#define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, NULL)
#define dma_unmap_sg(d, s, n, r) dma_unmap_sg_attrs(d, s, n, r, NULL)
+int
+dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t dma_addr, size_t size);
+
+static inline int
+dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt, void *cpu_addr,
+ dma_addr_t dma_addr, size_t size, struct dma_attrs *attrs)
+{
+ struct dma_map_ops *ops = get_dma_ops(dev);
+ BUG_ON(!ops);
+ if (ops->get_sgtable)
+ return ops->get_sgtable(dev, sgt, cpu_addr, dma_addr, size,
+ attrs);
+ return dma_common_get_sgtable(dev, sgt, cpu_addr, dma_addr, size);
+}
+
+#define dma_get_sgtable(d, t, v, h, s) dma_get_sgtable_attrs(d, t, v, h, s, NULL)
+
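+/*
+ * Typical use (a sketch; assumes cpu_addr and dma_handle come from a prior
+ * dma_alloc_* call on the same device):
+ *
+ *	struct sg_table sgt;
+ *	int ret = dma_get_sgtable(dev, &sgt, cpu_addr, dma_handle, size);
+ *
+ * On success the caller maps or passes on the table, then releases it
+ * with sg_free_table(&sgt).
+ */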
#endif
u32 height_mm;
};
+enum disp_panel_type {
+ MIPI_LCD,
+ DP_LCD
+};
+
/**
* Platform Specific Structure for DRM based FIMD.
*
u32 vidcon1;
unsigned int default_win;
unsigned int bpp;
+ unsigned int clock_rate;
+ enum disp_panel_type panel_type;
};
/**
--- /dev/null
+/*
+ * linux/include/linux/cpu_cooling.h
+ *
+ * Copyright (C) 2012 Samsung Electronics Co., Ltd(http://www.samsung.com)
+ * Copyright (C) 2012 Amit Daniel <amit.kachhap@linaro.org>
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ */
+
+#ifndef __CPU_COOLING_H__
+#define __CPU_COOLING_H__
+
+#include <linux/thermal.h>
+
+#define CPUFREQ_COOLING_START 0
+#define CPUFREQ_COOLING_STOP 1
+
+/**
+ * struct freq_clip_table
+ * @freq_clip_max: maximum frequency allowed for this cooling state.
+ * @temp_level: Temperature level at which the temperature clipping will
+ * happen.
+ * @mask_val: cpumask of the allowed cpu's where the clipping will take place.
+ *
+ * This structure is required to be filled and passed to the
+ * cpufreq_cooling_register function.
+ */
+struct freq_clip_table {
+ unsigned int freq_clip_max;
+ unsigned int temp_level;
+ const struct cpumask *mask_val;
+};
+
+/**
+ * cputherm_register_notifier - Register a notifier with cpu cooling interface.
+ * @nb: struct notifier_block * with callback info.
+ * @list: integer value for which notification is needed. possible values are
+ * CPUFREQ_COOLING_TYPE and CPUHOTPLUG_COOLING_TYPE.
+ *
+ * This exported function registers a driver with cpu cooling layer. The driver
+ * will be notified when any cpu cooling action is called.
+ */
+int cputherm_register_notifier(struct notifier_block *nb, unsigned int list);
+
+/**
+ * cputherm_unregister_notifier - Un-register a notifier.
+ * @nb: struct notifier_block * with callback info.
+ * @list: integer value for which notification is needed. possible values are
+ * CPUFREQ_COOLING_TYPE.
+ *
+ * This exported function un-registers a driver with cpu cooling layer.
+ */
+int cputherm_unregister_notifier(struct notifier_block *nb, unsigned int list);
+
+#ifdef CONFIG_CPU_FREQ
+/**
+ * cpufreq_cooling_register - function to create cpufreq cooling device.
+ * @tab_ptr: table ptr containing the maximum value of frequency to be clipped
+ * for each cooling state.
+ * @tab_size: count of entries in the above table.
+ */
+struct thermal_cooling_device *cpufreq_cooling_register(
+ struct freq_clip_table *tab_ptr, unsigned int tab_size);
+
+/**
+ * cpufreq_cooling_unregister - function to remove cpufreq cooling device.
+ * @cdev: thermal cooling device pointer.
+ */
+void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev);
+#else /*!CONFIG_CPU_FREQ*/
+static inline struct thermal_cooling_device *cpufreq_cooling_register(
+ struct freq_clip_table *tab_ptr, unsigned int tab_size)
+{
+ return NULL;
+}
+static inline void cpufreq_cooling_unregister(
+ struct thermal_cooling_device *cdev)
+{
+ return;
+}
+#endif /*CONFIG_CPU_FREQ*/
+
+#endif /* __CPU_COOLING_H__ */
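
A registration sketch (editorial; the clip frequencies and temperature levels are made-up values): a driver fills a freq_clip_table array and passes it to cpufreq_cooling_register().

	static struct freq_clip_table clip_tab[] = {
		{ .freq_clip_max = 1000000, .temp_level = 85,
		  .mask_val = cpu_online_mask },
		{ .freq_clip_max = 800000, .temp_level = 95,
		  .mask_val = cpu_online_mask },
	};
	struct thermal_cooling_device *cdev;

	cdev = cpufreq_cooling_register(clip_tab, ARRAY_SIZE(clip_tab));
	if (IS_ERR_OR_NULL(cdev))
		pr_warn("CPU cooling registration failed\n");
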
extern int __must_check driver_register(struct device_driver *drv);
extern void driver_unregister(struct device_driver *drv);
+extern void put_driver(struct device_driver *drv);
extern struct device_driver *driver_find(const char *name,
struct bus_type *bus);
extern int driver_probe_done(void);
struct dma_coherent_mem *dma_mem; /* internal for coherent mem
override */
+#ifdef CONFIG_CMA
+ struct cma *cma_area; /* contiguous memory area for dma
+ allocations */
+#endif
/* arch specific additions */
struct dev_archdata archdata;
DMA_ATTR_WEAK_ORDERING,
DMA_ATTR_WRITE_COMBINE,
DMA_ATTR_NON_CONSISTENT,
+ DMA_ATTR_NO_KERNEL_MAPPING,
+ DMA_ATTR_SKIP_CPU_SYNC,
DMA_ATTR_MAX,
};
* This Callback must not sleep.
* @kmap: maps a page from the buffer into kernel address space.
* @kunmap: [optional] unmaps a page from the buffer.
+ * @mmap: used to expose the backing storage to userspace. Note that the
+ * mapping needs to be coherent - if the exporter doesn't directly
+ * support this, it needs to fake coherency by shooting down any ptes
+ * when transitioning away from the cpu domain.
+ * @vmap: [optional] creates a virtual mapping for the buffer into kernel
+ * address space. Same restrictions as for vmap and friends apply.
+ * @vunmap: [optional] unmaps a vmap from the buffer
*/
struct dma_buf_ops {
int (*attach)(struct dma_buf *, struct device *,
void (*kunmap_atomic)(struct dma_buf *, unsigned long, void *);
void *(*kmap)(struct dma_buf *, unsigned long);
void (*kunmap)(struct dma_buf *, unsigned long, void *);
+
+ int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
+
+ void *(*vmap)(struct dma_buf *);
+ void (*vunmap)(struct dma_buf *, void *vaddr);
};
/**
void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *);
void *dma_buf_kmap(struct dma_buf *, unsigned long);
void dma_buf_kunmap(struct dma_buf *, unsigned long, void *);
+
+int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
+ unsigned long);
+void *dma_buf_vmap(struct dma_buf *);
+void dma_buf_vunmap(struct dma_buf *, void *vaddr);
#else
static inline struct dma_buf_attachment *dma_buf_attach(struct dma_buf *dmabuf,
unsigned long pnum, void *vaddr)
{
}
+
+static inline int dma_buf_mmap(struct dma_buf *dmabuf,
+ struct vm_area_struct *vma,
+ unsigned long pgoff)
+{
+ return -ENODEV;
+}
+
+static inline void *dma_buf_vmap(struct dma_buf *dmabuf)
+{
+ return NULL;
+}
+
+static inline void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+}
#endif /* CONFIG_DMA_SHARED_BUFFER */
#endif /* __DMA_BUF_H__ */
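
An importer-side sketch (editorial; fd is a file descriptor received from an exporter): the new vmap interface gives the kernel temporary CPU access to an imported buffer.

	struct dma_buf *dmabuf = dma_buf_get(fd);
	void *vaddr;

	if (!IS_ERR(dmabuf)) {
		vaddr = dma_buf_vmap(dmabuf);	/* may return NULL */
		if (vaddr) {
			/* ... CPU access to the buffer contents ... */
			dma_buf_vunmap(dmabuf, vaddr);
		}
		dma_buf_put(dmabuf);
	}
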
--- /dev/null
+#ifndef __LINUX_CMA_H
+#define __LINUX_CMA_H
+
+/*
+ * Contiguous Memory Allocator for DMA mapping framework
+ * Copyright (c) 2010-2011 by Samsung Electronics.
+ * Written by:
+ * Marek Szyprowski <m.szyprowski@samsung.com>
+ * Michal Nazarewicz <mina86@mina86.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version of the License.
+ */
+
+/*
+ * Contiguous Memory Allocator
+ *
+ * The Contiguous Memory Allocator (CMA) makes it possible to
+ * allocate big contiguous chunks of memory after the system has
+ * booted.
+ *
+ * Why is it needed?
+ *
+ * Various devices on embedded systems have no scatter/gather and/or
+ * IO map support and require contiguous blocks of memory to
+ * operate. They include devices such as cameras, hardware video
+ * encoders and decoders, etc.
+ *
+ * Such devices often require big memory buffers (a full HD frame
+ * is, for instance, more than 2 megapixels large, i.e. more than 6
+ * MB of memory), which makes mechanisms such as kmalloc() or
+ * alloc_page() ineffective.
+ *
+ * At the same time, a solution where a big memory region is
+ * reserved for a device is suboptimal since often more memory is
+ * reserved than strictly required and, moreover, the memory is
+ * inaccessible to the page allocator even if device drivers don't use it.
+ *
+ * CMA tries to solve this issue by operating on memory regions
+ * where only movable pages can be allocated from. This way, the
+ * kernel can use the memory for pagecache and, when a device driver
+ * requests it, the allocated pages can be migrated.
+ *
+ * Driver usage
+ *
+ * CMA should not be used by the device drivers directly. It is
+ * only a helper framework for dma-mapping subsystem.
+ *
+ * For more information, see kernel-docs in drivers/base/dma-contiguous.c
+ */
+
+#ifdef __KERNEL__
+
+struct cma;
+struct page;
+struct device;
+
+#ifdef CONFIG_CMA
+
+/*
+ * There is always at least the global CMA area and a few optional
+ * device-private areas configured in the kernel .config.
+ */
+#define MAX_CMA_AREAS (1 + CONFIG_CMA_AREAS)
+
+extern struct cma *dma_contiguous_default_area;
+
+void dma_contiguous_reserve(phys_addr_t addr_limit);
+int dma_declare_contiguous(struct device *dev, unsigned long size,
+ phys_addr_t base, phys_addr_t limit);
+
+struct page *dma_alloc_from_contiguous(struct device *dev, int count,
+ unsigned int order);
+bool dma_release_from_contiguous(struct device *dev, struct page *pages,
+ int count);
+
+#else
+
+#define MAX_CMA_AREAS (0)
+
+static inline void dma_contiguous_reserve(phys_addr_t limit) { }
+
+static inline
+int dma_declare_contiguous(struct device *dev, unsigned long size,
+ phys_addr_t base, phys_addr_t limit)
+{
+ return -ENOSYS;
+}
+
+static inline
+struct page *dma_alloc_from_contiguous(struct device *dev, int count,
+ unsigned int order)
+{
+ return NULL;
+}
+
+static inline
+bool dma_release_from_contiguous(struct device *dev, struct page *pages,
+ int count)
+{
+ return false;
+}
+
+#endif
+
+#endif
+
+#endif
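
A reservation sketch (editorial; the platform device and size are placeholders): board code declares a device-private CMA area at early boot, and a base/limit of 0 leaves the placement up to the allocator.

	/* Reserve a 16 MiB device-private contiguous area at early boot. */
	ret = dma_declare_contiguous(&example_pdev.dev, 16 * SZ_1M, 0, 0);
	if (ret)
		pr_err("CMA: reservation failed (%d)\n", ret);
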
int (*mmap)(struct device *, struct vm_area_struct *,
void *, dma_addr_t, size_t, struct dma_attrs *attrs);
+ int (*get_sgtable)(struct device *dev, struct sg_table *sgt, void *,
+ dma_addr_t, size_t, struct dma_attrs *attrs);
+
dma_addr_t (*map_page)(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir,
}
#endif /* CONFIG_PM_SLEEP */
+#ifdef CONFIG_CMA
+
+/* The below functions must be run on a range from a single zone. */
+extern int alloc_contig_range(unsigned long start, unsigned long end,
+ unsigned migratetype);
+extern void free_contig_range(unsigned long pfn, unsigned nr_pages);
+
+/* CMA stuff */
+extern void init_cma_reserved_pageblock(struct page *page);
+
+#endif
+
#endif /* __LINUX_GFP_H */
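
A sketch of the new allocator primitives (editorial; start_pfn and nr_pages are placeholders): a physically contiguous PFN range is claimed from a MIGRATE_CMA pageblock and later returned.

	ret = alloc_contig_range(start_pfn, start_pfn + nr_pages,
				 MIGRATE_CMA);
	if (!ret) {
		/* ... use the pages starting at pfn_to_page(start_pfn) ... */
		free_contig_range(start_pfn, nr_pages);
	}
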
--- /dev/null
+/*
+ * Copyright (C) 2012 Google, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ *
+ * ChromeOS EC multi function device
+ */
+
+#ifndef __LINUX_MFD_CHROMEOS_EC_H
+#define __LINUX_MFD_CHROMEOS_EC_H
+
+struct i2c_msg;
+
+struct chromeos_ec_msg {
+ u8 cmd;
+ uint8_t *out_buf;
+ int out_len;
+ uint8_t *in_buf;
+ int in_len;
+};
+
+struct chromeos_ec_device {
+ struct device *dev;
+ struct i2c_client *client;
+ int irq;
+ struct blocking_notifier_head event_notifier;
+ int (*command_send)(struct chromeos_ec_device *ec,
+ char cmd, void *out_buf, int out_len);
+ int (*command_recv)(struct chromeos_ec_device *ec,
+ char cmd, void *in_buf, int in_len);
+ int (*command_xfer)(struct chromeos_ec_device *ec,
+ struct chromeos_ec_msg *msg);
+ int (*command_raw)(struct chromeos_ec_device *ec,
+ struct i2c_msg *msgs, int num);
+};
+
+#endif /* __LINUX_MFD_CHROMEOS_EC_H */
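
A transfer sketch (editorial; the command code 0x01 mirrors EC_CMD_HELLO from the protocol header below, and the bus sub-driver is assumed to have filled in command_xfer):

static int example_ec_hello(struct chromeos_ec_device *ec)
{
	uint8_t out[4] = { 0x01, 0x02, 0x03, 0x04 };
	uint8_t in[4];
	struct chromeos_ec_msg msg = {
		.cmd = 0x01,		/* EC_CMD_HELLO */
		.out_buf = out,
		.out_len = sizeof(out),
		.in_buf = in,
		.in_len = sizeof(in),
	};

	return ec->command_xfer(ec, &msg);
}
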
--- /dev/null
+/* Copyright (c) 2012 The Chromium OS Authors. All rights reserved.
+ * Use of this source code is governed by a BSD-style license that can be
+ * found in the LICENSE file.
+ */
+
+/* Host communication command constants for Chrome EC */
+
+#ifndef __CROS_EC_COMMANDS_H
+#define __CROS_EC_COMMANDS_H
+
+/* Protocol overview
+ *
+ * request: CMD [ P0 P1 P2 ... Pn S ]
+ * response: ERR [ P0 P1 P2 ... Pn S ]
+ *
+ * where the bytes are defined as follows:
+ * - CMD is the command code. (defined by EC_CMD_ constants)
+ * - ERR is the error code. (defined by EC_RES_ constants)
+ * - Px is the optional payload.
+ * It is not sent when the error code indicates failure.
+ * (defined by ec_params_ and ec_response_ structures)
+ * - S is the checksum which is the sum of all payload bytes.
+ *
+ * On LPC, CMD and ERR are sent/received at EC_LPC_ADDR_KERNEL|USER_CMD
+ * and the payloads are sent/received at EC_LPC_ADDR_KERNEL|USER_PARAM.
+ * On I2C, all bytes are sent serially in the same message.
+ */
+
+
+/* During the development stage, the LPC bus has a high bit error rate.
+ * Using a checksum allows detecting such errors and triggering a
+ * re-transmit.
+ * FIXME: remove this after mass production.
+ */
+#define SUPPORT_CHECKSUM
+
+/* Current version of this protocol */
+#define EC_PROTO_VERSION 0x00000002
+
+/* I/O addresses for LPC commands */
+#define EC_LPC_ADDR_KERNEL_DATA 0x62
+#define EC_LPC_ADDR_KERNEL_CMD 0x66
+#define EC_LPC_ADDR_KERNEL_PARAM 0x800
+#define EC_LPC_ADDR_USER_DATA 0x200
+#define EC_LPC_ADDR_USER_CMD 0x204
+#define EC_LPC_ADDR_USER_PARAM 0x880
+#define EC_PARAM_SIZE 128 /* Size of each param area in bytes */
+
+/* EC command register bit functions */
+#define EC_LPC_CMDR_DATA (1 << 0)
+#define EC_LPC_CMDR_PENDING (1 << 1)
+#define EC_LPC_CMDR_BUSY (1 << 2)
+#define EC_LPC_CMDR_CMD (1 << 3)
+#define EC_LPC_CMDR_ACPI_BRST (1 << 4)
+#define EC_LPC_CMDR_SCI (1 << 5)
+#define EC_LPC_CMDR_SMI (1 << 6)
+
+#define EC_LPC_ADDR_MEMMAP 0x900
+#define EC_MEMMAP_SIZE 255 /* ACPI IO buffer max is 255 bytes */
+#define EC_MEMMAP_TEXT_MAX 8 /* Size of a string in the memory map */
+
+/* The offset address of each type of data in mapped memory. */
+#define EC_MEMMAP_TEMP_SENSOR 0x00
+#define EC_MEMMAP_FAN 0x10
+#define EC_MEMMAP_SWITCHES 0x30
+#define EC_MEMMAP_HOST_EVENTS 0x34
+#define EC_MEMMAP_BATT_VOLT 0x40 /* Battery Present Voltage */
+#define EC_MEMMAP_BATT_RATE 0x44 /* Battery Present Rate */
+#define EC_MEMMAP_BATT_CAP 0x48 /* Battery Remaining Capacity */
+#define EC_MEMMAP_BATT_FLAG 0x4c /* Battery State, defined below */
+#define EC_MEMMAP_BATT_DCAP 0x50 /* Battery Design Capacity */
+#define EC_MEMMAP_BATT_DVLT 0x54 /* Battery Design Voltage */
+#define EC_MEMMAP_BATT_LFCC 0x58 /* Battery Last Full Charge Capacity */
+#define EC_MEMMAP_BATT_CCNT 0x5c /* Battery Cycle Count */
+#define EC_MEMMAP_BATT_MFGR 0x60 /* Battery Manufacturer String */
+#define EC_MEMMAP_BATT_MODEL 0x68 /* Battery Model Number String */
+#define EC_MEMMAP_BATT_SERIAL 0x70 /* Battery Serial Number String */
+#define EC_MEMMAP_BATT_TYPE 0x78 /* Battery Type String */
+
+/* Battery bit flags at EC_MEMMAP_BATT_FLAG. */
+#define EC_BATT_FLAG_AC_PRESENT 0x01
+#define EC_BATT_FLAG_BATT_PRESENT 0x02
+#define EC_BATT_FLAG_DISCHARGING 0x04
+#define EC_BATT_FLAG_CHARGING 0x08
+#define EC_BATT_FLAG_LEVEL_CRITICAL 0x10
+
+/* Switch flags at EC_MEMMAP_SWITCHES */
+#define EC_SWITCH_LID_OPEN 0x01
+#define EC_SWITCH_POWER_BUTTON_PRESSED 0x02
+#define EC_SWITCH_WRITE_PROTECT_DISABLED 0x04
+/* Recovery requested via keyboard */
+#define EC_SWITCH_KEYBOARD_RECOVERY 0x08
+/* Recovery requested via dedicated signal (from servo board) */
+#define EC_SWITCH_DEDICATED_RECOVERY 0x10
+/* Fake developer switch (for testing) */
+#define EC_SWITCH_FAKE_DEVELOPER 0x20
+
+/* The offset of temperature value stored in mapped memory.
+ * This allows reporting a temperature range of
+ * 200K to 454K = -73C to 181C.
+ */
+#define EC_TEMP_SENSOR_OFFSET 200
+
+/*
+ * This header file is used in coreboot both in C and ACPI code.
+ * The ACPI code is pre-processed to handle constants but the ASL
+ * compiler is unable to handle actual C code so keep it separate.
+ */
+#ifndef __ACPI__
+
+/* LPC command status byte masks */
+/* EC has written a byte in the data register and host hasn't read it yet */
+#define EC_LPC_STATUS_TO_HOST 0x01
+/* Host has written a command/data byte and the EC hasn't read it yet */
+#define EC_LPC_STATUS_FROM_HOST 0x02
+/* EC is processing a command */
+#define EC_LPC_STATUS_PROCESSING 0x04
+/* Last write to EC was a command, not data */
+#define EC_LPC_STATUS_LAST_CMD 0x08
+/* EC is in burst mode. Chrome EC doesn't support this, so this bit is never
+ * set. */
+#define EC_LPC_STATUS_BURST_MODE 0x10
+/* SCI event is pending (requesting SCI query) */
+#define EC_LPC_STATUS_SCI_PENDING 0x20
+/* SMI event is pending (requesting SMI query) */
+#define EC_LPC_STATUS_SMI_PENDING 0x40
+/* (reserved) */
+#define EC_LPC_STATUS_RESERVED 0x80
+
+/* EC is busy. This covers both the EC processing a command and the case
+ * where the host has written a new command that the EC hasn't picked up
+ * yet. */
+#define EC_LPC_STATUS_BUSY_MASK \
+ (EC_LPC_STATUS_FROM_HOST | EC_LPC_STATUS_PROCESSING)
+
+/* Host command response codes */
+/* TODO: move these so they don't overlap SCI/SMI data? */
+enum ec_status {
+ EC_RES_SUCCESS = 0,
+ EC_RES_INVALID_COMMAND = 1,
+ EC_RES_ERROR = 2,
+ EC_RES_INVALID_PARAM = 3,
+ EC_RES_ACCESS_DENIED = 4,
+};
+
+
+/* Host event codes. Note these are 1-based, not 0-based, because ACPI query
+ * EC command uses code 0 to mean "no event pending". We explicitly specify
+ * each value in the enum listing so they won't change if we delete/insert an
+ * item or rearrange the list (it needs to be stable across platforms, not
+ * just within a single compiled instance). */
+enum host_event_code {
+ EC_HOST_EVENT_LID_CLOSED = 1,
+ EC_HOST_EVENT_LID_OPEN = 2,
+ EC_HOST_EVENT_POWER_BUTTON = 3,
+ EC_HOST_EVENT_AC_CONNECTED = 4,
+ EC_HOST_EVENT_AC_DISCONNECTED = 5,
+ EC_HOST_EVENT_BATTERY_LOW = 6,
+ EC_HOST_EVENT_BATTERY_CRITICAL = 7,
+ EC_HOST_EVENT_BATTERY = 8,
+ EC_HOST_EVENT_THERMAL_THRESHOLD = 9,
+ EC_HOST_EVENT_THERMAL_OVERLOAD = 10,
+ EC_HOST_EVENT_THERMAL = 11,
+ EC_HOST_EVENT_USB_CHARGER = 12,
+ EC_HOST_EVENT_KEY_PRESSED = 13,
+};
+/* Host event mask */
+#define EC_HOST_EVENT_MASK(event_code) (1 << ((event_code) - 1))
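
For instance (editorial sketch), a wake mask that fires on lid-open and power-button events would be built as:

	uint32_t wake_mask = EC_HOST_EVENT_MASK(EC_HOST_EVENT_LID_OPEN) |
			     EC_HOST_EVENT_MASK(EC_HOST_EVENT_POWER_BUTTON);
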
+
+/* Notes on commands:
+ *
+ * Each command is an 8-bit command value. Commands which take
+ * params or return response data specify structs for that data. If
+ * no struct is specified, the command does not input or output data,
+ * respectively. */
+
+/*****************************************************************************/
+/* General / test commands */
+
+/* Get protocol version, used to deal with non-backward compatible protocol
+ * changes. */
+#define EC_CMD_PROTO_VERSION 0x00
+struct ec_response_proto_version {
+ uint32_t version;
+} __attribute__ ((packed));
+
+/* Hello. This is a simple command to test the EC is responsive to
+ * commands. */
+#define EC_CMD_HELLO 0x01
+struct ec_params_hello {
+ uint32_t in_data; /* Pass anything here */
+} __attribute__ ((packed));
+struct ec_response_hello {
+ uint32_t out_data; /* Output will be in_data + 0x01020304 */
+} __attribute__ ((packed));
+
+
+/* Get version number */
+#define EC_CMD_GET_VERSION 0x02
+enum ec_current_image {
+ EC_IMAGE_UNKNOWN = 0,
+ EC_IMAGE_RO,
+ EC_IMAGE_RW_A,
+ EC_IMAGE_RW_B
+};
+struct ec_response_get_version {
+ /* Null-terminated version strings for RO, RW-A, RW-B */
+ char version_string_ro[32];
+ char version_string_rw_a[32];
+ char version_string_rw_b[32];
+ uint32_t current_image; /* One of ec_current_image */
+} __attribute__ ((packed));
+
+
+/* Read test */
+#define EC_CMD_READ_TEST 0x03
+struct ec_params_read_test {
+ uint32_t offset; /* Starting value for read buffer */
+ uint32_t size; /* Size to read in bytes */
+} __attribute__ ((packed));
+struct ec_response_read_test {
+ uint32_t data[32];
+} __attribute__ ((packed));
+
+
+/* Get build information */
+#define EC_CMD_GET_BUILD_INFO 0x04
+struct ec_response_get_build_info {
+ char build_string[EC_PARAM_SIZE];
+} __attribute__ ((packed));
+
+
+/* Get chip info */
+#define EC_CMD_GET_CHIP_INFO 0x05
+struct ec_response_get_chip_info {
+ /* Null-terminated strings */
+ char vendor[32];
+ char name[32];
+ char revision[32]; /* Mask version */
+} __attribute__ ((packed));
+
+
+/*****************************************************************************/
+/* Flash commands */
+
+/* Maximum bytes that can be read/written in a single command */
+#define EC_FLASH_SIZE_MAX 64
+
+/* Get flash info */
+#define EC_CMD_FLASH_INFO 0x10
+struct ec_response_flash_info {
+ /* Usable flash size, in bytes */
+ uint32_t flash_size;
+ /* Write block size. Write offset and size must be a multiple
+ * of this. */
+ uint32_t write_block_size;
+ /* Erase block size. Erase offset and size must be a multiple
+ * of this. */
+ uint32_t erase_block_size;
+ /* Protection block size. Protection offset and size must be a
+ * multiple of this. */
+ uint32_t protect_block_size;
+} __attribute__ ((packed));
+
+/* Read flash */
+#define EC_CMD_FLASH_READ 0x11
+struct ec_params_flash_read {
+ uint32_t offset; /* Byte offset to read */
+ uint32_t size; /* Size to read in bytes */
+} __attribute__ ((packed));
+struct ec_response_flash_read {
+ uint8_t data[EC_FLASH_SIZE_MAX];
+} __attribute__ ((packed));
+
+/* Write flash */
+#define EC_CMD_FLASH_WRITE 0x12
+struct ec_params_flash_write {
+ uint32_t offset; /* Byte offset to write */
+ uint32_t size; /* Size to write in bytes */
+ uint8_t data[EC_FLASH_SIZE_MAX];
+} __attribute__ ((packed));
+
+/* Erase flash */
+#define EC_CMD_FLASH_ERASE 0x13
+struct ec_params_flash_erase {
+ uint32_t offset; /* Byte offset to erase */
+ uint32_t size; /* Size to erase in bytes */
+} __attribute__ ((packed));
+
+/* Flashmap offset */
+#define EC_CMD_FLASH_GET_FLASHMAP 0x14
+struct ec_response_flash_flashmap {
+ uint32_t offset; /* Flashmap offset */
+} __attribute__ ((packed));
+
+/* Enable/disable flash write protect */
+#define EC_CMD_FLASH_WP_ENABLE 0x15
+struct ec_params_flash_wp_enable {
+ uint32_t enable_wp;
+} __attribute__ ((packed));
+
+/* Get flash write protection commit state */
+#define EC_CMD_FLASH_WP_GET_STATE 0x16
+struct ec_response_flash_wp_enable {
+ uint32_t enable_wp;
+} __attribute__ ((packed));
+
+/* Set/get flash write protection range */
+#define EC_CMD_FLASH_WP_SET_RANGE 0x17
+struct ec_params_flash_wp_range {
+ /* Byte offset aligned to info.protect_block_size */
+ uint32_t offset;
+ /* Size should be a multiple of info.protect_block_size */
+ uint32_t size;
+} __attribute__ ((packed));
+
+#define EC_CMD_FLASH_WP_GET_RANGE 0x18
+struct ec_response_flash_wp_range {
+ uint32_t offset;
+ uint32_t size;
+} __attribute__ ((packed));
+
+/* Read flash write protection GPIO pin */
+#define EC_CMD_FLASH_WP_GET_GPIO 0x19
+struct ec_params_flash_wp_gpio {
+ uint32_t pin_no;
+} __attribute__ ((packed));
+struct ec_response_flash_wp_gpio {
+ uint32_t value;
+} __attribute__ ((packed));
+
+#ifdef SUPPORT_CHECKSUM
+/* Checksum a range of flash data */
+#define EC_CMD_FLASH_CHECKSUM 0x1f
+struct ec_params_flash_checksum {
+ uint32_t offset; /* Byte offset to read */
+ uint32_t size; /* Size to read in bytes */
+} __attribute__ ((packed));
+struct ec_response_flash_checksum {
+ uint8_t checksum;
+} __attribute__ ((packed));
+#define BYTE_IN(sum, byte) do { \
+ sum = (sum << 1) | (sum >> 7); \
+ sum ^= (byte ^ 0x53); \
+ } while (0)
+#endif /* SUPPORT_CHECKSUM */
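
As a sketch (editorial), the running checksum over a payload is accumulated with BYTE_IN() one byte at a time:

static uint8_t example_checksum(const uint8_t *buf, int len)
{
	uint8_t sum = 0;
	int i;

	for (i = 0; i < len; i++)
		BYTE_IN(sum, buf[i]);	/* rotate-left-1, then XOR in byte^0x53 */
	return sum;
}
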
+
+/*****************************************************************************/
+/* PWM commands */
+
+/* Get fan RPM */
+#define EC_CMD_PWM_GET_FAN_RPM 0x20
+struct ec_response_pwm_get_fan_rpm {
+ uint32_t rpm;
+} __attribute__ ((packed));
+
+/* Set target fan RPM */
+#define EC_CMD_PWM_SET_FAN_TARGET_RPM 0x21
+struct ec_params_pwm_set_fan_target_rpm {
+ uint32_t rpm;
+} __attribute__ ((packed));
+
+/* Get keyboard backlight */
+#define EC_CMD_PWM_GET_KEYBOARD_BACKLIGHT 0x22
+struct ec_response_pwm_get_keyboard_backlight {
+ uint8_t percent;
+} __attribute__ ((packed));
+
+/* Set keyboard backlight */
+#define EC_CMD_PWM_SET_KEYBOARD_BACKLIGHT 0x23
+struct ec_params_pwm_set_keyboard_backlight {
+ uint8_t percent;
+} __attribute__ ((packed));
+
+/*****************************************************************************/
+/* Lightbar commands. This looks worse than it is. Since we only use one LPC
+ * command to say "talk to the lightbar", we put the "and tell it to do X"
+ * part into a subcommand. We'll make separate structs for subcommands with
+ * different input args, so that we know how much to expect. */
+#define EC_CMD_LIGHTBAR_CMD 0x28
+struct ec_params_lightbar_cmd {
+ union {
+ union {
+ uint8_t cmd;
+ struct {
+ uint8_t cmd;
+ } dump, off, on, init, get_seq;
+ struct num {
+ uint8_t cmd;
+ uint8_t num;
+ } brightness, seq;
+
+ struct reg {
+ uint8_t cmd;
+ uint8_t ctrl, reg, value;
+ } reg;
+ struct rgb {
+ uint8_t cmd;
+ uint8_t led, red, green, blue;
+ } rgb;
+ } in;
+ union {
+ struct dump {
+ struct {
+ uint8_t reg;
+ uint8_t ic0;
+ uint8_t ic1;
+ } vals[23];
+ } dump;
+ struct get_seq {
+ uint8_t num;
+ } get_seq;
+ struct {
+ /* no return params */
+ } off, on, init, brightness, seq, reg, rgb;
+ } out;
+ };
+} __attribute__ ((packed));
+
+/*****************************************************************************/
+/* USB charging control commands */
+
+/* Set USB port charging mode */
+#define EC_CMD_USB_CHARGE_SET_MODE 0x30
+struct ec_params_usb_charge_set_mode {
+ uint8_t usb_port_id;
+ uint8_t mode;
+} __attribute__ ((packed));
+
+/*****************************************************************************/
+/* Persistent storage for host */
+
+/* Maximum bytes that can be read/written in a single command */
+#define EC_PSTORE_SIZE_MAX 64
+
+/* Get persistent storage info */
+#define EC_CMD_PSTORE_INFO 0x40
+struct ec_response_pstore_info {
+ /* Persistent storage size, in bytes */
+ uint32_t pstore_size;
+ /* Access size. Read/write offset and size must be a multiple
+ * of this. */
+ uint32_t access_size;
+} __attribute__ ((packed));
+
+/* Read persistent storage */
+#define EC_CMD_PSTORE_READ 0x41
+struct ec_params_pstore_read {
+ uint32_t offset; /* Byte offset to read */
+ uint32_t size; /* Size to read in bytes */
+} __attribute__ ((packed));
+struct ec_response_pstore_read {
+ uint8_t data[EC_PSTORE_SIZE_MAX];
+} __attribute__ ((packed));
+
+/* Write persistent storage */
+#define EC_CMD_PSTORE_WRITE 0x42
+struct ec_params_pstore_write {
+ uint32_t offset; /* Byte offset to write */
+ uint32_t size; /* Size to write in bytes */
+ uint8_t data[EC_PSTORE_SIZE_MAX];
+} __attribute__ ((packed));
+
+/*****************************************************************************/
+/* Thermal engine commands */
+
+/* Set threshold value */
+#define EC_CMD_THERMAL_SET_THRESHOLD 0x50
+struct ec_params_thermal_set_threshold {
+ uint8_t sensor_type;
+ uint8_t threshold_id;
+ uint16_t value;
+} __attribute__ ((packed));
+
+/* Get threshold value */
+#define EC_CMD_THERMAL_GET_THRESHOLD 0x51
+struct ec_params_thermal_get_threshold {
+ uint8_t sensor_type;
+ uint8_t threshold_id;
+} __attribute__ ((packed));
+struct ec_response_thermal_get_threshold {
+ uint16_t value;
+} __attribute__ ((packed));
+
+/* Toggling automatic fan control */
+#define EC_CMD_THERMAL_AUTO_FAN_CTRL 0x52
+
+/*****************************************************************************/
+/* Matrix KeyBoard Protocol */
+
+/* Read key state */
+#define EC_CMD_MKBP_STATE 0x60
+struct ec_response_mkbp_state {
+ uint8_t cols[32];
+} __attribute__ ((packed));
+
+/* Provide information about the matrix: number of rows and columns */
+#define EC_CMD_MKBP_INFO 0x61
+struct ec_response_mkbp_info {
+ uint32_t rows;
+ uint32_t cols;
+} __attribute__ ((packed));
+
+/*****************************************************************************/
+/* Host event commands */
+
+/* Host event mask params and response structures, shared by all of the host
+ * event commands below. */
+struct ec_params_host_event_mask {
+ uint32_t mask;
+} __attribute__ ((packed));
+
+struct ec_response_host_event_mask {
+ uint32_t mask;
+} __attribute__ ((packed));
+
+/* These all use ec_response_host_event_mask */
+#define EC_CMD_HOST_EVENT_GET_SMI_MASK 0x88
+#define EC_CMD_HOST_EVENT_GET_SCI_MASK 0x89
+#define EC_CMD_HOST_EVENT_GET_WAKE_MASK 0x8d
+
+/* These all use ec_params_host_event_mask */
+#define EC_CMD_HOST_EVENT_SET_SMI_MASK 0x8a
+#define EC_CMD_HOST_EVENT_SET_SCI_MASK 0x8b
+#define EC_CMD_HOST_EVENT_CLEAR 0x8c
+#define EC_CMD_HOST_EVENT_SET_WAKE_MASK 0x8e
+
+/*****************************************************************************/
+/* Special commands
+ *
+ * These do not follow the normal rules for commands. See each command for
+ * details. */
+
+/* ACPI Query Embedded Controller
+ *
+ * This clears the lowest-order bit in the currently pending host events, and
+ * sets the result code to the 1-based index of the bit (event 0x00000001 = 1,
+ * event 0x80000000 = 32), or 0 if no event was pending. */
+#define EC_CMD_ACPI_QUERY_EVENT 0x84
+
+/* Reboot
+ *
+ * This command will work even when the EC LPC interface is busy, because the
+ * reboot command is processed at interrupt level. Note that when the EC
+ * reboots, the host will reboot too, so there is no response to this
+ * command. */
+#define EC_CMD_REBOOT 0xd1 /* Think "die" */
+
+#define EC_CMD_REBOOT_EC 0xd2
+#define EC_CMD_REBOOT_BIT_RECOVERY (1 << 0)
+
+struct ec_params_reboot_ec {
+ uint8_t target; /* enum ec_current_image */
+ uint8_t reboot_flags;
+} __attribute__ ((packed));
+
+#endif /* !__ACPI__ */
+
+#endif /* __CROS_EC_COMMANDS_H */
--- /dev/null
+/*
+ * max77686.h - Voltage regulator driver for the Maxim 77686
+ *
+ * Copyright (C) 2011 Samsung Electronics
+ * Chiwoong Byun <woong.byun@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __LINUX_MFD_MAX77686_PRIV_H
+#define __LINUX_MFD_MAX77686_PRIV_H
+
+#include <linux/i2c.h>
+
+#define MAX77686_REG_INVALID (0xff)
+#define RAMP_MASK 0xC0
+
+enum max77686_pmic_reg {
+ MAX77686_REG_DEVICE_ID = 0x00,
+ MAX77686_REG_INTSRC = 0x01,
+ MAX77686_REG_INT1 = 0x02,
+ MAX77686_REG_INT2 = 0x03,
+
+ MAX77686_REG_INT1MSK = 0x04,
+ MAX77686_REG_INT2MSK = 0x05,
+
+ MAX77686_REG_STATUS1 = 0x06,
+ MAX77686_REG_STATUS2 = 0x07,
+
+ MAX77686_REG_PWRON = 0x08,
+ MAX77686_REG_ONOFF_DELAY = 0x09,
+ MAX77686_REG_MRSTB = 0x0A,
+ /* Reserved: 0x0B-0x0F */
+
+ MAX77686_REG_BUCK1CTRL = 0x10,
+ MAX77686_REG_BUCK1OUT = 0x11,
+ MAX77686_REG_BUCK2CTRL1 = 0x12,
+ MAX77686_REG_BUCK234FREQ = 0x13,
+ MAX77686_REG_BUCK2DVS1 = 0x14,
+ MAX77686_REG_BUCK2DVS2 = 0x15,
+ MAX77686_REG_BUCK2DVS3 = 0x16,
+ MAX77686_REG_BUCK2DVS4 = 0x17,
+ MAX77686_REG_BUCK2DVS5 = 0x18,
+ MAX77686_REG_BUCK2DVS6 = 0x19,
+ MAX77686_REG_BUCK2DVS7 = 0x1A,
+ MAX77686_REG_BUCK2DVS8 = 0x1B,
+ MAX77686_REG_BUCK3CTRL1 = 0x1C,
+ /* Reserved: 0x1D */
+ MAX77686_REG_BUCK3DVS1 = 0x1E,
+ MAX77686_REG_BUCK3DVS2 = 0x1F,
+ MAX77686_REG_BUCK3DVS3 = 0x20,
+ MAX77686_REG_BUCK3DVS4 = 0x21,
+ MAX77686_REG_BUCK3DVS5 = 0x22,
+ MAX77686_REG_BUCK3DVS6 = 0x23,
+ MAX77686_REG_BUCK3DVS7 = 0x24,
+ MAX77686_REG_BUCK3DVS8 = 0x25,
+ MAX77686_REG_BUCK4CTRL1 = 0x26,
+ /* Reserved: 0x27 */
+ MAX77686_REG_BUCK4DVS1 = 0x28,
+ MAX77686_REG_BUCK4DVS2 = 0x29,
+ MAX77686_REG_BUCK4DVS3 = 0x2A,
+ MAX77686_REG_BUCK4DVS4 = 0x2B,
+ MAX77686_REG_BUCK4DVS5 = 0x2C,
+ MAX77686_REG_BUCK4DVS6 = 0x2D,
+ MAX77686_REG_BUCK4DVS7 = 0x2E,
+ MAX77686_REG_BUCK4DVS8 = 0x2F,
+ MAX77686_REG_BUCK5CTRL = 0x30,
+ MAX77686_REG_BUCK5OUT = 0x31,
+ MAX77686_REG_BUCK6CTRL = 0x32,
+ MAX77686_REG_BUCK6OUT = 0x33,
+ MAX77686_REG_BUCK7CTRL = 0x34,
+ MAX77686_REG_BUCK7OUT = 0x35,
+ MAX77686_REG_BUCK8CTRL = 0x36,
+ MAX77686_REG_BUCK8OUT = 0x37,
+ MAX77686_REG_BUCK9CTRL = 0x38,
+ MAX77686_REG_BUCK9OUT = 0x39,
+ /* Reserved: 0x3A-0x3F */
+
+ MAX77686_REG_LDO1CTRL1 = 0x40,
+ MAX77686_REG_LDO2CTRL1 = 0x41,
+ MAX77686_REG_LDO3CTRL1 = 0x42,
+ MAX77686_REG_LDO4CTRL1 = 0x43,
+ MAX77686_REG_LDO5CTRL1 = 0x44,
+ MAX77686_REG_LDO6CTRL1 = 0x45,
+ MAX77686_REG_LDO7CTRL1 = 0x46,
+ MAX77686_REG_LDO8CTRL1 = 0x47,
+ MAX77686_REG_LDO9CTRL1 = 0x48,
+ MAX77686_REG_LDO10CTRL1 = 0x49,
+ MAX77686_REG_LDO11CTRL1 = 0x4A,
+ MAX77686_REG_LDO12CTRL1 = 0x4B,
+ MAX77686_REG_LDO13CTRL1 = 0x4C,
+ MAX77686_REG_LDO14CTRL1 = 0x4D,
+ MAX77686_REG_LDO15CTRL1 = 0x4E,
+ MAX77686_REG_LDO16CTRL1 = 0x4F,
+ MAX77686_REG_LDO17CTRL1 = 0x50,
+ MAX77686_REG_LDO18CTRL1 = 0x51,
+ MAX77686_REG_LDO19CTRL1 = 0x52,
+ MAX77686_REG_LDO20CTRL1 = 0x53,
+ MAX77686_REG_LDO21CTRL1 = 0x54,
+ MAX77686_REG_LDO22CTRL1 = 0x55,
+ MAX77686_REG_LDO23CTRL1 = 0x56,
+ MAX77686_REG_LDO24CTRL1 = 0x57,
+ MAX77686_REG_LDO25CTRL1 = 0x58,
+ MAX77686_REG_LDO26CTRL1 = 0x59,
+ /* Reserved: 0x5A-0x5F */
+ MAX77686_REG_LDO1CTRL2 = 0x60,
+ MAX77686_REG_LDO2CTRL2 = 0x61,
+ MAX77686_REG_LDO3CTRL2 = 0x62,
+ MAX77686_REG_LDO4CTRL2 = 0x63,
+ MAX77686_REG_LDO5CTRL2 = 0x64,
+ MAX77686_REG_LDO6CTRL2 = 0x65,
+ MAX77686_REG_LDO7CTRL2 = 0x66,
+ MAX77686_REG_LDO8CTRL2 = 0x67,
+ MAX77686_REG_LDO9CTRL2 = 0x68,
+ MAX77686_REG_LDO10CTRL2 = 0x69,
+ MAX77686_REG_LDO11CTRL2 = 0x6A,
+ MAX77686_REG_LDO12CTRL2 = 0x6B,
+ MAX77686_REG_LDO13CTRL2 = 0x6C,
+ MAX77686_REG_LDO14CTRL2 = 0x6D,
+ MAX77686_REG_LDO15CTRL2 = 0x6E,
+ MAX77686_REG_LDO16CTRL2 = 0x6F,
+ MAX77686_REG_LDO17CTRL2 = 0x70,
+ MAX77686_REG_LDO18CTRL2 = 0x71,
+ MAX77686_REG_LDO19CTRL2 = 0x72,
+ MAX77686_REG_LDO20CTRL2 = 0x73,
+ MAX77686_REG_LDO21CTRL2 = 0x74,
+ MAX77686_REG_LDO22CTRL2 = 0x75,
+ MAX77686_REG_LDO23CTRL2 = 0x76,
+ MAX77686_REG_LDO24CTRL2 = 0x77,
+ MAX77686_REG_LDO25CTRL2 = 0x78,
+ MAX77686_REG_LDO26CTRL2 = 0x79,
+ /* Reserved: 0x7A-0x7D */
+
+ MAX77686_REG_BBAT_CHG = 0x7E,
+ MAX77686_REG_32KHZ_ = 0x7F,
+
+ MAX77686_REG_PMIC_END = 0x80,
+};
+
+enum max77686_rtc_reg {
+ MAX77686_RTC_INT = 0x00,
+ MAX77686_RTC_INTM = 0x01,
+ MAX77686_RTC_CONTROLM = 0x02,
+ MAX77686_RTC_CONTROL = 0x03,
+ MAX77686_RTC_UPDATE0 = 0x04,
+ /* Reserved: 0x5 */
+ MAX77686_WTSR_SMPL_CNTL = 0x06,
+ MAX77686_RTC_SEC = 0x07,
+ MAX77686_RTC_MIN = 0x08,
+ MAX77686_RTC_HOUR = 0x09,
+ MAX77686_RTC_WEEKDAY = 0x0A,
+ MAX77686_RTC_MONTH = 0x0B,
+ MAX77686_RTC_YEAR = 0x0C,
+ MAX77686_RTC_DATE = 0x0D,
+ MAX77686_ALARM1_SEC = 0x0E,
+ MAX77686_ALARM1_MIN = 0x0F,
+ MAX77686_ALARM1_HOUR = 0x10,
+ MAX77686_ALARM1_WEEKDAY = 0x11,
+ MAX77686_ALARM1_MONTH = 0x12,
+ MAX77686_ALARM1_YEAR = 0x13,
+ MAX77686_ALARM1_DATE = 0x14,
+ MAX77686_ALARM2_SEC = 0x15,
+ MAX77686_ALARM2_MIN = 0x16,
+ MAX77686_ALARM2_HOUR = 0x17,
+ MAX77686_ALARM2_WEEKDAY = 0x18,
+ MAX77686_ALARM2_MONTH = 0x19,
+ MAX77686_ALARM2_YEAR = 0x1A,
+ MAX77686_ALARM2_DATE = 0x1B,
+};
+
+#define MAX77686_IRQSRC_PMIC (0)
+#define MAX77686_IRQSRC_RTC (1 << 0)
+
+enum max77686_irq_source {
+ PMIC_INT1 = 0,
+ PMIC_INT2,
+ RTC_INT,
+
+ MAX77686_IRQ_GROUP_NR,
+};
+
+enum max77686_irq {
+ MAX77686_PMICIRQ_PWRONF,
+ MAX77686_PMICIRQ_PWRONR,
+ MAX77686_PMICIRQ_JIGONBF,
+ MAX77686_PMICIRQ_JIGONBR,
+ MAX77686_PMICIRQ_ACOKBF,
+ MAX77686_PMICIRQ_ACOKBR,
+ MAX77686_PMICIRQ_ONKEY1S,
+ MAX77686_PMICIRQ_MRSTB,
+
+ MAX77686_PMICIRQ_140C,
+ MAX77686_PMICIRQ_120C,
+
+ MAX77686_RTCIRQ_RTC60S,
+ MAX77686_RTCIRQ_RTCA1,
+ MAX77686_RTCIRQ_RTCA2,
+ MAX77686_RTCIRQ_SMPL,
+ MAX77686_RTCIRQ_RTC1S,
+ MAX77686_RTCIRQ_WTSR,
+
+ MAX77686_IRQ_NR,
+};
+
+struct max77686_dev {
+ struct device *dev;
+ struct i2c_client *i2c; /* 0xcc / PMIC, Battery Control, and FLASH */
+ struct i2c_client *rtc; /* slave addr 0x0c */
+ struct mutex iolock;
+ int type;
+ int irq;
+ bool wakeup;
+ struct mutex irqlock;
+ int irq_masks_cur[MAX77686_IRQ_GROUP_NR];
+ int irq_masks_cache[MAX77686_IRQ_GROUP_NR];
+ struct irq_domain *irq_domain;
+ struct max77686_platform_data *pdata;
+};
+
+enum max77686_types {
+ TYPE_MAX77686,
+};
+
+extern int max77686_irq_init(struct max77686_dev *max77686);
+extern void max77686_irq_exit(struct max77686_dev *max77686);
+extern int max77686_irq_resume(struct max77686_dev *max77686);
+
+extern int max77686_read_reg(struct i2c_client *i2c, u8 reg, u8 *dest);
+extern int max77686_bulk_read(struct i2c_client *i2c, u8 reg, int count,
+ u8 *buf);
+extern int max77686_write_reg(struct i2c_client *i2c, u8 reg, u8 value);
+extern int max77686_bulk_write(struct i2c_client *i2c, u8 reg, int count,
+ u8 *buf);
+extern int max77686_update_reg(struct i2c_client *i2c, u8 reg, u8 val, u8 mask);
+
+#endif /* __LINUX_MFD_MAX77686_PRIV_H */
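
A helper-usage sketch (editorial; the choice of BUCK2CTRL1 is purely illustrative): the ramp-rate field occupies the two RAMP_MASK bits, so an enum max77686_ramp_rate value is shifted into place with max77686_update_reg().

static int example_set_ramp(struct max77686_dev *max77686, u8 rate)
{
	/* RAMP_MASK (0xC0) covers bits 7:6 of the control register. */
	return max77686_update_reg(max77686->i2c, MAX77686_REG_BUCK2CTRL1,
				   rate << 6, RAMP_MASK);
}
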
--- /dev/null
+/*
+ * max77686.h - Driver for the Maxim 77686
+ *
+ * Copyright (C) 2011 Samsung Electronics
+ * Chiwoong Byun <woong.byun@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * This driver is based on max8997.h
+ *
+ * MAX77686 contains PMIC and RTC devices.
+ * The devices share the same I2C bus and are included in
+ * this mfd driver.
+ */
+
+#ifndef __LINUX_MFD_MAX77686_H
+#define __LINUX_MFD_MAX77686_H
+
+#include <linux/regulator/consumer.h>
+
+/* MAX77686 regulator IDs */
+enum max77686_regulators {
+ MAX77686_LDO1 = 0,
+ MAX77686_LDO2,
+ MAX77686_LDO3,
+ MAX77686_LDO4,
+ MAX77686_LDO5,
+ MAX77686_LDO6,
+ MAX77686_LDO7,
+ MAX77686_LDO8,
+ MAX77686_LDO9,
+ MAX77686_LDO10,
+ MAX77686_LDO11,
+ MAX77686_LDO12,
+ MAX77686_LDO13,
+ MAX77686_LDO14,
+ MAX77686_LDO15,
+ MAX77686_LDO16,
+ MAX77686_LDO17,
+ MAX77686_LDO18,
+ MAX77686_LDO19,
+ MAX77686_LDO20,
+ MAX77686_LDO21,
+ MAX77686_LDO22,
+ MAX77686_LDO23,
+ MAX77686_LDO24,
+ MAX77686_LDO25,
+ MAX77686_LDO26,
+ MAX77686_BUCK1,
+ MAX77686_BUCK2,
+ MAX77686_BUCK3,
+ MAX77686_BUCK4,
+ MAX77686_BUCK5,
+ MAX77686_BUCK6,
+ MAX77686_BUCK7,
+ MAX77686_BUCK8,
+ MAX77686_BUCK9,
+ MAX77686_EN32KHZ_AP,
+ MAX77686_EN32KHZ_CP,
+ MAX77686_P32KH,
+
+ MAX77686_REG_MAX,
+};
+
+enum max77686_ramp_rate {
+ MAX77686_RAMP_RATE_13MV = 0,
+ MAX77686_RAMP_RATE_27MV, /* default */
+ MAX77686_RAMP_RATE_55MV,
+ MAX77686_RAMP_RATE_100MV,
+};
+
+struct max77686_regulator_data {
+ int id;
+ struct regulator_init_data *initdata;
+ struct device_node *reg_node;
+};
+
+struct max77686_platform_data {
+ bool wakeup;
+ u8 ramp_delay;
+ struct max77686_regulator_data *regulators;
+ int num_regulators;
+ struct max77686_opmode_data *opmode_data;
+
+ /*
+ * The GPIO-DVS feature is not enabled with the current version of
+ * the MAX77686 driver. Buck2/3/4_voltages[0] is used as the default
+ * voltage at probe.
+ */
+};
+
+
+extern int max77686_debug_mask; /* enables debug prints */
+
+enum {
+ MAX77686_DEBUG_INFO = 1 << 0,
+ MAX77686_DEBUG_MASK = 1 << 1,
+ MAX77686_DEBUG_INT = 1 << 2,
+};
+
+#ifndef CONFIG_DEBUG_MAX77686
+
+#define dbg_mask(fmt, ...) do { } while (0)
+#define dbg_info(fmt, ...) do { } while (0)
+#define dbg_int(fmt, ...) do { } while (0)
+
+#else
+
+#define dbg_mask(fmt, ...) \
+do { \
+ if (max77686_debug_mask & MAX77686_DEBUG_MASK) \
+ printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__); \
+} while (0)
+
+#define dbg_info(fmt, ...) \
+do { \
+ if (max77686_debug_mask & MAX77686_DEBUG_INFO) \
+ printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__); \
+} while (0)
+
+#define dbg_int(fmt, ...) \
+do { \
+ if (max77686_debug_mask & MAX77686_DEBUG_INT) \
+ printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__); \
+} while (0)
+#endif /* DEBUG_MAX77686 */
+
+#endif /* __LINUX_MFD_MAX77686_H */
#ifndef __LINUX_MFD_TPS65090_H
#define __LINUX_MFD_TPS65090_H
-struct tps65090_subdev_info {
- int id;
- const char *name;
- void *platform_data;
-};
-
-struct tps65090_platform_data {
- int irq_base;
- int num_subdevs;
- struct tps65090_subdev_info *subdevs;
-};
-
/*
* NOTE: the functions below are not intended for use outside
* of the TPS65090 sub-device drivers
* @data_offset: Set the offset of DATA register according to VERID.
* @dev: Device associated with the MMC controller.
* @pdata: Platform data associated with the MMC controller.
+ * @drv_data: Driver specific data for identified variant of the controller
+ * @biu_clk: Pointer to bus interface unit clock instance.
+ * @ciu_clk: Pointer to card interface unit clock instance.
* @slot: Slots sharing this MMC controller.
+ * @sdr_timing: Clock phase shifting for driving and sampling in sdr mode
+ * @ddr_timing: Clock phase shifting for driving and sampling in ddr mode
* @fifo_depth: depth of FIFO.
* @data_shift: log2 of FIFO item size.
* @part_buf_start: Start index in part_buf.
struct mmc_request *mrq;
struct mmc_command *cmd;
struct mmc_data *data;
+ struct workqueue_struct *card_workqueue;
/* DMA interface members*/
int use_dma;
u16 data_offset;
struct device dev;
struct dw_mci_board *pdata;
+ struct dw_mci_drv_data *drv_data;
+ struct clk *biu_clk;
+ struct clk *ciu_clk;
struct dw_mci_slot *slot[MAX_MCI_SLOTS];
+ /* Phase Shift Value (for exynos5250 variant) */
+ u32 sdr_timing;
+ u32 ddr_timing;
+
/* FIFO push and pull */
int fifo_depth;
int data_shift;
#define DW_MCI_QUIRK_HIGHSPEED BIT(2)
/* Unreliable card detection */
#define DW_MCI_QUIRK_BROKEN_CARD_DETECTION BIT(3)
-
+/* Write Protect detection not available */
+#define DW_MCI_QUIRK_NO_WRITE_PROTECT BIT(4)
struct dma_pdata;
*/
#define PAGE_ALLOC_COSTLY_ORDER 3
-#define MIGRATE_UNMOVABLE 0
-#define MIGRATE_RECLAIMABLE 1
-#define MIGRATE_MOVABLE 2
-#define MIGRATE_PCPTYPES 3 /* the number of types on the pcp lists */
-#define MIGRATE_RESERVE 3
-#define MIGRATE_ISOLATE 4 /* can't allocate from here */
-#define MIGRATE_TYPES 5
+enum {
+ MIGRATE_UNMOVABLE,
+ MIGRATE_RECLAIMABLE,
+ MIGRATE_MOVABLE,
+ MIGRATE_PCPTYPES, /* the number of types on the pcp lists */
+ MIGRATE_RESERVE = MIGRATE_PCPTYPES,
+#ifdef CONFIG_CMA
+ /*
+ * MIGRATE_CMA migration type is designed to mimic the way
+ * ZONE_MOVABLE works. Only movable pages can be allocated
+ * from MIGRATE_CMA pageblocks and page allocator never
+ * implicitly change migration type of MIGRATE_CMA pageblock.
+ *
+ * The way to use it is to change migratetype of a range of
+ * pageblocks to MIGRATE_CMA which can be done by
+ * __free_pageblock_cma() function. What is important though
+ * is that a range of pageblocks must be aligned to
+ * MAX_ORDER_NR_PAGES should the biggest page be bigger than
+ * a single pageblock.
+ */
+ MIGRATE_CMA,
+#endif
+ MIGRATE_ISOLATE, /* can't allocate from here */
+ MIGRATE_TYPES
+};
+
+#ifdef CONFIG_CMA
+# define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA)
+# define cma_wmark_pages(zone) zone->min_cma_pages
+#else
+# define is_migrate_cma(migratetype) false
+# define cma_wmark_pages(zone) 0
+#endif
#define for_each_migratetype_order(order, type) \
for (order = 0; order < MAX_ORDER; order++) \
#ifdef CONFIG_MEMORY_HOTPLUG
/* see spanned/present_pages for more description */
seqlock_t span_seqlock;
+#endif
+#ifdef CONFIG_CMA
+ /*
+ * CMA needs to increase watermark levels during the allocation
+ * process to make sure that the system is not starved.
+ */
+ unsigned long min_cma_pages;
#endif
struct free_area free_area[MAX_ORDER];
/*
* Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
- * If specified range includes migrate types other than MOVABLE,
+ * If specified range includes migrate types other than MOVABLE or CMA,
* this will fail with -EBUSY.
*
* For isolating all pages in the range finally, the caller have to
* test it.
*/
extern int
-start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
+start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ unsigned migratetype);
/*
* Changes MIGRATE_ISOLATE to MIGRATE_MOVABLE.
* target range is [start_pfn, end_pfn)
*/
extern int
-undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);
+undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ unsigned migratetype);
/*
- * test all pages in [start_pfn, end_pfn)are isolated or not.
+ * Test all pages in [start_pfn, end_pfn) are isolated or not.
*/
-extern int
-test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn);
+int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn);
/*
- * Internal funcs.Changes pageblock's migrate type.
- * Please use make_pagetype_isolated()/make_pagetype_movable().
+ * Internal functions. Changes pageblock's migrate type.
*/
extern int set_migratetype_isolate(struct page *page);
-extern void unset_migratetype_isolate(struct page *page);
+extern void unset_migratetype_isolate(struct page *page, unsigned migratetype);
#endif
+++ /dev/null
-/*
- * exynos4_tmu.h - Samsung EXYNOS4 TMU (Thermal Management Unit)
- *
- * Copyright (C) 2011 Samsung Electronics
- * Donggeun Kim <dg77.kim@samsung.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- */
-
-#ifndef _LINUX_EXYNOS4_TMU_H
-#define _LINUX_EXYNOS4_TMU_H
-
-enum calibration_type {
- TYPE_ONE_POINT_TRIMMING,
- TYPE_TWO_POINT_TRIMMING,
- TYPE_NONE,
-};
-
-/**
- * struct exynos4_tmu_platform_data
- * @threshold: basic temperature for generating interrupt
- * 25 <= threshold <= 125 [unit: degree Celsius]
- * @trigger_levels: array for each interrupt levels
- * [unit: degree Celsius]
- * 0: temperature for trigger_level0 interrupt
- * condition for trigger_level0 interrupt:
- * current temperature > threshold + trigger_levels[0]
- * 1: temperature for trigger_level1 interrupt
- * condition for trigger_level1 interrupt:
- * current temperature > threshold + trigger_levels[1]
- * 2: temperature for trigger_level2 interrupt
- * condition for trigger_level2 interrupt:
- * current temperature > threshold + trigger_levels[2]
- * 3: temperature for trigger_level3 interrupt
- * condition for trigger_level3 interrupt:
- * current temperature > threshold + trigger_levels[3]
- * @trigger_level0_en:
- * 1 = enable trigger_level0 interrupt,
- * 0 = disable trigger_level0 interrupt
- * @trigger_level1_en:
- * 1 = enable trigger_level1 interrupt,
- * 0 = disable trigger_level1 interrupt
- * @trigger_level2_en:
- * 1 = enable trigger_level2 interrupt,
- * 0 = disable trigger_level2 interrupt
- * @trigger_level3_en:
- * 1 = enable trigger_level3 interrupt,
- * 0 = disable trigger_level3 interrupt
- * @gain: gain of amplifier in the positive-TC generator block
- * 0 <= gain <= 15
- * @reference_voltage: reference voltage of amplifier
- * in the positive-TC generator block
- * 0 <= reference_voltage <= 31
- * @cal_type: calibration type for temperature
- *
- * This structure is required for configuration of exynos4_tmu driver.
- */
-struct exynos4_tmu_platform_data {
- u8 threshold;
- u8 trigger_levels[4];
- bool trigger_level0_en;
- bool trigger_level1_en;
- bool trigger_level2_en;
- bool trigger_level3_en;
-
- u8 gain;
- u8 reference_voltage;
-
- enum calibration_type cal_type;
-};
-#endif /* _LINUX_EXYNOS4_TMU_H */
--- /dev/null
+/*
+ * exynos_thermal.h - Samsung EXYNOS TMU (Thermal Management Unit)
+ *
+ * Copyright (C) 2011 Samsung Electronics
+ * Donggeun Kim <dg77.kim@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef _LINUX_EXYNOS_THERMAL_H
+#define _LINUX_EXYNOS_THERMAL_H
+#include <linux/cpu_cooling.h>
+
+enum calibration_type {
+ TYPE_ONE_POINT_TRIMMING,
+ TYPE_TWO_POINT_TRIMMING,
+ TYPE_NONE,
+};
+
+enum soc_type {
+ SOC_ARCH_EXYNOS4 = 1,
+ SOC_ARCH_EXYNOS5,
+};
+/**
+ * struct exynos_tmu_platform_data
+ * @threshold: basic temperature for generating interrupt
+ * 25 <= threshold <= 125 [unit: degree Celsius]
+ * @trigger_levels: array for each interrupt levels
+ * [unit: degree Celsius]
+ * 0: temperature for trigger_level0 interrupt
+ * condition for trigger_level0 interrupt:
+ * current temperature > threshold + trigger_levels[0]
+ * 1: temperature for trigger_level1 interrupt
+ * condition for trigger_level1 interrupt:
+ * current temperature > threshold + trigger_levels[1]
+ * 2: temperature for trigger_level2 interrupt
+ * condition for trigger_level2 interrupt:
+ * current temperature > threshold + trigger_levels[2]
+ * 3: temperature for trigger_level3 interrupt
+ * condition for trigger_level3 interrupt:
+ * current temperature > threshold + trigger_levels[3]
+ * @trigger_level0_en:
+ * 1 = enable trigger_level0 interrupt,
+ * 0 = disable trigger_level0 interrupt
+ * @trigger_level1_en:
+ * 1 = enable trigger_level1 interrupt,
+ * 0 = disable trigger_level1 interrupt
+ * @trigger_level2_en:
+ * 1 = enable trigger_level2 interrupt,
+ * 0 = disable trigger_level2 interrupt
+ * @trigger_level3_en:
+ * 1 = enable trigger_level3 interrupt,
+ * 0 = disable trigger_level3 interrupt
+ * @gain: gain of amplifier in the positive-TC generator block
+ * 0 <= gain <= 15
+ * @reference_voltage: reference voltage of amplifier
+ * in the positive-TC generator block
+ * 0 <= reference_voltage <= 31
+ * @noise_cancel_mode: noise cancellation mode
+ * 000, 100, 101, 110 and 111 can be different modes
+ * @type: determines the type of SOC
+ * @efuse_value: platform defined fuse value
+ * @cal_type: calibration type for temperature
+ * @freq_tab: Table of frequency clipping data for each trigger level.
+ * @freq_tab_count: Count of entries in the above table, as frequency
+ * reduction may be applicable to only some of the trigger levels.
+ *
+ * This structure is required for configuration of exynos_tmu driver.
+ */
+struct exynos_tmu_platform_data {
+ u8 threshold;
+ u8 trigger_levels[4];
+ bool trigger_level0_en;
+ bool trigger_level1_en;
+ bool trigger_level2_en;
+ bool trigger_level3_en;
+
+ u8 gain;
+ u8 reference_voltage;
+ u8 noise_cancel_mode;
+ u32 efuse_value;
+
+ enum calibration_type cal_type;
+ enum soc_type type;
+ struct freq_clip_table freq_tab[4];
+ unsigned int freq_tab_count;
+};
+#endif /* _LINUX_EXYNOS_THERMAL_H */
int __sg_alloc_table(struct sg_table *, unsigned int, unsigned int, gfp_t,
sg_alloc_fn *);
int sg_alloc_table(struct sg_table *, unsigned int, gfp_t);
+int sg_alloc_table_from_pages(struct sg_table *sgt,
+ struct page **pages, unsigned int n_pages,
+ unsigned long offset, unsigned long size,
+ gfp_t gfp_mask);
size_t sg_copy_from_buffer(struct scatterlist *sgl, unsigned int nents,
void *buf, size_t buflen);
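
A construction sketch (editorial; pages, n_pages, offset and size are placeholders, e.g. the result of pinning user memory): the new helper turns a page array into a ready-to-map sg_table, coalescing contiguous pages where possible.

	struct sg_table sgt;
	int ret;

	ret = sg_alloc_table_from_pages(&sgt, pages, n_pages,
					offset, size, GFP_KERNEL);
	if (!ret) {
		/* ... dma_map_sg() and use the table ... */
		sg_free_table(&sgt);
	}
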
V4L2_MBUS_FMT_BGR565_2X8_LE = 0x1006,
V4L2_MBUS_FMT_RGB565_2X8_BE = 0x1007,
V4L2_MBUS_FMT_RGB565_2X8_LE = 0x1008,
+ V4L2_MBUS_FMT_XRGB8888_4X8_LE = 0x1009,
/* YUV (including grey) - next is 0x2014 */
V4L2_MBUS_FMT_Y8_1X8 = 0x2001,
V4L2_MBUS_FMT_VYUY8_1X16 = 0x2010,
V4L2_MBUS_FMT_YUYV8_1X16 = 0x2011,
V4L2_MBUS_FMT_YVYU8_1X16 = 0x2012,
+ V4L2_MBUS_FMT_YUV8_1X24 = 0x2014,
V4L2_MBUS_FMT_YUYV10_1X20 = 0x200d,
V4L2_MBUS_FMT_YVYU10_1X20 = 0x200e,
V4L2_MEMORY_MMAP = 1,
V4L2_MEMORY_USERPTR = 2,
V4L2_MEMORY_OVERLAY = 3,
+ V4L2_MEMORY_DMABUF = 4,
};
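
A queueing sketch (editorial, userspace; video_fd and dmabuf_fd are placeholders): with the new memory type, a buffer backed by a DMABUF file descriptor is queued like any other.

	struct v4l2_buffer buf;

	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.memory = V4L2_MEMORY_DMABUF;
	buf.index = 0;
	buf.m.fd = dmabuf_fd;
	if (ioctl(video_fd, VIDIOC_QBUF, &buf) < 0)
		perror("VIDIOC_QBUF");
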
/* see also http://vektor.theorem.ca/graphics/ycbcr/ */
* V I D E O I M A G E F O R M A T
*/
struct v4l2_pix_format {
- __u32 width;
+ __u32 width;
__u32 height;
__u32 pixelformat;
- enum v4l2_field field;
- __u32 bytesperline; /* for padding, zero if unused */
- __u32 sizeimage;
+ enum v4l2_field field;
+ __u32 bytesperline; /* for padding, zero if unused */
+ __u32 sizeimage;
enum v4l2_colorspace colorspace;
__u32 priv; /* private data, depends on pixelformat */
};
/* two non contiguous planes - one Y, one Cr + Cb interleaved */
#define V4L2_PIX_FMT_NV12M v4l2_fourcc('N', 'M', '1', '2') /* 12 Y/CbCr 4:2:0 */
+#define V4L2_PIX_FMT_NV21M v4l2_fourcc('N', 'M', '2', '1') /* 21 Y/CrCb 4:2:0 */
#define V4L2_PIX_FMT_NV12MT v4l2_fourcc('T', 'M', '1', '2') /* 12 Y/CbCr 4:2:0 64x32 macroblocks */
+#define V4L2_PIX_FMT_NV12MT_16X16 v4l2_fourcc('V', 'M', '1', '2') /* 12 Y/CbCr 4:2:0 16x16 macroblocks */
/* three non contiguous planes - Y, Cb, Cr */
#define V4L2_PIX_FMT_YUV420M v4l2_fourcc('Y', 'M', '1', '2') /* 12 YUV420 planar */
+#define V4L2_PIX_FMT_YVU420M v4l2_fourcc('Y', 'V', 'U', 'M') /* 12 YVU420 planar */
/* Bayer formats - see http://www.siliconimaging.com/RGB%20Bayer.htm */
#define V4L2_PIX_FMT_SBGGR8 v4l2_fourcc('B', 'A', '8', '1') /* 8 BGBG.. GRGR.. */
#define V4L2_PIX_FMT_MPEG v4l2_fourcc('M', 'P', 'E', 'G') /* MPEG-1/2/4 Multiplexed */
#define V4L2_PIX_FMT_H264 v4l2_fourcc('H', '2', '6', '4') /* H264 with start codes */
#define V4L2_PIX_FMT_H264_NO_SC v4l2_fourcc('A', 'V', 'C', '1') /* H264 without start codes */
+#define V4L2_PIX_FMT_H264_MVC v4l2_fourcc('M', '2', '6', '4') /* H264 MVC */
#define V4L2_PIX_FMT_H263 v4l2_fourcc('H', '2', '6', '3') /* H263 */
#define V4L2_PIX_FMT_MPEG1 v4l2_fourcc('M', 'P', 'G', '1') /* MPEG-1 ES */
#define V4L2_PIX_FMT_MPEG2 v4l2_fourcc('M', 'P', 'G', '2') /* MPEG-2 ES */
+#define V4L2_PIX_FMT_MPEG12 v4l2_fourcc('M', 'P', '1', '2') /* MPEG-1/2 */
#define V4L2_PIX_FMT_MPEG4 v4l2_fourcc('M', 'P', 'G', '4') /* MPEG-4 ES */
+#define V4L2_PIX_FMT_FIMV v4l2_fourcc('F', 'I', 'M', 'V') /* FIMV */
+#define V4L2_PIX_FMT_FIMV1 v4l2_fourcc('F', 'I', 'M', '1') /* FIMV1 */
+#define V4L2_PIX_FMT_FIMV2 v4l2_fourcc('F', 'I', 'M', '2') /* FIMV2 */
+#define V4L2_PIX_FMT_FIMV3 v4l2_fourcc('F', 'I', 'M', '3') /* FIMV3 */
+#define V4L2_PIX_FMT_FIMV4 v4l2_fourcc('F', 'I', 'M', '4') /* FIMV4 */
#define V4L2_PIX_FMT_XVID v4l2_fourcc('X', 'V', 'I', 'D') /* Xvid */
#define V4L2_PIX_FMT_VC1_ANNEX_G v4l2_fourcc('V', 'C', '1', 'G') /* SMPTE 421M Annex G compliant stream */
#define V4L2_PIX_FMT_VC1_ANNEX_L v4l2_fourcc('V', 'C', '1', 'L') /* SMPTE 421M Annex L compliant stream */
+#define V4L2_PIX_FMT_VC1 v4l2_fourcc('V', 'C', '1', 'A') /* VC-1 */
+#define V4L2_PIX_FMT_VC1_RCV v4l2_fourcc('V', 'C', '1', 'R') /* VC-1 RCV */
+#define V4L2_PIX_FMT_VP8 v4l2_fourcc('V', 'P', '8', '0') /* VP8 */
/* Vendor-specific formats */
#define V4L2_PIX_FMT_CPIA1 v4l2_fourcc('C', 'P', 'I', 'A') /* cpia1 YUV */
* should be passed to mmap() called on the video node)
* @userptr: when memory is V4L2_MEMORY_USERPTR, a userspace pointer
* pointing to this plane
+ * @fd: when memory is V4L2_MEMORY_DMABUF, a userspace file
+ * descriptor associated with this plane
* @data_offset: offset in the plane to the start of data; usually 0,
* unless there is a header in front of the data
*
union {
__u32 mem_offset;
unsigned long userptr;
+ int fd;
} m;
__u32 data_offset;
__u32 reserved[11];
* (or a "cookie" that should be passed to mmap() as offset)
* @userptr: for non-multiplanar buffers with memory == V4L2_MEMORY_USERPTR;
* a userspace pointer pointing to this buffer
+ * @fd: for non-multiplanar buffers with memory == V4L2_MEMORY_DMABUF;
+ * a userspace file descriptor associated with this buffer
* @planes: for multiplanar buffers; userspace pointer to the array of plane
* info structs for this buffer
* @length: size in bytes of the buffer (NOT its payload) for single-plane
__u32 offset;
unsigned long userptr;
struct v4l2_plane *planes;
+ int fd;
} m;
__u32 length;
__u32 input;
#define V4L2_BUF_FLAG_NO_CACHE_INVALIDATE 0x0800
#define V4L2_BUF_FLAG_NO_CACHE_CLEAN 0x1000
+/**
+ * struct v4l2_exportbuffer - export of video buffer as DMABUF file descriptor
+ *
+ * @fd: file descriptor associated with DMABUF (set by driver)
+ * @mem_offset: buffer memory offset as returned by VIDIOC_QUERYBUF in struct
+ * v4l2_buffer::m.offset (for single-plane formats) or
+ * v4l2_plane::m.offset (for multi-planar formats)
+ * @flags: flags for the newly created file; currently only O_CLOEXEC is
+ * supported, refer to the manual of the open() syscall for more details
+ *
+ * Contains data used for exporting a video buffer as DMABUF file descriptor.
+ * The buffer is identified by a 'cookie' returned by VIDIOC_QUERYBUF
+ * (identical to the cookie used to mmap() the buffer to userspace). All
+ * reserved fields must be set to zero. The field reserved0 is expected to
+ * become a structure 'type' allowing an alternative layout of the structure
+ * content. Therefore this field should not be used for any other extensions.
+ */
+struct v4l2_exportbuffer {
+ __u32 fd;
+ __u32 reserved0;
+ __u32 mem_offset;
+ __u32 flags;
+ __u32 reserved[12];
+};
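
An export sketch (editorial, userspace; video_fd is a placeholder): the cookie obtained from VIDIOC_QUERYBUF identifies the buffer to export.

	struct v4l2_exportbuffer expbuf;

	memset(&expbuf, 0, sizeof(expbuf));
	expbuf.mem_offset = buf.m.offset;	/* cookie from VIDIOC_QUERYBUF */
	expbuf.flags = O_CLOEXEC;
	if (ioctl(video_fd, VIDIOC_EXPBUF, &expbuf) == 0)
		dmabuf_fd = expbuf.fd;
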
+
/*
* O V E R L A Y P R E V I E W
*/
V4L2_STD_NTSC_M_JP |\
V4L2_STD_NTSC_M_KR)
/* Secam macros */
-#define V4L2_STD_SECAM_DK (V4L2_STD_SECAM_D |\
+#define V4L2_STD_SECAM_DK (V4L2_STD_SECAM_D |\
V4L2_STD_SECAM_K |\
V4L2_STD_SECAM_K1)
/* All Secam Standards */
#define V4L2_DV_1080P50 17 /* BT.1120 */
#define V4L2_DV_1080P60 18 /* BT.1120 */
+#define V4L2_DV_480P60 19
+#define V4L2_DV_1080I59_94 20
+#define V4L2_DV_1080P59_94 21
+
+#define V4L2_DV_720P60_FP 22
+#define V4L2_DV_720P60_SB_HALF 23
+#define V4L2_DV_720P60_TB 24
+#define V4L2_DV_720P59_94_FP 25
+#define V4L2_DV_720P59_94_SB_HALF 26
+#define V4L2_DV_720P59_94_TB 27
+#define V4L2_DV_720P50_FP 28
+#define V4L2_DV_720P50_SB_HALF 29
+#define V4L2_DV_720P50_TB 30
+#define V4L2_DV_1080P24_FP 31
+#define V4L2_DV_1080P24_SB_HALF 32
+#define V4L2_DV_1080P24_TB 33
+#define V4L2_DV_1080P23_98_FP 34
+#define V4L2_DV_1080P23_98_SB_HALF 35
+#define V4L2_DV_1080P23_98_TB 36
+#define V4L2_DV_1080I60_SB_HALF 37
+#define V4L2_DV_1080I59_94_SB_HALF 38
+#define V4L2_DV_1080I50_SB_HALF 39
+#define V4L2_DV_1080P60_SB_HALF 40
+#define V4L2_DV_1080P60_TB 41
+#define V4L2_DV_1080P30_FP 42
+#define V4L2_DV_1080P30_SB_HALF 43
+#define V4L2_DV_1080P30_TB 44
/*
* D V B T T I M I N G S
*/
#define V4L2_CTRL_CLASS_MPEG 0x00990000 /* MPEG-compression controls */
#define V4L2_CTRL_CLASS_CAMERA 0x009a0000 /* Camera class controls */
#define V4L2_CTRL_CLASS_FM_TX 0x009b0000 /* FM Modulator control class */
-#define V4L2_CTRL_CLASS_FLASH 0x009c0000 /* Camera flash controls */
-#define V4L2_CTRL_CLASS_JPEG 0x009d0000 /* JPEG-compression controls */
+#define V4L2_CTRL_CLASS_CODEC 0x009c0000 /* Codec control class */
+#define V4L2_CTRL_CLASS_FLASH 0x009d0000 /* Camera flash controls */
+#define V4L2_CTRL_CLASS_JPEG 0x009e0000 /* JPEG-compression controls */
+#define V4L2_CID_EXYNOS_BASE (V4L2_CTRL_CLASS_USER | 0x2000)
+#define V4L2_CID_TV_HPD_STATUS (V4L2_CID_EXYNOS_BASE + 55)
#define V4L2_CTRL_ID_MASK (0x0fffffff)
#define V4L2_CTRL_ID2CLASS(id) ((id) & 0x0fff0000UL)
/* Control flags */
#define V4L2_CTRL_FLAG_DISABLED 0x0001
#define V4L2_CTRL_FLAG_GRABBED 0x0002
-#define V4L2_CTRL_FLAG_READ_ONLY 0x0004
-#define V4L2_CTRL_FLAG_UPDATE 0x0008
-#define V4L2_CTRL_FLAG_INACTIVE 0x0010
-#define V4L2_CTRL_FLAG_SLIDER 0x0020
-#define V4L2_CTRL_FLAG_WRITE_ONLY 0x0040
+#define V4L2_CTRL_FLAG_READ_ONLY 0x0004
+#define V4L2_CTRL_FLAG_UPDATE 0x0008
+#define V4L2_CTRL_FLAG_INACTIVE 0x0010
+#define V4L2_CTRL_FLAG_SLIDER 0x0020
+#define V4L2_CTRL_FLAG_WRITE_ONLY 0x0040
#define V4L2_CTRL_FLAG_VOLATILE 0x0080
/* Query flag, to be ORed with the control ID */
/* IDs reserved for driver specific controls */
#define V4L2_CID_PRIVATE_BASE 0x08000000
-#define V4L2_CID_USER_CLASS (V4L2_CTRL_CLASS_USER | 1)
+#define V4L2_CID_USER_CLASS (V4L2_CTRL_CLASS_USER | 1)
#define V4L2_CID_BRIGHTNESS (V4L2_CID_BASE+0)
#define V4L2_CID_CONTRAST (V4L2_CID_BASE+1)
#define V4L2_CID_SATURATION (V4L2_CID_BASE+2)
#define V4L2_CID_HUE_AUTO (V4L2_CID_BASE+25)
#define V4L2_CID_WHITE_BALANCE_TEMPERATURE (V4L2_CID_BASE+26)
#define V4L2_CID_SHARPNESS (V4L2_CID_BASE+27)
-#define V4L2_CID_BACKLIGHT_COMPENSATION (V4L2_CID_BASE+28)
-#define V4L2_CID_CHROMA_AGC (V4L2_CID_BASE+29)
-#define V4L2_CID_COLOR_KILLER (V4L2_CID_BASE+30)
+#define V4L2_CID_BACKLIGHT_COMPENSATION (V4L2_CID_BASE+28)
+#define V4L2_CID_CHROMA_AGC (V4L2_CID_BASE+29)
+#define V4L2_CID_COLOR_KILLER (V4L2_CID_BASE+30)
#define V4L2_CID_COLORFX (V4L2_CID_BASE+31)
enum v4l2_colorfx {
V4L2_COLORFX_NONE = 0,
V4L2_COLORFX_BW = 1,
V4L2_COLORFX_SEPIA = 2,
- V4L2_COLORFX_NEGATIVE = 3,
- V4L2_COLORFX_EMBOSS = 4,
- V4L2_COLORFX_SKETCH = 5,
- V4L2_COLORFX_SKY_BLUE = 6,
+ V4L2_COLORFX_NEGATIVE = 3,
+ V4L2_COLORFX_EMBOSS = 4,
+ V4L2_COLORFX_SKETCH = 5,
+ V4L2_COLORFX_SKY_BLUE = 6,
V4L2_COLORFX_GRASS_GREEN = 7,
V4L2_COLORFX_SKIN_WHITEN = 8,
- V4L2_COLORFX_VIVID = 9,
+ V4L2_COLORFX_VIVID = 9,
};
#define V4L2_CID_AUTOBRIGHTNESS (V4L2_CID_BASE+32)
#define V4L2_CID_BAND_STOP_FILTER (V4L2_CID_BASE+33)
/* last CID + 1 */
#define V4L2_CID_LASTP1 (V4L2_CID_BASE+42)
+#define V4L2_CID_CODEC_DISPLAY_STATUS (V4L2_CID_BASE + 54)
+
/* MPEG-class control IDs defined by V4L2 */
#define V4L2_CID_MPEG_BASE (V4L2_CTRL_CLASS_MPEG | 0x900)
#define V4L2_CID_MPEG_CLASS (V4L2_CTRL_CLASS_MPEG | 1)
V4L2_MPEG_STREAM_TYPE_MPEG1_VCD = 4, /* MPEG-1 VCD-compatible stream */
V4L2_MPEG_STREAM_TYPE_MPEG2_SVCD = 5, /* MPEG-2 SVCD-compatible stream */
};
-#define V4L2_CID_MPEG_STREAM_PID_PMT (V4L2_CID_MPEG_BASE+1)
-#define V4L2_CID_MPEG_STREAM_PID_AUDIO (V4L2_CID_MPEG_BASE+2)
-#define V4L2_CID_MPEG_STREAM_PID_VIDEO (V4L2_CID_MPEG_BASE+3)
-#define V4L2_CID_MPEG_STREAM_PID_PCR (V4L2_CID_MPEG_BASE+4)
-#define V4L2_CID_MPEG_STREAM_PES_ID_AUDIO (V4L2_CID_MPEG_BASE+5)
-#define V4L2_CID_MPEG_STREAM_PES_ID_VIDEO (V4L2_CID_MPEG_BASE+6)
-#define V4L2_CID_MPEG_STREAM_VBI_FMT (V4L2_CID_MPEG_BASE+7)
+#define V4L2_CID_MPEG_STREAM_PID_PMT (V4L2_CID_MPEG_BASE+1)
+#define V4L2_CID_MPEG_STREAM_PID_AUDIO (V4L2_CID_MPEG_BASE+2)
+#define V4L2_CID_MPEG_STREAM_PID_VIDEO (V4L2_CID_MPEG_BASE+3)
+#define V4L2_CID_MPEG_STREAM_PID_PCR (V4L2_CID_MPEG_BASE+4)
+#define V4L2_CID_MPEG_STREAM_PES_ID_AUDIO (V4L2_CID_MPEG_BASE+5)
+#define V4L2_CID_MPEG_STREAM_PES_ID_VIDEO (V4L2_CID_MPEG_BASE+6)
+#define V4L2_CID_MPEG_STREAM_VBI_FMT (V4L2_CID_MPEG_BASE+7)
enum v4l2_mpeg_stream_vbi_fmt {
V4L2_MPEG_STREAM_VBI_FMT_NONE = 0, /* No VBI in the MPEG stream */
V4L2_MPEG_STREAM_VBI_FMT_IVTV = 1, /* VBI in private packets, IVTV format */
V4L2_MPEG_AUDIO_SAMPLING_FREQ_48000 = 1,
V4L2_MPEG_AUDIO_SAMPLING_FREQ_32000 = 2,
};
-#define V4L2_CID_MPEG_AUDIO_ENCODING (V4L2_CID_MPEG_BASE+101)
+#define V4L2_CID_MPEG_AUDIO_ENCODING (V4L2_CID_MPEG_BASE+101)
enum v4l2_mpeg_audio_encoding {
V4L2_MPEG_AUDIO_ENCODING_LAYER_1 = 0,
V4L2_MPEG_AUDIO_ENCODING_LAYER_2 = 1,
V4L2_MPEG_AUDIO_ENCODING_AAC = 3,
V4L2_MPEG_AUDIO_ENCODING_AC3 = 4,
};
-#define V4L2_CID_MPEG_AUDIO_L1_BITRATE (V4L2_CID_MPEG_BASE+102)
+#define V4L2_CID_MPEG_AUDIO_L1_BITRATE (V4L2_CID_MPEG_BASE+102)
enum v4l2_mpeg_audio_l1_bitrate {
V4L2_MPEG_AUDIO_L1_BITRATE_32K = 0,
V4L2_MPEG_AUDIO_L1_BITRATE_64K = 1,
V4L2_MPEG_AUDIO_L2_BITRATE_320K = 12,
V4L2_MPEG_AUDIO_L2_BITRATE_384K = 13,
};
-#define V4L2_CID_MPEG_AUDIO_L3_BITRATE (V4L2_CID_MPEG_BASE+104)
+#define V4L2_CID_MPEG_AUDIO_L3_BITRATE (V4L2_CID_MPEG_BASE+104)
enum v4l2_mpeg_audio_l3_bitrate {
V4L2_MPEG_AUDIO_L3_BITRATE_32K = 0,
V4L2_MPEG_AUDIO_L3_BITRATE_40K = 1,
V4L2_MPEG_AUDIO_L3_BITRATE_256K = 12,
V4L2_MPEG_AUDIO_L3_BITRATE_320K = 13,
};
-#define V4L2_CID_MPEG_AUDIO_MODE (V4L2_CID_MPEG_BASE+105)
+#define V4L2_CID_MPEG_AUDIO_MODE (V4L2_CID_MPEG_BASE+105)
enum v4l2_mpeg_audio_mode {
V4L2_MPEG_AUDIO_MODE_STEREO = 0,
V4L2_MPEG_AUDIO_MODE_JOINT_STEREO = 1,
V4L2_MPEG_AUDIO_MODE_DUAL = 2,
V4L2_MPEG_AUDIO_MODE_MONO = 3,
};
-#define V4L2_CID_MPEG_AUDIO_MODE_EXTENSION (V4L2_CID_MPEG_BASE+106)
+#define V4L2_CID_MPEG_AUDIO_MODE_EXTENSION (V4L2_CID_MPEG_BASE+106)
enum v4l2_mpeg_audio_mode_extension {
V4L2_MPEG_AUDIO_MODE_EXTENSION_BOUND_4 = 0,
V4L2_MPEG_AUDIO_MODE_EXTENSION_BOUND_8 = 1,
V4L2_MPEG_AUDIO_MODE_EXTENSION_BOUND_12 = 2,
V4L2_MPEG_AUDIO_MODE_EXTENSION_BOUND_16 = 3,
};
-#define V4L2_CID_MPEG_AUDIO_EMPHASIS (V4L2_CID_MPEG_BASE+107)
+#define V4L2_CID_MPEG_AUDIO_EMPHASIS (V4L2_CID_MPEG_BASE+107)
enum v4l2_mpeg_audio_emphasis {
V4L2_MPEG_AUDIO_EMPHASIS_NONE = 0,
V4L2_MPEG_AUDIO_EMPHASIS_50_DIV_15_uS = 1,
V4L2_MPEG_AUDIO_EMPHASIS_CCITT_J17 = 2,
};
-#define V4L2_CID_MPEG_AUDIO_CRC (V4L2_CID_MPEG_BASE+108)
+#define V4L2_CID_MPEG_AUDIO_CRC (V4L2_CID_MPEG_BASE+108)
enum v4l2_mpeg_audio_crc {
V4L2_MPEG_AUDIO_CRC_NONE = 0,
V4L2_MPEG_AUDIO_CRC_CRC16 = 1,
};
-#define V4L2_CID_MPEG_AUDIO_MUTE (V4L2_CID_MPEG_BASE+109)
+#define V4L2_CID_MPEG_AUDIO_MUTE (V4L2_CID_MPEG_BASE+109)
#define V4L2_CID_MPEG_AUDIO_AAC_BITRATE (V4L2_CID_MPEG_BASE+110)
#define V4L2_CID_MPEG_AUDIO_AC3_BITRATE (V4L2_CID_MPEG_BASE+111)
enum v4l2_mpeg_audio_ac3_bitrate {
#define V4L2_CID_MPEG_AUDIO_DEC_MULTILINGUAL_PLAYBACK (V4L2_CID_MPEG_BASE+113)
/* MPEG video controls specific to multiplexed streams */
-#define V4L2_CID_MPEG_VIDEO_ENCODING (V4L2_CID_MPEG_BASE+200)
+#define V4L2_CID_MPEG_VIDEO_ENCODING (V4L2_CID_MPEG_BASE+200)
enum v4l2_mpeg_video_encoding {
V4L2_MPEG_VIDEO_ENCODING_MPEG_1 = 0,
V4L2_MPEG_VIDEO_ENCODING_MPEG_2 = 1,
V4L2_MPEG_VIDEO_ENCODING_MPEG_4_AVC = 2,
};
-#define V4L2_CID_MPEG_VIDEO_ASPECT (V4L2_CID_MPEG_BASE+201)
+#define V4L2_CID_MPEG_VIDEO_ASPECT (V4L2_CID_MPEG_BASE+201)
enum v4l2_mpeg_video_aspect {
V4L2_MPEG_VIDEO_ASPECT_1x1 = 0,
V4L2_MPEG_VIDEO_ASPECT_4x3 = 1,
V4L2_MPEG_VIDEO_ASPECT_16x9 = 2,
V4L2_MPEG_VIDEO_ASPECT_221x100 = 3,
};
-#define V4L2_CID_MPEG_VIDEO_B_FRAMES (V4L2_CID_MPEG_BASE+202)
-#define V4L2_CID_MPEG_VIDEO_GOP_SIZE (V4L2_CID_MPEG_BASE+203)
-#define V4L2_CID_MPEG_VIDEO_GOP_CLOSURE (V4L2_CID_MPEG_BASE+204)
-#define V4L2_CID_MPEG_VIDEO_PULLDOWN (V4L2_CID_MPEG_BASE+205)
-#define V4L2_CID_MPEG_VIDEO_BITRATE_MODE (V4L2_CID_MPEG_BASE+206)
+#define V4L2_CID_MPEG_VIDEO_B_FRAMES (V4L2_CID_MPEG_BASE+202)
+#define V4L2_CID_MPEG_VIDEO_GOP_SIZE (V4L2_CID_MPEG_BASE+203)
+#define V4L2_CID_MPEG_VIDEO_GOP_CLOSURE (V4L2_CID_MPEG_BASE+204)
+#define V4L2_CID_MPEG_VIDEO_PULLDOWN (V4L2_CID_MPEG_BASE+205)
+#define V4L2_CID_MPEG_VIDEO_BITRATE_MODE (V4L2_CID_MPEG_BASE+206)
enum v4l2_mpeg_video_bitrate_mode {
V4L2_MPEG_VIDEO_BITRATE_MODE_VBR = 0,
V4L2_MPEG_VIDEO_BITRATE_MODE_CBR = 1,
};
-#define V4L2_CID_MPEG_VIDEO_BITRATE (V4L2_CID_MPEG_BASE+207)
-#define V4L2_CID_MPEG_VIDEO_BITRATE_PEAK (V4L2_CID_MPEG_BASE+208)
+#define V4L2_CID_MPEG_VIDEO_BITRATE (V4L2_CID_MPEG_BASE+207)
+#define V4L2_CID_MPEG_VIDEO_BITRATE_PEAK (V4L2_CID_MPEG_BASE+208)
#define V4L2_CID_MPEG_VIDEO_TEMPORAL_DECIMATION (V4L2_CID_MPEG_BASE+209)
-#define V4L2_CID_MPEG_VIDEO_MUTE (V4L2_CID_MPEG_BASE+210)
-#define V4L2_CID_MPEG_VIDEO_MUTE_YUV (V4L2_CID_MPEG_BASE+211)
+#define V4L2_CID_MPEG_VIDEO_MUTE (V4L2_CID_MPEG_BASE+210)
+#define V4L2_CID_MPEG_VIDEO_MUTE_YUV (V4L2_CID_MPEG_BASE+211)
#define V4L2_CID_MPEG_VIDEO_DECODER_SLICE_INTERFACE (V4L2_CID_MPEG_BASE+212)
#define V4L2_CID_MPEG_VIDEO_DECODER_MPEG4_DEBLOCK_FILTER (V4L2_CID_MPEG_BASE+213)
#define V4L2_CID_MPEG_VIDEO_CYCLIC_INTRA_REFRESH_MB (V4L2_CID_MPEG_BASE+214)
#define V4L2_CID_MPEG_VIDEO_MAX_REF_PIC (V4L2_CID_MPEG_BASE+217)
#define V4L2_CID_MPEG_VIDEO_MB_RC_ENABLE (V4L2_CID_MPEG_BASE+218)
#define V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MAX_BYTES (V4L2_CID_MPEG_BASE+219)
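+/* shares the control ID of V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MAX_BYTES (alias) */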
+#define V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MAX_BITS (V4L2_CID_MPEG_BASE+219)
#define V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MAX_MB (V4L2_CID_MPEG_BASE+220)
#define V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MODE (V4L2_CID_MPEG_BASE+221)
enum v4l2_mpeg_video_multi_slice_mode {
V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_SINGLE = 0,
V4L2_MPEG_VIDEO_MULTI_SICE_MODE_MAX_MB = 1,
V4L2_MPEG_VIDEO_MULTI_SICE_MODE_MAX_BYTES = 2,
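+	/* correctly spelled duplicates of the misspelled names above */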
+ V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_MB = 1,
+ V4L2_MPEG_VIDEO_MULTI_SLICE_MODE_MAX_BITS = 2,
};
#define V4L2_CID_MPEG_VIDEO_VBV_SIZE (V4L2_CID_MPEG_BASE+222)
#define V4L2_CID_MPEG_VIDEO_DEC_PTS (V4L2_CID_MPEG_BASE+223)
#define V4L2_CID_MPEG_VIDEO_DEC_FRAME (V4L2_CID_MPEG_BASE+224)
+#define V4L2_CID_MPEG_VIDEO_VBV_DELAY (V4L2_CID_MPEG_BASE+225)
#define V4L2_CID_MPEG_VIDEO_H263_I_FRAME_QP (V4L2_CID_MPEG_BASE+300)
#define V4L2_CID_MPEG_VIDEO_H263_P_FRAME_QP (V4L2_CID_MPEG_BASE+301)
V4L2_MPEG_VIDEO_H264_VUI_SAR_IDC_2x1 = 16,
V4L2_MPEG_VIDEO_H264_VUI_SAR_IDC_EXTENDED = 17,
};
+#define V4L2_CID_MPEG_VIDEO_H264_SEI_FRAME_PACKING (V4L2_CID_MPEG_BASE+368)
+#define V4L2_CID_MPEG_VIDEO_H264_SEI_FP_CURRENT_FRAME_0 (V4L2_CID_MPEG_BASE+369)
+#define V4L2_CID_MPEG_VIDEO_H264_SEI_FP_ARRANGEMENT_TYPE (V4L2_CID_MPEG_BASE+370)
+enum v4l2_mpeg_video_h264_sei_fp_arrangement_type {
+	V4L2_MPEG_VIDEO_H264_SEI_FP_ARRANGEMENT_TYPE_CHECKERBOARD = 0,
+ V4L2_MPEG_VIDEO_H264_SEI_FP_ARRANGEMENT_TYPE_COLUMN = 1,
+ V4L2_MPEG_VIDEO_H264_SEI_FP_ARRANGEMENT_TYPE_ROW = 2,
+ V4L2_MPEG_VIDEO_H264_SEI_FP_ARRANGEMENT_TYPE_SIDE_BY_SIDE = 3,
+ V4L2_MPEG_VIDEO_H264_SEI_FP_ARRANGEMENT_TYPE_TOP_BOTTOM = 4,
+ V4L2_MPEG_VIDEO_H264_SEI_FP_ARRANGEMENT_TYPE_TEMPORAL = 5,
+};
+#define V4L2_CID_MPEG_VIDEO_H264_FMO (V4L2_CID_MPEG_BASE+371)
+#define V4L2_CID_MPEG_VIDEO_H264_FMO_MAP_TYPE (V4L2_CID_MPEG_BASE+372)
+enum v4l2_mpeg_video_h264_fmo_map_type {
+ V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_INTERLEAVED_SLICES = 0,
+ V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_SCATTERED_SLICES = 1,
+ V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_FOREGROUND_WITH_LEFT_OVER = 2,
+ V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_BOX_OUT = 3,
+ V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_RASTER_SCAN = 4,
+ V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_WIPE_SCAN = 5,
+ V4L2_MPEG_VIDEO_H264_FMO_MAP_TYPE_EXPLICIT = 6,
+};
+#define V4L2_CID_MPEG_VIDEO_H264_FMO_SLICE_GROUP (V4L2_CID_MPEG_BASE+373)
+#define V4L2_CID_MPEG_VIDEO_H264_FMO_CHANGE_DIRECTION (V4L2_CID_MPEG_BASE+374)
+enum v4l2_mpeg_video_h264_fmo_change_dir {
+ V4L2_MPEG_VIDEO_H264_FMO_CHANGE_DIR_RIGHT = 0,
+ V4L2_MPEG_VIDEO_H264_FMO_CHANGE_DIR_LEFT = 1,
+};
+#define V4L2_CID_MPEG_VIDEO_H264_FMO_CHANGE_RATE (V4L2_CID_MPEG_BASE+375)
+#define V4L2_CID_MPEG_VIDEO_H264_FMO_RUN_LENGTH (V4L2_CID_MPEG_BASE+376)
+#define V4L2_CID_MPEG_VIDEO_H264_ASO (V4L2_CID_MPEG_BASE+377)
+#define V4L2_CID_MPEG_VIDEO_H264_ASO_SLICE_ORDER (V4L2_CID_MPEG_BASE+378)
+#define V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING (V4L2_CID_MPEG_BASE+379)
+#define V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_TYPE (V4L2_CID_MPEG_BASE+380)
+enum v4l2_mpeg_video_h264_hierarchical_coding_type {
+ V4L2_MPEG_VIDEO_H264_HIERARCHICAL_CODING_B = 0,
+ V4L2_MPEG_VIDEO_H264_HIERARCHICAL_CODING_P = 1,
+};
+#define V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER (V4L2_CID_MPEG_BASE+381)
+#define V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_LAYER_QP (V4L2_CID_MPEG_BASE+382)
#define V4L2_CID_MPEG_VIDEO_MPEG4_I_FRAME_QP (V4L2_CID_MPEG_BASE+400)
#define V4L2_CID_MPEG_VIDEO_MPEG4_P_FRAME_QP (V4L2_CID_MPEG_BASE+401)
#define V4L2_CID_MPEG_VIDEO_MPEG4_B_FRAME_QP (V4L2_CID_MPEG_BASE+402)
#define V4L2_CID_MPEG_MFC51_VIDEO_H264_ADAPTIVE_RC_STATIC (V4L2_CID_MPEG_MFC51_BASE+53)
#define V4L2_CID_MPEG_MFC51_VIDEO_H264_NUM_REF_PIC_FOR_P (V4L2_CID_MPEG_MFC51_BASE+54)
+#define V4L2_CID_CODEC_BASE (V4L2_CTRL_CLASS_CODEC | 0x900)
+#define V4L2_CID_CODEC_CLASS (V4L2_CTRL_CLASS_CODEC | 1)
+
+/* Codec class control IDs specific to the MFC5X driver */
+#define V4L2_CID_CODEC_MFC5X_BASE (V4L2_CTRL_CLASS_CODEC | 0x1000)
+
+/* For both decoding and encoding */
+
+/* For decoding */
+
+#define V4L2_CID_CODEC_LOOP_FILTER_MPEG4_ENABLE (V4L2_CID_CODEC_BASE + 110)
+#define V4L2_CID_CODEC_DISPLAY_DELAY (V4L2_CID_CODEC_BASE + 137)
+#define V4L2_CID_CODEC_REQ_NUM_BUFS (V4L2_CID_CODEC_BASE + 140)
+#define V4L2_CID_CODEC_SLICE_INTERFACE (V4L2_CID_CODEC_BASE + 141)
+#define V4L2_CID_CODEC_PACKED_PB (V4L2_CID_CODEC_BASE + 142)
+#define V4L2_CID_CODEC_FRAME_TAG (V4L2_CID_CODEC_BASE + 143)
+#define V4L2_CID_CODEC_CRC_ENABLE (V4L2_CID_CODEC_BASE + 144)
+#define V4L2_CID_CODEC_CRC_DATA_LUMA (V4L2_CID_CODEC_BASE + 145)
+#define V4L2_CID_CODEC_CRC_DATA_CHROMA (V4L2_CID_CODEC_BASE + 146)
+#define V4L2_CID_CODEC_CRC_DATA_LUMA_BOT (V4L2_CID_CODEC_BASE + 147)
+#define V4L2_CID_CODEC_CRC_DATA_CHROMA_BOT (V4L2_CID_CODEC_BASE + 148)
+#define V4L2_CID_CODEC_CRC_GENERATED (V4L2_CID_CODEC_BASE + 149)
+#define V4L2_CID_CODEC_FRAME_TYPE (V4L2_CID_CODEC_BASE + 154)
+#define V4L2_CID_CODEC_CHECK_STATE (V4L2_CID_CODEC_BASE + 155)
+#define V4L2_CID_CODEC_FRAME_PACK_SEI_PARSE (V4L2_CID_CODEC_BASE + 157)
+#define V4L2_CID_CODEC_FRAME_PACK_SEI_AVAIL (V4L2_CID_CODEC_BASE + 158)
+#define V4L2_CID_CODEC_FRAME_PACK_ARRGMENT_ID (V4L2_CID_CODEC_BASE + 159)
+#define V4L2_CID_CODEC_FRAME_PACK_SEI_INFO (V4L2_CID_CODEC_BASE + 160)
+#define V4L2_CID_CODEC_FRAME_PACK_GRID_POS (V4L2_CID_CODEC_BASE + 161)
+
+/* For encoding */
+#define V4L2_CID_CODEC_LOOP_FILTER_H264 (V4L2_CID_CODEC_BASE + 9)
+enum v4l2_cid_codec_loop_filter_h264 {
+ V4L2_CID_CODEC_LOOP_FILTER_H264_ENABLE = 0,
+ V4L2_CID_CODEC_LOOP_FILTER_H264_DISABLE = 1,
+ V4L2_CID_CODEC_LOOP_FILTER_H264_DISABLE_AT_BOUNDARY = 2,
+};
+
+#define V4L2_CID_CODEC_FRAME_INSERTION (V4L2_CID_CODEC_BASE + 10)
+enum v4l2_cid_codec_frame_insertion {
+ V4L2_CID_CODEC_FRAME_INSERT_NONE = 0x0,
+ V4L2_CID_CODEC_FRAME_INSERT_I_FRAME = 0x1,
+ V4L2_CID_CODEC_FRAME_INSERT_NOT_CODED = 0x2,
+};
+
+#define V4L2_CID_CODEC_ENCODED_LUMA_ADDR (V4L2_CID_CODEC_BASE + 11)
+#define V4L2_CID_CODEC_ENCODED_CHROMA_ADDR (V4L2_CID_CODEC_BASE + 12)
+
+#define V4L2_CID_CODEC_ENCODED_I_PERIOD_CH V4L2_CID_CODEC_MFC5X_ENC_GOP_SIZE
+#define V4L2_CID_CODEC_ENCODED_FRAME_RATE_CH V4L2_CID_CODEC_MFC5X_ENC_H264_RC_FRAME_RATE
+#define V4L2_CID_CODEC_ENCODED_BIT_RATE_CH V4L2_CID_CODEC_MFC5X_ENC_RC_BIT_RATE
+
+#define V4L2_CID_CODEC_FRAME_PACK_SEI_GEN (V4L2_CID_CODEC_BASE + 13)
+#define V4L2_CID_CODEC_FRAME_PACK_FRM0_FLAG (V4L2_CID_CODEC_BASE + 14)
+enum v4l2_codec_mfc5x_enc_flag {
+ V4L2_CODEC_MFC5X_ENC_FLAG_DISABLE = 0,
+ V4L2_CODEC_MFC5X_ENC_FLAG_ENABLE = 1,
+};
+#define V4L2_CID_CODEC_FRAME_PACK_ARRGMENT_TYPE (V4L2_CID_CODEC_BASE + 15)
+enum v4l2_codec_mfc5x_enc_frame_pack_arrgment_type {
+ V4L2_CODEC_MFC5X_ENC_FRAME_PACK_SIDE_BY_SIDE = 0,
+ V4L2_CODEC_MFC5X_ENC_FRAME_PACK_TOP_AND_BOT = 1,
+ V4L2_CODEC_MFC5X_ENC_FRAME_PACK_TMP_INTER = 2,
+};
+
+/* common */
+enum v4l2_codec_mfc5x_enc_switch {
+ V4L2_CODEC_MFC5X_ENC_SW_DISABLE = 0,
+ V4L2_CODEC_MFC5X_ENC_SW_ENABLE = 1,
+};
+enum v4l2_codec_mfc5x_enc_switch_inv {
+ V4L2_CODEC_MFC5X_ENC_SW_INV_ENABLE = 0,
+ V4L2_CODEC_MFC5X_ENC_SW_INV_DISABLE = 1,
+};
+#define V4L2_CID_CODEC_MFC5X_ENC_GOP_SIZE (V4L2_CID_CODEC_MFC5X_BASE+300)
+#define V4L2_CID_CODEC_MFC5X_ENC_MULTI_SLICE_MODE (V4L2_CID_CODEC_MFC5X_BASE+301)
+enum v4l2_codec_mfc5x_enc_multi_slice_mode {
+ V4L2_CODEC_MFC5X_ENC_MULTI_SLICE_MODE_DISABLE = 0,
+ V4L2_CODEC_MFC5X_ENC_MULTI_SLICE_MODE_MACROBLOCK_COUNT = 1,
+ V4L2_CODEC_MFC5X_ENC_MULTI_SLICE_MODE_BIT_COUNT = 3,
+};
+#define V4L2_CID_CODEC_MFC5X_ENC_MULTI_SLICE_MB (V4L2_CID_CODEC_MFC5X_BASE+302)
+#define V4L2_CID_CODEC_MFC5X_ENC_MULTI_SLICE_BIT (V4L2_CID_CODEC_MFC5X_BASE+303)
+#define V4L2_CID_CODEC_MFC5X_ENC_INTRA_REFRESH_MB (V4L2_CID_CODEC_MFC5X_BASE+304)
+#define V4L2_CID_CODEC_MFC5X_ENC_PAD_CTRL_ENABLE (V4L2_CID_CODEC_MFC5X_BASE+305)
+#define V4L2_CID_CODEC_MFC5X_ENC_PAD_LUMA_VALUE (V4L2_CID_CODEC_MFC5X_BASE+306)
+#define V4L2_CID_CODEC_MFC5X_ENC_PAD_CB_VALUE (V4L2_CID_CODEC_MFC5X_BASE+307)
+#define V4L2_CID_CODEC_MFC5X_ENC_PAD_CR_VALUE (V4L2_CID_CODEC_MFC5X_BASE+308)
+#define V4L2_CID_CODEC_MFC5X_ENC_RC_FRAME_ENABLE (V4L2_CID_CODEC_MFC5X_BASE+309)
+#define V4L2_CID_CODEC_MFC5X_ENC_RC_BIT_RATE (V4L2_CID_CODEC_MFC5X_BASE+310)
+#define V4L2_CID_CODEC_MFC5X_ENC_RC_REACTION_COEFF (V4L2_CID_CODEC_MFC5X_BASE+311)
+#define V4L2_CID_CODEC_MFC5X_ENC_STREAM_SIZE (V4L2_CID_CODEC_MFC5X_BASE+312)
+#define V4L2_CID_CODEC_MFC5X_ENC_FRAME_COUNT (V4L2_CID_CODEC_MFC5X_BASE+313)
+#define V4L2_CID_CODEC_MFC5X_ENC_FRAME_TYPE (V4L2_CID_CODEC_MFC5X_BASE+314)
+enum v4l2_codec_mfc5x_enc_frame_type {
+ V4L2_CODEC_MFC5X_ENC_FRAME_TYPE_NOT_CODED = 0,
+ V4L2_CODEC_MFC5X_ENC_FRAME_TYPE_I_FRAME = 1,
+ V4L2_CODEC_MFC5X_ENC_FRAME_TYPE_P_FRAME = 2,
+ V4L2_CODEC_MFC5X_ENC_FRAME_TYPE_B_FRAME = 3,
+ V4L2_CODEC_MFC5X_ENC_FRAME_TYPE_SKIPPED = 4,
+ V4L2_CODEC_MFC5X_ENC_FRAME_TYPE_OTHERS = 5,
+};
+#define V4L2_CID_CODEC_MFC5X_ENC_FORCE_FRAME_TYPE (V4L2_CID_CODEC_MFC5X_BASE+315)
+enum v4l2_codec_mfc5x_enc_force_frame_type {
+ V4L2_CODEC_MFC5X_ENC_FORCE_FRAME_TYPE_I_FRAME = 1,
+ V4L2_CODEC_MFC5X_ENC_FORCE_FRAME_TYPE_NOT_CODED = 2,
+};
+#define V4L2_CID_CODEC_MFC5X_ENC_VBV_BUF_SIZE (V4L2_CID_CODEC_MFC5X_BASE+316)
+#define V4L2_CID_CODEC_MFC5X_ENC_SEQ_HDR_MODE (V4L2_CID_CODEC_MFC5X_BASE+317)
+enum v4l2_codec_mfc5x_enc_seq_hdr_mode {
+ V4L2_CODEC_MFC5X_ENC_SEQ_HDR_MODE_SEQ = 0,
+ V4L2_CODEC_MFC5X_ENC_SEQ_HDR_MODE_SEQ_FRAME = 1,
+};
+#define V4L2_CID_CODEC_MFC5X_ENC_FRAME_SKIP_MODE (V4L2_CID_CODEC_MFC5X_BASE+318)
+enum v4l2_codec_mfc5x_enc_frame_skip_mode {
+ V4L2_CODEC_MFC5X_ENC_FRAME_SKIP_MODE_DISABLE = 0,
+ V4L2_CODEC_MFC5X_ENC_FRAME_SKIP_MODE_LEVEL = 1,
+ V4L2_CODEC_MFC5X_ENC_FRAME_SKIP_MODE_VBV_BUF_SIZE = 2,
+};
+#define V4L2_CID_CODEC_MFC5X_ENC_RC_FIXED_TARGET_BIT (V4L2_CID_CODEC_MFC5X_BASE+319)
+#define V4L2_CID_CODEC_MFC5X_ENC_FRAME_DELTA (V4L2_CID_CODEC_MFC5X_BASE+320)
+
+/* codec specific */
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_B_FRAMES (V4L2_CID_CODEC_MFC5X_BASE+400)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_PROFILE (V4L2_CID_CODEC_MFC5X_BASE+401)
+enum v4l2_codec_mfc5x_enc_h264_profile {
+ V4L2_CODEC_MFC5X_ENC_H264_PROFILE_MAIN = 0,
+ V4L2_CODEC_MFC5X_ENC_H264_PROFILE_HIGH = 1,
+ V4L2_CODEC_MFC5X_ENC_H264_PROFILE_BASELINE = 2,
+};
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_LEVEL (V4L2_CID_CODEC_MFC5X_BASE+402)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_INTERLACE (V4L2_CID_CODEC_MFC5X_BASE+403)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_LOOP_FILTER_MODE (V4L2_CID_CODEC_MFC5X_BASE+404)
+enum v4l2_codec_mfc5x_enc_h264_loop_filter {
+ V4L2_CODEC_MFC5X_ENC_H264_LOOP_FILTER_ENABLE = 0,
+ V4L2_CODEC_MFC5X_ENC_H264_LOOP_FILTER_DISABLE = 1,
+ V4L2_CODEC_MFC5X_ENC_H264_LOOP_FILTER_DISABLE_AT_BOUNDARY = 2,
+};
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_LOOP_FILTER_ALPHA (V4L2_CID_CODEC_MFC5X_BASE+405)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_LOOP_FILTER_BETA (V4L2_CID_CODEC_MFC5X_BASE+406)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_ENTROPY_MODE (V4L2_CID_CODEC_MFC5X_BASE+407)
+enum v4l2_codec_mfc5x_enc_h264_entropy_mode {
+ V4L2_CODEC_MFC5X_ENC_H264_ENTROPY_MODE_CAVLC = 0,
+ V4L2_CODEC_MFC5X_ENC_H264_ENTROPY_MODE_CABAC = 1,
+};
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_MAX_REF_PIC (V4L2_CID_CODEC_MFC5X_BASE+408)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_NUM_REF_PIC_4P (V4L2_CID_CODEC_MFC5X_BASE+409)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_8X8_TRANSFORM (V4L2_CID_CODEC_MFC5X_BASE+410)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_RC_MB_ENABLE (V4L2_CID_CODEC_MFC5X_BASE+411)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_RC_FRAME_RATE (V4L2_CID_CODEC_MFC5X_BASE+412)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_RC_FRAME_QP (V4L2_CID_CODEC_MFC5X_BASE+413)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_RC_MIN_QP (V4L2_CID_CODEC_MFC5X_BASE+414)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_RC_MAX_QP (V4L2_CID_CODEC_MFC5X_BASE+415)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_RC_MB_DARK (V4L2_CID_CODEC_MFC5X_BASE+416)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_RC_MB_SMOOTH (V4L2_CID_CODEC_MFC5X_BASE+417)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_RC_MB_STATIC (V4L2_CID_CODEC_MFC5X_BASE+418)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_RC_MB_ACTIVITY (V4L2_CID_CODEC_MFC5X_BASE+419)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_RC_P_FRAME_QP (V4L2_CID_CODEC_MFC5X_BASE+420)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_RC_B_FRAME_QP (V4L2_CID_CODEC_MFC5X_BASE+421)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_AR_VUI_ENABLE (V4L2_CID_CODEC_MFC5X_BASE+422)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_AR_VUI_IDC (V4L2_CID_CODEC_MFC5X_BASE+423)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_EXT_SAR_WIDTH (V4L2_CID_CODEC_MFC5X_BASE+424)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_EXT_SAR_HEIGHT (V4L2_CID_CODEC_MFC5X_BASE+425)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_OPEN_GOP (V4L2_CID_CODEC_MFC5X_BASE+426)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_I_PERIOD (V4L2_CID_CODEC_MFC5X_BASE+427)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_HIER_P_ENABLE (V4L2_CID_CODEC_MFC5X_BASE+428)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_LAYER0_QP (V4L2_CID_CODEC_MFC5X_BASE+429)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_LAYER1_QP (V4L2_CID_CODEC_MFC5X_BASE+430)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_LAYER2_QP (V4L2_CID_CODEC_MFC5X_BASE+431)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_FMO_ENABLE (V4L2_CID_CODEC_MFC5X_BASE+432)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_ASO_ENABLE (V4L2_CID_CODEC_MFC5X_BASE+433)
+
+#define V4L2_CID_CODEC_MFC5X_ENC_MPEG4_B_FRAMES (V4L2_CID_CODEC_MFC5X_BASE+440)
+#define V4L2_CID_CODEC_MFC5X_ENC_MPEG4_PROFILE (V4L2_CID_CODEC_MFC5X_BASE+441)
+enum v4l2_codec_mfc5x_enc_mpeg4_profile {
+ V4L2_CODEC_MFC5X_ENC_MPEG4_PROFILE_SIMPLE = 0,
+ V4L2_CODEC_MFC5X_ENC_MPEG4_PROFILE_ADVANCED_SIMPLE = 1,
+};
+#define V4L2_CID_CODEC_MFC5X_ENC_MPEG4_LEVEL (V4L2_CID_CODEC_MFC5X_BASE+442)
+#define V4L2_CID_CODEC_MFC5X_ENC_MPEG4_RC_FRAME_QP (V4L2_CID_CODEC_MFC5X_BASE+443)
+#define V4L2_CID_CODEC_MFC5X_ENC_MPEG4_RC_MIN_QP (V4L2_CID_CODEC_MFC5X_BASE+444)
+#define V4L2_CID_CODEC_MFC5X_ENC_MPEG4_RC_MAX_QP (V4L2_CID_CODEC_MFC5X_BASE+445)
+#define V4L2_CID_CODEC_MFC5X_ENC_MPEG4_QUARTER_PIXEL (V4L2_CID_CODEC_MFC5X_BASE+446)
+#define V4L2_CID_CODEC_MFC5X_ENC_MPEG4_RC_P_FRAME_QP (V4L2_CID_CODEC_MFC5X_BASE+447)
+#define V4L2_CID_CODEC_MFC5X_ENC_MPEG4_RC_B_FRAME_QP (V4L2_CID_CODEC_MFC5X_BASE+448)
+#define V4L2_CID_CODEC_MFC5X_ENC_MPEG4_VOP_TIME_RES (V4L2_CID_CODEC_MFC5X_BASE+449)
+#define V4L2_CID_CODEC_MFC5X_ENC_MPEG4_VOP_FRM_DELTA (V4L2_CID_CODEC_MFC5X_BASE+450)
+#define V4L2_CID_CODEC_MFC5X_ENC_MPEG4_RC_MB_ENABLE (V4L2_CID_CODEC_MFC5X_BASE+451)
+
+#define V4L2_CID_CODEC_MFC5X_ENC_H263_RC_FRAME_RATE (V4L2_CID_CODEC_MFC5X_BASE+460)
+#define V4L2_CID_CODEC_MFC5X_ENC_H263_RC_FRAME_QP (V4L2_CID_CODEC_MFC5X_BASE+461)
+#define V4L2_CID_CODEC_MFC5X_ENC_H263_RC_MIN_QP (V4L2_CID_CODEC_MFC5X_BASE+462)
+#define V4L2_CID_CODEC_MFC5X_ENC_H263_RC_MAX_QP (V4L2_CID_CODEC_MFC5X_BASE+463)
+#define V4L2_CID_CODEC_MFC5X_ENC_H263_RC_P_FRAME_QP (V4L2_CID_CODEC_MFC5X_BASE+464)
+#define V4L2_CID_CODEC_MFC5X_ENC_H263_RC_MB_ENABLE (V4L2_CID_CODEC_MFC5X_BASE+465)
+
+/* FMO/ASO parameters */
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_FMO_MAP_TYPE (V4L2_CID_CODEC_MFC5X_BASE+480)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_FMO_SLICE_NUM (V4L2_CID_CODEC_MFC5X_BASE+481)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_FMO_RUN_LEN1 (V4L2_CID_CODEC_MFC5X_BASE+482)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_FMO_RUN_LEN2 (V4L2_CID_CODEC_MFC5X_BASE+483)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_FMO_RUN_LEN3 (V4L2_CID_CODEC_MFC5X_BASE+484)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_FMO_RUN_LEN4 (V4L2_CID_CODEC_MFC5X_BASE+485)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_FMO_SG_DIR (V4L2_CID_CODEC_MFC5X_BASE+486)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_FMO_SG_RATE (V4L2_CID_CODEC_MFC5X_BASE+487)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_ASO_SL_ORDER_0 (V4L2_CID_CODEC_MFC5X_BASE+488)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_ASO_SL_ORDER_1 (V4L2_CID_CODEC_MFC5X_BASE+489)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_ASO_SL_ORDER_2 (V4L2_CID_CODEC_MFC5X_BASE+490)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_ASO_SL_ORDER_3 (V4L2_CID_CODEC_MFC5X_BASE+491)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_ASO_SL_ORDER_4 (V4L2_CID_CODEC_MFC5X_BASE+492)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_ASO_SL_ORDER_5 (V4L2_CID_CODEC_MFC5X_BASE+493)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_ASO_SL_ORDER_6 (V4L2_CID_CODEC_MFC5X_BASE+494)
+#define V4L2_CID_CODEC_MFC5X_ENC_H264_ASO_SL_ORDER_7 (V4L2_CID_CODEC_MFC5X_BASE+495)
/* Camera class control IDs */
#define V4L2_CID_CAMERA_CLASS_BASE (V4L2_CTRL_CLASS_CAMERA | 0x900)
#define V4L2_CID_CAMERA_CLASS (V4L2_CTRL_CLASS_CAMERA | 1)
#define VIDIOC_S_FBUF _IOW('V', 11, struct v4l2_framebuffer)
#define VIDIOC_OVERLAY _IOW('V', 14, int)
#define VIDIOC_QBUF _IOWR('V', 15, struct v4l2_buffer)
+#define VIDIOC_EXPBUF _IOWR('V', 16, struct v4l2_exportbuffer)
#define VIDIOC_DQBUF _IOWR('V', 17, struct v4l2_buffer)
#define VIDIOC_STREAMON _IOW('V', 18, int)
#define VIDIOC_STREAMOFF _IOW('V', 19, int)
#define VM_USERMAP 0x00000008 /* suitable for remap_vmalloc_range */
#define VM_VPAGES 0x00000010 /* buffer for pages was vmalloc'ed */
#define VM_UNLIST 0x00000020 /* vm_struct is not listed in vmlist */
+#define VM_DMA 0x00000040 /* used by dma-mapping framework */
/* bits [20..32] reserved for arch specific ioremap internals */
/*
struct page **pages;
unsigned int nr_pages;
phys_addr_t phys_addr;
- void *caller;
+ const void *caller;
};
/*
extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
unsigned long start, unsigned long end, gfp_t gfp_mask,
- pgprot_t prot, int node, void *caller);
+ pgprot_t prot, int node, const void *caller);
extern void vfree(const void *addr);
extern void *vmap(struct page **pages, unsigned int count,
extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
extern struct vm_struct *get_vm_area_caller(unsigned long size,
- unsigned long flags, void *caller);
+ unsigned long flags, const void *caller);
extern struct vm_struct *__get_vm_area(unsigned long size, unsigned long flags,
unsigned long start, unsigned long end);
extern struct vm_struct *__get_vm_area_caller(unsigned long size,
unsigned long flags,
unsigned long start, unsigned long end,
- void *caller);
+ const void *caller);
extern struct vm_struct *remove_vm_area(const void *addr);
+extern struct vm_struct *find_vm_area(const void *addr);
extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
struct page ***pages);
--- /dev/null
+/* include/media/exynos_camera.h
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * Header file for the Exynos camera interface
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef EXYNOS_CAMERA_H_
+#define EXYNOS_CAMERA_H_
+
+#include <media/exynos_mc.h>
+
+enum cam_bus_type {
+ CAM_TYPE_ITU = 1,
+ CAM_TYPE_MIPI,
+};
+
+enum cam_port {
+ CAM_PORT_A,
+ CAM_PORT_B,
+};
+
+#define CAM_CLK_INV_PCLK (1 << 0)
+#define CAM_CLK_INV_VSYNC (1 << 1)
+#define CAM_CLK_INV_HREF (1 << 2)
+#define CAM_CLK_INV_HSYNC (1 << 3)
+
+struct i2c_board_info;
+
+/**
+ * struct exynos_isp_info - image sensor information required for host
+ * interface configuration.
+ *
+ * @board_info: pointer to I2C subdevice's board info
+ * @clk_frequency: frequency of the clock the host interface provides to sensor
+ * @cam_srclk_name: name of the camera source clock
+ * @cam_clk_name: name of the camera clock
+ * @bus_type: determines bus type, MIPI, ITU-R BT.601 etc.
+ * @csi_data_align: MIPI-CSI interface data alignment in bits
+ * @i2c_bus_num: i2c control bus id the sensor is attached to
+ * @cam_port: camera port the sensor is attached to (A or B)
+ * @flags: flags defining bus signals polarity inversion (High by default)
+ */
+struct exynos_isp_info {
+ struct i2c_board_info *board_info;
+ unsigned long clk_frequency;
+ const char *cam_srclk_name;
+ const char *cam_clk_name;
+ enum cam_bus_type bus_type;
+ u16 csi_data_align;
+ u16 i2c_bus_num;
+ enum cam_port cam_port;
+ u16 flags;
+};
+#endif /* EXYNOS_CAMERA_H_ */
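A hedged sketch of how a board file might instantiate this structure; the sensor, I2C board info, and clock names below are hypothetical values, not taken from any real board.

static struct exynos_isp_info example_sensor __initdata = {
	.board_info	= &example_sensor_i2c_info,	/* hypothetical */
	.clk_frequency	= 24000000UL,
	.cam_srclk_name	= "xusbxti",			/* hypothetical */
	.cam_clk_name	= "sclk_cam0",			/* hypothetical */
	.bus_type	= CAM_TYPE_ITU,
	.i2c_bus_num	= 1,
	.cam_port	= CAM_PORT_A,
	.flags		= CAM_CLK_INV_PCLK,
};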
--- /dev/null
+/* include/media/exynos_gscaler.h
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * Samsung EXYNOS SoC Gscaler driver header
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef EXYNOS_GSCALER_H_
+#define EXYNOS_GSCALER_H_
+
+#include <media/exynos_camera.h>
+
+/**
+ * struct exynos_platform_gscaler - camera host interface platform data
+ *
+ * @isp_info: properties of camera sensor required for host interface setup
+ * @active_cam_index: index of the currently active camera sensor
+ * @num_clients: number of attached camera sensor clients
+ * @cam_preview: 1 if the preview data path is used
+ * @cam_camcording: 1 if the camcording data path is used
+ */
+struct exynos_platform_gscaler {
+ struct exynos_isp_info *isp_info[MAX_CAMIF_CLIENTS];
+ u32 active_cam_index;
+ u32 num_clients;
+ u32 cam_preview:1;
+ u32 cam_camcording:1;
+};
+
+extern struct exynos_platform_gscaler exynos_gsc0_default_data;
+extern struct exynos_platform_gscaler exynos_gsc1_default_data;
+extern struct exynos_platform_gscaler exynos_gsc2_default_data;
+extern struct exynos_platform_gscaler exynos_gsc3_default_data;
+
+/**
+ * exynos5_gsc_set_parent_clock() - Exynos5 setup function for parent clock.
+ * @child: child clock used for gscaler
+ * @parent: parent clock used for gscaler
+ */
+int __init exynos5_gsc_set_parent_clock(const char *child, const char *parent);
+
+/**
+ * exynos5_gsc_set_clock_rate() - Exynos5 setup function for clock rate.
+ * @clk: name of clock used for gscaler
+ * @clk_rate: clock_rate for gscaler clock
+ */
+int __init exynos5_gsc_set_clock_rate(const char *clk, unsigned long clk_rate);
+#endif /* EXYNOS_GSCALER_H_ */
--- /dev/null
+/* linux/include/media/exynos_mc.h
+ *
+ * Copyright (c) 2011 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * header file for exynos media device driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef GSC_MDEVICE_H_
+#define GSC_MDEVICE_H_
+
+#include <linux/clk.h>
+#include <linux/platform_device.h>
+#include <linux/mutex.h>
+#include <linux/device.h>
+#include <media/media-device.h>
+#include <media/media-entity.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-subdev.h>
+
+#define err(fmt, args...) \
+ printk(KERN_ERR "%s:%d: " fmt "\n", __func__, __LINE__, ##args)
+
+#define MDEV_MODULE_NAME "exynos-mdev"
+#define MAX_GSC_SUBDEV 4
+#define MDEV_MAX_NUM 3
+
+#define GSC_OUT_PAD_SINK 0
+#define GSC_OUT_PAD_SOURCE 1
+
+#define GSC_CAP_PAD_SINK 0
+#define GSC_CAP_PAD_SOURCE 1
+
+#define FLITE_PAD_SINK 0
+#define FLITE_PAD_SOURCE_PREV 1
+#define FLITE_PAD_SOURCE_CAMCORD 2
+#define FLITE_PAD_SOURCE_MEM 3
+#define FLITE_PADS_NUM 4
+
+#define CSIS_PAD_SINK 0
+#define CSIS_PAD_SOURCE 1
+#define CSIS_PADS_NUM 2
+
+#define MAX_CAMIF_CLIENTS 2
+
+#define MXR_SUBDEV_NAME "s5p-mixer"
+
+#define GSC_MODULE_NAME "exynos-gsc"
+#define GSC_SUBDEV_NAME "exynos-gsc-sd"
+#define FIMD_MODULE_NAME "s5p-fimd1"
+#define FIMD_ENTITY_NAME "s3c-fb-window"
+#define FLITE_MODULE_NAME "exynos-fimc-lite"
+#define CSIS_MODULE_NAME "s5p-mipi-csis"
+
+#define GSC_CAP_GRP_ID (1 << 0)
+#define FLITE_GRP_ID (1 << 1)
+#define CSIS_GRP_ID (1 << 2)
+#define SENSOR_GRP_ID (1 << 3)
+#define FIMD_GRP_ID (1 << 4)
+
+#define SENSOR_MAX_ENTITIES MAX_CAMIF_CLIENTS
+#define FLITE_MAX_ENTITIES 2
+#define CSIS_MAX_ENTITIES 2
+
+enum mdev_node {
+ MDEV_OUTPUT,
+ MDEV_CAPTURE,
+ MDEV_ISP,
+};
+
+enum mxr_data_from {
+ FROM_GSC_SD,
+ FROM_MXR_VD,
+};
+
+struct exynos_media_ops {
+ int (*power_off)(struct v4l2_subdev *sd);
+};
+
+struct exynos_entity_data {
+ const struct exynos_media_ops *media_ops;
+ enum mxr_data_from mxr_data_from;
+};
+
+/**
+ * struct exynos_md - Exynos media device information
+ * @media_dev: top level media device
+ * @v4l2_dev: top level v4l2_device holding up the subdevs
+ * @pdev: platform device this media device is hooked up into
+ * @slock: spinlock protecting @sensor array
+ * @id: media device IDs
+ * @gsc_sd: each pointer of g-scaler's subdev array
+ */
+struct exynos_md {
+ struct media_device media_dev;
+ struct v4l2_device v4l2_dev;
+ struct platform_device *pdev;
+ struct v4l2_subdev *gsc_sd[MAX_GSC_SUBDEV];
+ struct v4l2_subdev *gsc_cap_sd[MAX_GSC_SUBDEV];
+ struct v4l2_subdev *csis_sd[CSIS_MAX_ENTITIES];
+ struct v4l2_subdev *flite_sd[FLITE_MAX_ENTITIES];
+ struct v4l2_subdev *sensor_sd[SENSOR_MAX_ENTITIES];
+ u16 id;
+ spinlock_t slock;
+};
+
+static int dummy_callback(struct device *dev, void *md)
+{
+ /* non-zero return stops iteration */
+ return -1;
+}
+
+static inline void *module_name_to_driver_data(char *module_name)
+{
+	struct device_driver *drv;
+	struct device *dev;
+
+	drv = driver_find(module_name, &platform_bus_type);
+	if (!drv)
+		return NULL;
+
+	/* guard against a driver that has not bound any device yet */
+	dev = driver_find_device(drv, NULL, NULL, dummy_callback);
+	if (!dev)
+		return NULL;
+
+	return dev_get_drvdata(dev);
+}
+
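A hedged usage sketch for the helper above: resolving the media device instance registered under MDEV_MODULE_NAME. The error handling is an assumption.

	struct exynos_md *md;

	md = module_name_to_driver_data(MDEV_MODULE_NAME);
	if (!md)
		return -ENODEV;	/* media device driver not bound yet */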
+/* print entity information for debug */
+static inline void entity_info_print(struct media_entity *me, struct device *dev)
+{
+ u16 num_pads = me->num_pads;
+ u16 num_links = me->num_links;
+ int i;
+
+ dev_dbg(dev, "entity name : %s\n", me->name);
+ dev_dbg(dev, "number of pads = %d\n", num_pads);
+ for (i = 0; i < num_pads; ++i) {
+ dev_dbg(dev, "pad[%d] flag : %s\n", i,
+ (me->pads[i].flags == 1) ? "SINK" : "SOURCE");
+ }
+
+ dev_dbg(dev, "number of links = %d\n", num_links);
+
+ for (i = 0; i < num_links; ++i) {
+ dev_dbg(dev, "link[%d] info = ", i);
+ dev_dbg(dev, "%s : %s[%d] ---> %s : %s[%d]\n",
+ me->links[i].source->entity->name,
+ me->links[i].source->flags == 1 ? "SINK" : "SOURCE",
+ me->links[i].source->index,
+ me->links[i].sink->entity->name,
+ me->links[i].sink->flags == 1 ? "SINK" : "SOURCE",
+ me->links[i].sink->index);
+ }
+}
+#endif
/* ioctl callbacks */
const struct v4l2_ioctl_ops *ioctl_ops;
+ DECLARE_BITMAP(valid_ioctls, BASE_VIDIOC_PRIVATE);
/* serialization lock */
+ DECLARE_BITMAP(dont_use_lock, BASE_VIDIOC_PRIVATE);
struct mutex *lock;
};
a dubious construction at best. */
void video_device_release_empty(struct video_device *vdev);
+/* returns true if cmd is a known V4L2 ioctl */
+bool v4l2_is_known_ioctl(unsigned int cmd);
+
+/* mark that this command shouldn't use core locking */
+static inline void v4l2_dont_use_lock(struct video_device *vdev, unsigned int cmd)
+{
+ if (_IOC_NR(cmd) < BASE_VIDIOC_PRIVATE)
+ set_bit(_IOC_NR(cmd), vdev->dont_use_lock);
+}
+
+/* Mark that this command isn't implemented, must be called before
+ video_device_register. See also the comments in determine_valid_ioctls().
+ This function allows drivers to provide just one v4l2_ioctl_ops struct, but
+ disable ioctls based on the specific card that is actually found. */
+static inline void v4l2_dont_use_cmd(struct video_device *vdev, unsigned int cmd)
+{
+ if (_IOC_NR(cmd) < BASE_VIDIOC_PRIVATE)
+ set_bit(_IOC_NR(cmd), vdev->valid_ioctls);
+}
+
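A hedged sketch of a driver using these helpers in its probe function, before registration; the particular ioctls chosen are illustrative only.

	/* driver handles its own locking for the streaming ioctls */
	v4l2_dont_use_lock(vdev, VIDIOC_QBUF);
	v4l2_dont_use_lock(vdev, VIDIOC_DQBUF);
	/* this board cannot switch TV standards */
	v4l2_dont_use_cmd(vdev, VIDIOC_S_STD);
	ret = video_register_device(vdev, VFL_TYPE_GRABBER, -1);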
/* helper functions to access driver private data. */
static inline void *video_get_drvdata(struct video_device *vdev)
{
int (*vidioc_reqbufs) (struct file *file, void *fh, struct v4l2_requestbuffers *b);
int (*vidioc_querybuf)(struct file *file, void *fh, struct v4l2_buffer *b);
int (*vidioc_qbuf) (struct file *file, void *fh, struct v4l2_buffer *b);
+ int (*vidioc_expbuf) (struct file *file, void *fh,
+ struct v4l2_exportbuffer *e);
int (*vidioc_dqbuf) (struct file *file, void *fh, struct v4l2_buffer *b);
int (*vidioc_create_bufs)(struct file *file, void *fh, struct v4l2_create_buffers *b);
int v4l2_m2m_querybuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
struct v4l2_buffer *buf);
+int v4l2_m2m_expbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
+ struct v4l2_exportbuffer *eb);
+
int v4l2_m2m_qbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
struct v4l2_buffer *buf);
int v4l2_m2m_dqbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
#include <linux/mutex.h>
#include <linux/poll.h>
#include <linux/videodev2.h>
+#include <linux/dma-buf.h>
struct vb2_alloc_ctx;
struct vb2_fileio_data;
* argument to other ops in this structure
* @put_userptr: inform the allocator that a USERPTR buffer will no longer
* be used
+ * @attach_dmabuf: attach a shared struct dma_buf for a hardware operation;
+ * used for DMABUF memory types; alloc_ctx is the alloc context
+ * and dbuf is the shared dma_buf; returns NULL on failure or an
+ * allocator private per-buffer structure on success; this
+ * structure needs to be used for further accesses to the buffer
+ * @detach_dmabuf: inform the exporter of the buffer that the current DMABUF
+ * buffer is no longer used; the buf_priv argument is the
+ * allocator private per-buffer structure previously returned
+ * from the attach_dmabuf callback
+ * @map_dmabuf: request access to the dmabuf from the allocator; the
+ * exporter of the dmabuf is informed that this driver is going
+ * to use the dmabuf
+ * @unmap_dmabuf: release access to the dmabuf; the exporter is notified
+ * that this driver is done using the dmabuf for now
+ * @prepare: called every time the buffer is passed from userspace to the
+ * driver, useful for cache synchronisation, optional
+ * @finish: called every time the buffer is passed back from the driver
+ * to userspace, also optional
* @vaddr: return a kernel virtual address to a given memory buffer
* associated with the passed private structure or NULL if no
* such mapping exists
* Required ops for USERPTR types: get_userptr, put_userptr.
* Required ops for MMAP types: alloc, put, num_users, mmap.
* Required ops for read/write access types: alloc, put, num_users, vaddr
+ * Required ops for DMABUF types: attach_dmabuf, detach_dmabuf, map_dmabuf,
+ * unmap_dmabuf.
*/
struct vb2_mem_ops {
void *(*alloc)(void *alloc_ctx, unsigned long size);
void (*put)(void *buf_priv);
+ struct dma_buf *(*get_dmabuf)(void *buf_priv);
void *(*get_userptr)(void *alloc_ctx, unsigned long vaddr,
unsigned long size, int write);
void (*put_userptr)(void *buf_priv);
+ void (*prepare)(void *buf_priv);
+ void (*finish)(void *buf_priv);
+
+ void *(*attach_dmabuf)(void *alloc_ctx, struct dma_buf *dbuf,
+ unsigned long size, int write);
+ void (*detach_dmabuf)(void *buf_priv);
+ int (*map_dmabuf)(void *buf_priv);
+ void (*unmap_dmabuf)(void *buf_priv);
+
void *(*vaddr)(void *buf_priv);
void *(*cookie)(void *buf_priv);
struct vb2_plane {
void *mem_priv;
+ struct dma_buf *dbuf;
+ unsigned int dbuf_mapped;
};
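For orientation, a hedged sketch of an allocator advertising the new DMABUF callbacks alongside the existing ones; every vb2_foo_* handler is a hypothetical stand-in for a real allocator implementation.

const struct vb2_mem_ops vb2_foo_memops = {
	.alloc		= vb2_foo_alloc,
	.put		= vb2_foo_put,
	.get_userptr	= vb2_foo_get_userptr,
	.put_userptr	= vb2_foo_put_userptr,
	.attach_dmabuf	= vb2_foo_attach_dmabuf,
	.detach_dmabuf	= vb2_foo_detach_dmabuf,
	.map_dmabuf	= vb2_foo_map_dmabuf,
	.unmap_dmabuf	= vb2_foo_unmap_dmabuf,
	.vaddr		= vb2_foo_vaddr,
	.cookie		= vb2_foo_cookie,
	.num_users	= vb2_foo_num_users,
	.mmap		= vb2_foo_mmap,
};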
/**
* @VB2_USERPTR: driver supports USERPTR with streaming API
* @VB2_READ: driver supports read() style access
* @VB2_WRITE: driver supports write() style access
+ * @VB2_DMABUF: driver supports DMABUF with streaming API
*/
enum vb2_io_modes {
VB2_MMAP = (1 << 0),
VB2_USERPTR = (1 << 1),
VB2_READ = (1 << 2),
VB2_WRITE = (1 << 3),
+ VB2_DMABUF = (1 << 4),
};
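A hedged sketch of a driver opting into DMABUF importing at queue setup time; vb2_dma_contig_memops is one existing allocator that could back it, while the vb2_ops table and drv_priv assignment are driver-specific assumptions.

	q->type		= V4L2_BUF_TYPE_VIDEO_CAPTURE;
	q->io_modes	= VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
	q->mem_ops	= &vb2_dma_contig_memops;
	q->ops		= &foo_qops;	/* the driver's vb2_ops */
	q->drv_priv	= dev;
	ret = vb2_queue_init(q);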
/**
void vb2_queue_release(struct vb2_queue *q);
int vb2_qbuf(struct vb2_queue *q, struct v4l2_buffer *b);
+int vb2_expbuf(struct vb2_queue *q, struct v4l2_exportbuffer *eb);
int vb2_dqbuf(struct vb2_queue *q, struct v4l2_buffer *b, bool nonblocking);
int vb2_streamon(struct vb2_queue *q, enum v4l2_buf_type type);
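A hedged sketch of how a vb2-based driver would route the new ioctl to the core; the device structure and queue field names are assumptions. An m2m driver would delegate to v4l2_m2m_expbuf() with its m2m context instead.

static int vidioc_expbuf(struct file *file, void *priv,
			 struct v4l2_exportbuffer *eb)
{
	struct foo_dev *dev = video_drvdata(file);

	return vb2_expbuf(&dev->queue, eb);
}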
--- /dev/null
+/*
+ * videobuf2-fb.h - FrameBuffer API emulator on top of Videobuf2 framework
+ *
+ * Copyright (C) 2011 Samsung Electronics
+ *
+ * Author: Marek Szyprowski <m.szyprowski <at> samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation.
+ */
+
+#ifndef _MEDIA_VIDEOBUF2_FB_H
+#define _MEDIA_VIDEOBUF2_FB_H
+
+#include <media/v4l2-dev.h>
+#include <media/videobuf2-core.h>
+
+void *vb2_fb_register(struct vb2_queue *q, struct video_device *vfd);
+int vb2_fb_unregister(void *fb_emu);
+
+#endif
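A hedged usage sketch, assuming vb2_fb_register() returns an opaque handle (or an ERR_PTR-style error) that is later handed back to vb2_fb_unregister(); the surrounding device structure is an assumption.

	void *fb_emu;

	fb_emu = vb2_fb_register(&dev->queue, dev->vfd);
	if (IS_ERR_OR_NULL(fb_emu))
		fb_emu = NULL;	/* continue without the fb emulation */
	/* ... normal streaming operation ... */
	if (fb_emu)
		vb2_fb_unregister(fb_emu);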
int vb2_get_contig_userptr(unsigned long vaddr, unsigned long size,
struct vm_area_struct **res_vma, dma_addr_t *res_pa);
-int vb2_mmap_pfn_range(struct vm_area_struct *vma, unsigned long paddr,
- unsigned long size,
- const struct vm_operations_struct *vm_ops,
- void *priv);
-
struct vm_area_struct *vb2_get_vma(struct vm_area_struct *vma);
void vb2_put_vma(struct vm_area_struct *vma);
POWER_ALL
};
+enum link_training_type {
+ SW_LINK_TRAINING,
+ HW_LINK_TRAINING,
+};
+
struct video_info {
char *name;
struct exynos_dp_platdata {
struct video_info *video_info;
+ enum link_training_type training_type;
+
void (*phy_init)(void);
void (*phy_exit)(void);
};
if (!left)
sg_mark_end(&sg[sg_size - 1]);
- /*
- * only really needed for mempool backed sg allocations (like
- * SCSI), a possible improvement here would be to pass the
- * table pointer into the allocator and let that clear these
- * flags
- */
- gfp_mask &= ~__GFP_WAIT;
- gfp_mask |= __GFP_HIGH;
prv = sg;
} while (left);
}
EXPORT_SYMBOL(sg_alloc_table);
+/**
+ * sg_alloc_table_from_pages - Allocate and initialize an sg table from
+ * an array of pages
+ * @sgt: The sg table header to use
+ * @pages: Pointer to an array of page pointers
+ * @n_pages: Number of pages in the pages array
+ * @offset: Offset from the start of the first page to the start of the buffer
+ * @size: Number of valid bytes in the buffer (after offset)
+ * @gfp_mask: GFP allocation mask
+ *
+ * Description:
+ * Allocate and initialize an sg table from a list of pages. Contiguous
+ * ranges of the pages are squashed into a single scatterlist node. A user
+ * may provide an offset at the start and the size of the valid data in the
+ * buffer specified by the page array. The returned sg table is released by
+ * sg_free_table.
+ *
+ * Returns:
+ * 0 on success, negative error on failure
+ **/
+int sg_alloc_table_from_pages(struct sg_table *sgt,
+ struct page **pages, unsigned int n_pages,
+ unsigned long offset, unsigned long size,
+ gfp_t gfp_mask)
+{
+ unsigned int chunks;
+ unsigned int i;
+ unsigned int cur_page;
+ int ret;
+ struct scatterlist *s;
+
+ /* compute number of contiguous chunks */
+ chunks = 1;
+ for (i = 1; i < n_pages; ++i)
+ if (pages[i] != pages[i - 1] + 1)
+ ++chunks;
+
+ ret = sg_alloc_table(sgt, chunks, gfp_mask);
+ if (unlikely(ret))
+ return ret;
+
+ /* merging chunks and putting them into the scatterlist */
+ cur_page = 0;
+ for_each_sg(sgt->sgl, s, sgt->orig_nents, i) {
+ unsigned long chunk_size;
+ unsigned int j;
+
+ /* looking for the end of the current chunk */
+ for (j = cur_page + 1; j < n_pages; ++j)
+ if (pages[j] != pages[j - 1] + 1)
+ break;
+
+ chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset;
+ sg_set_page(s, pages[cur_page], min(size, chunk_size), offset);
+ size -= chunk_size;
+ offset = 0;
+ cur_page = j;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(sg_alloc_table_from_pages);
+
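A hedged caller sketch for the new helper: building an sg table from a pinned page array and mapping it for DMA. The variables (dev, pages, n_pages, offset, size) are assumptions supplied by the caller.

	struct sg_table sgt;
	int ret, nents;

	ret = sg_alloc_table_from_pages(&sgt, pages, n_pages, offset, size,
					GFP_KERNEL);
	if (ret)
		return ret;

	nents = dma_map_sg(dev, sgt.sgl, sgt.orig_nents, DMA_FROM_DEVICE);
	/* ... perform the transfer ... */
	dma_unmap_sg(dev, sgt.sgl, sgt.orig_nents, DMA_FROM_DEVICE);
	sg_free_table(&sgt);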
/**
* sg_miter_start - start mapping iteration over a sg list
* @miter: sg mapping iter to be started
config MIGRATION
bool "Page migration"
def_bool y
- depends on NUMA || ARCH_ENABLE_MEMORY_HOTREMOVE || COMPACTION
+ depends on NUMA || ARCH_ENABLE_MEMORY_HOTREMOVE || COMPACTION || CMA
help
Allows the migration of the physical location of pages of processes
while the virtual addresses are not changed. This is useful in
readahead.o swap.o truncate.o vmscan.o shmem.o \
prio_tree.o util.o mmzone.o vmstat.o backing-dev.o \
page_isolation.o mm_init.o mmu_context.o percpu.o \
- $(mmu-y)
+ compaction.o $(mmu-y)
obj-y += init-mm.o
ifdef CONFIG_NO_BOOTMEM
obj-$(CONFIG_SPARSEMEM) += sparse.o
obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
obj-$(CONFIG_SLOB) += slob.o
-obj-$(CONFIG_COMPACTION) += compaction.o
obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
obj-$(CONFIG_KSM) += ksm.o
obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
#include <linux/sysfs.h>
#include "internal.h"
+#if defined CONFIG_COMPACTION || defined CONFIG_CMA
+
#define CREATE_TRACE_POINTS
#include <trace/events/compaction.h>
-/*
- * compact_control is used to track pages being migrated and the free pages
- * they are being migrated to during memory compaction. The free_pfn starts
- * at the end of a zone and migrate_pfn begins at the start. Movable pages
- * are moved to the end of a zone during a compaction run and the run
- * completes when free_pfn <= migrate_pfn
- */
-struct compact_control {
- struct list_head freepages; /* List of free pages to migrate to */
- struct list_head migratepages; /* List of pages being migrated */
- unsigned long nr_freepages; /* Number of isolated free pages */
- unsigned long nr_migratepages; /* Number of pages to migrate */
- unsigned long free_pfn; /* isolate_freepages search base */
- unsigned long migrate_pfn; /* isolate_migratepages search base */
- bool sync; /* Synchronous migration */
-
- int order; /* order a direct compactor needs */
- int migratetype; /* MOVABLE, RECLAIMABLE etc */
- struct zone *zone;
-};
-
static unsigned long release_freepages(struct list_head *freelist)
{
struct page *page, *next;
return count;
}
-/* Isolate free pages onto a private freelist. Must hold zone->lock */
-static unsigned long isolate_freepages_block(struct zone *zone,
- unsigned long blockpfn,
- struct list_head *freelist)
+static void map_pages(struct list_head *list)
+{
+ struct page *page;
+
+ list_for_each_entry(page, list, lru) {
+ arch_alloc_page(page, 0);
+ kernel_map_pages(page, 1, 1);
+ }
+}
+
+static inline bool migrate_async_suitable(int migratetype)
+{
+ return is_migrate_cma(migratetype) || migratetype == MIGRATE_MOVABLE;
+}
+
+/*
+ * Isolate free pages onto a private freelist. Caller must hold zone->lock.
+ * If @strict is true, it aborts and returns 0 on any invalid PFN or
+ * non-free page inside the pageblock (even though it may still end up
+ * isolating some pages).
+ */
+static unsigned long isolate_freepages_block(unsigned long blockpfn,
+ unsigned long end_pfn,
+ struct list_head *freelist,
+ bool strict)
{
- unsigned long zone_end_pfn, end_pfn;
int nr_scanned = 0, total_isolated = 0;
struct page *cursor;
- /* Get the last PFN we should scan for free pages at */
- zone_end_pfn = zone->zone_start_pfn + zone->spanned_pages;
- end_pfn = min(blockpfn + pageblock_nr_pages, zone_end_pfn);
-
- /* Find the first usable PFN in the block to initialse page cursor */
- for (; blockpfn < end_pfn; blockpfn++) {
- if (pfn_valid_within(blockpfn))
- break;
- }
cursor = pfn_to_page(blockpfn);
/* Isolate free pages. This assumes the block is valid */
int isolated, i;
struct page *page = cursor;
- if (!pfn_valid_within(blockpfn))
+ if (!pfn_valid_within(blockpfn)) {
+ if (strict)
+ return 0;
continue;
+ }
nr_scanned++;
- if (!PageBuddy(page))
+ if (!PageBuddy(page)) {
+ if (strict)
+ return 0;
continue;
+ }
/* Found a free page, break it into order-0 pages */
isolated = split_free_page(page);
+ if (!isolated && strict)
+ return 0;
total_isolated += isolated;
for (i = 0; i < isolated; i++) {
list_add(&page->lru, freelist);
return total_isolated;
}
-/* Returns true if the page is within a block suitable for migration to */
-static bool suitable_migration_target(struct page *page)
-{
-
- int migratetype = get_pageblock_migratetype(page);
-
- /* Don't interfere with memory hot-remove or the min_free_kbytes blocks */
- if (migratetype == MIGRATE_ISOLATE || migratetype == MIGRATE_RESERVE)
- return false;
-
- /* If the page is a large free page, then allow migration */
- if (PageBuddy(page) && page_order(page) >= pageblock_order)
- return true;
-
- /* If the block is MIGRATE_MOVABLE, allow migration */
- if (migratetype == MIGRATE_MOVABLE)
- return true;
-
- /* Otherwise skip the block */
- return false;
-}
-
-/*
- * Based on information in the current compact_control, find blocks
- * suitable for isolating free pages from and then isolate them.
+/**
+ * isolate_freepages_range() - isolate free pages.
+ * @start_pfn: The first PFN to start isolating.
+ * @end_pfn: The one-past-last PFN.
+ *
+ * Non-free pages, invalid PFNs, or zone boundaries within the
+ * [start_pfn, end_pfn) range are considered errors and cause the
+ * function to undo its actions and return zero.
+ *
+ * Otherwise, the function returns the one-past-the-last PFN of the
+ * isolated pages (which may be greater than end_pfn if the end fell in
+ * the middle of a free page).
*/
-static void isolate_freepages(struct zone *zone,
- struct compact_control *cc)
+unsigned long
+isolate_freepages_range(unsigned long start_pfn, unsigned long end_pfn)
{
- struct page *page;
- unsigned long high_pfn, low_pfn, pfn;
- unsigned long flags;
- int nr_freepages = cc->nr_freepages;
- struct list_head *freelist = &cc->freepages;
-
- /*
- * Initialise the free scanner. The starting point is where we last
- * scanned from (or the end of the zone if starting). The low point
- * is the end of the pageblock the migration scanner is using.
- */
- pfn = cc->free_pfn;
- low_pfn = cc->migrate_pfn + pageblock_nr_pages;
+ unsigned long isolated, pfn, block_end_pfn, flags;
+ struct zone *zone = NULL;
+ LIST_HEAD(freelist);
- /*
- * Take care that if the migration scanner is at the end of the zone
- * that the free scanner does not accidentally move to the next zone
- * in the next isolation cycle.
- */
- high_pfn = min(low_pfn, pfn);
-
- /*
- * Isolate free pages until enough are available to migrate the
- * pages on cc->migratepages. We stop searching if the migrate
- * and free page scanners meet or enough free pages are isolated.
- */
- for (; pfn > low_pfn && cc->nr_migratepages > nr_freepages;
- pfn -= pageblock_nr_pages) {
- unsigned long isolated;
+ if (pfn_valid(start_pfn))
+ zone = page_zone(pfn_to_page(start_pfn));
- if (!pfn_valid(pfn))
- continue;
+ for (pfn = start_pfn; pfn < end_pfn; pfn += isolated) {
+ if (!pfn_valid(pfn) || zone != page_zone(pfn_to_page(pfn)))
+ break;
/*
- * Check for overlapping nodes/zones. It's possible on some
- * configurations to have a setup like
- * node0 node1 node0
- * i.e. it's possible that all pages within a zones range of
- * pages do not belong to a single zone.
+ * On subsequent iterations ALIGN() is actually not needed,
+	 * but we keep it so as not to complicate the code.
*/
- page = pfn_to_page(pfn);
- if (page_zone(page) != zone)
- continue;
+ block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
+ block_end_pfn = min(block_end_pfn, end_pfn);
- /* Check the block is suitable for migration */
- if (!suitable_migration_target(page))
- continue;
+ spin_lock_irqsave(&zone->lock, flags);
+ isolated = isolate_freepages_block(pfn, block_end_pfn,
+ &freelist, true);
+ spin_unlock_irqrestore(&zone->lock, flags);
/*
- * Found a block suitable for isolating free pages from. Now
- * we disabled interrupts, double check things are ok and
- * isolate the pages. This is to minimise the time IRQs
- * are disabled
+ * In strict mode, isolate_freepages_block() returns 0 if
+ * there are any holes in the block (ie. invalid PFNs or
+ * non-free pages).
*/
- isolated = 0;
- spin_lock_irqsave(&zone->lock, flags);
- if (suitable_migration_target(page)) {
- isolated = isolate_freepages_block(zone, pfn, freelist);
- nr_freepages += isolated;
- }
- spin_unlock_irqrestore(&zone->lock, flags);
+ if (!isolated)
+ break;
/*
- * Record the highest PFN we isolated pages from. When next
- * looking for free pages, the search will restart here as
- * page migration may have returned some pages to the allocator
+ * If we managed to isolate pages, it is always (1 << n) *
+ * pageblock_nr_pages for some non-negative n. (Max order
+ * page may span two pageblocks).
*/
- if (isolated)
- high_pfn = max(high_pfn, pfn);
}
/* split_free_page does not map the pages */
- list_for_each_entry(page, freelist, lru) {
- arch_alloc_page(page, 0);
- kernel_map_pages(page, 1, 1);
+ map_pages(&freelist);
+
+ if (pfn < end_pfn) {
+ /* Loop terminated early, cleanup. */
+ release_freepages(&freelist);
+ return 0;
}
- cc->free_pfn = high_pfn;
- cc->nr_freepages = nr_freepages;
+ /* We don't use freelists for anything. */
+ return pfn;
}
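A hedged sketch of the intended caller pattern for the strict isolator (a CMA-style contiguous range allocator); the surrounding retry logic is an assumption.

	if (!isolate_freepages_range(start_pfn, end_pfn))
		return -EBUSY;	/* a page in the range was not free */
	/* on success all pages in [start_pfn, end_pfn) have been removed
	 * from the buddy allocator and may be handed to the caller */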
/* Update the number of anon and file isolated pages in the zone */
return isolated > (inactive + active) / 2;
}
-/* possible outcome of isolate_migratepages */
-typedef enum {
- ISOLATE_ABORT, /* Abort compaction now */
- ISOLATE_NONE, /* No pages isolated, continue scanning */
- ISOLATE_SUCCESS, /* Pages isolated, migrate */
-} isolate_migrate_t;
-
-/*
- * Isolate all pages that can be migrated from the block pointed to by
- * the migrate scanner within compact_control.
+/**
+ * isolate_migratepages_range() - isolate all migrate-able pages in range.
+ * @zone: Zone pages are in.
+ * @cc: Compaction control structure.
+ * @low_pfn: The first PFN of the range.
+ * @end_pfn: The one-past-the-last PFN of the range.
+ *
+ * Isolate all pages that can be migrated from the range specified by
+ * [low_pfn, end_pfn). Returns zero if there is a fatal signal
+ * pending, otherwise the PFN of the first page that was not scanned
+ * (which may be less than, equal to, or greater than end_pfn).
+ *
+ * Assumes that cc->migratepages is empty and cc->nr_migratepages is
+ * zero.
+ *
+ * Apart from cc->migratepages and cc->nr_migratepages this function
+ * does not modify any of cc's fields, in particular it does not modify
+ * (or read, for that matter) cc->migrate_pfn.
*/
-static isolate_migrate_t isolate_migratepages(struct zone *zone,
- struct compact_control *cc)
+unsigned long
+isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
+ unsigned long low_pfn, unsigned long end_pfn)
{
- unsigned long low_pfn, end_pfn;
unsigned long last_pageblock_nr = 0, pageblock_nr;
unsigned long nr_scanned = 0, nr_isolated = 0;
struct list_head *migratelist = &cc->migratepages;
isolate_mode_t mode = ISOLATE_ACTIVE|ISOLATE_INACTIVE;
- /* Do not scan outside zone boundaries */
- low_pfn = max(cc->migrate_pfn, zone->zone_start_pfn);
-
- /* Only scan within a pageblock boundary */
- end_pfn = ALIGN(low_pfn + pageblock_nr_pages, pageblock_nr_pages);
-
- /* Do not cross the free scanner or scan within a memory hole */
- if (end_pfn > cc->free_pfn || !pfn_valid(low_pfn)) {
- cc->migrate_pfn = end_pfn;
- return ISOLATE_NONE;
- }
-
/*
* Ensure that there are not too many pages isolated from the LRU
* list by either parallel reclaimers or compaction. If there are,
while (unlikely(too_many_isolated(zone))) {
/* async migration should just abort */
if (!cc->sync)
- return ISOLATE_ABORT;
+ return 0;
congestion_wait(BLK_RW_ASYNC, HZ/10);
if (fatal_signal_pending(current))
- return ISOLATE_ABORT;
+ return 0;
}
/* Time to isolate some pages for migration */
*/
pageblock_nr = low_pfn >> pageblock_order;
if (!cc->sync && last_pageblock_nr != pageblock_nr &&
- get_pageblock_migratetype(page) != MIGRATE_MOVABLE) {
+ !migrate_async_suitable(get_pageblock_migratetype(page))) {
low_pfn += pageblock_nr_pages;
low_pfn = ALIGN(low_pfn, pageblock_nr_pages) - 1;
last_pageblock_nr = pageblock_nr;
acct_isolated(zone, cc);
spin_unlock_irq(&zone->lru_lock);
- cc->migrate_pfn = low_pfn;
trace_mm_compaction_isolate_migratepages(nr_scanned, nr_isolated);
- return ISOLATE_SUCCESS;
+ return low_pfn;
+}
+
+#endif /* CONFIG_COMPACTION || CONFIG_CMA */
+#ifdef CONFIG_COMPACTION
+
+/* Returns true if the page is within a block suitable for migration to */
+static bool suitable_migration_target(struct page *page)
+{
+
+ int migratetype = get_pageblock_migratetype(page);
+
+ /* Don't interfere with memory hot-remove or the min_free_kbytes blocks */
+ if (migratetype == MIGRATE_ISOLATE || migratetype == MIGRATE_RESERVE)
+ return false;
+
+ /* If the page is a large free page, then allow migration */
+ if (PageBuddy(page) && page_order(page) >= pageblock_order)
+ return true;
+
+ /* If the block is MIGRATE_MOVABLE or MIGRATE_CMA, allow migration */
+ if (migrate_async_suitable(migratetype))
+ return true;
+
+ /* Otherwise skip the block */
+ return false;
+}
+
+/*
+ * Based on information in the current compact_control, find blocks
+ * suitable for isolating free pages from and then isolate them.
+ */
+static void isolate_freepages(struct zone *zone,
+ struct compact_control *cc)
+{
+ struct page *page;
+ unsigned long high_pfn, low_pfn, pfn, zone_end_pfn, end_pfn;
+ unsigned long flags;
+ int nr_freepages = cc->nr_freepages;
+ struct list_head *freelist = &cc->freepages;
+
+ /*
+ * Initialise the free scanner. The starting point is where we last
+ * scanned from (or the end of the zone if starting). The low point
+ * is the end of the pageblock the migration scanner is using.
+ */
+ pfn = cc->free_pfn;
+ low_pfn = cc->migrate_pfn + pageblock_nr_pages;
+
+ /*
+ * Take care that if the migration scanner is at the end of the zone
+ * that the free scanner does not accidentally move to the next zone
+ * in the next isolation cycle.
+ */
+ high_pfn = min(low_pfn, pfn);
+
+ zone_end_pfn = zone->zone_start_pfn + zone->spanned_pages;
+
+ /*
+ * Isolate free pages until enough are available to migrate the
+ * pages on cc->migratepages. We stop searching if the migrate
+ * and free page scanners meet or enough free pages are isolated.
+ */
+ for (; pfn > low_pfn && cc->nr_migratepages > nr_freepages;
+ pfn -= pageblock_nr_pages) {
+ unsigned long isolated;
+
+ if (!pfn_valid(pfn))
+ continue;
+
+ /*
+ * Check for overlapping nodes/zones. It's possible on some
+ * configurations to have a setup like
+ * node0 node1 node0
+		 * i.e. it's possible that all pages within a zone's range of
+ * pages do not belong to a single zone.
+ */
+ page = pfn_to_page(pfn);
+ if (page_zone(page) != zone)
+ continue;
+
+ /* Check the block is suitable for migration */
+ if (!suitable_migration_target(page))
+ continue;
+
+ /*
+		 * Found a block suitable for isolating free pages from. Now
+		 * take the zone lock (disabling IRQs) and, after double
+		 * checking that things are still OK, isolate the pages. This
+		 * is done to minimise the time IRQs are disabled.
+ */
+ isolated = 0;
+ spin_lock_irqsave(&zone->lock, flags);
+ if (suitable_migration_target(page)) {
+ end_pfn = min(pfn + pageblock_nr_pages, zone_end_pfn);
+ isolated = isolate_freepages_block(pfn, end_pfn,
+ freelist, false);
+ nr_freepages += isolated;
+ }
+ spin_unlock_irqrestore(&zone->lock, flags);
+
+ /*
+ * Record the highest PFN we isolated pages from. When next
+ * looking for free pages, the search will restart here as
+ * page migration may have returned some pages to the allocator
+ */
+ if (isolated)
+ high_pfn = max(high_pfn, pfn);
+ }
+
+ /* split_free_page does not map the pages */
+ map_pages(freelist);
+
+ cc->free_pfn = high_pfn;
+ cc->nr_freepages = nr_freepages;
}
/*
cc->nr_freepages = nr_freepages;
}
+/* possible outcome of isolate_migratepages */
+typedef enum {
+ ISOLATE_ABORT, /* Abort compaction now */
+ ISOLATE_NONE, /* No pages isolated, continue scanning */
+ ISOLATE_SUCCESS, /* Pages isolated, migrate */
+} isolate_migrate_t;
+
+/*
+ * Isolate all pages that can be migrated from the block pointed to by
+ * the migrate scanner within compact_control.
+ */
+static isolate_migrate_t isolate_migratepages(struct zone *zone,
+ struct compact_control *cc)
+{
+ unsigned long low_pfn, end_pfn;
+
+ /* Do not scan outside zone boundaries */
+ low_pfn = max(cc->migrate_pfn, zone->zone_start_pfn);
+
+ /* Only scan within a pageblock boundary */
+ end_pfn = ALIGN(low_pfn + pageblock_nr_pages, pageblock_nr_pages);
+
+ /* Do not cross the free scanner or scan within a memory hole */
+ if (end_pfn > cc->free_pfn || !pfn_valid(low_pfn)) {
+ cc->migrate_pfn = end_pfn;
+ return ISOLATE_NONE;
+ }
+
+ /* Perform the isolation */
+ low_pfn = isolate_migratepages_range(zone, cc, low_pfn, end_pfn);
+ if (!low_pfn)
+ return ISOLATE_ABORT;
+
+ cc->migrate_pfn = low_pfn;
+
+ return ISOLATE_SUCCESS;
+}
+
static int compact_finished(struct zone *zone,
struct compact_control *cc)
{
return device_remove_file(&node->dev, &dev_attr_compact);
}
#endif /* CONFIG_SYSFS && CONFIG_NUMA */
+
+#endif /* CONFIG_COMPACTION */
extern bool is_free_buddy_page(struct page *page);
#endif
+#if defined CONFIG_COMPACTION || defined CONFIG_CMA
+
+/*
+ * in mm/compaction.c
+ */
+/*
+ * compact_control is used to track pages being migrated and the free pages
+ * they are being migrated to during memory compaction. The free_pfn starts
+ * at the end of a zone and migrate_pfn begins at the start. Movable pages
+ * are moved to the end of a zone during a compaction run and the run
+ * completes when free_pfn <= migrate_pfn
+ */
+struct compact_control {
+ struct list_head freepages; /* List of free pages to migrate to */
+ struct list_head migratepages; /* List of pages being migrated */
+ unsigned long nr_freepages; /* Number of isolated free pages */
+ unsigned long nr_migratepages; /* Number of pages to migrate */
+ unsigned long free_pfn; /* isolate_freepages search base */
+ unsigned long migrate_pfn; /* isolate_migratepages search base */
+ bool sync; /* Synchronous migration */
+
+ int order; /* order a direct compactor needs */
+ int migratetype; /* MOVABLE, RECLAIMABLE etc */
+ struct zone *zone;
+};
+
+unsigned long
+isolate_freepages_range(unsigned long start_pfn, unsigned long end_pfn);
+unsigned long
+isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
+ unsigned long low_pfn, unsigned long end_pfn);
+
+#endif
/*
* function for dealing with page's order in buddy system.
/* Not a free page */
ret = 1;
}
- unset_migratetype_isolate(p);
+ unset_migratetype_isolate(p, MIGRATE_MOVABLE);
unlock_memory_hotplug();
return ret;
}
nr_pages = end_pfn - start_pfn;
/* set above range as isolated */
- ret = start_isolate_page_range(start_pfn, end_pfn);
+ ret = start_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
if (ret)
goto out;
We cannot do rollback at this point. */
offline_isolated_pages(start_pfn, end_pfn);
/* reset pagetype flags and makes migrate type to be MOVABLE */
- undo_isolate_page_range(start_pfn, end_pfn);
+ undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
/* removal success */
zone->present_pages -= offlined_pages;
zone->zone_pgdat->node_present_pages -= offlined_pages;
start_pfn, end_pfn);
memory_notify(MEM_CANCEL_OFFLINE, &arg);
/* pushback to free area */
- undo_isolate_page_range(start_pfn, end_pfn);
+ undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
out:
unlock_memory_hotplug();
#include <linux/ftrace_event.h>
#include <linux/memcontrol.h>
#include <linux/prefetch.h>
+#include <linux/migrate.h>
#include <linux/page-debug-flags.h>
#include <linux/low-mem-notify.h>
* free pages of length of (1 << order) and marked with _mapcount -2. Page's
* order is recorded in page_private(page) field.
* So when we are allocating or freeing one, we can derive the state of the
- * other. That is, if we allocate a small block, and both were
- * free, the remainder of the region must be split into blocks.
+ * other. That is, if we allocate a small block, and both were
+ * free, the remainder of the region must be split into blocks.
* If a block is freed, and its buddy is also free, then this
- * triggers coalescing into a block of larger size.
+ * triggers coalescing into a block of larger size.
*
* -- wli
*/
__free_pages(page, order);
}
+#ifdef CONFIG_CMA
+/* Free the whole pageblock and set its migration type to MIGRATE_CMA. */
+void __init init_cma_reserved_pageblock(struct page *page)
+{
+ unsigned i = pageblock_nr_pages;
+ struct page *p = page;
+
+ do {
+ __ClearPageReserved(p);
+ set_page_count(p, 0);
+ } while (++p, --i);
+
+ set_page_refcounted(page);
+ set_pageblock_migratetype(page, MIGRATE_CMA);
+ __free_pages(page, pageblock_order);
+ totalram_pages += pageblock_nr_pages;
+}
+#endif
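A hedged sketch (editor's note) of how early CMA setup code is expected to drive the helper above; base_pfn and count are assumed names, not defined by this patch.

	unsigned long pfn;

	/* Hand every reserved pageblock back to the allocator as MIGRATE_CMA. */
	for (pfn = base_pfn; pfn < base_pfn + count; pfn += pageblock_nr_pages)
		init_cma_reserved_pageblock(pfn_to_page(pfn));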
/*
* The order of subdivision here is critical for the IO subsystem.
* This array describes the order lists are fallen back to when
* the free lists for the desirable migrate type are depleted
*/
-static int fallbacks[MIGRATE_TYPES][MIGRATE_TYPES-1] = {
- [MIGRATE_UNMOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE, MIGRATE_RESERVE },
- [MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_RESERVE },
- [MIGRATE_MOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
- [MIGRATE_RESERVE] = { MIGRATE_RESERVE, MIGRATE_RESERVE, MIGRATE_RESERVE }, /* Never used */
+static int fallbacks[MIGRATE_TYPES][4] = {
+ [MIGRATE_UNMOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE, MIGRATE_RESERVE },
+ [MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_RESERVE },
+#ifdef CONFIG_CMA
+ [MIGRATE_MOVABLE] = { MIGRATE_CMA, MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
+ [MIGRATE_CMA] = { MIGRATE_RESERVE }, /* Never used */
+#else
+ [MIGRATE_MOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
+#endif
+ [MIGRATE_RESERVE] = { MIGRATE_RESERVE }, /* Never used */
+ [MIGRATE_ISOLATE] = { MIGRATE_RESERVE }, /* Never used */
};
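To make the new table concrete (editor's note), a minimal sketch of the walk performed by the fallback loop in the next hunk: with CONFIG_CMA enabled, an empty MIGRATE_MOVABLE free list falls back to MIGRATE_CMA first, so movable allocations drain CMA pageblocks before stealing reclaimable or unmovable ones.

	int i, mt;

	for (i = 0;; i++) {
		mt = fallbacks[MIGRATE_MOVABLE][i];	/* CMA, RECLAIMABLE, UNMOVABLE */
		if (mt == MIGRATE_RESERVE)
			break;			/* reserve path is handled later */
		/* ...try free_area[order].free_list[mt]... */
	}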
/*
/* Find the largest possible block of pages in the other list */
for (current_order = MAX_ORDER-1; current_order >= order;
--current_order) {
- for (i = 0; i < MIGRATE_TYPES - 1; i++) {
+ for (i = 0;; i++) {
migratetype = fallbacks[start_migratetype][i];
/* MIGRATE_RESERVE handled later if necessary */
if (migratetype == MIGRATE_RESERVE)
- continue;
+ break;
area = &(zone->free_area[current_order]);
if (list_empty(&area->free_list[migratetype]))
* pages to the preferred allocation list. If falling
* back for a reclaimable kernel allocation, be more
* aggressive about taking ownership of free pages
+ *
+ * On the other hand, never change migration
+ * type of MIGRATE_CMA pageblocks nor move CMA
+					 * pages to different free lists. We don't
+ * want unmovable pages to be allocated from
+ * MIGRATE_CMA areas.
*/
- if (unlikely(current_order >= (pageblock_order >> 1)) ||
- start_migratetype == MIGRATE_RECLAIMABLE ||
- page_group_by_mobility_disabled) {
- unsigned long pages;
+ if (!is_migrate_cma(migratetype) &&
+ (unlikely(current_order >= pageblock_order / 2) ||
+ start_migratetype == MIGRATE_RECLAIMABLE ||
+ page_group_by_mobility_disabled)) {
+ int pages;
pages = move_freepages_block(zone, page,
start_migratetype);
rmv_page_order(page);
/* Take ownership for orders >= pageblock_order */
- if (current_order >= pageblock_order)
+ if (current_order >= pageblock_order &&
+ !is_migrate_cma(migratetype))
change_pageblock_range(page, current_order,
start_migratetype);
- expand(zone, page, order, current_order, area, migratetype);
+ expand(zone, page, order, current_order, area,
+ is_migrate_cma(migratetype)
+ ? migratetype : start_migratetype);
trace_mm_page_alloc_extfrag(page, order, current_order,
start_migratetype, migratetype);
return page;
}
-/*
+/*
* Obtain a specified number of elements from the buddy allocator, all under
* a single hold of the lock, for efficiency. Add them to the supplied list.
* Returns the number of new pages which were placed at *list.
*/
-static int rmqueue_bulk(struct zone *zone, unsigned int order,
+static int rmqueue_bulk(struct zone *zone, unsigned int order,
unsigned long count, struct list_head *list,
int migratetype, int cold)
{
- int i;
-
+ int mt = migratetype, i;
+
spin_lock(&zone->lock);
for (i = 0; i < count; ++i) {
struct page *page = __rmqueue(zone, order, migratetype);
list_add(&page->lru, list);
else
list_add_tail(&page->lru, list);
- set_page_private(page, migratetype);
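+		/*
+		 * With CMA, record the pageblock's real migratetype so that
+		 * a CMA or isolated page is later freed back to the list it
+		 * came from instead of the requested one.
+		 */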
+ if (IS_ENABLED(CONFIG_CMA)) {
+ mt = get_pageblock_migratetype(page);
+ if (!is_migrate_cma(mt) && mt != MIGRATE_ISOLATE)
+ mt = migratetype;
+ }
+ set_page_private(page, mt);
list = &page->lru;
}
__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
if (order >= pageblock_order - 1) {
struct page *endpage = page + (1 << order) - 1;
- for (; page < endpage; page += pageblock_nr_pages)
- set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+ for (; page < endpage; page += pageblock_nr_pages) {
+ int mt = get_pageblock_migratetype(page);
+ if (mt != MIGRATE_ISOLATE && !is_migrate_cma(mt))
+ set_pageblock_migratetype(page,
+ MIGRATE_MOVABLE);
+ }
}
return 1 << order;
}
#endif /* CONFIG_COMPACTION */
-/* The really slow allocator path where we enter direct reclaim */
-static inline struct page *
-__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
- struct zonelist *zonelist, enum zone_type high_zoneidx,
- nodemask_t *nodemask, int alloc_flags, struct zone *preferred_zone,
- int migratetype, unsigned long *did_some_progress)
+/* Perform direct synchronous page reclaim */
+static int
+__perform_reclaim(gfp_t gfp_mask, unsigned int order, struct zonelist *zonelist,
+ nodemask_t *nodemask)
{
- struct page *page = NULL;
struct reclaim_state reclaim_state;
- bool drained = false;
+ int progress;
cond_resched();
reclaim_state.reclaimed_slab = 0;
current->reclaim_state = &reclaim_state;
- *did_some_progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask);
+ progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask);
current->reclaim_state = NULL;
lockdep_clear_current_reclaim_state();
cond_resched();
+ return progress;
+}
+
+/* The really slow allocator path where we enter direct reclaim */
+static inline struct page *
+__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
+ struct zonelist *zonelist, enum zone_type high_zoneidx,
+ nodemask_t *nodemask, int alloc_flags, struct zone *preferred_zone,
+ int migratetype, unsigned long *did_some_progress)
+{
+ struct page *page = NULL;
+ bool drained = false;
+
+ *did_some_progress = __perform_reclaim(gfp_mask, order, zonelist,
+ nodemask);
if (unlikely(!(*did_some_progress)))
return NULL;
init_waitqueue_head(&pgdat->kswapd_wait);
pgdat->kswapd_max_order = 0;
pgdat_page_cgroup_init(pgdat);
-
+
for (j = 0; j < MAX_NR_ZONES; j++) {
struct zone *zone = pgdat->node_zones + j;
unsigned long size, realsize, memmap_pages;
calculate_totalreserve_pages();
}
-/**
- * setup_per_zone_wmarks - called when min_free_kbytes changes
- * or when memory is hot-{added|removed}
- *
- * Ensures that the watermark[min,low,high] values for each zone are set
- * correctly with respect to min_free_kbytes.
- */
-void setup_per_zone_wmarks(void)
+static void __setup_per_zone_wmarks(void)
{
unsigned long pages_min = min_free_kbytes >> (PAGE_SHIFT - 10);
unsigned long lowmem_pages = 0;
zone->watermark[WMARK_LOW] = min_wmark_pages(zone) + (tmp >> 2);
zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
+
+ zone->watermark[WMARK_MIN] += cma_wmark_pages(zone);
+ zone->watermark[WMARK_LOW] += cma_wmark_pages(zone);
+ zone->watermark[WMARK_HIGH] += cma_wmark_pages(zone);
+
setup_zone_migrate_reserve(zone);
spin_unlock_irqrestore(&zone->lock, flags);
}
calculate_totalreserve_pages();
}
+/**
+ * setup_per_zone_wmarks - called when min_free_kbytes changes
+ * or when memory is hot-{added|removed}
+ *
+ * Ensures that the watermark[min,low,high] values for each zone are set
+ * correctly with respect to min_free_kbytes.
+ */
+void setup_per_zone_wmarks(void)
+{
+ mutex_lock(&zonelists_mutex);
+ __setup_per_zone_wmarks();
+ mutex_unlock(&zonelists_mutex);
+}
+
/*
* The inactive anon list should be small enough that the VM never has to
* do too much work, but large enough that each inactive page has a chance
__count_immobile_pages(struct zone *zone, struct page *page, int count)
{
unsigned long pfn, iter, found;
+ int mt;
+
/*
* For avoiding noise data, lru_add_drain_all() should be called
* If ZONE_MOVABLE, the zone never contains immobile pages
*/
if (zone_idx(zone) == ZONE_MOVABLE)
return true;
-
- if (get_pageblock_migratetype(page) == MIGRATE_MOVABLE)
+ mt = get_pageblock_migratetype(page);
+ if (mt == MIGRATE_MOVABLE || is_migrate_cma(mt))
return true;
pfn = page_to_pfn(page);
return ret;
}
-void unset_migratetype_isolate(struct page *page)
+void unset_migratetype_isolate(struct page *page, unsigned migratetype)
{
struct zone *zone;
unsigned long flags;
spin_lock_irqsave(&zone->lock, flags);
if (get_pageblock_migratetype(page) != MIGRATE_ISOLATE)
goto out;
- set_pageblock_migratetype(page, MIGRATE_MOVABLE);
- move_freepages_block(zone, page, MIGRATE_MOVABLE);
+ set_pageblock_migratetype(page, migratetype);
+ move_freepages_block(zone, page, migratetype);
out:
spin_unlock_irqrestore(&zone->lock, flags);
}
+#ifdef CONFIG_CMA
+
+static unsigned long pfn_max_align_down(unsigned long pfn)
+{
+ return pfn & ~(max_t(unsigned long, MAX_ORDER_NR_PAGES,
+ pageblock_nr_pages) - 1);
+}
+
+static unsigned long pfn_max_align_up(unsigned long pfn)
+{
+ return ALIGN(pfn, max_t(unsigned long, MAX_ORDER_NR_PAGES,
+ pageblock_nr_pages));
+}
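A worked example (editor's note), assuming a common configuration in which MAX_ORDER_NR_PAGES (1024) exceeds pageblock_nr_pages (512), so ranges are widened to 1024-page boundaries:

	pfn_max_align_down(0x12345);	/* == 0x12000: clears the low 10 bits */
	pfn_max_align_up(0x12345);	/* == 0x12400: rounds up to the next 0x400 */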
+
+static struct page *
+__alloc_contig_migrate_alloc(struct page *page, unsigned long private,
+ int **resultp)
+{
+ return alloc_page(GFP_HIGHUSER_MOVABLE);
+}
+
+/* [start, end) must belong to a single zone. */
+static int __alloc_contig_migrate_range(unsigned long start, unsigned long end)
+{
+ /* This function is based on compact_zone() from compaction.c. */
+
+ unsigned long pfn = start;
+ unsigned int tries = 0;
+ int ret = 0;
+
+ struct compact_control cc = {
+ .nr_migratepages = 0,
+ .order = -1,
+ .zone = page_zone(pfn_to_page(start)),
+ .sync = true,
+ };
+ INIT_LIST_HEAD(&cc.migratepages);
+
+ migrate_prep_local();
+
+ while (pfn < end || !list_empty(&cc.migratepages)) {
+ if (fatal_signal_pending(current)) {
+ ret = -EINTR;
+ break;
+ }
+
+ if (list_empty(&cc.migratepages)) {
+ cc.nr_migratepages = 0;
+ pfn = isolate_migratepages_range(cc.zone, &cc,
+ pfn, end);
+ if (!pfn) {
+ ret = -EINTR;
+ break;
+ }
+ tries = 0;
+ } else if (++tries == 5) {
+ ret = ret < 0 ? ret : -EBUSY;
+ break;
+ }
+
+ ret = migrate_pages(&cc.migratepages,
+ __alloc_contig_migrate_alloc,
+ 0, false, MIGRATE_SYNC);
+ }
+
+ putback_lru_pages(&cc.migratepages);
+ return ret > 0 ? 0 : ret;
+}
+
+/*
+ * Update the zone's cma pages counter, used for watermark level calculation.
+ */
+static inline void __update_cma_watermarks(struct zone *zone, int count)
+{
+ unsigned long flags;
+ spin_lock_irqsave(&zone->lock, flags);
+ zone->min_cma_pages += count;
+ spin_unlock_irqrestore(&zone->lock, flags);
+ setup_per_zone_wmarks();
+}
+
+/*
+ * Trigger a memory pressure bump to reclaim some pages in order to be able
+ * to allocate 'count' pages in single page units. It does work similar to
+ * __alloc_pages_slowpath().
+ */
+static int __reclaim_pages(struct zone *zone, gfp_t gfp_mask, int count)
+{
+ enum zone_type high_zoneidx = gfp_zone(gfp_mask);
+ struct zonelist *zonelist = node_zonelist(0, gfp_mask);
+ int did_some_progress = 0;
+ int order = 1;
+
+ /*
+	 * Increase the watermark levels to force kswapd to do its job
+	 * and stabilise at the new watermark level.
+ */
+ __update_cma_watermarks(zone, count);
+
+ /* Obey watermarks as if the page was being allocated */
+ while (!zone_watermark_ok(zone, 0, low_wmark_pages(zone), 0, 0)) {
+ wake_all_kswapd(order, zonelist, high_zoneidx, zone_idx(zone));
+
+ did_some_progress = __perform_reclaim(gfp_mask, order, zonelist,
+ NULL);
+ if (!did_some_progress) {
+ /* Exhausted what can be done so it's blamo time */
+ out_of_memory(zonelist, gfp_mask, order, NULL, false);
+ }
+ }
+
+ /* Restore original watermark levels. */
+ __update_cma_watermarks(zone, -count);
+
+ return count;
+}
+
+/**
+ * alloc_contig_range() -- tries to allocate the given range of pages
+ * @start: start PFN to allocate
+ * @end: one-past-the-last PFN to allocate
+ * @migratetype:	migratetype of the underlying pageblocks (either
+ * #MIGRATE_MOVABLE or #MIGRATE_CMA). All pageblocks
+ * in range must have the same migratetype and it must
+ * be either of the two.
+ *
+ * The PFN range does not have to be pageblock or MAX_ORDER_NR_PAGES
+ * aligned, however it's the caller's responsibility to guarantee that
+ * we are the only thread that changes migrate type of pageblocks the
+ * pages fall in.
+ *
+ * The PFN range must belong to a single zone.
+ *
+ * Returns zero on success or negative error code. On success all
+ * pages whose PFN is in [start, end) are allocated to the caller and
+ * need to be freed with free_contig_range().
+ */
+int alloc_contig_range(unsigned long start, unsigned long end,
+ unsigned migratetype)
+{
+ struct zone *zone = page_zone(pfn_to_page(start));
+ unsigned long outer_start, outer_end;
+ int ret = 0, order;
+
+ /*
+	 * What we do here is we mark all pageblocks in range as
+	 * MIGRATE_ISOLATE. Because pageblock and max order pages may
+	 * have different sizes, and due to the way the page allocator
+	 * works, we align the range to the bigger of the two so that
+	 * the page allocator won't try to merge buddies from
+	 * different pageblocks and change MIGRATE_ISOLATE to some
+	 * other migration type.
+	 *
+	 * Once the pageblocks are marked as MIGRATE_ISOLATE, we
+	 * migrate the pages from an unaligned range (i.e. the pages
+	 * that we are interested in). This will put all the pages in
+	 * the range back to the page allocator as MIGRATE_ISOLATE.
+	 *
+	 * When this is done, we take the pages in the range from the
+	 * page allocator, removing them from the buddy system. This
+	 * way the page allocator will never consider using them.
+	 *
+	 * This lets us mark the pageblocks back as
+	 * MIGRATE_CMA/MIGRATE_MOVABLE so that free pages in the
+	 * aligned range but not in the unaligned, original range are
+	 * put back to the page allocator so that buddy can use them.
+ */
+
+ ret = start_isolate_page_range(pfn_max_align_down(start),
+ pfn_max_align_up(end), migratetype);
+ if (ret)
+ goto done;
+
+ ret = __alloc_contig_migrate_range(start, end);
+ if (ret)
+ goto done;
+
+ /*
+	 * Pages from [start, end) are within MAX_ORDER_NR_PAGES
+	 * aligned blocks that are marked as MIGRATE_ISOLATE. What's
+	 * more, all pages in [start, end) are free in the page
+	 * allocator. What we are going to do is to allocate all pages
+	 * from [start, end) (that is, remove them from the page
+	 * allocator).
+	 *
+	 * The only problem is that pages at the beginning and at the
+	 * end of the interesting range may not be aligned with pages
+	 * that the page allocator holds, i.e. they can be part of
+	 * higher order pages. Because of this, we reserve the bigger
+	 * range and once this is done free the pages we are not
+	 * interested in.
+	 *
+	 * We don't have to hold zone->lock here because the pages are
+	 * isolated and thus won't get removed from the buddy system.
+ */
+
+ lru_add_drain_all();
+ drain_all_pages();
+
+ order = 0;
+ outer_start = start;
+ while (!PageBuddy(pfn_to_page(outer_start))) {
+ if (++order >= MAX_ORDER) {
+ ret = -EBUSY;
+ goto done;
+ }
+ outer_start &= ~0UL << order;
+ }
+
+ /* Make sure the range is really isolated. */
+ if (test_pages_isolated(outer_start, end)) {
+ pr_warn("alloc_contig_range test_pages_isolated(%lx, %lx) failed\n",
+ outer_start, end);
+ ret = -EBUSY;
+ goto done;
+ }
+
+ /*
+ * Reclaim enough pages to make sure that contiguous allocation
+ * will not starve the system.
+ */
+ __reclaim_pages(zone, GFP_HIGHUSER_MOVABLE, end-start);
+
+ /* Grab isolated pages from freelists. */
+ outer_end = isolate_freepages_range(outer_start, end);
+ if (!outer_end) {
+ ret = -EBUSY;
+ goto done;
+ }
+
+ /* Free head and tail (if any) */
+ if (start != outer_start)
+ free_contig_range(outer_start, start - outer_start);
+ if (end != outer_end)
+ free_contig_range(end, outer_end - end);
+
+done:
+ undo_isolate_page_range(pfn_max_align_down(start),
+ pfn_max_align_up(end), migratetype);
+ return ret;
+}
+
+void free_contig_range(unsigned long pfn, unsigned nr_pages)
+{
+ for (; nr_pages--; ++pfn)
+ __free_page(pfn_to_page(pfn));
+}
+#endif
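A hedged usage sketch (editor's addition): a hypothetical driver claiming 256 physically contiguous pages from a region whose pageblocks were set up as MIGRATE_CMA; my_base_pfn is an assumed, driver-provided PFN, not something this patch defines.

	static int my_claim_buffer(unsigned long my_base_pfn)
	{
		int ret;

		ret = alloc_contig_range(my_base_pfn, my_base_pfn + 256,
					 MIGRATE_CMA);
		if (ret)
			return ret;	/* e.g. -EBUSY or -EINTR */

		/* ... use pfn_to_page(my_base_pfn) ... */

		free_contig_range(my_base_pfn, 256);
		return 0;
	}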
+
#ifdef CONFIG_MEMORY_HOTREMOVE
/*
* All pages in the range must be isolated before calling this.
* to be MIGRATE_ISOLATE.
* @start_pfn: The lower PFN of the range to be isolated.
* @end_pfn: The upper PFN of the range to be isolated.
+ * @migratetype: migrate type to set in error recovery.
*
* Making page-allocation-type to be MIGRATE_ISOLATE means free pages in
* the range will never be allocated. Any free pages and pages freed in the
* start_pfn/end_pfn must be aligned to pageblock_order.
* Returns 0 on success and -EBUSY if any part of range cannot be isolated.
*/
-int
-start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn)
+int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ unsigned migratetype)
{
unsigned long pfn;
unsigned long undo_pfn;
for (pfn = start_pfn;
pfn < undo_pfn;
pfn += pageblock_nr_pages)
- unset_migratetype_isolate(pfn_to_page(pfn));
+ unset_migratetype_isolate(pfn_to_page(pfn), migratetype);
return -EBUSY;
}
/*
* Make isolated pages available again.
*/
-int
-undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn)
+int undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ unsigned migratetype)
{
unsigned long pfn;
struct page *page;
page = __first_valid_page(pfn, pageblock_nr_pages);
if (!page || get_pageblock_migratetype(page) != MIGRATE_ISOLATE)
continue;
- unset_migratetype_isolate(page);
+ unset_migratetype_isolate(page, migratetype);
}
return 0;
}
* all pages in [start_pfn...end_pfn) must be in the same zone.
* zone->lock must be held before call this.
*
- * Returns 1 if all pages in the range is isolated.
+ * Returns 1 if all pages in the range are isolated.
*/
static int
__test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn)
struct vm_struct *vmlist;
static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
- unsigned long flags, void *caller)
+ unsigned long flags, const void *caller)
{
vm->flags = flags;
vm->addr = (void *)va->va_start;
}
static void insert_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
- unsigned long flags, void *caller)
+ unsigned long flags, const void *caller)
{
setup_vmalloc_vm(vm, va, flags, caller);
insert_vmalloc_vmlist(vm);
static struct vm_struct *__get_vm_area_node(unsigned long size,
unsigned long align, unsigned long flags, unsigned long start,
- unsigned long end, int node, gfp_t gfp_mask, void *caller)
+ unsigned long end, int node, gfp_t gfp_mask, const void *caller)
{
struct vmap_area *va;
struct vm_struct *area;
struct vm_struct *__get_vm_area_caller(unsigned long size, unsigned long flags,
unsigned long start, unsigned long end,
- void *caller)
+ const void *caller)
{
return __get_vm_area_node(size, 1, flags, start, end, -1, GFP_KERNEL,
caller);
}
struct vm_struct *get_vm_area_caller(unsigned long size, unsigned long flags,
- void *caller)
+ const void *caller)
{
return __get_vm_area_node(size, 1, flags, VMALLOC_START, VMALLOC_END,
-1, GFP_KERNEL, caller);
}
-static struct vm_struct *find_vm_area(const void *addr)
+/**
+ * find_vm_area - find a continuous kernel virtual area
+ * @addr: base address
+ *
+ * Search for the kernel VM area starting at @addr, and return it.
+ * It is up to the caller to do all required locking to keep the returned
+ * pointer valid.
+ */
+struct vm_struct *find_vm_area(const void *addr)
{
struct vmap_area *va;
static void *__vmalloc_node(unsigned long size, unsigned long align,
gfp_t gfp_mask, pgprot_t prot,
- int node, void *caller);
+ int node, const void *caller);
static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
- pgprot_t prot, int node, void *caller)
+ pgprot_t prot, int node, const void *caller)
{
const int order = 0;
struct page **pages;
*/
void *__vmalloc_node_range(unsigned long size, unsigned long align,
unsigned long start, unsigned long end, gfp_t gfp_mask,
- pgprot_t prot, int node, void *caller)
+ pgprot_t prot, int node, const void *caller)
{
struct vm_struct *area;
void *addr;
*/
static void *__vmalloc_node(unsigned long size, unsigned long align,
gfp_t gfp_mask, pgprot_t prot,
- int node, void *caller)
+ int node, const void *caller)
{
return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END,
gfp_mask, prot, node, caller);
if (v->flags & VM_IOREMAP)
seq_printf(m, " ioremap");
+ if (v->flags & VM_DMA)
+ seq_printf(m, " dma");
+
if (v->flags & VM_ALLOC)
seq_printf(m, " vmalloc");
"Reclaimable",
"Movable",
"Reserve",
+#ifdef CONFIG_CMA
+ "CMA",
+#endif
"Isolate",
};
config SND_SOC_SAMSUNG
tristate "ASoC support for Samsung"
- depends on ARCH_S3C24XX || ARCH_S3C64XX || ARCH_S5PC100 || ARCH_S5PV210 || ARCH_S5P64X0 || ARCH_EXYNOS4
+ depends on PLAT_SAMSUNG
select S3C64XX_DMA if ARCH_S3C64XX
select S3C2410_DMA if ARCH_S3C24XX
help
config SND_SOC_SAMSUNG_SMDK_WM8994
tristate "SoC I2S Audio support for WM8994 on SMDK"
- depends on SND_SOC_SAMSUNG && (MACH_SMDKV310 || MACH_SMDKC210 || MACH_SMDK4212)
+ depends on SND_SOC_SAMSUNG && (MACH_SMDKV310 || MACH_SMDKC210 || MACH_SMDK4212 || MACH_SMDK4412 || SOC_EXYNOS5250)
depends on I2C=y && GENERIC_HARDIRQS
select MFD_WM8994
select SND_SOC_WM8994
config SND_SOC_SAMSUNG_SMDK_SPDIF
tristate "SoC S/PDIF Audio support for SMDK"
- depends on SND_SOC_SAMSUNG && (MACH_SMDKC100 || MACH_SMDKC110 || MACH_SMDKV210 || MACH_SMDKV310 || MACH_SMDK4212)
+ depends on SND_SOC_SAMSUNG && (MACH_SMDKC100 || MACH_SMDKC110 || MACH_SMDKV210 || MACH_SMDKV310 || MACH_SMDK4212 || MACH_SMDK4412 || SOC_EXYNOS5250)
select SND_SAMSUNG_SPDIF
help
Say Y if you want to add support for SoC S/PDIF audio on the SMDK.
config SND_SOC_SMDK_WM8994_PCM
tristate "SoC PCM Audio support for WM8994 on SMDK"
- depends on SND_SOC_SAMSUNG && (MACH_SMDKC210 || MACH_SMDKV310 || MACH_SMDK4212)
+ depends on SND_SOC_SAMSUNG && (MACH_SMDKC210 || MACH_SMDKV310 || MACH_SMDK4212 || MACH_SMDK4412 || SOC_EXYNOS5250)
depends on I2C=y && GENERIC_HARDIRQS
select MFD_WM8994
select SND_SOC_WM8994
select SND_SAMSUNG_I2S
select MFD_WM8994
select SND_SOC_WM8994
+
+config SND_SOC_EXYNOS_MAX98095
+ tristate "SoC I2S Audio support for MAX98095 on EXYNOS5"
+ depends on SND_SOC_SAMSUNG && SOC_EXYNOS5250
+ select SND_SOC_MAX98095
+ select SND_SAMSUNG_I2S
+ help
+	  Say Y if you want to add support for SoC audio for MAX98095 on EXYNOS5.
snd-soc-rx1950-uda1380-objs := rx1950_uda1380.o
snd-soc-smdk-wm8580-objs := smdk_wm8580.o
snd-soc-smdk-wm8994-objs := smdk_wm8994.o
+snd-soc-daisy-max98095-objs := daisy_max98095.o
snd-soc-smdk-wm9713-objs := smdk_wm9713.o
snd-soc-s3c64xx-smartq-wm8987-objs := smartq_wm8987.o
snd-soc-goni-wm8994-objs := goni_wm8994.o
obj-$(CONFIG_SND_SOC_SAMSUNG_RX1950_UDA1380) += snd-soc-rx1950-uda1380.o
obj-$(CONFIG_SND_SOC_SAMSUNG_SMDK_WM8580) += snd-soc-smdk-wm8580.o
obj-$(CONFIG_SND_SOC_SAMSUNG_SMDK_WM8994) += snd-soc-smdk-wm8994.o
+obj-$(CONFIG_SND_SOC_EXYNOS_MAX98095) += snd-soc-daisy-max98095.o
obj-$(CONFIG_SND_SOC_SAMSUNG_SMDK_WM9713) += snd-soc-smdk-wm9713.o
obj-$(CONFIG_SND_SOC_SMARTQ) += snd-soc-s3c64xx-smartq-wm8987.o
obj-$(CONFIG_SND_SOC_SAMSUNG_SMDK_SPDIF) += snd-soc-smdk-spdif.o
--- /dev/null
+/*
+ * Exynos machine ASoC driver for boards using MAX98095 codec.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+ * 02110-1301 USA
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/clk.h>
+#include <linux/io.h>
+
+#include <sound/soc.h>
+#include <sound/soc-dapm.h>
+#include <sound/pcm.h>
+#include <sound/pcm_params.h>
+#include <sound/jack.h>
+#include <sound/max98095.h>
+
+#include <mach/regs-clock.h>
+
+#include "i2s.h"
+#include "s3c-i2s-v2.h"
+#include "../codecs/max98095.h"
+
+static int set_epll_rate(unsigned long rate)
+{
+ int ret;
+ struct clk *fout_epll;
+
+ fout_epll = clk_get(NULL, "fout_epll");
+
+ if (IS_ERR(fout_epll)) {
+ printk(KERN_ERR "%s: failed to get fout_epll\n", __func__);
+ return PTR_ERR(fout_epll);
+ }
+
+ if (rate == clk_get_rate(fout_epll))
+ goto out;
+
+ ret = clk_set_rate(fout_epll, rate);
+ if (ret < 0) {
+ printk(KERN_ERR "failed to clk_set_rate of fout_epll for audio\n");
+ goto out;
+ }
+out:
+ clk_put(fout_epll);
+
+ return 0;
+}
+
+static int daisy_hw_params(struct snd_pcm_substream *substream,
+ struct snd_pcm_hw_params *params)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_dai *codec_dai = rtd->codec_dai;
+ struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+ int bfs, psr, rfs, ret;
+ unsigned long rclk;
+
+ switch (params_format(params)) {
+ case SNDRV_PCM_FORMAT_U24:
+ case SNDRV_PCM_FORMAT_S24:
+ bfs = 48;
+ break;
+ case SNDRV_PCM_FORMAT_U16_LE:
+ case SNDRV_PCM_FORMAT_S16_LE:
+ bfs = 32;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (params_rate(params)) {
+ case 16000:
+ case 22050:
+ case 24000:
+ case 32000:
+ case 44100:
+ case 48000:
+ case 88200:
+ case 96000:
+ if (bfs == 48)
+ rfs = 384;
+ else
+ rfs = 256;
+ break;
+ case 64000:
+ rfs = 384;
+ break;
+ case 8000:
+ case 11025:
+ case 12000:
+ if (bfs == 48)
+ rfs = 768;
+ else
+ rfs = 512;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ rclk = params_rate(params) * rfs;
+
+ switch (rclk) {
+ case 4096000:
+ case 5644800:
+ case 6144000:
+ case 8467200:
+ case 9216000:
+ psr = 8;
+ break;
+ case 8192000:
+ case 11289600:
+ case 12288000:
+ case 16934400:
+ case 18432000:
+ psr = 4;
+ break;
+ case 22579200:
+ case 24576000:
+ case 33868800:
+ case 36864000:
+ psr = 2;
+ break;
+ case 67737600:
+ case 73728000:
+ psr = 1;
+ break;
+ default:
+ printk(KERN_ERR "rclk = %lu is not yet supported!\n", rclk);
+ return -EINVAL;
+ }
+
+ ret = set_epll_rate(rclk * psr);
+ if (ret < 0)
+ return ret;
+
+ ret = snd_soc_dai_set_fmt(codec_dai, SND_SOC_DAIFMT_I2S |
+ SND_SOC_DAIFMT_NB_NF |
+ SND_SOC_DAIFMT_CBS_CFS);
+ if (ret < 0)
+ return ret;
+
+ ret = snd_soc_dai_set_fmt(cpu_dai, SND_SOC_DAIFMT_I2S |
+ SND_SOC_DAIFMT_NB_NF |
+ SND_SOC_DAIFMT_CBS_CFS);
+ if (ret < 0)
+ return ret;
+
+ ret = snd_soc_dai_set_sysclk(codec_dai, 0, rclk, SND_SOC_CLOCK_IN);
+ if (ret < 0)
+ return ret;
+
+ ret = snd_soc_dai_set_sysclk(cpu_dai, SAMSUNG_I2S_CDCLK,
+ 0, SND_SOC_CLOCK_OUT);
+ if (ret < 0)
+ return ret;
+
+ ret = snd_soc_dai_set_clkdiv(cpu_dai, SAMSUNG_I2S_DIV_BCLK, bfs);
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
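A worked example of the clocking math above (editor's note): 44.1 kHz S16_LE playback selects bfs = 32 and rfs = 256, giving rclk = 44100 * 256 = 11289600 Hz; that rclk selects psr = 4, so set_epll_rate() is asked for 11289600 * 4 = 45158400 Hz.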
+
+/*
+ * MAX98095 DAI operations.
+ */
+static struct snd_soc_ops daisy_ops = {
+ .hw_params = daisy_hw_params,
+};
+
+static struct snd_soc_jack daisy_mic_jack;
+static struct snd_soc_jack_pin daisy_mic_jack_pins[] = {
+ {
+ .pin = "Mic Jack",
+ .mask = SND_JACK_MICROPHONE,
+ },
+};
+
+static const struct snd_soc_dapm_route daisy_audio_map[] = {
+ {"Mic Jack", "NULL", "MICBIAS2"},
+ {"MIC2", "NULL", "Mic Jack"},
+};
+
+static const struct snd_soc_dapm_widget daisy_dapm_widgets[] = {
+ SND_SOC_DAPM_MIC("Mic Jack", NULL),
+};
+
+static int daisy_init(struct snd_soc_pcm_runtime *rtd)
+{
+ struct snd_soc_codec *codec = rtd->codec;
+ struct snd_soc_dapm_context *dapm = &codec->dapm;
+
+ /* TODO(thutt): This must be hooked up to the headphone-switch
+ * & microphone-detect GPIOs on the Exynos and plumbed through
+	 * to /dev/input. These two GPIOs are presently done as a
+ * rework to existing boards.
+ */
+ snd_soc_jack_new(codec, "Mic Jack", SND_JACK_MICROPHONE,
+ &daisy_mic_jack);
+ snd_soc_jack_add_pins(&daisy_mic_jack,
+ ARRAY_SIZE(daisy_mic_jack_pins),
+ daisy_mic_jack_pins);
+
+ /* Microphone BIAS is needed to power the analog mic.
+ * MICBIAS2 is connected to analog mic (MIC3, which is in turn
+ * connected to MIC2 via 'External MIC') on Daisy.
+ *
+ * Ultimately, the following should hold:
+ *
+ * Microphone in jack => MICBIAS2 enabled &&
+ * 'External Mic' = MIC2
+ * Microphone not in jack => MICBIAS2 disabled &&
+ * 'External Mic' = MIC1
+ */
+ snd_soc_dapm_force_enable_pin(dapm, "MICBIAS2");
+
+ snd_soc_dapm_sync(dapm);
+
+ return 0;
+}
+
+static struct snd_soc_dai_link daisy_dai[] = {
+ { /* Primary DAI i/f */
+ .name = "MAX98095 RX",
+ .stream_name = "Playback",
+ .cpu_dai_name = "samsung-i2s.0",
+ .codec_dai_name = "HiFi",
+ .platform_name = "samsung-audio",
+ .init = daisy_init,
+ .ops = &daisy_ops,
+ }, { /* Capture i/f */
+ .name = "MAX98095 TX",
+ .stream_name = "Capture",
+ .cpu_dai_name = "samsung-i2s.0",
+ .codec_dai_name = "HiFi",
+ .platform_name = "samsung-audio",
+ .ops = &daisy_ops,
+ },
+};
+
+static struct snd_soc_card daisy_snd = {
+ .name = "DAISY-I2S",
+ .dai_link = daisy_dai,
+ .num_links = ARRAY_SIZE(daisy_dai),
+ .dapm_widgets = daisy_dapm_widgets,
+ .num_dapm_widgets = ARRAY_SIZE(daisy_dapm_widgets),
+ .dapm_routes = daisy_audio_map,
+ .num_dapm_routes = ARRAY_SIZE(daisy_audio_map),
+};
+
+static struct platform_device *daisy_snd_device;
+
+static int __init daisy_audio_init(void)
+{
+ struct device_node *dn;
+ int i, ret;
+
+ /* The below needs to be replaced with proper full device-tree probing
+ * of the ASoC device, but the core plumbing hasn't been completed yet
+ * so we're doing this only half-way now.
+ */
+
+ if (!of_machine_is_compatible("google,snow") &&
+ !of_machine_is_compatible("google,daisy"))
+ return -ENODEV;
+
+ dn = of_find_compatible_node(NULL, NULL, "maxim,max98095");
+ if (!dn)
+ return -ENODEV;
+
+ for (i = 0; i < ARRAY_SIZE(daisy_dai); i++)
+ daisy_dai[i].codec_of_node = of_node_get(dn);
+
+ of_node_put(dn);
+
+ daisy_snd_device = platform_device_alloc("soc-audio", -1);
+ if (!daisy_snd_device)
+ return -ENOMEM;
+
+ platform_set_drvdata(daisy_snd_device, &daisy_snd);
+ ret = platform_device_add(daisy_snd_device);
+ if (ret)
+ platform_device_put(daisy_snd_device);
+
+ return ret;
+}
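For reference, a hedged sketch of the device-tree node this probe path looks up; only the compatible string comes from the code above, while the node name and I2C address are assumptions.

	max98095: codec@11 {
		compatible = "maxim,max98095";
		reg = <0x11>;
	};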
+module_init(daisy_audio_init);
+
+static void __exit daisy_audio_exit(void)
+{
+ platform_device_unregister(daisy_snd_device);
+}
+module_exit(daisy_audio_exit);
+
+MODULE_DESCRIPTION("ALSA SoC DAISY MAX98095");
+MODULE_LICENSE("GPL");
? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM);
dma_info.width = prtd->params->dma_size;
dma_info.fifo = prtd->params->dma_addr;
+ dma_info.dt_dmach_prop = prtd->params->dma_prop;
prtd->params->ch = prtd->params->ops->request(
prtd->params->channel, &dma_info);
}
int dma_size; /* Size of the DMA transfer */
unsigned ch;
struct samsung_dma_ops *ops;
+ struct property *dma_prop;
};
#endif
#include <linux/clk.h>
#include <linux/io.h>
#include <linux/module.h>
+#include <linux/of.h>
#include <linux/pm_runtime.h>
#include <sound/soc.h>
#include <sound/pcm_params.h>
#include <plat/audio.h>
+#include <plat/dma-pl330.h>
#include "dma.h"
#include "idma.h"
struct i2s_dai *i2s_alloc_dai(struct platform_device *pdev, bool sec)
{
struct i2s_dai *i2s;
+ int id;
i2s = devm_kzalloc(&pdev->dev, sizeof(struct i2s_dai), GFP_KERNEL);
if (i2s == NULL)
i2s->i2s_dai_drv.capture.rates = SAMSUNG_I2S_RATES;
i2s->i2s_dai_drv.capture.formats = SAMSUNG_I2S_FMTS;
} else { /* Create a new platform_device for Secondary */
+ if (pdev->dev.of_node) {
+ id = of_alias_get_id(pdev->dev.of_node, "i2s");
+ if (id < 0)
+ dev_err(&pdev->dev,
+ "failed to get alias id:%d\n", id);
+ } else {
+ id = pdev->id;
+ }
+
i2s->pdev = platform_device_register_resndata(NULL,
- pdev->name, pdev->id + SAMSUNG_I2S_SECOFF,
+ "samsung-i2s", id + SAMSUNG_I2S_SECOFF,
NULL, 0, NULL, 0);
if (IS_ERR(i2s->pdev))
return NULL;
static __devinit int samsung_i2s_probe(struct platform_device *pdev)
{
- u32 dma_pl_chan, dma_cp_chan, dma_pl_sec_chan;
+ u32 dma_pl_chan, dma_cp_chan;
+ u32 dma_pl_sec_chan = 0;
struct i2s_dai *pri_dai, *sec_dai = NULL;
struct s3c_audio_pdata *i2s_pdata;
struct samsung_i2s *i2s_cfg;
struct resource *res;
u32 regs_base, quirks;
- int ret = 0;
+ struct property *prop;
+ int ret = 0, id;
	/* Call during Secondary interface registration */
- if (pdev->id >= SAMSUNG_I2S_SECOFF) {
+ if (pdev->dev.of_node) {
+ id = of_alias_get_id(pdev->dev.of_node, "i2s");
+ if (id < 0)
+ dev_err(&pdev->dev, "failed to get alias id:%d\n", id);
+ } else {
+ id = pdev->id;
+ }
+
+ if (id >= SAMSUNG_I2S_SECOFF) {
sec_dai = dev_get_drvdata(&pdev->dev);
snd_soc_register_dai(&sec_dai->pdev->dev,
&sec_dai->i2s_dai_drv);
return -EINVAL;
}
- res = platform_get_resource(pdev, IORESOURCE_DMA, 0);
- if (!res) {
- dev_err(&pdev->dev, "Unable to get I2S-TX dma resource\n");
- return -ENXIO;
- }
- dma_pl_chan = res->start;
-
- res = platform_get_resource(pdev, IORESOURCE_DMA, 1);
- if (!res) {
- dev_err(&pdev->dev, "Unable to get I2S-RX dma resource\n");
- return -ENXIO;
- }
- dma_cp_chan = res->start;
-
- res = platform_get_resource(pdev, IORESOURCE_DMA, 2);
- if (res)
- dma_pl_sec_chan = res->start;
- else
- dma_pl_sec_chan = 0;
-
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(&pdev->dev, "Unable to get I2S SFR address\n");
goto err;
}
+ if (!pdev->dev.of_node) {
+ res = platform_get_resource(pdev, IORESOURCE_DMA, 0);
+ if (!res) {
+ dev_err(&pdev->dev,
+ "Unable to get I2S-TX dma resource\n");
+ return -ENXIO;
+ }
+ dma_pl_chan = res->start;
+
+ res = platform_get_resource(pdev, IORESOURCE_DMA, 1);
+ if (!res) {
+ dev_err(&pdev->dev,
+ "Unable to get I2S-RX dma resource\n");
+ return -ENXIO;
+ }
+ dma_cp_chan = res->start;
+
+ res = platform_get_resource(pdev, IORESOURCE_DMA, 2);
+ if (res)
+ dma_pl_sec_chan = res->start;
+ } else {
+ prop = of_find_property(pdev->dev.of_node,
+ "tx-dma-channel", NULL);
+ dma_pl_chan = DMACH_DT_PROP;
+ pri_dai->dma_playback.dma_prop = prop;
+
+ prop = of_find_property(pdev->dev.of_node,
+ "rx-dma-channel", NULL);
+ dma_cp_chan = DMACH_DT_PROP;
+ pri_dai->dma_capture.dma_prop = prop;
+ }
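A hedged sketch of the properties consumed above; the property names are taken from the code, while the unit address, phandle targets and request numbers are assumptions.

	i2s0: i2s@3830000 {
		compatible = "samsung,i2s";
		tx-dma-channel = <&pdma0 10>;
		rx-dma-channel = <&pdma0 9>;
	};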
+
pri_dai->dma_playback.dma_addr = regs_base + I2STXD;
pri_dai->dma_capture.dma_addr = regs_base + I2SRXD;
pri_dai->dma_playback.client =
sec_dai->dma_playback.dma_addr = regs_base + I2STXDS;
sec_dai->dma_playback.client =
(struct s3c2410_dma_client *)&sec_dai->dma_playback;
+
+ if (pdev->dev.of_node) {
+ prop = of_find_property(pdev->dev.of_node,
+ "tx-dma-channel-secondary",
+ NULL);
+ sec_dai->dma_playback.dma_prop = prop;
+ }
+
/* Use iDMA always if SysDMA not provided */
sec_dai->dma_playback.channel = dma_pl_sec_chan ? : -1;
sec_dai->src_clk = i2s_cfg->src_clk;
return 0;
}
+#ifdef CONFIG_OF
+static const struct of_device_id exynos_i2s_match[] = {
+ { .compatible = "samsung,i2s" },
+ {},
+};
+MODULE_DEVICE_TABLE(of, exynos_i2s_match);
+#endif
+
static struct platform_driver samsung_i2s_driver = {
.probe = samsung_i2s_probe,
.remove = __devexit_p(samsung_i2s_remove),
.driver = {
.name = "samsung-i2s",
.owner = THIS_MODULE,
+ .of_match_table = of_match_ptr(exynos_i2s_match),
},
};
#include <sound/pcm_params.h>
#include <plat/audio.h>
-#include <plat/dma.h>
+#include <mach/dma.h>
#include "dma.h"
#include "pcm.h"
#include "../codecs/wm8994.h"
#include <sound/pcm_params.h>
#include <linux/module.h>
+#include <linux/of.h>
/*
* Default CFG switch settings to use this driver:
{
int ret;
+ if (!of_machine_is_compatible("samsung,smdk5250"))
+ return -ENODEV;
+
smdk_snd_device = platform_device_alloc("soc-audio", -1);
if (!smdk_snd_device)
return -ENOMEM;